The Apple documentation for SessionGetInfo in Swift says that this API takes a third argument of type UnsafeMutablePointer<SessionAttributeBits>?, but I'm getting the error below when I pass an argument of that type.
Cannot convert value of type 'UnsafeMutablePointer<SessionAttributeBits>' to expected argument type 'UnsafeMutablePointer<UInt32>'
Why is it expecting a different type when the documentation states otherwise? How can this be resolved? Is this a bug?
public static func GetSessionInfo() -> Void
{
    var sessionID = SecuritySessionId()
    var sessionAttrs = SessionAttributeBits()
    let status = SessionGetInfo(callerSecuritySession,
                                &sessionID,
                                &sessionAttrs) // error: Cannot convert value of type 'UnsafeMutablePointer<SessionAttributeBits>' to expected argument type 'UnsafeMutablePointer<UInt32>'
    if status != errSessionSuccess {
        print("Could not get session info. Error \(status)")
    }
}
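For reference, here is the hedged workaround I'm considering. It assumes the imported signature really does want a raw UInt32 and that SessionAttributeBits can be rebuilt from that raw value:

var sessionID = SecuritySessionId()
var rawAttributes: UInt32 = 0 // matches the UnsafeMutablePointer<UInt32> the compiler expects

let status = SessionGetInfo(callerSecuritySession, &sessionID, &rawAttributes)
if status == errSessionSuccess {
    // Rebuild the option set from the raw bits (assumes RawValue is an integer type).
    let sessionAttrs = SessionAttributeBits(rawValue: SessionAttributeBits.RawValue(rawAttributes))
    print(sessionAttrs)
}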
I have a project that contains both C++ and Swift code, and I'm making calls from Swift into C++. For this I'm using the C++/Swift interoperability mechanism introduced in Swift 5.9, which allows direct calls between the two languages.
I'm using a module.modulemap file to expose the C++ code to Swift. I'm facing an error when I try to access a C++ header that uses compiler flags in some static_assert statements. These work fine on the C++ side; however, when I access them from Swift through the module map, the compiler flags are not recognized, and the build produces the error below:
error: use of undeclared identifier 'TW_KERNEL_WINDOWS'.
This is my module.modulemap file:
module CoreModule {
    header "ProcessStates.hpp" // cpp header that contains compiler flags
    export *
}
Below is the C++ header code that I'm trying to access from Swift:
#pragma once
// if the given condition is false – treat as an error
#define STATIC_CHECKFALSE(condition, message) static_assert (condition, message)
STATIC_CHECKFALSE ((TW_KERNEL_WINDOWS == 0) || (TW_KERNEL_WINDOWS == 1), "TW_KERNEL_WINDOWS can only be 0 or 1");
Can someone help me understand why this is happening and how to resolve it?
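My current guess (not verified) is that TW_KERNEL_WINDOWS is normally injected by the C++ target's build settings, so the Clang importer that Swift uses never sees a definition for it. What I plan to try is passing the same define through to the importer via Other Swift Flags; the value 0 below is only an illustration:
-Xcc -DTW_KERNEL_WINDOWS=0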
I have a bundled macOS application. It is a non-interactive application that performs some task on a worker thread while the main thread waits for that task to complete. Sometimes this task can be time consuming.
I have observed that when I run the application from the bundle (double-click or the open command), the OS marks my application as not responding (this is evident in the Dock, where the app is then shown as not responding).
However, if I run the Unix executable inside the bundle directly, the app runs and I never see the not-responding status.
I wanted to understand whether this is happening because my main thread is in a waiting state. If so, what can I do to resolve it, given that my application logic requires the main thread to wait for the worker thread to finish its task? Is there some way to use an event loop, for example with GCD?
Note: I cannot use the AppKit delegate/event loop because my application runs in a non-GUI context.
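For illustration, here is a minimal sketch of the structure I'm considering, assuming the task can be moved to a background queue and the process can exit once it finishes; the worker and cleanup functions are placeholders:

import Foundation

// Hand the long-running task to a background queue so the main thread never blocks.
DispatchQueue.global(qos: .userInitiated).async {
    performLongRunningTask() // placeholder for the actual work

    // Hop back to the main queue for finalization, then end the process.
    DispatchQueue.main.async {
        performFinalization() // placeholder cleanup
        exit(0)
    }
}

// Park the main thread servicing the main queue instead of blocking on a wait,
// so the system should no longer mark the app as not responding.
dispatchMain()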
In iOS, the user can focus a UITextField, and tapping a key on the virtual keyboard updates the text in the text field. This user action causes the relevant UITextFieldDelegate methods to be invoked, i.e. the handlers associated with the user entering text in the text field.
I'm trying to simulate this user action programmatically. I want to simulate it in such a way that all the handlers/listeners that would otherwise have been invoked as a result of the user typing in the text field are also invoked when I do it programmatically. I have a specific use case for this in my application.
Below is how I'm performing this simulation.
I manually update the associated text field's value (UITextField.text).
I then invoke the delegate manually, as textField.delegate?.textField?(textField, shouldChangeCharactersIn: nsRange, replacementString: replacementString).
I wanted to know if this is the right way to do this. Is there something better available, such that the simulation has the same effect as the user performing the update?
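To make this concrete, here is roughly what my simulation code looks like; the field, range, and replacement string are illustrative, and I also fire .editingChanged so that target-action listeners see the change:

// Assumes textField.delegate is already set, as it would be for real input.
let replacementString = "abc"
let nsRange = NSRange(location: (textField.text ?? "").count, length: 0)

// 1. Ask the delegate whether the change should be applied, mirroring keyboard input.
let shouldChange = textField.delegate?.textField?(textField,
                                                  shouldChangeCharactersIn: nsRange,
                                                  replacementString: replacementString) ?? true

// 2. Apply the change manually and fire the editing-changed control event
//    so that target-action listeners are also notified.
if shouldChange {
    textField.text = (textField.text ?? "") + replacementString
    textField.sendActions(for: .editingChanged)
}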
I'm trying to identify whether my launched process is running on a local Mac (desktop/laptop) or on a virtual macOS instance such as AWS EC2, Azure, MacStadium, etc.
I have followed this link, which searches the output for a limited set of providers, but I'm not bound to any particular providers and am looking for a general solution applicable to all of them.
Is there some hardware/network/virtualization-related information that can be used to identify whether the process was launched on a virtual macOS instance?
Or is there some system information that I can use to be sure that my process is running on a local machine?
I need to interoperate between C++ and Swift. When we return a 'String' from Swift to C++, the type we receive in C++ is 'swift::String'.
I want to convert 'swift::String' to char* in C++. Any help on how this can be achieved in C++?
In the macOS General settings, we can set the language preference for an individual application, as in the image below, where I have set it for the TextEdit app.
By default, TextEdit picks its language from the system's preferred languages. However, if I explicitly set a language for TextEdit (Arabic in my example), the application will use that language. I want to identify, in my program, the language an app is currently running in. I have tried the code below, but it always returns 'en', even after my preference is set to Arabic.
let u = URL(fileURLWithPath: "/System/Applications/TextEdit.app")
let b = Bundle(url: u)!
let textedit_preference = b.preferredLocalizations
print(textedit_preference) //["en"]
How can I identify which language the user has set for an individual application? I have followed this link, but it does not contain this information.
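For context, here is a hedged sketch of the next thing I'm planning to try. It assumes (and this is only an assumption) that the per-app language override lands under the AppleLanguages key in the target app's preferences domain:

import Foundation

// Read the per-app language override from TextEdit's preferences domain.
let appID = "com.apple.TextEdit" as CFString
if let languages = CFPreferencesCopyAppValue("AppleLanguages" as CFString, appID) as? [String] {
    print(languages) // e.g. ["ar", "en"] when an Arabic override is set
} else {
    print("No app-specific language override found")
}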
I want to handle exceptions in my Mac and iOS applications. I am following this link, where it is suggested to use either Mach exception handling or Unix signals. I did not find many resources describing how to implement Mach exception handling as suggested; below are the few I could find. Can someone point me to documentation that Apple provides for this, or to some other helpful documentation?
https://gist.github.com/rodionovd/01fff61927a665d78ecf
I want to detect the shutdown event in macOS, so that if my application is running and the user shuts down the system, my application is notified of the shutdown and can perform finalization.
I came across NSWorkspaceWillPowerOffNotification, which is exactly what I require, so I created a sample application to observe this notification. What I observe is that right before the system shuts down, the OS terminates my application, invoking the applicationWillTerminate(_:) delegate, and the observer method for NSWorkspaceWillPowerOffNotification is never invoked.
I could perform my finalization in applicationWillTerminate, but I wanted to know why the observer is not being invoked. Also, why does Apple even provide NSWorkspaceWillPowerOffNotification if it invokes the termination delegate before shutdown?
Below is how I'm adding the observer:
NotificationCenter.default.addObserver(forName: NSWorkspace.willPowerOffNotification, object: nil, queue: nil, using: AppDelegate.handlePowerOffNotification)
Below is my observer function, which just logs:
public static func handlePowerOffNotification(_ notification: Notification) {
    NSLog(AppDelegate.TAG + "System will power off soon! Perform any necessary cleanup tasks.")
    // custom logger to log to a file
    TWLog.Log("System will power off soon! Perform any necessary cleanup tasks.")
}
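For comparison, here is the registration I'm planning to try next, on the assumption that workspace notifications are delivered through NSWorkspace.shared.notificationCenter rather than NotificationCenter.default:

// NSWorkspace notifications are posted to NSWorkspace's own notification center.
NSWorkspace.shared.notificationCenter.addObserver(
    forName: NSWorkspace.willPowerOffNotification,
    object: nil,
    queue: nil,
    using: AppDelegate.handlePowerOffNotification)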
I have added an "App Intents Extension" target to my main application on macOS. This generated the two files below:
TWAppIntent.swift
import AppIntents

struct TWAppIntent: AppIntent {
    static var title: LocalizedStringResource = "TWAppIntentExtension"

    static var parameterSummary: some ParameterSummary {
        Summary("Get information on \(\.$TWType)")
    }

    // launch app on running action
    static var openAppWhenRun: Bool = true

    // we can have multiple parameters of different types
    @Parameter(title: "TWType")
    var TWType: String

    func perform() async throws -> some IntentResult & ReturnsValue<String> & ProvidesDialog {
        return .result(value: TWType, dialog: "Logged break.")
    }
}
TWAppIntentExtension.swift
import AppIntents
@main
struct TWAppIntentExtension: AppIntentsExtension {
}
I'm able to build the extension target, and my intent action is available in the Shortcuts app. However, on launching a shortcut containing this intent action, I get the popups below:
I have identified what is causing this error: setting openAppWhenRun to true. I don't get the error when it is set to false. This property is supposed to launch the application, so can someone help me understand why this is happening? The error only occurs when the property is used in an App Intents extension, not when the intent is handled in-app.
Can we not launch our application from an App Intents extension?
I have created an NSView inside an NSWindow. I'm trying to detect when the view is clicked by the user. For this I'm using NSClickGestureRecognizer, but the registered method is not being invoked. I have tried adding it to other widgets, such as a button, but that does not work either. Am I missing something?
class SelectionList: NSObject, NSTextFieldDelegate {
    let containerView = NSView()

    func createSelectionList(pWindow: NSWindow) {
        // created container View
        ...
        let clickRecognizer = NSClickGestureRecognizer()
        clickRecognizer.target = self
        clickRecognizer.buttonMask = 0x2 // right button
        clickRecognizer.numberOfClicksRequired = 1
        clickRecognizer.action = #selector(clickGestured(_:))
        containerView.addGestureRecognizer(clickRecognizer)
    }

    @objc
    func clickGestured(_ sender: NSClickGestureRecognizer) {
        print("clicked")
    }
}
I’m trying to detect a double-tap action on a UIButton. There seem to be two possible approaches:
Using a UITapGestureRecognizer with numberOfTapsRequired = 2.
Using the .touchDownRepeat event of UIControl.Event.
What is the recommended approach for reliably handling double-taps on UIButton? Are there any practical differences in terms of behavior, performance, or best practices between these two methods?
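To make the comparison concrete, here is a minimal sketch of both approaches as I understand them (inside a view controller; the button and handler names are illustrative):

// Approach 1: a tap gesture recognizer that requires two taps.
let doubleTap = UITapGestureRecognizer(target: self, action: #selector(handleDoubleTap(_:)))
doubleTap.numberOfTapsRequired = 2
button.addGestureRecognizer(doubleTap)

// Approach 2: the control event fired on a repeated touch-down.
button.addTarget(self, action: #selector(handleTouchDownRepeat(_:)), for: .touchDownRepeat)

@objc func handleDoubleTap(_ sender: UITapGestureRecognizer) { print("double tap via gesture") }
@objc func handleTouchDownRepeat(_ sender: UIButton) { print("double tap via .touchDownRepeat") }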
Additionally, I noticed that UIControl.Event defines a large set of events (like .editingChanged, .valueChanged, etc.).
Can all these events be applied to any UIControl subclass such as UIButton, or are they only valid for specific controls like UITextField, UISlider, etc.?
If not all events are supported by all controls, what is the rationale behind exposing them under a shared UIControl.Event enum?
Thanks in advance!
I have a question about how UIKit expects us to handle interaction events at scale.
From what I understand so far:
For UIControls (UIButton, UISwitch, UITextField, etc.), we explicitly register with addTarget(_:action:for:).
For gestures, we add UIGestureRecognizer instances to views.
For UIView subclasses, we can override touch methods like touchesBegan/touchesEnded.
All of this must be done on the main thread, since UIKit isn’t thread-safe.
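To make the question concrete, here is a sketch of the per-widget registration I'm describing; all names are illustrative and everything happens on the main thread:

// UIControl: explicit target-action registration, per control and per event.
button.addTarget(self, action: #selector(buttonTapped(_:)), for: .touchUpInside)
textField.addTarget(self, action: #selector(textChanged(_:)), for: .editingChanged)

// Gesture recognizer: one instance added per view.
let longPress = UILongPressGestureRecognizer(target: self, action: #selector(viewLongPressed(_:)))
someView.addGestureRecognizer(longPress)

// Plain UIView subclass: override the touch methods instead.
class TouchLoggingView: UIView {
    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        print("touches began")
    }
}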
Now here’s my main concern
If I have a complex UI with hundreds or thousands of widgets, am I expected to perform these registrations individually for each widget and each high-level event (tap, long press, editing changed, etc.)?
Or does UIKit provide a more centralized mechanism?
In short: Is per-widget, per-event registration the “normal” UIKit approach, or are there best practices for scaling event handling without writing thousands of addTarget or addGestureRecognizer calls?
Thanks!
I'm trying to identify whether my launched process is running on a local Mac (desktop/laptop) or on a virtual macOS instance such as AWS EC2, Azure, MacStadium, etc.
I am using the checks below for this:
1. If running on native Apple hardware, the returned value contains the model name of the hardware:
$ sysctl -n hw.model
Macmini8,1
On virtualized hardware, the value may contain the hypervisor name:
$ sysctl -n hw.model
VMware7,0
If the command output doesn't contain the "Mac" substring, the check concludes that it is running in a virtual machine.
2. Checking USB device vendor names
The command used:
ioreg -rd1 -c IOUSBHostDevice | grep "USB Vendor Name"
Sample output on native Apple hardware:
"USB Vendor Name" = "Apple Inc."
"USB Vendor Name" = "Apple Inc."
"USB Vendor Name" = "Apple, Inc."
On virtualized hardware, the value may contain the hypervisor name:
"USB Vendor Name" = "VirtualBox"
"USB Vendor Name" = "VirtualBox"
A virtual machine can be detected by checking if the command output contains a hypervisor name, for example "VirtualBox", "VMware", etc.
3. Checking the "IOPlatformExpertDevice" registry class
The command used:
ioreg -rd1 -c IOPlatformExpertDevice
The following fields of the IOPlatformExpertDevice class can be checked in order to detect a virtual machine:
I wanted to know whether a combination of these checks can be used to identify, with certainty, that a process is running on a cloud VM.
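For reference, here is a minimal Swift sketch of check 1 (reading hw.model via sysctlbyname and looking for the "Mac" substring); I understand it is only a heuristic, which is why I'm asking whether combining the checks gives certainty:

import Foundation

// Read the hw.model sysctl, mirroring `sysctl -n hw.model`.
func hardwareModel() -> String? {
    var size = 0
    guard sysctlbyname("hw.model", nil, &size, nil, 0) == 0 else { return nil }
    var buffer = [CChar](repeating: 0, count: size)
    guard sysctlbyname("hw.model", &buffer, &size, nil, 0) == 0 else { return nil }
    return String(cString: buffer)
}

// Heuristic from check 1: native Apple hardware usually reports "Mac..." model names.
let model = hardwareModel() ?? ""
print("hw.model = \(model), looks virtual = \(!model.contains("Mac"))")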
I have created a universal link that I'm using for two different applications on macOS. Below is the apple-app-site-association file for them:
{
    "applinks": {
        "apps": [],
        "details": [
            {
                "appID": "E5R4JF6D5K.com.demo.App1",
                "paths": [
                    "/app1/*"
                ]
            },
            {
                "appID": "E5R4JF6D5K.world.demo.App2",
                "paths": [
                    "/app2/*"
                ]
            }
        ]
    }
}
After hosting this file on my server and adding the same URL to the Associated Domains capability for both applications, only the first application is ever launched when I open a link; the second application is never launched.
For a URL such as http://custom.com/app1/... the link opens the first app.
For a URL such as http://custom.com/app2/... the link opens in the browser.
I tried uninstalling the first app, but then the links always open in the browser.
I tried a separate URL for each app, and that works fine.
I'm not able to figure out the problem. The Apple documentation says that it is possible to have two applications linked to a common domain. Any help?
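For completeness, here is the same two-app mapping expressed in the newer appIDs/components AASA syntax; I'm not sure whether switching to this format changes the routing behaviour here, so treat it only as a sketch:

{
    "applinks": {
        "details": [
            {
                "appIDs": ["E5R4JF6D5K.com.demo.App1"],
                "components": [ { "/": "/app1/*" } ]
            },
            {
                "appIDs": ["E5R4JF6D5K.world.demo.App2"],
                "components": [ { "/": "/app2/*" } ]
            }
        ]
    }
}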