I've watched the video of WWDC 2019 session 702, System Extensions and DriverKit, and I'm still a little puzzled. For instance, what's the point of USBDriverKit? That is, why would I use it in preference to the already extant user-mode USB APIs?
The demo shows an extension that does nothing - it logs to the debugger, but it doesn't provide any services to multiple clients in the system. In a KEXT, those services are provided by publishing them in the IORegistry; they provide well-known interfaces in the kernel to which a well-known user client can connect. If my extension ships in my own app and provides services only to that app, I may as well implement the extension's functions directly in my app.
How does my app (or, more importantly, a third-party app) communicate with my dext? That wasn't covered in session 702. Neither was the case of replacing or augmenting an existing system driver - for example, filtering the data passing through a USB mass storage driver based on sideband data which the standard system driver cannot convey. For a kext, I would simply call registerService and the rest of the stack would be built on top of my driver.
Is the sample code for the demo of session 702 available? Any other sample code for DriverKit?
For some time I've been sharing an internal macOS app with my colleagues by simply building it locally, zipping it up and emailing, or sharing on Slack or Teams.
In the Target Settings in Xcode, Signing and Capabilities, the Team is set to my company, the Signing Certificate is set to Development (not "Sign to run locally").
This has worked for some time. None of the recipients complained that they couldn't run the app. Of course it is not notarized so they need to right-click and select Open the first time around.
When I examine the signature of the app I distribute this way, using `codesign -dvvv`, the signing authority is me (not my company).
One of my colleagues recently migrated to a new Mac Mini M1. On this Mac, when attempting to open the app, he saw the "you do not have permission to open the application" alert. He's supposed to consult his sys admin (himself).
I fixed the problem by Archiving a build and explicitly choosing to sign it using the company's Developer ID certificate. The version produced this way has a signing authority of my company, not me, and my colleague can run it.
Does anyone know why my previous builds worked on other machines for other users? It appears that the locally-built app was actually signed by my personal certificate, although Xcode's UI said it would be signed by my company - so why didn't it work only for me?
What is the expected behavior if you try to open an app signed with a personal certificate on a machine owned by a different person? Should Security & Privacy offer the option of approving that particular personal certificate?
In my keychain, I have one Developer ID Application certificate, with a private key, for my Team.
In Xcode's Accounts/Manage Certificates dialog, there are three Developer ID Application certificates, two of which have a red 'x' badge and the status 'missing private key'.
I can right-click on any of those three entries, and my only enabled choice is "Export". "Email creator" and "Delete" are disabled. Why?
In my Team's account, there are indeed three Developer ID Application certificates, with different expiration dates, but I only have the private key for one of them.
By choosing Manual signing, I can choose a specific certificate from my keychain, but Xcode 13.2.1 tells me that this certificate is missing its private key - but I can see that private key in my keychain!
I'm trying to make a DEXT target within my project. It compiles and links fine if I build just its own scheme.
However, if I build my app's target, which includes the DEXT as a dependency, the build fails when linking the DEXT.
The linker commands are different in the two cases. When built as part of the larger project, the DEXT linker command includes -fsanitize=undefined. This flag is absent when I build using the DEXT's scheme alone. I searched the .pbxproj for "sanitize" - it doesn't appear, so it looks like Xcode is adding this flag.
The linker failure is this:
File not found: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/clang/13.1.6/lib/darwin/libclang_rt.ubsan_driverkit_dynamic.dylib
The only files with "driverkit" in their name in that directory are these two:
libclang_rt.cc_kext_driverkit.a
libclang_rt.driverkit.a
The successful link command includes this directive:
-lc++ /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/clang/13.1.6/lib/darwin/libclang_rt.driverkit.a
while the unsuccessful link command includes this one:
-lc++ /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/clang/13.1.6/lib/darwin/libclang_rt.ubsan_driverkit_dynamic.dylib
I tried adding -fno-sanitize=undefined to the OTHER_LINKER_FLAGS for the DEXT target, hoping that this would cancel the effect of the previous -fsanitize, but then I get undefined symbol errors:
Undefined symbol: ___ubsan_handle_shift_out_of_bounds
Undefined symbol: ___ubsan_handle_type_mismatch_v1
These appear to be referred to by the macros used in the iig magic.
I'm using Xcode 13.4.1 (13F100).
Does anyone know how I can fix this?
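For what it's worth, sanitizer runtimes are usually injected by the building scheme's Run diagnostics rather than by anything in the .pbxproj, which would explain why "sanitize" never appears in the project file. One workaround sketch, assuming a per-target build setting overrides what the app's scheme requests (the setting names below are real Xcode build settings, but whether they win over the scheme is an assumption on my part):

```
// Hypothetical Dext.xcconfig, attached only to the DEXT target:
// opt the driver out of sanitizers, since there is no
// libclang_rt.ubsan_driverkit_dynamic.dylib to link against.
CLANG_UNDEFINED_BEHAVIOR_SANITIZER = NO
ENABLE_ADDRESS_SANITIZER = NO
ENABLE_THREAD_SANITIZER = NO
```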
Does anyone know why this crashes, or could anyone tell me how to restructure this code so it doesn't crash?
(this is FB11917078)
I have a view which displays two nested rectangles of a given aspect ratio (here 1:1). The inner rectangle is a fixed fraction of the outer rectangle's size.
When embedded in a List, if I rapidly resize the window, the app crashes.
If the View is not in a List, there's no crash (and the requested aspect ratio is not respected, which I don't yet know how to fix).
Here's the code for the ContentView.swift file. Everything else is the standard macOS SwiftUI application template code from Xcode 14.2.
import SwiftUI

struct ContentView: View {
    @State var zoomFactor = 1.2

    var body: some View {
        // rapid resizing of the window causes a crash;
        // if the TwoRectanglesView is not embedded in a
        // List, there is no crash
        List {
            ZStack {
                Rectangle()
                TwoRectanglesView(zoomFactor: $zoomFactor)
            }
        }
    }
}

struct ContentView_Previews: PreviewProvider {
    static var previews: some View {
        ContentView()
    }
}

struct TwoRectanglesView: View {
    @State private var fullViewWidth: CGFloat?
    @Binding var zoomFactor: Double
    private let aspectRatio = 1.0

    var body: some View {
        ZStack {
            Rectangle()
                .aspectRatio(aspectRatio, contentMode: .fit)
            GeometryReader { geo in
                ZStack {
                    Rectangle()
                        .fill(.black)
                        .border(.blue)
                    Rectangle()
                        .fill(.red)
                        .frame(width: geo.size.width / zoomFactor,
                               height: geo.size.height / zoomFactor)
                }
            }
        }
    }
}

struct TwoRectanglesView_Previews: PreviewProvider {
    @State static var zoomFactor = 3.1

    static var previews: some View {
        TwoRectanglesView(zoomFactor: $zoomFactor)
    }
}
I would like to use a DisclosureGroup in a VStack on macOS, but I'd like it to look like a DisclosureGroup in a List. (I need to do this to work around a crash when I embed a particular control in a List).
I'll append some code below, and a screenshot.
You can see that the List background is white, not grey. The horizontal alignment of the disclosure control itself is also different in a List: the control hangs to the left of the disclosure group's content, so the content is all aligned on its leading edge. Inside a VStack with .leading horizontal alignment, the DisclosureGroup is placed so that its leading edge (the leading edge of the disclosure control) is aligned to the leading edge of the other elements in the VStack. In other words, the List takes account of the geometry of the disclosure arrow, while the VStack does not.
The vertical alignment of the disclosure triangle is also different - in a VStack, the control is placed too high.
And finally, in a VStack, the disclosure triangle lacks contrast (its RGB value is about 180, while the triangle in the List has an RGB value of 128).
Does anyone know how to emulate the appearance of a DisclosureGroup in a List when that DisclosureGroup is embedded in a VStack?
here's my ContentView.swift
struct ContentView: View {
    var body: some View {
        HStack {
            List {
                Text("List")
                DisclosureGroup(content: {
                    Text("content")
                }, label: {
                    Text("some text")
                })
            }
            VStack(alignment: .leading) {
                Text("VStack")
                DisclosureGroup(content: {
                    Text("content")
                }, label: {
                    Text("some text")
                })
                Spacer()
            }
            .padding()
        }
    }
}

struct ContentView_Previews: PreviewProvider {
    static var previews: some View {
        ContentView()
    }
}
When I'm not yet logged in to the forums, some text blocks look like this:
once I'm logged in, the same text block looks like this:
This is on Ventura 13.2.1 with Safari Version 16.3 (18614.4.6.1.6)
Does anyone else experience this, or is it just me?
My goal is to implement a moving background in a virtual camera, implemented as a Camera Extension, on macOS 13 and later. The moving background is available to the extension as an H.264 file in its bundle.
I thought I could create an AVAsset from the movie's URL, make an AVPlayerItem from the asset, attach an AVQueuePlayer to the item, then attach an AVPlayerLooper to the queue player.
I make an AVPlayerItemVideoOutput, add it to each of the looper's items, and set a delegate on the video output.
This works in a normal app, which I use as a convenient environment to debug my extension code. In my camera video rendering loop, I check self.videoOutput.hasNewPixelBuffer; it returns true at regular intervals, and I can fetch video frames with the video output's copyPixelBuffer and composite those frames with the camera frames.
However, it doesn't work in an extension - hasNewPixelBuffer is never true. The looping player returns 'failed', with an error which simply says "the operation could not be completed". I've tried simplifying things by removing the AVPlayerLooper and using an AVPlayer instead of an AVQueuePlayer, so the movie would only play once through. But still, I never get any frames in the extension.
Could this be a sandbox thing, because an AVPlayer usually renders to a user interface, and camera extensions don't have UIs?
My fallback solution is to use an AVAssetImageGenerator, which I attempt to drive by firing off a Task for each frame: each time I want to render one, I ask for another frame to keep the pipeline full. Unfortunately the Tasks don't finish in the same order they are started, so I have to build frame-reordering logic into the frame buffer (something which a player would do for me). I'm also not sure whether the AVAssetImageGenerator is taking advantage of any hardware acceleration, and it seems inefficient because each Task is for one frame only and cannot maintain any state from previous frames.
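The frame-reordering logic I describe above is essentially an in-order release buffer: results arrive tagged with a frame index, and a frame is only released once every earlier index has been seen. A minimal sketch (in C++ here; the names are hypothetical and nothing here is AVFoundation API):

```cpp
#include <cstdint>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Collects out-of-order results and releases them in index order.
// std::string stands in for a real frame buffer type.
class ReorderBuffer {
public:
    // Add a completed frame; returns all frames that are now
    // contiguous with the last released index, in order.
    std::vector<std::string> add(uint64_t index, std::string frame) {
        pending_.emplace(index, std::move(frame));
        std::vector<std::string> ready;
        // Drain the map while the next expected index is present.
        while (!pending_.empty() && pending_.begin()->first == next_) {
            ready.push_back(std::move(pending_.begin()->second));
            pending_.erase(pending_.begin());
            ++next_;
        }
        return ready;
    }

private:
    uint64_t next_ = 0;                       // next index to release
    std::map<uint64_t, std::string> pending_; // completed, not yet released
};
```

So if Tasks complete in the order 1, 0, 2, the buffer releases nothing, then frames 0 and 1 together, then frame 2.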
Perhaps there's a much simpler way to do this and I'm just missing it? Anyone?
I am developing a CMIO Camera Extension on macOS Ventura.
Initially, I based this on the template camera extension (which creates its own frames). Later, I added a sink stream so that I could send the extension video from an app. That all works.
Recently, I added the ability for the extension itself to initiate a capture session, so that it can augment the video from any available AVCaptureDevice without running its controlling app. That works, but I have to add the Camera capability to the extension's sandbox configuration, and add a camera usage string.
This caused the OS to put up the user permission dialog, asking for permission to use the camera. However, the dialog uses the extension's bundle ID for its name, which is long and not user friendly. Furthermore, the extension isn't visible to the user (it is packaged inside the app which installs and controls it), so even a user-friendly name doesn't make that much sense to the end user.
I tried adding a CFBundleDisplayName to the extension's plist, but the OS didn't use it in the permissions dialog.
Is there a way to get the OS to present a more user-friendly name?
Should I expect to see a permissions dialog pertaining to the extension at all?
Where does the OS get the name from?
After the changes (Camera access, adding a camera usage string), I noticed that the extension's icon (the generic extension icon) showed up in the dock, with its name equal to its bundle ID.
Also, in Activity Monitor, the extension's process is displayed, using its CFBundleDisplayName (good). But about 30s after activation, the name is displayed in red, with " (not responding)" appended, although it is still working.
The extension does respond to the requests I send it over the CMIO interface, and it continues to process video, but it isn't handling user events, while the OS thinks that it should, probably because of one or more of the changes to the plist that I have had to make.
To get the icon out of the dock, I added LSUIElement=true to its plist. To get rid of the red "not responding", I changed the code in its main.swift from the template. It used to simply call CFRunLoopRun(). I commented out that call and instead made this call:
_ = NSApplicationMain(CommandLine.argc, CommandLine.unsafeArgv)
That appears to work, but has the unfortunate side effect of increasing the CPU usage of the extension when it is idle from 0.3% to 1.0%.
I do want the extension to be able to process Intents, so there is a price to be paid for that. But it doesn't need to do so until it is actively dealing with video.
Is there a way to reduce the CPU usage of a background app, perhaps dynamically, making a tradeoff between CPU usage and response latency?
Is it to be expected that a CMIOExtension shows up in the Dock, ever?
How does one get the list of controls which a CMIOObject has to offer?
How do the objects in the CMIO hierarchy map to CMIOExtension objects?
I expected the hierarchy to be something like this:
the system has owned objects of type:
'aplg' (`kCMIOPlugInClassID`), which has owned objects of type
'adev' (`kCMIODeviceClassID`), which may have owned objects of type
'actl' (`kCMIOControlClassID`) and has at least one owned object of type
'astr' (`kCMIOStreamClassID`), each of which may have owned objects of type
'actl' (`kCMIOControlClassID`)
Instead, when I recursively traverse the object hierarchy, I find the devices and the plug-ins at the same level (under the system object). Only some of the devices in my system have owned streams, although they all have a kCMIODevicePropertyStreams ('stm#') property.
None of the devices or streams appear to have any controls, and none of the streams have any owned objects. I'm not using the qualifier when searching for owned objects, because the documentation implies that it may be nil if I'm not interested in narrowing my search.
Should I expect to find any devices or streams with controls? And if so, how do I get a list of them? CMIOHardwareObject.h says that "Wildcards... are especially useful ...for querying an CMIOObject's list of CMIOControls. ", but there's no example of how to do this.
My own device (from my camera extension) has no owned objects of type stream. I don't see any API call to convey ownership of the stream I create by the device it belongs to. How does the OS decide that a stream is 'owned' by a device?
I've tried various scopes and elements - kCMIOObjectPropertyScopeGlobal, kCMIOObjectPropertyScopeWildcard, kCMIOControlPropertyScope, and kCMIOObjectPropertyElementMain, kCMIOObjectPropertyElementWildcard and kCMIOControlPropertyElement. I can't get a list of controls using any of these.
Ultimately, I'm trying to find my provider, my devices and my streams using the CMIO interface, so that I can set and query properties on them. Is it reasonable to assume that the CMIOObject of type 'aplg' is the one corresponding to a CMIOExtensionProviderSource?
This is on Ventura 13.4.1 on M1.
I'd like to support multiple windows in my macOS app, which provides previews of cameras in the system, using the SwiftUI app life cycle, on macOS 13.5.2 and later.
I can make multiple windows without any problem, using the default behavior of WindowGroup and the File/New menu item.
WindowGroup(id: "main-viewer", for: String.self) { $cameraUniqueID in
    ContentView(cameraUniqueID: cameraUniqueID)
}
I can make a specific window on a camera using the .openWindow environment value:
.openWindow(id: "main-viewer", value:someSpecificCameraID)
What I would like to be able to do is change the 'value' of my window at run time. When a user chooses "New Window", they get a window with a view of the first (or default) camera in it. They can then choose another camera to show in that window. I would like to be able to persist the chosen camera and the position and size of that window (originally opened with File/New Window).
Windows opened with New Window are always opened with a nil value.
Windows opened with .openWindow have their size and content saved, but I don't want to add UI to open specific windows. I want to open a generic window, then specify what camera it is looking at, move and resize it, and I'd like to save that window state.
Is this possible, or am I holding SwiftUI wrong?
I'm struggling to build a driver for iPadOS in a particular project configuration.
If I put the driver code and dext target into the same Xcode project which contains the iPad app, all is well. This is the way the Xcode driver template does it.
However, I'd like to build and debug the dext on macOS, while eventually deploying on iPadOS. So I put the dext into a different project, which has a macOS target, a minimal iPadOS target and a DriverKit target.
I made a workspace which contains both projects. I dragged the macOS project into the iPadOS project so that I can refer to the products of the macOS project (specifically, its driver target) as a dependency of the iPadOS target.
Note that the main iPad app target depends on the driver target.
So the workspace organization looks like this:
Workspace
    iPad project
        main iPad app target (depends on driver)
        test project reference
    test project
        test macOS/iPad app target
        DriverKit dext target
When I build the iPadOS target, it builds the dependent driver target in the macOS project, but it fails to link because Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/clang/15.0.0/lib/darwin/libclang_rt.profile_driverkit.a is not found.
If I just build the driver target directly in Xcode, there is no such complaint.
I looked closely at the build logs, and I see that for the failed link there are these two linker flags set which are not set in the successful case:
-debug_variant
-fprofile-instr-generate
I can't seem to control the generation of this flag. I tried turning off the Profile switch in the Scheme editor for the driver, but it makes no difference. When I directly build the driver target, no -fprofile-instr-generate is set and it compiles and links. When I build the driver as a dependency of another target, -fprofile-instr-generate is passed to the linker, which fails.
The obvious workaround is to put the driver source code into a separate driver target in the iPadOS project, but I'd rather have just one DriverKit driver for both platforms, with a few settings (such as bundle ID) controlled by a configuration file. Has anyone else encountered this problem, and know of a workaround?
Can anyone advise, or give example of, communicating large (>128 byte) incoming buffers from a dext to a user-space app?
My specific situation is interrupt reads from a USB device. These return reports which are too large to fit into the asyncData field of an AsyncCompletion call. Apple's CommunicatingBetweenADriverKitExtensionAndAClientApp sample shows examples of returning a "large" struct, but the example is synchronous.
The asynchronous example returns data by copying into a IOUserClientAsyncArgumentsArray, which isn't very big.
I can allocate a single buffer larger than 4K in user space, and communicate that buffer to my driver as an IOMemoryDescriptor when I set up my async callback. The driver retains the descriptor, maps it into its memory space and can thus write into it when the hardware returns interrupt data. The driver then calls AsyncCompletion, which will cause my user-side callback to be called, so the user side software knows that there's new data available in the previously allocated buffer.
That's fine, it works, but there are data race problems - since USB interrupt reads complete whenever the hardware has provided data, incoming completions happen at unpredictable times, so the shared buffer contents could change while the user side code is examining them.
Is there an example somewhere of how to deal with this? Can I allocate memory on the driver side on demand, create an IOMemoryDescriptor for it and return that descriptor packed inside the asyncData? If so, how does the driver know when it can relinquish that memory? I have a feeling there's something here I just don't understand...
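For reference, the classic way to detect the torn reads described above is a sequence counter (a seqlock): the writer increments a counter before writing (making it odd) and again after (making it even), and the reader retries whenever the counter is odd or changed during the read. A minimal single-process sketch in C++ (all names are hypothetical; the real code would put SharedReport in the IOMemoryDescriptor-backed region shared across the driver/app boundary, and this sketch glosses over the formal data race on the payload bytes):

```cpp
#include <atomic>
#include <cstddef>
#include <cstdint>
#include <cstring>

// Shared region layout: a sequence counter plus the payload.
struct SharedReport {
    std::atomic<uint32_t> seq{0}; // odd while the writer is mid-update
    uint8_t data[256];            // report bytes written by the driver
};

// Driver side: mark the buffer busy (odd), copy, mark it stable (even).
inline void writeReport(SharedReport& s, const uint8_t* src, size_t len) {
    s.seq.fetch_add(1); // now odd: write in progress
    std::memcpy(s.data, src, len);
    s.seq.fetch_add(1); // now even: write complete
}

// App side: snapshot the payload, retrying if a write overlapped the read.
inline void readReport(const SharedReport& s, uint8_t* dst, size_t len) {
    for (;;) {
        uint32_t before = s.seq.load();
        if (before & 1) continue;   // writer is mid-update, try again
        std::memcpy(dst, s.data, len);
        uint32_t after = s.seq.load();
        if (before == after) return; // consistent snapshot taken
    }
}
```

This doesn't solve losing a report that is overwritten before the app reads it; for that, the counter doubles as a generation number the app can compare against the last one it consumed, or the single buffer becomes a small ring of such slots.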
I recently built an update to one of our apps, which installs a driver extension.
The new version won't launch on my Mac, Finder says it "can't be opened".
I captured the logs, which say "no matching profile found":
error 2024-01-10 14:36:03.306061 -0800 taskgated-helper <app-bundle-id>: Unsatisfied entitlements: com.apple.developer.system-extension.install, com.apple.developer.team-identifier
info 2024-01-10 14:36:03.306279 -0800 amfid Requirements for restricted entitlements failed to validate, error -67671, requirements: '<private>'
error 2024-01-10 14:36:03.306287 -0800 amfid Restricted entitlements not validated, bailing out. Error: Error Domain=AppleMobileFileIntegrityError Code=-413 "No matching profile found" UserInfo={NSURL=<private>, unsatisfiedEntitlements=<private>, NSLocalizedDescription=No matching profile found}
default 2024-01-10 14:36:03.306432 -0800 amfid /Applications/<app-bundle-id>/Contents/MacOS/<app-name> not valid: Error Domain=AppleMobileFileIntegrityError Code=-413 "No matching profile found" UserInfo={NSURL=file:///Applications/C<escaped-app-name>/, unsatisfiedEntitlements=<CFArray 0x14f3041d0 [0x1dd7d39a0]>{type = immutable, count = 2, values = (
0 : <CFString 0x14f3055a0 [0x1dd7d39a0]>{contents = "com.apple.developer.system-extension.install"}
1 : <CFString 0x14f304130 [0x1dd7d39a0]>{contents = "com.apple.developer.team-identifier"}
)}, NSLocalizedDescription=No matching profile found}
default 2024-01-10 14:36:03.306514 -0800 kernel AMFI: bailing out because of restricted entitlements.
default 2024-01-10 14:36:03.306523 -0800 kernel mac_vnode_check_signature: /Applications/<app-bundle-id>/Contents/MacOS/<app-name>: code signature validation failed fatally: When validating /Applications/<app-bundle-id>/Contents/MacOS/<app-name>:
Code has restricted entitlements, but the validation of its code signature failed.
Unsatisfied Entitlements: com.apple.developer.system-extension.installcom.apple.developer.team-identifier
The thing is, when I run this command
codesign -v -vvv <path-to-app>
the app is valid on disk and satisfies its Designated Requirement
and these two commands:
codesign --display --entitlements - <path-to-app>
security cms -D -i <path-to-app>/Contents/embedded.provisionprofile
produce absolutely identical outputs when run against the old app (which works) and the new app (which doesn't). The certificates haven't expired yet.
Where else should we be looking to figure out where we've messed up? We know we changed the signing and notarization flow: the working build was made by a person using Xcode, while the new app was built, signed, and notarized using the command-line tools (xcodebuild and notarytool).