Can anyone advise on, or give an example of, communicating large (>128 byte) incoming buffers from a dext to a user-space app?
My specific situation is interrupt reads from a USB device. These return reports which are too large to fit into the asyncData field of an AsyncCompletion call. Apple's CommunicatingBetweenADriverKitExtensionAndAClientApp sample shows an example of returning a "large" struct, but that example is synchronous.
The asynchronous example returns data by copying into an IOUserClientAsyncArgumentsArray, which isn't very big.
I can allocate a single buffer larger than 4K in user space, and communicate that buffer to my driver as an IOMemoryDescriptor when I set up my async callback. The driver retains the descriptor, maps it into its memory space and can thus write into it when the hardware returns interrupt data. The driver then calls AsyncCompletion, which will cause my user-side callback to be called, so the user side software knows that there's new data available in the previously allocated buffer.
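For reference, the driver side of that arrangement looks roughly like the sketch below. This is a minimal illustration rather than working code: the class, method, and ivar names are placeholders, error handling is omitted, and it assumes the client's buffer arrives in the ExternalMethod arguments as a memory descriptor.

kern_return_t MyUserClient::RegisterAsyncCallback(IOUserClientMethodArguments *arguments)
{
    // Keep the OSAction so we can fire AsyncCompletion later.
    ivars->callbackAction = arguments->completion;
    ivars->callbackAction->retain();

    // The client's buffer arrives as an IOMemoryDescriptor
    // (structureOutputDescriptor or structureInputDescriptor, depending on
    // how the call is made). Retain it and map it into the dext's address space.
    ivars->sharedBuffer = arguments->structureOutputDescriptor;
    ivars->sharedBuffer->retain();
    return ivars->sharedBuffer->CreateMapping(0, 0, 0, 0, 0, &ivars->sharedMap);
}

// Placeholder for whatever runs when a USB interrupt read completes.
void MyUserClient::InterruptReadComplete(const void *report, uint32_t length)
{
    // Copy the report into the client's buffer through our mapping...
    memcpy(reinterpret_cast<void *>(ivars->sharedMap->GetAddress()), report, length);

    // ...then ring the doorbell; asyncData carries only the byte count.
    uint64_t asyncData[16] = { length };
    AsyncCompletion(ivars->callbackAction, kIOReturnSuccess, asyncData, 1);
}

Nothing is copied through asyncData itself; the completion just tells the client that the previously shared buffer now holds a fresh report of the given length.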
That approach works, but there are data race problems: since USB interrupt reads complete whenever the hardware has provided data, completions happen at unpredictable times, so the shared buffer contents could change while the user-side code is still examining them.
Is there an example somewhere of how to deal with this? Can I allocate memory on the driver side on demand, create an IOMemoryDescriptor for it and return that descriptor packed inside the asyncData? If so, how does the driver know when it can relinquish that memory? I have a feeling there's something here I just don't understand...
We have a legacy app written in a mix of C, ObjC, C++ and ObjC++ with .xib files. It is not sandboxed.
It sends an Apple Event to TV (the app of that name from Apple, not a physical TV) using /usr/bin/osascript, calling a compiled AppleScript in our app bundle's Resources directory, passing parameters which we generate in our app at runtime. The first time it does this on a fresh system, the OS puts up a dialog asking for permission to control TV, and after the user clicks Allow, our app appears under Security and Privacy in the Automation section.
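To be concrete, the invocation itself is roughly this (a C++ sketch; the script path and parameter are placeholders, and the real app builds the argument list at runtime):

#include <spawn.h>
#include <sys/wait.h>

extern char **environ;

// Run the compiled AppleScript with one runtime-generated parameter.
static int runAppleScript(const char *scriptPath, const char *param)
{
    const char *argv[] = { "/usr/bin/osascript", scriptPath, param, nullptr };
    pid_t pid = 0;
    int err = posix_spawn(&pid, argv[0], nullptr, nullptr,
                          const_cast<char *const *>(argv), environ);
    if (err != 0) {
        return err;                       // spawn failed
    }
    int status = 0;
    waitpid(pid, &status, 0);
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}

The permission prompt and the Automation entry described above appear the first time this runs on a fresh system.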
That's all fine, but what is unexpected is that the app has no Apple Events entitlement (com.apple.security.automation.apple-events), and it doesn't have a NSAppleEventsUsageDescription string either. The documentation at https://developer.apple.com/documentation/bundleresources/entitlements/com_apple_security_automation_apple-events says
Your app doesn’t need the Apple Events Entitlement if it only sends Apple events to itself or to other processes signed with the same team ID.
but our app isn't signed with Apple's team ID, so that exception shouldn't apply to us.
When I filter the log for messages from tccd pertaining to our app, it does indeed complain:
Prompting policy for hardened runtime; service: kTCCServiceAppleEvents requires entitlement com.apple.security.automation.apple-events but it is missing for accessing={TCCDProcess: identifier=<our bundle id>”
But despite those complaints, everything works: I can send the event, and TV acts upon it. Is this working only by accident, and might it fail in some minor future OS update?
tccd also complains about the microphone:
Prompting policy for hardened runtime; service: kTCCServiceMicrophone requires entitlement com.apple.security.device.audio-input but it is missing for requesting={TCCDProcess: identifier=<our bundle ID>
but we don't use the microphone.
tccd complains about this too:
<path-to-our-app> attempted to call TCCAccessRequest for kTCCServiceAccessibility without the recommended com.apple.private.tcc.manager.check-by-audit-token entitlement
What does that mean, and should we be concerned?
I have an Xcode project with a main app target, and a dext target which builds a DriverKit driver which is embedded in the main app.
That all works, if I build the DriverKit target first, then switch to the app target and build that. The app and the driver work.
If I make the Driver target a dependency of the App target, building the Driver fails because a header is not found, and thus building the app fails.
This doesn't make much sense - why does building a target as a consequence of dependence on another target produce a different result from building the same target manually?
Has anyone else seen behavior like this? Have any hints on how to fix it?
I've tried comparing the detailed build logs, but they don't shed much light - the lines are very long and the build steps appear to be executed in a different order. One strange thing I notice is that although I am building on an M1 Mac, with "build active architectures only" set to YES for both targets, in the Driver-target-only case, Driver.cpp is compiled for arm64, while in the failure case, Driver.cpp is compiled for x86_64. That doesn't make any sense to me either.
I've alluded to this before in earlier posts, and there are posts from others about the same behavior, e.g.
https://developer.apple.com/forums/thread/759845
and I've filed some bugs related to the behavior:
FB20212935
FB19451832
FB19450508
FB19450162
FB19449747
Our company owns the USB vendor IDs X and Y. We've been granted a USB transport entitlement for both of those IDs.
The crux of the problem is that I want to build a driver for USB vendor ID Y. Xcode's well-hidden auto-generated provisioning profile for my driver contains
com.apple.developer.driverkit.transport.usb:
{
idVendor = X;
}
which is obviously not what I want. Xcode fails to provision the target.
But I have another, much older project with an auto-generated provisioning profile containing
com.apple.developer.driverkit.transport.usb:
{
idVendor = X;
}, {
idVendor = Y;
}
I can build a driver for idVendor Y without problems in this project. But that doesn't help me with my new project.
What can I do to fix this? Do I need to request our entitlements again? I fear that if I do so, something will get lost in the process. Is there a way to inspect what we have already been granted? I can't see a "managed entitlements" section on the account portal. I can go through the motions of making a new App ID, and then I can see that some Capability Requests have been "Assigned", but I don't see what their values are.
A second question I have is about the userclient-access entitlement. Is this tied to the bundle ID of the app which claims the access? In other words, if I have two drivers
com.mycompany.app1.driver1
com.mycompany.app2.driver2
and I would like to have com.mycompany.app1 communicate with com.mycompany.app1.driver1, I would ask for the com.apple.developer.driverkit.userclient-access capability for com.mycompany.app1.driver1. But must I request that access for each specific app bundle ID that will talk to that driver, or once the entitlement is granted, can I use com.apple.developer.driverkit.userclient-access = { com.mycompany.app1.driver1 } in any of my apps?
I've watched the video of WWDC 2019 session 702, System Extensions and DriverKit, and I'm still a little puzzled. For instance, what's the point of USBDriverKit, that is, why would I use it in preference to the already extant user-mode USB APIs? The demo shows an extension that does nothing: it logs to the debugger, but it doesn't provide any services to multiple clients in the system. In a KEXT, those services are provided by publishing them in the IORegistry; they provide well-known interfaces in the kernel to which a well-known user client can connect. If my extension ships in my own app, and provides services only to that app, I may as well implement the extension's functions directly in my app.
How does my app (or more importantly, a third-party app) communicate with my dext? That wasn't covered in session 702. Neither was the case of replacing or augmenting an existing system driver, for example filtering the data passing through a USB mass storage driver based on sideband data which the standard system driver cannot convey. For a kext, I would simply call registerService() and the rest of the stack would be built on top of my driver.
Is the sample code for the demo of session 702 available? Any other sample code for DriverKit?
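For what it's worth, the user-space side of talking to a dext appears to be the same IOKit user-client machinery a kext would use; something along these lines (a sketch only; the service name and method selector are placeholders for whatever the dext's personality and user client actually publish):

// Link against IOKit.framework.
#include <IOKit/IOKitLib.h>
#include <mach/mach.h>
#include <cstdio>

int main()
{
    // Match the dext's IOService by the name its personality publishes.
    // (kIOMasterPortDefault on older SDKs.)
    io_service_t service = IOServiceGetMatchingService(kIOMainPortDefault,
                                                       IOServiceNameMatching("MyDriver"));
    if (service == IO_OBJECT_NULL) {
        fprintf(stderr, "dext service not found\n");
        return 1;
    }

    io_connect_t connection = IO_OBJECT_NULL;
    kern_return_t ret = IOServiceOpen(service, mach_task_self(), 0, &connection);
    IOObjectRelease(service);
    if (ret != kIOReturnSuccess) {
        fprintf(stderr, "IOServiceOpen failed: 0x%x\n", ret);
        return 1;
    }

    // Call a (hypothetical) external method on the dext's IOUserClient.
    uint64_t input = 42;
    uint64_t output = 0;
    size_t outputSize = sizeof(output);
    ret = IOConnectCallStructMethod(connection, 0 /* selector */,
                                    &input, sizeof(input),
                                    &output, &outputSize);
    printf("method returned 0x%x, output %llu\n", ret, (unsigned long long)output);

    IOServiceClose(connection);
    return 0;
}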
I added a Camera Extension to my app, using the template in Xcode 13.3.1. codesign tells me that the app and its embedded system extension are correctly signed, and their entitlements seem to be okay. But when I submit an activation request for the extension, it returns with this failure:
error: Error Domain=OSSystemExtensionErrorDomain Code=9 "(null)"
localized failure reason: (null)
localizedDescription: The operation couldn’t be completed. (OSSystemExtensionErrorDomain error 9.)
localizedRecoverySuggestion: (null)
What could be the reason? Code 9 appears to mean a "validation error", but how do I figure out what is invalid?
My goal is to implement a moving background in a virtual camera, implemented as a Camera Extension, on macOS 13 and later. The moving background is available to the extension as an H.264 file in its bundle.
I thought I could create an AVAsset from the movie's URL, make an AVPlayerItem from the asset, attach an AVQueuePlayer to the item, then attach an AVPlayerLooper to the queue player.
I make an AVPlayerItemVideoOutput and add it to each of the looper's items, and set a delegate on the video output.
This works in a normal app, which I use as a convenient environment to debug my extension code. In my camera video rendering loop, I check self.videoOutput.hasNewPixelBuffer, which returns true at regular intervals, and I can fetch video frames with the video output's copyPixelBuffer and composite those frames with the camera frames.
However, it doesn't work in an extension - hasNewPixelBuffer is never true. The looping player returns 'failed', with an error which simply says "the operation could not be completed". I've tried simplifying things by removing the AVPlayerLooper and using an AVPlayer instead of an AVQueuePlayer, so the movie would only play once through. But still, I never get any frames in the extension.
Could this be a sandbox thing, because an AVPlayer usually renders to a user interface, and camera extensions don't have UIs?
My fallback solution is to use an AVAssetImageGenerator, which I drive by firing off a Task for each frame; each time I want to render one, I ask for another frame to keep the pipeline full. Unfortunately the Tasks don't finish in the same order they are started, so I have to build frame-reordering logic into the frame buffer (something which a player would handle for me). I'm also not sure whether the AVAssetImageGenerator is taking advantage of any hardware acceleration, and it seems inefficient because each Task is for one frame only and cannot maintain any state from previous frames.
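For illustration, the reordering amounts to something like the buffer below (plain C++; in the real code the payload would be a CVPixelBuffer or CGImage and the index would come from the frame's presentation time, both of which are assumptions here):

#include <cstdint>
#include <map>
#include <optional>

// Holds frames that completed out of order and releases them strictly in index order.
template <typename Frame>
class ReorderBuffer {
public:
    void push(uint64_t index, Frame frame)
    {
        pending_.emplace(index, std::move(frame));
    }

    // Returns the next frame only if it is the one we expect; otherwise empty.
    std::optional<Frame> popNext()
    {
        auto it = pending_.find(nextIndex_);
        if (it == pending_.end()) {
            return std::nullopt;   // still waiting for an earlier frame
        }
        Frame frame = std::move(it->second);
        pending_.erase(it);
        ++nextIndex_;
        return frame;
    }

private:
    std::map<uint64_t, Frame> pending_; // frames keyed by their index
    uint64_t nextIndex_ = 0;            // index of the next frame to hand out
};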
Perhaps there's a much simpler way to do this and I'm just missing it? Anyone?
I used to have a static library; I turned it into a dynamic library, which was more difficult than I expected.
I already had a small project, containing a command line tool which loads a dynamic library. This was created directly from Xcode 15.3's macOS app and dynamic library templates.
In that project, the Dynamic Library Install Name Base is set to @rpath. Note that there is nothing in the Dynamic Library Install Name target setting itself, but it resolves to @rpath/libCLI_library.dylib.
If I click on the setting once, I can see the resolved value; clicking on it again shows how the value is generated.
The expression it is generated from is $(LD_DYLIB_INSTALL_NAME_$(LLVM_TARGET_TRIPLE_VENDOR):default=$(EXECUTABLE_PATH))
Using a Run Script phase I can see the environment variables, which include
LLVM_TARGET_TRIPLE_VENDOR = "apple", and LD_DYLIB_INSTALL_NAME_apple = "@rpath/libCLI_.dylib"
If I look at the environment variables in my other dylib project, which started life as a static library, LLVM_TARGET_TRIPLE_VENDOR is set to "apple", but there is no LD_DYLIB_INSTALL_NAME_apple variable at all. So even if I paste the expression above into the LD_DYLIB_INSTALL_NAME setting, it does me no good, because it evaluates to EXECUTABLE_PATH, which is libXXX.dylib, whereas I'd like @rpath/libXXX.dylib.
So my questions are:
where does LD_DYLIB_INSTALL_NAME_apple come from?
where does the magic invisible expression for Dynamic Library Install Name come from?
the quick help for Dynamic Library Install Name mentions "if this option is not specified, the -o path will be used" - what build setting is that?
I have a USBDriverKit driver which tries to send vendor-specific DeviceRequests to the IOUSBHostDevice. A few short months ago this worked, and as far as I know, the only change since it last worked is an update to macOS 14.5.
The dext user client calls through to the driver, which wants to do this:
ivars->mDevice->Open()
ivars->mDevice->DeviceRequest()
ivars->mDevice->Close()
The problem is that the call to Open returns 0xe00002cd, which is kIOReturnNotOpen. If I don't call Open, but simply call DeviceRequest, I get the same result, kIOReturnNotOpen, which I would expect.
I'm pretty sure that in macOS 14.4, the call to Open() did not return an error. I haven't seen any system logs that clue me in to why Open() is failing. It would be handy if the error gave me some reason why the device cannot be opened.
Has anyone else seen this? I've searched high and low and I can't find any examples of dexts which actually do anything at all with a USB device.
We have an iPad app which can write to user-specified locations on USB-connected storage devices.
On unmanaged devices, this works just fine.
However, when the device is under MDM, although the Files app can see the external USB storage device, it does not show up in the file browser in our own app.
There's a restriction called "allowFilesUSBDriveAccess" which is set to true (the default), but there's no restriction called "allowOtherAppsUSBDriveAccess".
Are MDM-managed iPads simply not allowed to access USB drives (except through the Files app)?
The presentation "Create audio drivers with DriverKit" from WWDC 2021 demonstrates how to use a dext to implement a virtual audio driver. It also says "If a virtual audio driver or device is all that is needed, the audio server plug-in driver model should continue to be used."
Indeed, in AudioDriverKit/AudioDriverKitTypes.h, there is no IOUserAudioTransportType Virtual, although CoreAudio/AudioHardwareBase.h includes kAudioDeviceTransportTypeVirtual.
For one of our products, we require virtual devices to implement a software loopback "cable". We've implemented this using the "traditional" HAL plugin, and as a proof-of-concept, also using a dext. In the dext, I tried setting the transport type to 'virt', which seems to only have the effect of changing the icon shown in Audio Midi Setup.
HAL plugins require an installer, and the installer has to kill coreaudiod in a post-install script. You have to turn off SIP to debug them. Just like AudioDriverKit drivers, they are out-of-process and run in a process not owned by the hosting app. Our HAL plugin's interface is property based; we had to write a lot of boiler-plate code to implement required properties. Writing an AudioDriverKit driver is in most respects easier - a lot of the scaffolding is implemented in the base driver, which we only alter where required. Debugging and installation is much easier.
The dext works just fine, as far as we can ascertain, just as well as a HAL plugin.
So, my question is - is the advice to use a HAL plugin for a virtual device still correct in 2025? And if so, what's the objection? We'd really prefer to ship the AudioDriverKit virtual audio device.
I'm posting this here after reading Quinn's post here: https://developer.apple.com/forums/thread/799000
The above entitlement is mentioned in IOUSBHostControllerInterface.h.
It isn't an entitlement one can add using the + button on the Capabilities panel in Xcode. If I try to add it by hand, Xcode complains that it isn't in my profile.
Is this a managed entitlement?
We'd like to create a local USB "device" to represent a real device reachable over a network.
I have a non-shipping internal test app which is macOS only. It uses AppKit and .xib files to describe the UI.
On Sonoma, the app renders with most of its UI quite blurry, as if a 10 pixel Gaussian blur were applied to it. The blur is applied to entire views, not just the text. It doesn't vary with screen resolution. I observed this behavior with one of the Sonoma betas but I think it went away when I re-launched the app - at any rate, I forgot about it.
I've updated my dev machine to the shipping Sonoma and the problem is extant. I opened up the .xib file in Xcode and the blurriness is visible there too. I haven't applied any effect layers to my UI.
Not all of the views in my UI are blurry.
Has anyone else seen this?
I have an Xcode project (generated from Qt) which is signed by a post-processing script.
It uses the invocation:
codesign -o runtime --sign "$(CODE_SIGN_IDENTITY)"
CODE_SIGN_IDENTITY is set to "Apple Development" in the Build Settings for the target.
The signing step fails with this complaint:
Apple Development: ambiguous (matches "Apple Development: <my name> (an ID)" and "Apple Development: <my company email> (another ID)" in login.keychain-db)
It is true, I do have two Apple Development certificates. I thought one is for personal development (when I pick the personal team) and the other for company development (when I pick the company team).
I have other Xcode projects (built "by hand") which have CODE_SIGN_IDENTITY set to "Apple Development" and with Automatic signing turned on, and they build just fine, even though I have two certificates with common names beginning "Apple Development".
However, when I look at the build log of those regular Xcode projects, which are signed by Xcode rather than in a post-processing script, the Signing step logs this:
Signing Identity: Apple Development: (an ID)
not simply "Apple Development". Xcode seems to have resolved the ambiguity all on its own before calling codesign. It then calls codesign using the hash of the certificate as its identifier.
How can I emulate Xcode's behavior here? The post-processing script runs on different developers' machines; they all have multiple "Apple Development" certificates, and they are all different from one another.
I've made a dext and a user client that overrides IOUserSCSIPeripheralDeviceType00, with the goal of writing firmware to the device. I can gain and relinquish exclusive access to the device, and I can call UserReportMediumBlockSize and get back a sensible answer (512).
I can build command parameters with the INQUIRY macro from IOUserSCSIPeripheralDeviceHelper.h and send that command successfully using UserSendCDB, and I receive sensible-looking Inquiry data from the device.
However, what I really want to do is send a WriteBuffer command (opcode 0x3B), and that doesn't work. I have yet to put a bus analyzer on it, but I don't think the command goes out on the bus - there's no valid sense data, and the error returned is 0xe00002bc, or kIOReturnError, which isn't helpful.
This is the code I have which doesn't work.
kern_return_t driver::writeChunk(const char * buf, size_t atOffset, size_t length, bool lastOne)
{
    DebugMsg("writeChunk %p at %ld for %ld", buf, atOffset, length);
    SCSIType00OutParameters outParameters;
    SCSIType00InParameters response;
    memset(&outParameters, 0, sizeof(outParameters));
    memset(&response, 0, sizeof(response));
    SetCommandCDB(&outParameters.fCommandDescriptorBlock,
                  0x3B,                   // byte 0, opcode WriteBuffer command
                  lastOne ? 0x0E : 0x0F,  // byte 1 mode: E=save deferred, F = download and defer save
                  0,                      // byte 2 bufferID
                  (atOffset >> 16),       // byte 3
                  (atOffset >> 8),        // byte 4
                  atOffset,               // byte 5
                  (length >> 16),         // byte 6
                  (length >> 8),          // byte 7
                  length,                 // byte 8
                  0,                      // control, byte 9
                  0, 0, 0, 0, 0, 0);      // bytes 10..15
    outParameters.fLogicalUnitNumber = 0;
    outParameters.fBufferDirection = kIOMemoryDirectionOut;
    outParameters.fDataTransferDirection = kSCSIDataTransfer_FromInitiatorToTarget;
    outParameters.fTimeoutDuration = 1000; // milliseconds
    outParameters.fRequestedByteCountOfTransfer = length;
    outParameters.fDataBufferAddr = reinterpret_cast<uint64_t>(buf);
    uint8_t senseBuffer[255] = {0};
    outParameters.fSenseBufferAddr = reinterpret_cast<uint64_t>(senseBuffer);
    outParameters.fSenseLengthRequested = sizeof(senseBuffer);
    kern_return_t retVal = UserSendCDB(outParameters, &response);
    return retVal;
}