Kevin's Guide to DEXT Signing
The question of "How do I sign a DEXT?" comes up a lot, so this post is my attempt to describe both what the issues are and what the best current solutions are. So...
The Problems:
1. When DEXTs were originally introduced, the recommended development signing process required disabling SIP and local signing. There is a newer, much simpler process built on Xcode's integrated code-signing support; however, that newer process has not yet been integrated into the documentation library. In addition, while the older flow still works, many of the details it describes are no longer correct due to changes to Xcode and the developer portal.
2. DriverKit's use of individually customized entitlements is different from the other entitlements on our platform, and Xcode's support for it is somewhat incomplete and buggy. The situation has improved considerably over time, particularly as of Xcode 15 and Xcode 16, but there are still issues that are not fully resolved.
To address #1, we introduced "development" entitlement variants of all DriverKit entitlements. These entitlement variants are ONLY available in development-signed builds, but they're available on all paid developer accounts without any special approval. They also allow a DEXT to match against any hardware, greatly simplifying working with development or prototype hardware which may not match the configuration of a final product.
Unfortunately, this also means that DEXT developers will always have at least two entitlement variants (the public development variant and the "private" approved entitlement), which is what then causes the problem I mentioned in #2.
The Automatic Solution:
If you're using Xcode 16 or later, then Xcode's automatic code signing support will work for all DEXT families, with the exception of distribution-signing the PCI and USB families.
For completeness, here is how that Automatic flow should work:
Change the code signing configuration to "Automatic".
Add the capability using Xcode.
If you've been approved for one of these entitlements, the one oddity you'll see is that adding your approved capability will add both the approved AND the development variant, while deleting either will delete both. This is a visual side effect of #2 above; however, aside from the exception described above, it can be ignored.
Similarly, you can sign distribution builds by creating a build archive and then exporting the build using the standard Xcode flow.
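For completeness, the same archive-and-export flow can also be driven from the command line with xcodebuild. This is just a rough sketch; the scheme name, paths, and ExportOptions.plist contents below are placeholders, not values from any real project:
# Archive the app that embeds the DEXT (scheme name is hypothetical).
xcodebuild archive -scheme MyDriverApp -archivePath build/MyDriverApp.xcarchive
# Export the archive for distribution. ExportOptions.plist specifies the
# distribution method and signing options, analogous to the choices in the Organizer UI.
xcodebuild -exportArchive -archivePath build/MyDriverApp.xcarchive \
  -exportOptionsPlist ExportOptions.plist -exportPath build/export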
__
Kevin Elliott
DTS Engineer, CoreOS/Hardware
Note: This document is specifically focused on what happens after a DEXT has passed its initial code-signing checks. Code-signing issues are dealt with in other posts.
Preliminary Guidance:
Using and understanding DriverKit basically requires understanding IOKit, something which isn't entirely clear in our documentation. The good news here is that IOKit actually does have fairly good "foundational" documentation in the documentation archive. Here are a few of the documents I'd take a look at:
IOKit Fundamentals
IOKit Device Driver Design Guidelines
Accessing Hardware From Applications
Special mention goes to QA1075, "Making sense of IOKit error codes", which I happened to notice today and which documents the IOReturn error format (which is a bit weird on first review).
Those documents do not cover the full DEXT loading process, but they are the foundation of how all of this actually works.
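As a quick illustration of that IOReturn format: an IOReturn packs a system, a subsystem, and a code into a single 32-bit value, and the err_get_system, err_get_sub, and err_get_code macros (pulled in via <IOKit/IOReturn.h>) extract those fields. A minimal user-space sketch, with labels of my own choosing:
#include <stdio.h>
#include <IOKit/IOReturn.h>

// Split an IOReturn into the fields QA1075 describes:
// system (6 bits), subsystem (12 bits), and code (14 bits).
static void DumpIOReturn(IOReturn err)
{
    printf("raw 0x%x -> system 0x%x, subsystem 0x%x, code 0x%x\n",
           err, err_get_system(err), err_get_sub(err), err_get_code(err));
}

int main(void)
{
    DumpIOReturn(kIOReturnUnsupported); // prints system 0x38 (sys_iokit), code 0x2c7
    return 0;
}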
Understanding the IOKitPersonalities Dictionary
The first thing to understand here is that the "IOKitPersonalities" dictionary is called that because it is, in fact, a fully valid IOKit personality dictionary. That is, what the system actually uses that dictionary for is to:
Perform a standard IOKit match and load cycle in the kernel.
The final driver in the kernel then uses the DEXT-specific data to launch and run your DEXT process outside the kernel.
So, working through the critical keys in that dictionary:
"IOProviderClass"-> This is the in-kernel class that your in-kernel driver loads "on top" of. The IOKit documentation and naming convention uses the term "Nub", but the naming convention is not consistent enough that it applies to all cases.
"IOClass"-> This is the in-kernel class that your driver loads on top of. This is where things can become a bit confused, as some families work by:
Routing all activity through the provider reference so that the DEXT-specific class does not matter (PCIDriverKit).
Having the DEXT subclass a specific DriverKit class which corresponds to a specific kernel driver (SCSIPeripheralsDriverKit).
This distinction is described in the documentation, but it's easy to overlook if you don't understand what's going on. For example, compare PCIDriverKit:
"When the system loads your custom PCI driver, it passes an IOPCIDevice object as the provider to your driver. Use that object to read and write the configuration and memory of your PCI hardware."
Versus SCSIPeripheralsDriverKit:
Develop your driver by subclassing IOUserSCSIPeripheralDeviceType00 or IOUserSCSIPeripheralDeviceType05, depending on whether your device works with SCSI Block Commands (SBC) or SCSI Multimedia Commands (SMC), respectively. In your subclass, override all methods the framework declares as pure virtual.
The reason these differences exist actually comes from the relationship and interactions between the DEXT families. Case in point, PCIDriverKit doesn't require a specific subclass because it wants SCSIControllerDriverKit DEXTs to be able to directly load "above" it.
Note that a common mistake many developers make is leaving "IOUserService" in place when they should have specified a family-specific subclass (case 2 above). This is an undocumented implementation detail, but if there is a mismatch between your DEXT driver ("IOUserSCSIPeripheralDeviceType00") and your kernel driver ("IOUserService"), you end up trying to call unimplemented kernel methods. When a method is "missing" like that, the codegen system handles it by returning kIOReturnUnsupported.
One special case here is the "IOUserResources" provider. This class is the DEXT equivalent of "IOResources" in the kernel. In both cases, these classes exist as an attachment point for objects which don't otherwise have a provider. It's specifically used by the sample "Communicating between a DriverKit extension and a client app" to allow that sample to load on all hardware, but it is not something the vast majority of DEXTs will use.
Following on from that point, most DEXTs should NOT include "IOMatchCategory". Quoting IOKit Fundamentals:
"Important: Any driver that declares IOResources as the value of its IOProviderClass key must also include in its personality the IOMatchCategory key and a private match category value. This prevents the driver from matching exclusively on the IOResources nub and thereby preventing other drivers from matching on it. It also prevents the driver from having to compete with all other drivers that need to match on IOResources. The value of the IOMatchCategory property should be identical to the value of the driver's IOClass property, which is the driver’s class name in reverse-DNS notation with underbars instead of dots, such as com_MyCompany_driver_MyDriver."
The critical point here is that including IOMatchCategory does this:
"This prevents the driver from matching exclusively on the IOResources nub and thereby preventing other drivers from matching on it."
The problem here is that this is actually the exceptional case. For a typical DEXT, including IOMatchCategory means that a system driver will load "beside" the DEXT and then open the provider, blocking the DEXT's access and breaking it.
DEXT Launching
The key point here is that the entire process above is the standard IOKit loading process used by all KEXTs. Once that process finishes, what actually happens next is the DEXT-specific part of this process:
IOUserServerName-> This key is the bundle ID of your DEXT, which the system uses to find your DEXT target.
IOUserClass-> This is the name of the class the system instantiates after launching your DEXT. Note that this directly mimics how IOKit loading works.
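Putting those keys together, here is roughly what a complete personality looks like in a DEXT's Info.plist. This example is hypothetical (the class and bundle names are invented), and it uses the IOUserResources provider described above, which is why it also carries IOMatchCategory; a DEXT that matches real hardware would use its family's provider class and matching keys and would omit IOMatchCategory:
<key>IOKitPersonalities</key>
<dict>
    <key>MyDriver</key>
    <dict>
        <key>CFBundleIdentifier</key>
        <string>com.example.MyDriver</string>
        <!-- Standard IOKit matching: the in-kernel provider to match on,
             and the in-kernel class that loads on top of it. -->
        <key>IOProviderClass</key>
        <string>IOUserResources</string>
        <key>IOClass</key>
        <string>IOUserService</string>
        <!-- Needed only because the provider is IOUserResources. -->
        <key>IOMatchCategory</key>
        <string>com_example_MyDriver</string>
        <!-- DEXT-specific: which bundle to launch and which DEXT class to instantiate. -->
        <key>IOUserServerName</key>
        <string>com.example.MyDriver</string>
        <key>IOUserClass</key>
        <string>MyDriver</string>
    </dict>
</dict>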
Keep in mind that the second, DEXT-specific, half of this process is the first point at which your actual code becomes relevant. Any issue before that point will ONLY be visible through kernel logging or possibly the IORegistry.
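If you need to check that first half of the process, something like the following is usually enough to see whether the personality matched and the kernel got as far as launching the DEXT (the driver name here is a placeholder):
# Look for the driver's entry (and its provider) in the IORegistry.
ioreg -l -w 0 | grep -i MyDriver
# Watch kernel-side matching and loading activity live while plugging in the device.
log stream --predicate 'process == "kernel"' --info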
__
Kevin Elliott
DTS Engineer, CoreOS/Hardware
I'm trying to implement a virtual serial port driver for my ham radio projects, which require emulating some serial port devices, and I need a "backend" to translate the commands received by the virtual serial port into some network-based communications. I think the best way to do that is to subclass IOUserSerial? Based on the available docs on this class (https://developer.apple.com/documentation/serialdriverkit/iouserserial), I've done the basic implementation below. When the driver gets loaded, I can see something like tty.serial-1000008DD in /dev, and I can use picocom to do I/O on the virtual serial port. And I see TxDataAvailable() gets called every time I type a character in picocom.
The problems are, however: firstly, when TxDataAvailable() is called, the TX buffer is all zeros, so although the driver knows there is some incoming data received from picocom, it cannot actually see the data in either the Tx or Rx buffers.
Secondly, I couldn't figure out how to notify the system that there is data available to send back to picocom. I call RxDataAvailable(), but nothing appears in picocom, and RxFreeSpaceAvailable() never gets called back. So I think I must be doing something wrong somewhere. I'd really appreciate it if anyone could point out how I should fix it, many thanks!
VirtualSerialPortDriver.cpp:
constexpr int bufferSize = 2048;
using SerialPortInterface = driverkit::serial::SerialPortInterface;
struct VirtualSerialPortDriver_IVars
{
IOBufferMemoryDescriptor *ifmd, *rxq, *txq;
SerialPortInterface *interface;
uint64_t rx_buf, tx_buf;
bool dtr, rts;
};
bool VirtualSerialPortDriver::init()
{
bool result = false;
result = super::init();
if (result != true)
{
goto Exit;
}
ivars = IONewZero(VirtualSerialPortDriver_IVars, 1);
if (ivars == nullptr)
{
goto Exit;
}
kern_return_t ret;
// Create the backing ring buffers (Create is a static factory on IOBufferMemoryDescriptor).
ret = IOBufferMemoryDescriptor::Create(kIOMemoryDirectionInOut, bufferSize, 0, &ivars->rxq);
if (ret != kIOReturnSuccess) {
goto Exit;
}
ret = IOBufferMemoryDescriptor::Create(kIOMemoryDirectionInOut, bufferSize, 0, &ivars->txq);
if (ret != kIOReturnSuccess) {
goto Exit;
}
IOAddressSegment ioaddrseg;
ivars->rxq->GetAddressRange(&ioaddrseg);
ivars->rx_buf = ioaddrseg.address;
ivars->txq->GetAddressRange(&ioaddrseg);
ivars->tx_buf = ioaddrseg.address;
return true;
Exit:
return false;
}
kern_return_t
IMPL(VirtualSerialPortDriver, HwActivate)
{
kern_return_t ret;
ret = HwActivate(SUPERDISPATCH);
if (ret != kIOReturnSuccess) {
goto Exit;
}
// Loopback, set CTS to RTS, set DSR and DCD to DTR
ret = SetModemStatus(ivars->rts, ivars->dtr, false, ivars->dtr);
if (ret != kIOReturnSuccess) {
goto Exit;
}
Exit:
return ret;
}
kern_return_t
IMPL(VirtualSerialPortDriver, HwDeactivate)
{
kern_return_t ret;
ret = HwDeactivate(SUPERDISPATCH);
if (ret != kIOReturnSuccess) {
goto Exit;
}
Exit:
return ret;
}
kern_return_t
IMPL(VirtualSerialPortDriver, Start)
{
kern_return_t ret;
ret = Start(provider, SUPERDISPATCH);
if (ret != kIOReturnSuccess) {
return ret;
}
IOMemoryDescriptor *rxq_, *txq_;
ret = ConnectQueues(&ivars->ifmd, &rxq_, &txq_, ivars->rxq, ivars->txq, 0, 0, 11, 11);
if (ret != kIOReturnSuccess) {
return ret;
}
IOAddressSegment ioaddrseg;
ivars->ifmd->GetAddressRange(&ioaddrseg);
ivars->interface = reinterpret_cast<SerialPortInterface*>(ioaddrseg.address);
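// SerialPortInterface is the shared-memory layout set up by ConnectQueues(); its
// producer/consumer indices (txPI/txCI, rxPI/rxCI) are what the handlers below read and update.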
SerialPortInterface &intf = *ivars->interface;
ret = RegisterService();
if (ret != kIOReturnSuccess) {
goto Exit;
}
TxFreeSpaceAvailable();
Exit:
return ret;
}
void
IMPL(VirtualSerialPortDriver, TxDataAvailable)
{
SerialPortInterface &intf = *ivars->interface;
// Loopback
// FIXME consider wrapped case
size_t tx_buf_sz = intf.txPI - intf.txCI;
void *src = reinterpret_cast<void *>(ivars->tx_buf + intf.txCI);
// char src[] = "Hello, World!";
void *dest = reinterpret_cast<void *>(ivars->rx_buf + intf.rxPI);
memcpy(dest, src, tx_buf_sz);
intf.rxPI += tx_buf_sz;
RxDataAvailable();
intf.txCI = intf.txPI;
TxFreeSpaceAvailable();
Log("[TX Buf]: %{public}s", reinterpret_cast<char *>(ivars->tx_buf));
Log("[RX Buf]: %{public}s", reinterpret_cast<char *>(ivars->rx_buf));
// dmesg confirms both buffers are all-zero
Log("[TX] txPI: %d, txCI: %d, rxPI: %d, rxCI: %d, txqoffset: %d, rxqoffset: %d, txlogsz: %d, rxlogsz: %d",
intf.txPI, intf.txCI, intf.rxPI, intf.rxCI, intf.txqoffset, intf.rxqoffset, intf.txqlogsz, intf.rxqlogsz);
}
void
IMPL(VirtualSerialPortDriver, RxFreeSpaceAvailable)
{
Log("RxFreeSpaceAvailable() called!");
}
kern_return_t IMPL(VirtualSerialPortDriver,HwResetFIFO){
Log("HwResetFIFO() called with tx: %d, rx: %d!", tx, rx);
kern_return_t ret = kIOReturnSuccess;
return ret;
}
kern_return_t IMPL(VirtualSerialPortDriver,HwSendBreak){
Log("HwSendBreak() called!");
kern_return_t ret = kIOReturnSuccess;
return ret;
}
kern_return_t IMPL(VirtualSerialPortDriver,HwProgramUART){
Log("HwProgramUART() called, BaudRate: %u, nD: %d, nS: %d, P: %d!", baudRate, nDataBits, nHalfStopBits, parity);
kern_return_t ret = kIOReturnSuccess;
return ret;
}
kern_return_t IMPL(VirtualSerialPortDriver,HwProgramBaudRate){
Log("HwProgramBaudRate() called, BaudRate = %d!", baudRate);
kern_return_t ret = kIOReturnSuccess;
return ret;
}
kern_return_t IMPL(VirtualSerialPortDriver,HwProgramMCR){
Log("HwProgramMCR() called, DTR: %d, RTS: %d!", dtr, rts);
ivars->dtr = dtr;
ivars->rts = rts;
kern_return_t ret = kIOReturnSuccess;
Exit:
return ret;
}
kern_return_t IMPL(VirtualSerialPortDriver, HwGetModemStatus){
*cts = ivars->rts;
*dsr = ivars->dtr;
*ri = false;
*dcd = ivars->dtr;
Log("HwGetModemStatus() called, returning CTS=%d, DSR=%d, RI=%d, DCD=%d!", *cts, *dsr, *ri, *dcd);
kern_return_t ret = kIOReturnSuccess;
return ret;
}
kern_return_t IMPL(VirtualSerialPortDriver,HwProgramLatencyTimer){
Log("HwProgramLatencyTimer() called!");
kern_return_t ret = kIOReturnSuccess;
return ret;
}
kern_return_t IMPL(VirtualSerialPortDriver,HwProgramFlowControl){
Log("HwProgramFlowControl() called! arg: %u, xon: %d, xoff: %d", arg, xon, xoff);
kern_return_t ret = kIOReturnSuccess;
Exit:
return ret;
}
I'm working on a DriverKit driver. I have it running on macOS, including a very simple client app written in SwiftUI. Everything is working fine there. I've added iPadOS as a destination for the app as demonstrated in the WWDC video on DriverKit for iPadOS. The app builds and runs on my iPad, as expected (after a little work to conditionalize out my use of SystemExtensions.framework for installation on macOS). However, after installing and running the app on an iPad, the driver does not show up in Settings->General, nor in the app-specific settings pane triggered by the inclusion of a settings bundle in the app.
I've confirmed that the dext is indeed being included in the app bundle when built for iPadOS (in MyApp.app/SystemExtensions/com.me.MyApp.MyDriver.dext). I also can see in the build log that there's a validation step for the dext, and that seems to be succeeding.
I don't know why the driver isn't being discovered -- or in any case surfaced to the user -- when the app is installed on the iPad. Has anyone faced this problem and solved it? Are there ways to troubleshoot installation/discovery of an embedded DriverKit extension on iOS? Unlike on macOS, I don't really see any relevant console messages.
How does VMWare access USB devices without having any specifics of the USB device? Does it use the same profile/entitlement process, or does it take a different approach?
Hello forum, I'm trying to build communication between a non-MFi HID device (say, a keyboard with a USB-C port) and an iOS device over an MFi-licensed cable with Swift. What framework would you suggest?
The USB-C cable is MFi-licensed.
The keyboard is not MFi-licensed.
Investigating a kernel panic, I discovered that Apple Silicon panic traces do not work with the symbolication process I know, and I have not found proper documentation that corrects this situation.
The attached file is an identity-removed panic, generated by causing an intentional panic (dereferencing nullptr), so that I know what functions to expect in the call stack. This is cut-and-pasted from the "Report To Apple" dialog that appears after the reboot:
panic_1_4_21_b.txt
To start, I download and install the matching KDK (in this case KDK_14.6.1_23G93.kdk), identified from this line:
OS version: 23G93
Kernel version: Darwin Kernel Version 23.6.0: Mon Jul 29 21:14:04 PDT 2024; root:xnu-10063.141.2~1/RELEASE_ARM64_T8122
Then start lldb from Terminal, using this command:
bash_prompt % lldb -arch arm64e /Library/Developer/KDKs/KDK_14.6.1_23G93.kdk/System/Library/Kernels/kernel.release.t8122
Next I load the remaining scripts per the instructions from lldb:
(lldb) settings set target.load-script-from-symbol-file true
I need to know what address to load my kext symbols to, which I read from this line of the panic log, after the @ symbol:
com.company.product(1.4.21d119)[92BABD94-80A4-3F6D-857A-3240E4DA8009]@0xfffffe001203bfd0->0xfffffe00120533ab
I am using a debug build of my kext, so the DWARF symbols are part of the binary. I use this line to load the symbols into the lldb session:
(lldb) addkext -F /Library/Extensions/KextName.kext/Contents/MacOS/KextName 0xfffffe001203bfd0
And now I should be able to use lldb image lookup to identify pointers on the stack that land within my kext. For example, the current PC at the moment of the crash lands within the kext (expected, because it was intentional):
(lldb) image lookup -a 0xfffffe001203fe10
Which gives the following incorrect result:
Address: KextName[0x0000000000003e40] (KextName.__TEXT.__cstring + 14456)
Summary: "ffer has %d retains\n"
That's not even a program instruction - that's within a cstring. No, that cstring isn't involved in anything pertaining to the intentional panic I am expecting to see.
Can someone please explain what I'm doing wrong and provide instructions that will give symbol information from a panic trace on an Apple Silicon Mac?
Disclaimers:
Yes I know IOPCIFamily is deprecated, I am in process of transitioning to DriverKit Dext from IOKit kext. Until then I must maintain the kext.
Terminal command "atos" provides similar incorrect results, and seems to not work with debug-built-binaries (only dSYM files)
Yes this is an intentional panic so that I can verify the symbolicate process before I move on to investigating an unexpected panic
I have set nvram boot-args to include keepsyms=1
I have tried (lldb) command script import lldb.macosx but get a result of error: no images in crash log (after the nvram settings)
We are looking for a solution (API, frameworks) that would allow us to block any type of external device, including storage devices, HIDs, network adapters, and Bluetooth devices, according to dynamic rules that come from a management server. This feature is important for endpoint security solution vendors, and it can be implemented on other platforms and on older versions of macOS using the IOKit framework and kexts.
I have found one solution that can control the usage only of "storage" devices, with the EndpointSecurity framework in conjunction with the DiskArbitration framework. This involves monitoring the MOUNT and OPEN events for /dev/disk files, checking devices as they appear, and ejecting them if they need to be blocked. Also, I have found the ES_EVENT_TYPE_AUTH_IOKIT_OPEN event in EndpointSecurity.framework, but it doesn't seem to be useful, at least not for my purposes, because ES doesn't provide AUTH events for some system daemons, such as configd (it only provides NOTIFY events). Furthermore, there are other ways to communicate with devices and their drivers apart from IOKit.
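To make the storage-only approach above concrete, here is a rough sketch of the EndpointSecurity side (heavily simplified; ShouldBlockDevice() stands in for the dynamic rules from the management server, and entitlements, lifetime management, and error handling are omitted):
#include <EndpointSecurity/EndpointSecurity.h>
#include <dispatch/dispatch.h>
#include <sys/mount.h>
#include <stdbool.h>
#include <stdio.h>

// Placeholder for the server-provided rules, e.g. keyed off fs->f_mntfromname (/dev/diskNsM).
static bool ShouldBlockDevice(const struct statfs *fs) {
    (void)fs;
    return false;
}

int main(void) {
    es_client_t *client = NULL;
    es_new_client_result_t res = es_new_client(&client,
        ^(es_client_t *c, const es_message_t *msg) {
            if (msg->event_type == ES_EVENT_TYPE_AUTH_MOUNT) {
                const struct statfs *fs = msg->event.mount.statfs;
                es_auth_result_t verdict =
                    ShouldBlockDevice(fs) ? ES_AUTH_RESULT_DENY : ES_AUTH_RESULT_ALLOW;
                es_respond_auth_result(c, msg, verdict, false /* don't cache */);
            }
        });
    if (res != ES_NEW_CLIENT_RESULT_SUCCESS) {
        fprintf(stderr, "es_new_client failed: %d\n", res);
        return 1;
    }
    es_event_type_t events[] = { ES_EVENT_TYPE_AUTH_MOUNT };
    if (es_subscribe(client, events, 1) != ES_RETURN_SUCCESS) {
        fprintf(stderr, "es_subscribe failed\n");
        return 1;
    }
    dispatch_main(); // keep the process alive to receive and respond to events
}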
DriverKit.framework does not provide the necessary functionality either, as it requires specific entitlements that are only available to certain vendors and devices. Therefore, it cannot be used to create universal drivers for all the devices that should be blocked.
Any advice would be greatly appreciated!
I would like to write a driver that supports our custom USB-C connected device, which provides a serial port interface. USBSerialDriverKit looks like the solution I need. Unfortunately, without a decent sample, I'm not sure how to accomplish this. The DriverKit documentation does a good job of telling me what APIs exist, but it is very light on semantic information and details about how to use all of these API elements. A function call with five unexplained parameters just isn't that useful to me.
Does anyone have or know of a resource that can help me figure out how to get started?
I am trying to add a few properties to an IOUSBHostDevice, but SetProperties is returning kIOReturnUnsupported. The reason I am trying to modify the IOUSBHostDevice's properties is so we can support a MacBook Air SuperDrive when it is attached to our docking station devices. The MacBook Air SuperDrive needs a high-powered port to run, and this driver will help the OS realize that our dock can support it.
I see that the documentation for SetProperties says:
The default implementation of this method returns kIOReturnUnsupported. You can override this method and use it to modify the set of properties and values as needed. The changes you make apply only to the current service.
Do I need to override IOUSBHostDevice? This is my current Start implementation (you can also see it in the Xcode project):
kern_return_t
IMPL(MyUserUSBHostDriver, Start)
{
kern_return_t ret = kIOReturnSuccess;
OSDictionary * prop = NULL;
OSDictionary * mergeProperties = NULL;
bool success = true;
os_log(OS_LOG_DEFAULT, "> %s", __FUNCTION__);
os_log(OS_LOG_DEFAULT, "%s:%d", __FUNCTION__, __LINE__);
ret = Start(provider, SUPERDISPATCH);
__Require(kIOReturnSuccess == ret, Exit);
os_log(OS_LOG_DEFAULT, "%s:%d", __FUNCTION__, __LINE__);
ivars->host = OSDynamicCast(IOUSBHostDevice, provider);
__Require_Action(NULL != ivars->host, Exit, ret = kIOReturnNoDevice);
os_log(OS_LOG_DEFAULT, "%s:%d", __FUNCTION__, __LINE__);
ret = ivars->host->Open(this, 0, 0);
__Require(kIOReturnSuccess == ret, Exit);
os_log(OS_LOG_DEFAULT, "%s:%d", __FUNCTION__, __LINE__);
ret = CopyProperties(&prop);
__Require(kIOReturnSuccess == ret, Exit);
__Require_Action(NULL != prop, Exit, ret = kIOReturnError);
os_log(OS_LOG_DEFAULT, "%s:%d", __FUNCTION__, __LINE__);
mergeProperties = OSDynamicCast(OSDictionary, prop->getObject("IOProviderMergeProperties"));
__Require_Action(NULL != mergeProperties, Exit, ret = kIOReturnError);
mergeProperties->retain();
os_log(OS_LOG_DEFAULT, "%s:%d", __FUNCTION__, __LINE__);
OSSafeReleaseNULL(prop);
ret = ivars->host->CopyProperties(&prop);
__Require(kIOReturnSuccess == ret, Exit);
__Require_Action(NULL != prop, Exit, ret = kIOReturnError);
os_log(OS_LOG_DEFAULT, "%s:%d", __FUNCTION__, __LINE__);
os_log(OS_LOG_DEFAULT, "%s : %s", "USB Product Name", ((OSString *) prop->getObject("USB Product Name"))->getCStringNoCopy());
os_log(OS_LOG_DEFAULT, "%s : %s", "USB Vendor Name", ((OSString *) prop->getObject("USB Vendor Name"))->getCStringNoCopy());
os_log(OS_LOG_DEFAULT, "%s:%d", __FUNCTION__, __LINE__);
success = prop->merge(mergeProperties);
__Require_Action(success, Exit, ret = kIOReturnError);
os_log(OS_LOG_DEFAULT, "%s:%d", __FUNCTION__, __LINE__);
ret = ivars->host->SetProperties(prop); // this is not working
__Require(kIOReturnSuccess == ret, Exit);
Exit:
OSSafeReleaseNULL(mergeProperties);
OSSafeReleaseNULL(prop);
os_log(OS_LOG_DEFAULT, "err ref %d", kIOReturnUnsupported);
os_log(OS_LOG_DEFAULT, "< %s %d", __FUNCTION__, ret);
return ret;
}
I have been working on a multi-platform multi-touch HID-standard digitizer clickpad device.
The device uses Bluetooth Low Energy (BLE) as its connectivity transport and advertises HID over GATT. To date, I have the device working successfully on Windows 11 as a multi-touch, gesture-capable click pad with no custom driver or app on Windows.
However, I have been having difficulty getting macOS to recognize and react to it as a HID-standard multi-touch click pad digitizer with either the standard Apple HID driver (AppleUserHIDEventDriver) or with a custom-coded driver extension (DEXT) modeled on the DTS stylus example and the IOHIDFamily open source driver(s).
The trackpad works with full-gesture support on Windows 11 and the descriptors seem to be compliant with the R23 Accessory Guidelines document, §15.
With the standard, matching Apple AppleUserHIDEventDriver HID driver, when enumerating using stock-standard HID mouse descriptors, the device works fine on macOS 14.7 "Sonoma" as a relative pointer device with scroll wheel capability (two finger swipe generates a HID scroll report) and a single button.
With the standard, matching Apple AppleUserHIDEventDriver HID driver, when enumerating using stock-standard HID digitizer click/touch pad descriptors (those same descriptors used successfully on Windows 11), the device does nothing. No button, no cursor, no gestures, nothing. Looking at ioreg -filtb, all of the key/value pairs for the driver match look correct.
Because, even with the Apple open source IOHIDFamily drivers noted above, we could get little visibility into what might be going wrong, I wrote a custom DriverKit/HIDDriverKit driver extension (DEXT), as noted above, based on the DTS HID stylus example and the open source IOHIDEventDriver.
With that custom driver, I can get a single button click from the click pad to work by dispatching button events to dispatchRelativePointerEvent; however, when parsing, processing, and dispatching HID digitizer touch finger (that is, transducer) events via IOUserHIDEventService::dispatchDigitizerTouchEvent, nothing happens.
If I log with:
% sudo log stream --info --debug --predicate '(subsystem == "com.apple.iohid")'
either using the standard AppleUserHIDEventDriver driver or our custom driver, we can see that our input events are tickling the IOHIDNXEventTranslatorSessionFilter HID event filter, so we know HID events are getting from the device into the macOS HID stack. This was further confirmed with the DTS Bluetooth PacketLogger app. Based on these events flowing in and hitting IOHIDNXEventTranslatorSessionFilter, using the standard AppleUserHIDEventDriver driver or our custom driver, clicks or click pad activity will either wake the display or system from sleep and activity will keep the display or system from going to sleep.
In short, whether with the stock driver or our custom driver, HID input reports come in over Bluetooth and get processed successfully; however, nothing happens—no pointer movement or gesture recognition.
STEPS TO REPRODUCE
For the standard AppleUserHIDEventDriver:
Pair the device with macOS 14.7 "Sonoma" using the Bluetooth menu.
Confirm that it is paired / bonded / connected in the Bluetooth menu.
Attempt to click or move one or more fingers on the touchpad surface.
Nothing happens.
For our custom driver:
Pair the device with macOS 14.7 "Sonoma" using the Bluetooth menu.
Confirm that it is paired / bonded / connected in the Bluetooth menu.
Attempt to click or move one or more fingers on the touchpad surface.
Clicks are correctly registered. With transducer movement, regardless of the number of fingers, nothing happens.
1) The situation:
1A) I am building both a DEXT and an SDK for still-imaging USB gadgets, targeting macOS >= 14 and iPadOS >= 17.
1B) One of the USB gadgets needs a warm-up period after plug-in (i.e., the end-user app must know the moment of plug-in with a precision of about 1 second).
2) The question: what is a rational way to implement 1B?
3) My speculative guess: since macOS is a BSD descendant, I expect that a "normal file" with a normal "file creation time" exists somewhere (perhaps reachable via MacPorts etc.). Such a "file creation time" (ideally accessible via IORegistryEntry... at the SDK level, or possibly via IOUSBHostInterface at the DEXT level) is what I'm after.
4) Additional constraints: essentially none. I can freely modify code at either the DEXT level (a descendant of IOUSBHostInterface) or the SDK level (IORegistryEntryGetRegistryEntryID, IORegistryEntry...).
I'm trying to use IOLog to print out a message from a dext. When I try to use IOLog, I get <private> in the log output, though I did not (or thought I did not) tag it as private. I have tried to update the Info.plist file for the dext according to https://developer.apple.com/forums/thread/705810, but that has not helped, or perhaps I am not defining it correctly since it's a dext. Has anyone else had this issue, and how did you fix it?
Hi there. I inadvertently deleted the Passwords app. The App Store is telling me restrictions are enabled so I can’t reinstall from the cloud. Not sure where to go from here. Help.
I have a USB DriverKit driver. When I use the log command below to capture logs, there are logs from my driver on my own M-series MacBook, where the driver is built using my developer account.
log stream | grep CompanyName
But on other Macs, like an (M-series) Mac mini, no logs are captured from the driver, even though the driver is communicating with the machine correctly. The only logs captured are from macOS regarding the CompanyName driver status/unload/load. The macOS version is Sonoma 14.7.2 and 14.7.3.
Please advise on how to get logs from the driver, since writing to files is not allowed in DriverKit. I need logs to troubleshoot on the Mac mini.
Thanks.
I have an app that captures a USB storage device and sends some commands to it. The app has a privileged helper tool which captures the USB device. Everything was working fine up to macOS 15.2, but the 15.3 update broke the functionality.
When the helper tool tries to capture the USB device, it is able to capture IOUSBHostDevice but fails to capture IOUSBHostInterface. The error is
Code: 3758097097; Domain: IOUSBHostErrorDomain; Description: Failed to create IOUSBHostInterface.; Reason: Failed [super init]
I have verified that UID, EUID, GID, and EGID = 0 for the helper process, so per the IOUSBHost documentation it should have worked. The code that causes the error inside the helper tool is:
func captureUSBInterface(interface: io_service_t) -> IOUSBHostInterface? {
let queue = DispatchQueue(label: "com.example.usbdevice.queue2")
var capturedInterface: IOUSBHostInterface?
do {
capturedInterface = try IOUSBHostInterface(__ioService: interface, options: .deviceCapture, queue: queue, interestHandler: nil)
} catch {
NSLog("Failed to capture USB interface: \(error)")
return nil
}
return capturedInterface
}
The app has sandbox=False and is distributed outside of the App Store.
Please advise (long-term, short-term solutions) on how to make this work.
We are developing an iOS app that communicates with a device using an NXP NTAG 5 chip in ISO15693 pass-through mode. While the app works flawlessly on older iPhone models (iPhone 8, SE, X) and most Android devices, we are experiencing severe reliability issues on iPhone 12, 13, 14, and 15.
Issue Summary
On newer iPhones (12–15), 90% of communication attempts fail.
Retry strategies do not work, as the NFC session is unexpectedly canceled while handling CoreNFC custom commands.
The issue is not consistent—sometimes all requests fail immediately, while other times, a batch of reads might succeed unexpectedly before failing again.
Technical Details
The failure occurs while executing the following request, which should return 256 bytes:
tag.customCommand(requestFlags: .highDataRate, customCommandCode: commandCode, customRequestParameters: Data(byteArray)) { (responseData, error) in
}
The returned error is:
-[NFCTagReaderSession transceive:tagUpdate:error:]:897 Error Domain=NFCError Code=100 "Tag connection lost" UserInfo={NSLocalizedDescription=Tag connection lost}
For reference, we tested a comparable STM ST25 chip in ISO15693 and NDEF mode, and the exact same issue occurs.
Observations and Debugging Attempts
Positioning of the NFC antenna has been tested extensively.
Disabling Bluetooth and Wi-Fi does not improve reliability.
Rebooting the device or waiting between attempts sometimes improves success rates but does not provide a structural fix.
When reading multiple blocks (e.g., 15 blocks of 256 bytes each):
The process often fails within the first three blocks.
After multiple failures, it may suddenly succeed in reading all blocks in one go before returning to a series of failures.
The nfcd logs suggest issues at the low-level NFC and SPMI layers, indicating potential hardware or firmware-related problems:
error 17:36:18.289099+0100 nfcd phOsalNfc_LogStr:65 NCI DATA RSP : Timer expired before data is received!
error 17:36:18.292936+0100 nfcd NFHardwareSerialQuerySPMIError:1339 "Invalid argument" errno=22 setsockopt: SYSPROTO_CONTROL:IO_STOCKHOLM_SPMIERRORS
error 17:36:18.293036+0100 nfcd phTmlNfc_SpmiDrvErrorStatus:1157 "Invalid argument" errno=22 Failed to query SPMI error registers
error 17:36:18.293235+0100 nfcd phOsalNfc_LogStr:65 phLibNfc_SpmiStsRegInfoNtfHandler: Read Spmi Status Failed - pInfo set to NULL
error 17:36:18.293313+0100 nfcd _Callback_NFDriverNotifyGeneral:2353 Unknown notification: 0x5b
error 17:36:18.294163+0100 nfcd phOsalNfc_LogStr:65 Target Lost!!
error 17:36:18.294678+0100 nfcd -[_NFReaderSession handleSecureElementTransactionData:appletIdentifier:]:164 Unimplemented
error 17:36:18.294760+0100 nfcd -[_NFReaderSession handleSecureElementTransactionData:appletIdentifier:]:164 Unimplemented
error 17:36:18.320132+0100 nfcd phOsalNfc_LogStr:65 ISO15693 XchgData,PH_NCINFC_STATUS_RF_FRAME_CORRUPTED Detected by NFCC during Data Exchange
error 17:36:18.320291+0100 nfcd phOsalNfc_LogU32:74 phNciNfc_ChkDataRetransmission: Re-transmitting Data pkt Attempt..=1
error 17:36:18.622050+0100 nfcd phOsalNfc_LogStr:65 NCI DATA RSP : Timer expired before data is received!
error 17:36:18.625857+0100 nfcd NFHardwareSerialQuerySPMIError:1339 "Invalid argument" errno=22 setsockopt: SYSPROTO_CONTROL:IO_STOCKHOLM_SPMIERRORS
error 17:36:18.625919+0100 nfcd phTmlNfc_SpmiDrvErrorStatus:1157 "Invalid argument" errno=22 Failed to query SPMI error registers
error 17:36:18.626132+0100 nfcd phOsalNfc_LogStr:65 phLibNfc_SpmiStsRegInfoNtfHandler: Read Spmi Status Failed - pInfo set to NULL
error 17:36:18.626182+0100 nfcd _Callback_NFDriverNotifyGeneral:2353 Unknown notification: 0x5b
error 17:36:18.626899+0100 nfcd phOsalNfc_LogStr:65 Target Lost!!
error 17:36:18.627482+0100 nfcd -[_NFReaderSession handleSecureElementTransactionData:appletIdentifier:]:164 Unimplemented
error 17:36:18.627568+0100 nfcd -[_NFReaderSession handleSecureElementTransactionData:appletIdentifier:]:164 Unimplemented
error 17:36:18.833174+0100 nfcd -[_NFReaderSession handleSecureElementTransactionData:appletIdentifier:]:164 Unimplemented
error 17:36:19.145289+0100 nfcd phOsalNfc_LogStr:65 NCI DATA RSP : Timer expired before data is received!
error 17:36:19.149233+0100 nfcd NFHardwareSerialQuerySPMIError:1339 "Invalid argument" errno=22 setsockopt: SYSPROTO_CONTROL:IO_STOCKHOLM_SPMIERRORS
error 17:36:19.149353+0100 nfcd phTmlNfc_SpmiDrvErrorStatus:1157 "Invalid argument" errno=22 Failed to query SPMI error registers
error 17:36:19.149730+0100 nfcd phOsalNfc_LogStr:65 phLibNfc_SpmiStsRegInfoNtfHandler: Read Spmi Status Failed - pInfo set to NULL
error 17:36:19.149797+0100 nfcd _Callback_NFDriverNotifyGeneral:2353 Unknown notification: 0x5b
error 17:36:19.150463+0100 nfcd phOsalNfc_LogStr:65 Target Lost!!
Any solutions?
Has anyone else encountered similar behavior with CoreNFC on iPhone 12–15? Could this be related to changes in NFC hardware or power management in newer iPhone models? Any suggestions on possible workarounds or alternative approaches would be greatly appreciated.
I read that iPadOS supports DriverKit and, presumably, the same serial FTDI UARTs as macOS.
Has this been migrated to USB-C iPhones on iOS 18?
After some searching, the developer doc is not clear, and web responses are contradictory.
We are currently using it for a wired sensor option of our Bluetooth HR sensor. When it is used in the wired configuration, the radios are turned off. This is important to some of our customers. Since Lightning MFi sensors are being discontinued with Apple killing Lightning, we would love to have an alternative for iOS.
-- Harald
Hello Everyone,
I am trying to develop a DriverKit extension for a RAID system, using the PCIDriverKit & SCSIControllerDriverKit frameworks. The driver can detect the Vendor ID and Device ID. But before communicating with the RAID system, I would like to simulate a virtual volume using a memory block to talk with macOS.
In UserInitializeController(), I allocated 512 KB of memory for an IOBufferMemoryDescriptor* volumeBuffer, but Map() fails to map the memory for volumeBuffer.
result = ivars->volumeBuffer->Map(
0, // Options: Use default
0, // Offset: Start of the buffer
ivars->volumeSize, // Length: Must not exceed buffer size
0, // Flags: Use default
nullptr, // Address space: Default address space
&mappedAddress // Output parameter
);
Log("Memory mapped completed at address: 0x%llx", mappedAddress); // this line never run
The log line for the completed Map() never runs; the driver just restarts, runs Start() again and again, and eventually eats up macOS's memory until the system halts.
Are the parameters to Map() wrong? Or should I not put this code in UserInitializeController()?
Any help is appreciated!
Thanks in advance.
Charles
Hello everyone, good day :)
My project uses a mouse driver that handles all events from the mouse produced by our company. In the past the driver was a kext that implemented acceleration via an HIDPointerAccelerationTable: we prepared the data in the driver's Info.plist, and when our app specified a value to IOHIDSystem with the key kIOHIDPointerAccelerationKey, the driver would call copyAccelerationTable() to look up the HIDPointerAccelerationTable and return a value.
In the current DriverKit era, that process is deprecated, and now I don't know what to do. I've read some documents:
https://developer.apple.com/documentation/hiddriverkit/iohidpointereventoptions/kiohidpointereventoptionsnoacceleration?changes=__7_8
https://developer.apple.com/documentation/hiddriverkit/kiohidmouseaccelerationtypekey?changes=__7_8
https://developer.apple.com/documentation/hiddriverkit/kiohidpointeraccelerationkey?changes=__7_8
but none of those articles contains any useful description. Please help!
I'm trying to iterate through a USB device but the iterator is always empty or contains only the matched interface:
Single interface in Iterator
This happens when my driver matches against the interface. Because I need to use 2 interfaces (control and cdc), I try to open the IOUSBHostDevice (copied from the interface) and iterate through the rest, but I only get the interface my dext matched with.
Empty Iterator
I decided to match against USB communication devices, thinking things would be different. However, this time the interface iterator is completely empty (provider is IOUSBHostDevice).
Here's a snippet of my code before iterating with IOUSBHostDevice->CopyInterface():
// teardown the configured interfaces.
result = device->SetConfiguration(ivars->Config, true);
__Require_noErr_Action(result, _failure_Out,
ELOG("IOUSBHostDevice::SetConfiguration failed 0x%x", result));
// open usb device
result = device->Open(this, 0, 0);
__Require_noErr_Action(result, _failure_Out,
ELOG("Failed to open IOUSBHostDevice"));
// Get interface iterator
result = device->CreateInterfaceIterator(&iterRef);
__Require_noErr_Action(result, _failure_Out,
ELOG("IOUSBHostDevice::CreateInterfaceIterator failed failed: 0x%x", result));
Hello Everyone,
I am trying to create a fake SCSI target based on SCSIControllerDriverKit.framework, inheriting from IOUserSCSIParallelInterfaceController. Here is the code:
kern_return_t IMPL(DRV_MAIN_CLASS_NAME, Start)
{
...
// Programmatically create a null SCSI Target
SCSIDeviceIdentifier nullTargetID = 0; // Example target ID, adjust as needed
ret = UserCreateTargetForID(nullTargetID, nullptr);
if (ret != kIOReturnSuccess) {
Log("Failed to create Null SCSI Target for ID %llu", nullTargetID);
return ret;
}
...
}
According to the documentation for UserCreateTargetForID, after a target ID is created successfully, the framework will call UserInitializeTargetForID(). The documentation says:
As part of the UserCreateTargetForID call, the kernel calls several APIs like UserInitializeTargetForID which run on the default dispatch queue of the dext.
But after UserCreateTargetForID completes, why is UserInitializeTargetForID() not invoked automatically?
Here is the relevant part of the log:
init() - Start
init() - End
Start() - Start
Start() - try 1 times
UserCreateTargetForID() - Start
Allocating resources for Target ID 0
UserCreateTargetForID() - End
Start() - Finished.
UserInitializeController() - Start
- PCI vendorID: 0x14d6, deviceID: 0x626f.
- BAR0: 0x1, BAR1: 0x200004.
- GetBARInfo() - BAR1 - MemoryIndex: 0, Size: 262144, Type: 0.
UserInitializeController() - End
UserStartController() - Start
- msiInterruptIndex : 0x00000000
- interruptType info is 0x00010000
- PCI Dext interrupt final value, return status info is 0x00000000
UserStartController() - End
Any assistance would be greatly appreciated!
Thank you in advance for your support.
Best regards, Charles