
USB drive invisible to our app on supervised iPad
We have an iPad app which can write to user-specified locations on USB-connected storage devices. On unmanaged iPads, this works just fine. However, when the iPad is under MDM, the Files app can still see the external USB drive, but the drive does not show up in the file browser in our own app. There is a restriction called "allowFilesUSBDriveAccess", which is set to true (the default), but there is no restriction called "allowOtherAppsUSBDriveAccess". Are MDM-managed iPads simply not allowed to access USB drives except through the Files app?
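
For context, this is roughly the kind of access path involved; a minimal sketch, assuming the folder on the drive is chosen through UIDocumentPickerViewController and written under security-scoped access - the class name and file name here are illustrative only, not taken from the app:

import UIKit
import UniformTypeIdentifiers

final class DrivePickerController: NSObject, UIDocumentPickerDelegate {

    // Present a picker that lets the user choose a folder, e.g. on a USB drive.
    func presentPicker(from viewController: UIViewController) {
        let picker = UIDocumentPickerViewController(forOpeningContentTypes: [.folder])
        picker.delegate = self
        viewController.present(picker, animated: true)
    }

    // Write a small test file into the chosen folder under security-scoped access.
    func documentPicker(_ controller: UIDocumentPickerViewController,
                        didPickDocumentsAt urls: [URL]) {
        guard let folderURL = urls.first,
              folderURL.startAccessingSecurityScopedResource() else { return }
        defer { folderURL.stopAccessingSecurityScopedResource() }

        let fileURL = folderURL.appendingPathComponent("test.txt")  // illustrative file name
        do {
            try Data("hello".utf8).write(to: fileURL)
        } catch {
            print("write failed: \(error)")
        }
    }
}
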
2 replies · 0 boosts · 981 views · Aug ’24

DriverKit target built as dependency, header not found
I have an Xcode project with a main app target and a dext target which builds a DriverKit driver that is embedded in the main app. That all works if I build the DriverKit target first, then switch to the app target and build that: the app and the driver work.

If I make the Driver target a dependency of the App target, building the Driver fails because a header is not found, and so building the app fails. This doesn't make much sense - why does building a target as a dependency of another target produce a different result from building the same target manually? Has anyone else seen behavior like this? Any hints on how to fix it?

I've tried comparing the detailed build logs, but they don't shed much light - the lines are very long and the build steps appear to be executed in a different order. One strange thing I notice is that although I am building on an M1 Mac, with "build active architectures only" set to YES for both targets, in the Driver-target-only case Driver.cpp is compiled for arm64, while in the failure case Driver.cpp is compiled for x86_64. That doesn't make any sense to me either.
1 reply · 0 boosts · 648 views · Sep ’24

Swift Testing environment differences from regular executable
I am working on a Swift package which uses CoreAudio. It includes some tests in a testTarget which use the Testing framework, and a couple of executableTarget targets which exercise the same code. I'm using Xcode 16.2 on macOS 15.3.1.

One of the things I do in the test code is create a HAL plugin, then find that plugin using the kAudioHardwarePropertyTranslateUIDToDevice property. Finding the plugin I just created always fails from within a Swift Testing test, unless I run the test which creates the plugin individually first and then, separately, run the test which finds the plugin, by clicking on the little arrows next to the function names. If I put the tests in a serialized suite (so creation always happens first, then finding), running the suite always fails - it creates the plugin, but can't find it. If I then run the 'find my plugin' test again manually, it is always found. If I call the same functions from a regular executable (the thing created by an executableTarget in my Package.swift file), the just-created plugin is always found.

Is there a way to mimic the runtime environment of a regular executable in a Swift Testing target, or am I misunderstanding something? This may be related to this issue: https://github.com/swiftlang/swift/issues/76882 but I don't understand it well enough to be sure.
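
For reference, the serialized suite described above is laid out roughly like this; a minimal sketch in which createPlugin() and findPlugin(uid:) are hypothetical stand-ins for the package's CoreAudio helpers (the latter wrapping kAudioHardwarePropertyTranslateUIDToDevice):

import Testing

// Hypothetical stand-ins for the package's CoreAudio code: createPlugin()
// would publish the HAL plugin, findPlugin(uid:) would resolve it via
// kAudioHardwarePropertyTranslateUIDToDevice.
func createPlugin() throws -> String { fatalError("replace with real implementation") }
func findPlugin(uid: String) throws -> UInt32 { fatalError("replace with real implementation") }

@Suite(.serialized)
struct PluginLifecycleTests {

    @Test func createsThePlugin() throws {
        let uid = try createPlugin()
        #expect(!uid.isEmpty)
    }

    @Test func findsTheJustCreatedPlugin() throws {
        let uid = try createPlugin()
        let deviceID = try findPlugin(uid: uid)   // fails inside Swift Testing, per the report above
        #expect(deviceID != 0)
    }
}
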
4 replies · 0 boosts · 483 views · Feb ’25

PCIDriverKit entitlements during development
I'm trying to help out one of our vendors by building a skeleton PCI dext which they can flesh out. However, I can't seem to get the signing right.

I can't sign it at all using no team or my personal team: "Signing for requires a development team", and "Personal development teams ... do not support the System Extension capability". I can't sign the driver with our company team either, because "DriverKit Team Provisioning Profile: doesn't match the entitlements file's value for the com.apple.developer.driverkit.transport.pci entitlement". I think this problem occurs because our company has already been assigned a transport.pci entitlement, but for our own PCI vendor ID - whereas I want to build and test software that works with our vendor's PCI device. I tried generating a profile for the driver manually; it contained only our own company's PCI driver match: IOPCIPrimaryMatch = "0x0000MMMM&0x0000FFFF"; where MMMM is our own PCI vendor ID.

Is there a better way to inspect the profile Xcode is using than the postage-stamp-sized info popup which truncates the information? I would download the generated profile, but it doesn't appear on the portal - yet Xcode is accessing it from somewhere.

When I look at the available capabilities I can add to an app identifier on the Developer portal, I see com.apple.developer.driverkit.transport.usb, which is "development only". There's no "development only" capability for PCI. Does this mean it isn't possible to develop even a proof-of-concept PCI driver without first being granted the DriverKit PCI (Primary Match) entitlement?

When adding capabilities to a driver, the list of available capabilities shown in Xcode has one "DriverKit PCI (Primary Match)" entry, but if I double-click it, two such entries appear in the Signing and Capabilities tab for my driver target. On the Developer portal, when I look at my driver's Identifier, there are two capabilities labelled DriverKit PCI (Primary Match). Why?
7 replies · 0 boosts · 1.1k views · Oct ’25

How do I use IOUserSCSIPeripheralDeviceType00?
I am having similar problems to this Stack Overflow question from over a year ago: https://stackoverflow.com/questions/77627852/functions-of-iouserscsiperipheraldevicetype00-class-in-scsiperipheralsdriverkit There are also a few questions on this forum about this class, none of which have answers.

I can get my driver to match and instantiate, but nobody calls my UserDetermineDeviceCharacteristics (which does nothing, just returns kIOReturnSuccess). I can attempt to call UserSuspendServices(), UserResumeServices() or UserReportMediumBlockSize(), and all of them return kIOReturnUnsupported. It doesn't matter whether I've unmounted the disk or not.

Is the custom driver supposed to be instantiated beside the kernel's IOSCSIPeripheralDeviceType00, or should it replace it? What should its IOProviderClass be? What should its IOClass be - IOUserService, or something else?

See FB19678139 and FB19677920.
5 replies · 0 boosts · 160 views · Sep ’25

Should UserSendCDB work on UAS interfaces?
The device I am trying to develop a firmware updater for is an NVMe drive with a USB4 interface. It can connect in USB4 mode (tunneled NVMe), in USB 3 mode, or in USB 2 mode. In USB 2 and USB 3 mode, the device descriptor shows one interface with two alternates. Alternate 0 uses the bulk-only protocol, with one IN and one OUT pipe. Alternate 1 uses the UAS protocol, with two IN and two OUT pipes.

I use identical code in my driver to send custom CDBs in both modes. Using IORegistryExplorer, I can see that in USB 2 mode macOS chooses alternate 0, the bulk-only protocol; my custom CDBs and their accompanying data payloads are put on the bus, more or less as expected. In USB 3 mode, macOS chooses alternate 1, the UAS protocol; my custom CDB is put on the bus, but no payload data is transferred.

Is this expected behavior? If so, is there a way to force the OS to choose alternate 0 even when on USB 3, perhaps with another dext? I'll file a bug about this when Feedback Assistant lets me.
8 replies · 0 boosts · 297 views · Oct ’25

is com.apple.developer.usb.host-controller-interface managed?
I'm posting this after reading Quinn's post here: https://developer.apple.com/forums/thread/799000

The above entitlement is mentioned in IOUSBHostControllerInterface.h. It isn't an entitlement one can add using the + button on the Capabilities panel in Xcode, and if I try to add it by hand, Xcode complains that it isn't in my profile. Is this a managed entitlement?

We'd like to create a local USB "device" to represent a real device reachable over a network.
2 replies · 0 boosts · 325 views · Sep ’25

DriverKit, USBDriverKit and SystemExtensions
I've watched the video of WWDC 2019 session 702, System Extensions and DriverKit, and I'm still a little puzzled. For instance, what's the point of USBDriverKit - that is, why would I use it in preference to the already extant user-mode USB APIs? The demo shows an extension that does nothing: it logs to the debugger, but it doesn't provide any services to multiple clients in the system. In a KEXT, those services are provided by publishing them in the IORegistry; they provide well-known interfaces in the kernel to which a well-known user client can connect. If my extension ships in my own app, and provides services only to that app, I may as well implement the extension's functions directly in my app.

How does my app (or, more importantly, a third-party app) communicate with my dext? That wasn't covered in session 702. Neither was the case of replacing or augmenting an existing system driver - for example, filtering the data passing through a USB mass storage driver based on sideband data which the standard system driver cannot convey. For a kext, I would simply call IORegisterService and the rest of the stack would be built on top of my driver.

Is the sample code for the demo of session 702 available? Any other sample code for DriverKit?
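
For what it's worth, the usual path for app-to-dext communication is an IOUserClient subclass in the dext plus IOKit calls from the app. A minimal sketch of the app side, assuming a hypothetical service class named "MyDriver" and an arbitrary scalar method at selector 0 (the real class name, selectors, and arguments are defined by the dext's user client):

import Foundation
import IOKit

// Find the dext's service in the IORegistry, open a user-client connection,
// and call one external method by selector.
func callMyDriver() -> UInt64? {
    // "MyDriver" is a hypothetical class name published by the dext.
    // kIOMainPortDefault requires macOS 12; use kIOMasterPortDefault on older systems.
    let service = IOServiceGetMatchingService(kIOMainPortDefault,
                                              IOServiceMatching("MyDriver"))
    guard service != 0 else { return nil }
    defer { IOObjectRelease(service) }

    // Opening the service is what instantiates the dext's IOUserClient.
    var connection: io_connect_t = 0
    guard IOServiceOpen(service, mach_task_self_, 0, &connection) == KERN_SUCCESS else {
        return nil
    }
    defer { IOServiceClose(connection) }

    // Selector 0 and the argument value are placeholders.
    let input: [UInt64] = [42]
    var output = [UInt64](repeating: 0, count: 1)
    var outputCount = UInt32(output.count)
    let result = IOConnectCallScalarMethod(connection, 0,
                                           input, UInt32(input.count),
                                           &output, &outputCount)
    return result == KERN_SUCCESS ? output[0] : nil
}

On the dext side, this would pair with an IOUserClient subclass whose ExternalMethod override dispatches on the selector.
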
2 replies · 0 boosts · 1.7k views · Apr ’21

why "you do not have permission to open the application" now
For some time I've been sharing an internal macOS app with my colleagues by simply building it locally, zipping it up and emailing it, or sharing it on Slack or Teams. In the target settings in Xcode, under Signing and Capabilities, the Team is set to my company and the Signing Certificate is set to Development (not "Sign to run locally"). This has worked for some time; none of the recipients complained that they couldn't run the app. Of course it is not notarized, so they need to right-click and select Open the first time around. When I examine the signature of the app I distribute this way, using `codesign -dvvv`, the signing authority is me (not my company).

One of my colleagues recently migrated to a new M1 Mac mini. On this Mac, when attempting to open the app, he saw the "you do not have permission to open the application" alert; he's supposed to consult his sys admin (himself). I fixed the problem by Archiving a build and explicitly choosing to sign it using the company's Developer ID certificate. The version produced this way has a signing authority of my company, not me, and my colleague can run it.

Does anyone know why my previous builds worked on other machines for other users? It appears that the locally-built app was actually signed by my personal certificate, although Xcode's UI said it would be signed by my company - so why did it work for anyone other than me? What is the expected behavior if you try to open an app signed with a personal certificate on a machine owned by a different person? Should Security & Privacy offer the option of approving that particular personal certificate?
1 reply · 0 boosts · 1.5k views · Jan ’22

how to delete a 'ghost' signing certificate (Xcode 13.2.1)
In my keychain, I have one Developer ID Application certificate, with a private key, for my Team. In Xcode's Accounts/Manage Certificates dialog, there are three Developer ID Application certificates, two of which have a red 'x' badge and the status 'missing private key'. I can right-click on any of those three entries, and my only enabled choice is "Export"; Email Creator and Delete are disabled. Why? In my Team's account there are indeed three Developer ID Application certificates, with different expiration dates, but I only have the private key for one of them.

By choosing manual signing, I can select a specific certificate from my keychain, but Xcode 13.2.1 tells me that this certificate is missing its private key - even though I can see that private key in my keychain!
1 reply · 0 boosts · 1.6k views · Mar ’22

Linker error building DEXT as part of app build, -fsanitize=undefined
I'm trying to make a DEXT target within my project. It compiles and links fine if I build just its own scheme. However, if I build my app's target, which includes the DEXT as a dependency, the build fails when linking the DEXT.

The linker commands are different in the two cases. When built as part of the larger project, the DEXT linker command includes -fsanitize=undefined. This flag is absent when I build using the DEXT's scheme alone. I searched the .pbxproj for "sanitize" - it doesn't appear, so it looks like Xcode is adding this flag. The linker failure is this:

File not found: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/clang/13.1.6/lib/darwin/libclang_rt.ubsan_driverkit_dynamic.dylib

The only files with "driverkit" in their name in that directory are these two:

libclang_rt.cc_kext_driverkit.a
libclang_rt.driverkit.a

The successful link command includes this directive:

-lc++ /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/clang/13.1.6/lib/darwin/libclang_rt.driverkit.a

while the unsuccessful link command includes this one:

-lc++ /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/clang/13.1.6/lib/darwin/libclang_rt.ubsan_driverkit_dynamic.dylib

I tried adding -fno-sanitize=undefined to the OTHER_LINKER_FLAGS for the DEXT target, hoping that this would cancel the effect of the previous -fsanitize, but then I get undefined symbol errors:

Undefined symbol: ___ubsan_handle_shift_out_of_bounds
Undefined symbol: ___ubsan_handle_type_mismatch_v1

These appear to be referenced by the macros used in the iig magic. I'm using Xcode 13.4.1 (13F100). Does anyone know how I can fix this?
0 replies · 0 boosts · 897 views · Jun ’22

SwiftUI crash resizing window macOS
Does anyone know why this crashes, or could anyone tell me how to restructure this code so it doesn't crash? (This is FB11917078.)

I have a view which displays two nested rectangles of a given aspect ratio (here 1:1). The inner rectangle is a fixed fraction of the outer rectangle's size. When embedded in a List, if I rapidly resize the window, the app crashes. If the view is not in a List, there's no crash (and the requested aspect ratio is not respected, which I don't yet know how to fix).

Here's the code for the ContentView.swift file. Everything else is standard macOS SwiftUI application template code from Xcode 14.2.

import SwiftUI

struct ContentView: View {
    @State var zoomFactor = 1.2

    var body: some View {
        // Rapid resizing of the window causes a crash.
        // If the TwoRectanglesView is not embedded in a List, there is no crash.
        List {
            ZStack {
                Rectangle()
                TwoRectanglesView(zoomFactor: $zoomFactor)
            }
        }
    }
}

struct ContentView_Previews: PreviewProvider {
    static var previews: some View {
        ContentView()
    }
}

struct TwoRectanglesView: View {
    @State private var fullViewWidth: CGFloat?
    @Binding var zoomFactor: Double
    private let aspectRatio = 1.0

    var body: some View {
        ZStack {
            Rectangle()
                .aspectRatio(aspectRatio, contentMode: .fit)
            GeometryReader { geo in
                ZStack {
                    Rectangle()
                        .fill(.black)
                        .border(.blue)
                    Rectangle()
                        .fill(.red)
                        .frame(width: geo.size.width / zoomFactor,
                               height: geo.size.height / zoomFactor)
                }
            }
        }
    }
}

struct TwoRectanglesView_Previews: PreviewProvider {
    @State static var zoomFactor = 3.1

    static var previews: some View {
        TwoRectanglesView(zoomFactor: $zoomFactor)
    }
}
0 replies · 0 boosts · 1k views · Jan ’23

how to make a DisclosureGroup in a VStack look like one in a List?
I would like to use a DisclosureGroup in a VStack on macOS, but I'd like it to look like a DisclosureGroup in a List. (I need to do this to work around a crash when I embed a particular control in a List.) I'll append some code below, and a screenshot.

You can see that a List background is white, not grey. The horizontal alignment of the disclosure control itself is also different in a List: the control hangs to the left of the disclosure group's content, so the content is all aligned on its leading edge. Inside a VStack with .leading horizontal alignment, the DisclosureGroup is placed so that its leading edge (the leading edge of the disclosure control) is aligned to the leading edge of the other elements in the VStack. The List takes account of the geometry of the disclosure arrow, while the VStack does not. The vertical alignment of the disclosure triangle is also different - in a VStack, the control is placed too high. And finally, in a VStack, the disclosure triangle lacks contrast (its RGB value is about 180, while the triangle in the List has an RGB value of 128).

Does anyone know how to emulate the appearance of a DisclosureGroup in a List when that DisclosureGroup is embedded in a VStack?

Here's my ContentView.swift:

import SwiftUI

struct ContentView: View {
    var body: some View {
        HStack {
            List {
                Text("List")
                DisclosureGroup(content: {
                    Text("content")
                }, label: {
                    Text("some text")
                })
            }
            VStack(alignment: .leading) {
                Text("VStack")
                DisclosureGroup(content: {
                    Text("content")
                }, label: {
                    Text("some text")
                })
                Spacer()
            }
            .padding()
        }
    }
}

struct ContentView_Previews: PreviewProvider {
    static var previews: some View {
        ContentView()
    }
}
0 replies · 0 boosts · 1.2k views · Mar ’23

should an AVPlayer work in a Camera Extension?
My goal is to implement a moving background in a virtual camera, implemented as a Camera Extension, on macOS 13 and later. The moving background is available to the extension as an H.264 file in its bundle.

I thought I could create an AVAsset from the movie's URL, make an AVPlayerItem from the asset, attach an AVQueuePlayer to the item, then attach an AVPlayerLooper to the queue player. I make an AVPlayerItemVideoOutput, add it to each of the looper's items, and set a delegate on the video output. This works in a normal app, which I use as a convenient environment to debug my extension code: in my camera video rendering loop, I check self.videoOutput.hasNewPixelBuffer, it returns true at regular intervals, and I can fetch video frames with the video output's copyPixelBuffer and composite those frames with the camera frames.

However, it doesn't work in an extension - hasNewPixelBuffer is never true. The looping player reports 'failed', with an error which simply says "the operation could not be completed". I've tried simplifying things by removing the AVPlayerLooper and using an AVPlayer instead of an AVQueuePlayer, so the movie would only play once through, but still I never get any frames in the extension. Could this be a sandbox thing, because an AVPlayer usually renders to a user interface, and camera extensions don't have UIs?

My fallback solution is to use an AVAssetImageGenerator, which I attempt to drive by firing off a Task for each frame; each time I want to render one, I ask for another frame to keep the pipeline full. Unfortunately the Tasks don't finish in the same order they are started, so I have to build frame-reordering logic into the frame buffer (something which a player would handle for me). I'm also not sure whether the AVAssetImageGenerator takes advantage of any hardware acceleration, and it seems inefficient because each Task is for one frame only and cannot maintain any state from previous frames.

Perhaps there's a much simpler way to do this and I'm just missing it? Anyone?
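
For reference, the player pipeline described above looks roughly like this when it works in a normal app; a minimal sketch assuming BGRA output and no error handling (the class name and the frame-pull method are illustrative only, not taken from the extension):

import AVFoundation
import CoreVideo

final class LoopingMovieSource {
    private let player: AVQueuePlayer
    private let looper: AVPlayerLooper
    private let videoOutput: AVPlayerItemVideoOutput

    init(movieURL: URL) {
        let asset = AVURLAsset(url: movieURL)
        let item = AVPlayerItem(asset: asset)
        player = AVQueuePlayer()
        looper = AVPlayerLooper(player: player, templateItem: item)

        let attrs: [String: Any] = [
            kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
        ]
        videoOutput = AVPlayerItemVideoOutput(pixelBufferAttributes: attrs)

        // The looper clones the template item; attach the output to each clone.
        // (A real implementation may need to watch loopingPlayerItems as clones appear.)
        for loopItem in looper.loopingPlayerItems {
            loopItem.add(videoOutput)
        }
        player.play()
    }

    // Returns the most recent frame, or nil if no new frame is available yet.
    func copyFrame(atHostTime hostTime: CFTimeInterval) -> CVPixelBuffer? {
        let itemTime = videoOutput.itemTime(forHostTime: hostTime)
        guard videoOutput.hasNewPixelBuffer(forItemTime: itemTime) else { return nil }
        return videoOutput.copyPixelBuffer(forItemTime: itemTime, itemTimeForDisplay: nil)
    }
}
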
2 replies · 0 boosts · 1.4k views · Aug ’23
