Setting up video and image capture pipeline creates internal errors in AVFoundation.
I have created code for iOS that allows me to start and stop video acquisition from a proprietary USB camera, using AVFoundation's AVCaptureSession and AVCaptureDevice APIs. There is a start and a stop method. The start method takes an argument to specify one of the two formats that my custom camera application uses. I can start the session and switch between formats all day without any errors. However, if I start and then stop the camera three times in a row, on the third invocation of start I get errors in the console output and the CMSampleBuffers stop flowing to my callback. Additionally, once I get AVFoundation into this state, stopping the camera doesn't help; I have to kill the app and start over.

Here are the errors, and below them, the code. I'm hoping someone who has experience with these errors, or an engineer from Apple who knows the AVFoundation image capture pipeline, can tell me what I'm doing wrong. Thanks.

    <<<< FigCaptureSourceRemote >>>> Fig assert: "! storage->connectionDied" at bail (FigCaptureSourceRemote.m:235) - (err=0)
    <<<< FigCaptureSourceRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSourceRemote.m:558) - (err=-16453)
    <<<< FigCaptureSourceRemote >>>> Fig assert: "! storage->connectionDied" at bail (FigCaptureSourceRemote.m:235) - (err=0)
    <<<< FigCaptureSourceRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSourceRemote.m:253) - (err=-16453)
    <<<< FigCaptureSourceRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSourceRemote.m:269) - (err=-16453)
    <<<< FigCaptureSourceRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSourceRemote.m:511) - (err=-16453)
    Capture session error: The operation could not be completed
    Capture session error: The operation could not be completed

    func start(for deviceFormat: String) async throws -> AnyPublisher<CMSampleBuffer, Swift.Error> {
        func configureCaptureDevice(with deviceFormat: String) throws {
            guard let format = formatDict[deviceFormat] else { throw Error.captureFormatNotFound }
            captureSession.beginConfiguration()
            defer { captureSession.commitConfiguration() }
            try captureDevice.lockForConfiguration()
            captureDeviceFormat = deviceFormat
            captureDevice.activeFormat = format
            captureDevice.unlockForConfiguration()
        }

        return try await withCheckedThrowingContinuation { continuation in
            sessionQueue.async { [unowned self] in
                logger.debug("Start capture session for \(deviceFormat): \(String(describing: captureSession))")

                // If we were already streaming camera images from a different mode, terminate that stream.
                bufferPublisher?.send(completion: .finished)
                bufferPublisher = nil
                captureDeviceFormat = ""

                do {
                    // Re-configure with the new format; should be harmless if called with the currently configured format.
                    try configureCaptureDevice(with: deviceFormat)

                    // Return a new stream publisher for this invocation.
                    bufferPublisher = PassthroughSubject<CMSampleBuffer, Swift.Error>()

                    // If we are not currently running, start the image capture pipeline.
                    if captureSession.isRunning == false {
                        captureSession.startRunning()
                    }

                    continuation.resume(returning: bufferPublisher!.eraseToAnyPublisher())
                } catch {
                    logger.fault("Failed to start camera: \(error.localizedDescription)")
                    continuation.resume(throwing: error)
                }
            }
        }
    }

    func stop() async throws {
        try await withCheckedThrowingContinuation { continuation in
            sessionQueue.async { [unowned self] in
                logger.debug("Stop capture session: \(String(describing: captureSession))")

                // The following invocation is synchronous and takes time to execute;
                // it looks like a stall, but you can ignore it as the MainActor is not blocked.
                captureSession.stopRunning()

                // Terminate the stream and reset our state.
                bufferPublisher?.send(completion: .finished)
                bufferPublisher = nil
                captureDeviceFormat = ""

                // Signal the caller that we are done here.
                continuation.resume()
            }
        }
    }
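For reference, this is roughly how I plan to surface the underlying error when the session fails, rather than just its localized description. A minimal sketch only: captureSession is the same member used above, and in real code the observer token would be retained by the owning object.

    import AVFoundation

    // Log the full NSError carried by the runtime-error notification instead of
    // only "The operation could not be completed".
    let token = NotificationCenter.default.addObserver(
        forName: AVCaptureSession.runtimeErrorNotification,
        object: captureSession,
        queue: nil
    ) { notification in
        if let error = notification.userInfo?[AVCaptureSessionErrorKey] as? NSError {
            print("Capture session error: \(error.domain) code \(error.code): \(error.localizedDescription)")
        }
    }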
Replies: 0 · Boosts: 0 · Views: 184 · Activity: 2w
How to safely switch between mic configurations on iOS?
I have an iPadOS (M-series processor) application with two different running configurations.

In config1, the shared AVAudioSession is configured for .videoChat mode using the built-in microphone, the input/output nodes of the AVAudioEngine are configured with voice processing enabled, and the built-in mic is formatted for 1 channel at 48 kHz.

In config2, the shared AVAudioSession is configured for .measurement mode using an external USB microphone, the input/output nodes of the AVAudioEngine are configured with voice processing disabled, and the external mic is formatted for 2 channels at 44.1 kHz.

I've written a configuration manager designed to safely switch between these two configurations. It works by stopping the AVAudioEngine and detaching all but the input and output nodes, updating the shared audio session for the desired mic and sample rate, and setting voice processing to either true or false as required by the configuration. Finally, the new audio graph is constructed by attaching the appropriate nodes, connecting them, and restarting the AVAudioEngine. A sketch of this sequence appears below.

I'm experiencing what I believe is a race condition between switching voice processing on or off and then trying to rebuild and start the new audio graph. Even though the notifications I dump to the console indicate that my requested input and sample-rate settings are in place, I crash when trying to start the audio engine because the sample rate is wrong. Investigating further, it looks like the switch from remote I/O to voice-processing I/O, or vice versa, has not actually completed yet. I introduced a 100 ms delay and that seems to help, but it is obviously not a reliable way to build software that must work consistently.

How can I make sure that what are apparently asynchronous configuration changes to the shared audio session and the input/output nodes have completed before I go on? I tried using route change notifications from the shared AVAudioSession, but these lie: they say my preferred mic input and sample-rate setting is in place, yet when I dump the AVAudioEngine graph to the debugger console, I still see the wrong sample rate assigned to the input/output nodes. These are also the wrong AU nodes; that is, VPIO is still in place when RIO should be, or vice versa. How can I make the switch reliable without arbitrary time delays? Is my configuration manager approach appropriate (question for Apple engineers)?
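Here is a minimal sketch of the switching sequence my configuration manager performs; the member names (engine, attachedNodes, makeGraph) are placeholders, not my actual code:

    import AVFoundation

    func switchConfiguration(toMeasurement: Bool) throws {
        // 1. Stop the engine and tear the graph down to just the I/O nodes.
        engine.stop()
        attachedNodes.forEach { engine.detach($0) }
        attachedNodes.removeAll()

        // 2. Reconfigure the shared session for the desired mic and sample rate.
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.playAndRecord,
                                mode: toMeasurement ? .measurement : .videoChat,
                                options: [])
        try session.setPreferredSampleRate(toMeasurement ? 44_100 : 48_000)

        // 3. Toggle voice processing (VPIO vs. RIO) on the input node.
        try engine.inputNode.setVoiceProcessingEnabled(!toMeasurement)

        // 4. Rebuild the graph and restart. This is where the engine crashes when
        //    the underlying I/O unit switch has not actually completed yet.
        makeGraph()
        try engine.start()
    }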
Replies: 1 · Boosts: 0 · Views: 360 · Activity: Nov ’25
DriverKit: embedded.mobileprovision has the wildcard USB Vendor ID instead of my assigned Vendor ID
I've added my Vendor ID to the appropriate entitlement files, but my binary fails validation when I try to upload it to the store for distribution. The embedded.mobileprovision file in the generated archive shows an asterisk instead of my approved Vendor ID. How can I make sure the embedded provisioning profile carries my Vendor ID?
Replies: 5 · Boosts: 0 · Views: 1.9k · Activity: Apr ’25
Error -50 writing to AVAudioFile
I'm trying to write 16-bit interleaved 2-channel data captured from a LiveSwitch audio source to an AVAudioFile. The buffer and file formats match, but I get a bad parameter error from the API. Does this API not support the specified format, or is there some other issue? Here is the debugger output.

    (lldb) po audioFile.url
    ▿ file:///private/var/mobile/Containers/Data/Application/1EB14379-0CF2-41B6-B742-4C9A80728DB3/tmp/Heart%20Sounds%201
      - _url : file:///private/var/mobile/Containers/Data/Application/1EB14379-0CF2-41B6-B742-4C9A80728DB3/tmp/Heart%20Sounds%201
      - _parseInfo : nil
      - _baseParseInfo : nil
    (lldb) po error
    Error Domain=com.apple.coreaudio.avfaudio Code=-50 "(null)" UserInfo={failed call=ExtAudioFileWrite(_impl->_extAudioFile, buffer.frameLength, buffer.audioBufferList)}
    (lldb) po buffer.format
    <AVAudioFormat 0x302a12b20: 2 ch, 44100 Hz, Int16, interleaved>
    (lldb) po audioFile.fileFormat
    <AVAudioFormat 0x302a515e0: 2 ch, 44100 Hz, Int16, interleaved>
    (lldb) po buffer.frameLength
    882
    (lldb) po buffer.audioBufferList
    ▿ 0x0000000300941e60
      - pointerValue : 12894608992

This code handles the details of converting the LiveSwitch frame into an AVAudioPCMBuffer.

    extension FMLiveSwitchAudioFrame {
        func convertedToPCMBuffer() -> AVAudioPCMBuffer {
            Self.convertToAVAudioPCMBuffer(from: self)!
        }

        static func convertToAVAudioPCMBuffer(from frame: FMLiveSwitchAudioFrame) -> AVAudioPCMBuffer? {
            // Retrieve the audio buffer and format details from the FMLiveSwitchAudioFrame
            guard let buffer = frame.buffer(),
                  let format = buffer.format() as? FMLiveSwitchAudioFormat else {
                return nil
            }

            // Extract PCM format details from FMLiveSwitchAudioFormat
            let sampleRate = Double(format.clockRate())
            let channelCount = AVAudioChannelCount(format.channelCount())

            // Determine bytes per sample based on bit depth
            let bitsPerSample = 16
            let bytesPerSample = bitsPerSample / 8
            let bytesPerFrame = bytesPerSample * Int(channelCount)
            let frameLength = AVAudioFrameCount(Int(buffer.dataBuffer().length()) / bytesPerFrame)

            // Create an AVAudioFormat from the FMLiveSwitchAudioFormat
            guard let avAudioFormat = AVAudioFormat(commonFormat: .pcmFormatInt16,
                                                    sampleRate: sampleRate,
                                                    channels: channelCount,
                                                    interleaved: true) else {
                return nil
            }

            // Create an AudioBufferList to wrap the existing buffer
            let audioBufferList = UnsafeMutablePointer<AudioBufferList>.allocate(capacity: 1)
            audioBufferList.pointee.mNumberBuffers = 1
            audioBufferList.pointee.mBuffers.mNumberChannels = channelCount
            audioBufferList.pointee.mBuffers.mDataByteSize = UInt32(buffer.dataBuffer().length())
            audioBufferList.pointee.mBuffers.mData = buffer.dataBuffer().data().mutableBytes // Directly use LiveSwitch buffer

            // Transfer ownership of the buffer to AVAudioPCMBuffer
            let pcmBuffer = AVAudioPCMBuffer(pcmFormat: avAudioFormat, bufferListNoCopy: audioBufferList) /* { buffer in
                // Ensure the buffer is freed when AVAudioPCMBuffer is deallocated
                buffer.deallocate() // Only call this if LiveSwitch allows manual deallocation
            } */
            pcmBuffer?.frameLength = frameLength
            return pcmBuffer
        }
    }

This is the handler that is invoked with every frame in order to convert it for use with AVAudioFile and, optionally, update a scrolling signal display on the screen.

    private func onRaisedFrame(obj: Any!) -> Void {
        // Bail out early if no one is interested in the data.
        guard isMonitoring else { return }

        // Convert LS frame to AVAudioPCMBuffer (no-copy)
        let frame = obj as! FMLiveSwitchAudioFrame
        let buffer = frame.convertedToPCMBuffer()

        // Hand subscribers a reference to the buffer for rendering to display.
        bufferPublisher?.send(buffer)

        // If we have an output file, store the data there, as well.
        guard let audioFile = self.audioFile else { return }
        do {
            try audioFile.write(from: buffer) // FIXME: This call is throwing error -50
        } catch {
            FMLiveSwitchLog.error(withMessage: "Failed to write buffer to audio file at \(audioFile.url): \(error)")
            self.audioFile = nil
        }
    }

This is how the audio file is being set up.

    static var recordingFormat: AVAudioFormat = {
        AVAudioFormat(commonFormat: .pcmFormatInt16, sampleRate: 44_100, channels: 2, interleaved: true)!
    }()

    let audioFile = try AVAudioFile(forWriting: outputURL, settings: Self.recordingFormat.settings)
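One workaround I'm considering, in case write(from:) actually wants the file's processingFormat rather than its fileFormat: convert each buffer before writing. A minimal, untested sketch; the helper below is hypothetical and assumes the converter setup succeeds:

    import AVFoundation

    // Convert an Int16 interleaved buffer into the file's processingFormat
    // before handing it to write(from:).
    func write(_ int16Buffer: AVAudioPCMBuffer, to audioFile: AVAudioFile) throws {
        guard let converter = AVAudioConverter(from: int16Buffer.format,
                                               to: audioFile.processingFormat),
              let floatBuffer = AVAudioPCMBuffer(pcmFormat: audioFile.processingFormat,
                                                 frameCapacity: int16Buffer.frameLength) else {
            return
        }
        // Same sample rate on both sides, so the simple conversion API applies.
        try converter.convert(to: floatBuffer, from: int16Buffer)
        try audioFile.write(from: floatBuffer)
    }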
Replies: 0 · Boosts: 0 · Views: 473 · Activity: Mar ’25
Looking for USBSerialDriver sample code
I would like to write a driver that supports our custom USB-C connected device, which provides a serial port interface. USBSerialDriverKit looks like the solution I need. Unfortunately, without a decent sample, I'm not sure how to accomplish this. The DriverKit documentation does a good job of telling me what APIs exist, but it is very light on semantic information and details about how to use all of these API elements. A function call with five unexplained parameters just isn't that useful to me. Does anyone have, or know of, a resource that can help me figure out how to get started?
Replies: 1 · Boosts: 0 · Views: 797 · Activity: Feb ’25
DriverKit Entitlements, one per App ID?
We have a DriverKit entitlement for our USB driver. We now wish to use this same driver with a variant of our existing application. Of course this application has its own separate App ID and will be published in the App Store alongside our existing application. Will we need to go back to the well and ask Apple for another entitlement?
Replies: 1 · Boosts: 0 · Views: 794 · Activity: Dec ’24
AVAudioPlayer delegate causing retain cycle.
It appears that AVAudioPlayer is maintaining a strong reference to my containing class. Here is the essential code; pay attention to the comments.

    class StethRecording: NSObject, ObservableObject, Identifiable {
        let player: AVAudioPlayer?
        let id = UUID()

        @Published var isPlaying = false
        @Published var progress = 0.0

        init(file: AVAudioFile) throws {
            player = try AVAudioPlayer(contentsOf: file.url)
            super.init()

            // I used to assign the player delegate here.
            // If I do that, when I delete this object, it
            // doesn't go away.
            player!.prepareToPlay()
        }

        deinit {
            // If this object doesn't go away, I leave data
            // behind. Something I don't want to do.
            try? deleteAssociatedAudioFile()
        }

        func play() {
            guard let player else { return }

            // So now I have to assign the delegate whenever
            // I start playing.
            player.delegate = self

            isPlaying = true
            player.play()
            startUpdateTimer()
        }

        func stop() {
            guard let player else { return }
            player.stop()
            playbackConcluded()
        }

        // MARK: - Private Methods

        private func playbackConcluded() {
            isPlaying = false
            stopUpdateTimer()
            updateProgress()
            player!.reset()

            // I also have to remove the delegate when I
            // stop, for any reason.
            player!.delegate = nil
            player!.prepareToPlay()
        }
    }

    extension StethRecording: AVAudioPlayerDelegate {
        func audioPlayerDidFinishPlaying(_ player: AVAudioPlayer, successfully flag: Bool) {
            playbackConcluded()
        }
    }

This works, but is this approach really necessary? I would expect the AVAudioPlayer to use a weak reference for the delegate. Or am I doing something else wrong here?
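One alternative I'm considering is a small proxy that conforms to AVAudioPlayerDelegate and holds the real target weakly, so the player never keeps my recording object alive regardless of how it treats its delegate. A sketch only, not tested:

    import AVFoundation

    // Forwards completion callbacks to a weakly held owner, so assigning the
    // delegate in init can't create a retain cycle through the player.
    final class WeakPlayerDelegate: NSObject, AVAudioPlayerDelegate {
        weak var owner: StethRecording?

        init(owner: StethRecording) {
            self.owner = owner
        }

        func audioPlayerDidFinishPlaying(_ player: AVAudioPlayer, successfully flag: Bool) {
            owner?.stop()
        }
    }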
Replies: 1 · Boosts: 0 · Views: 567 · Activity: Oct ’24
AVAudioFile.processingFormat, only Float32 is allowed?
Here is some code I have to create an AVAudioFile instance based on Int16 samples.

    let format = AVAudioFormat(commonFormat: .pcmFormatInt16, sampleRate: 44100.0, channels: 2, interleaved: false)!
    let audioFile = try AVAudioFile(forWriting: outputURL, settings: format.settings)

When writing to the file I get the following runtime error, presumably from CoreAudio.

    CABufferList.h:184 ASSERTION FAILURE [(nBytes <= buf->mDataByteSize) != 0 is false]:

I read this as a size mismatch between what is specified in the format used to create the file and the file's own internal processingFormat property, which is read-only. Here is my debugger console output showing the input format I created, along with the resulting AVAudioFile fileFormat and processingFormat properties.

    (lldb) po format
    <AVAudioFormat 0x300e553b0: 2 ch, 44100 Hz, Int16, deinterleaved>
    (lldb) po format.settings
    ▿ 7 elements
      ▿ 0 : 2 elements
        - key : "AVNumberOfChannelsKey"
        - value : 2
      ▿ 1 : 2 elements
        - key : "AVLinearPCMBitDepthKey"
        - value : 16
      ▿ 2 : 2 elements
        - key : "AVFormatIDKey"
        - value : 1819304813
      ▿ 3 : 2 elements
        - key : "AVLinearPCMIsNonInterleaved"
        - value : 1
      ▿ 4 : 2 elements
        - key : "AVLinearPCMIsBigEndianKey"
        - value : 0
      ▿ 5 : 2 elements
        - key : "AVLinearPCMIsFloatKey"
        - value : 0
      ▿ 6 : 2 elements
        - key : "AVSampleRateKey"
        - value : 44100
    (lldb) po audioFile.fileFormat
    <AVAudioFormat 0x300ea5400: 2 ch, 44100 Hz, Int16, interleaved>
    (lldb) po audioFile.processingFormat
    <AVAudioFormat 0x300ea5450: 2 ch, 44100 Hz, Float32, deinterleaved>

Please note that the input format I'm using does not match either the audio file's fileFormat or processingFormat properties. The file format is interleaved even though I specified de-interleaved. This makes sense to me, as working with audio files that are growing is much easier and more efficient with interleaved data. The head-scratcher is the processingFormat. I specified Int16 samples and it is expecting Float32? According to the format settings dictionary, we are specifying the correct key/value pairs. Is this expected behavior? Does Apple always insist on Float32 internally, or is this a bug?
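If Float32 really is mandatory for processing, I assume the intended usage is to allocate and fill write buffers in the file's processingFormat and let AVAudioFile convert to Int16 on disk. A minimal sketch under that assumption, with silence as placeholder data:

    import AVFoundation

    // Allocate the write buffer in audioFile.processingFormat (Float32, deinterleaved)
    // rather than the Int16 format the file was created with.
    let buffer = AVAudioPCMBuffer(pcmFormat: audioFile.processingFormat,
                                  frameCapacity: 1024)!
    buffer.frameLength = buffer.frameCapacity

    // Fill each deinterleaved channel with silence as placeholder data.
    if let channels = buffer.floatChannelData {
        for channel in 0..<Int(buffer.format.channelCount) {
            channels[channel].update(repeating: 0, count: Int(buffer.frameLength))
        }
    }

    try audioFile.write(from: buffer)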
Replies: 0 · Boosts: 0 · Views: 718 · Activity: Oct ’24
USBDriverKit driver not showing up in settings on iPadOS 18
In iPadOS 17.7 my driver shows up in Settings just fine. After recompiling with Xcode 16 and installing my app (containing my driver) on iPadOS 18, the app shows up in Settings, but the driver-enable button is missing. When I plug in my custom USB device, the app cannot detect it, and I am left with no way to manually enable the driver, as I could in the previous version of iPadOS.
Replies: 1 · Boosts: 0 · Views: 735 · Activity: Sep ’24
AVCam Example: Can I use an actor instead of a DispatchQueue for capture session activity?
I've been studying the AVCam example and notice that everything pertaining to state transitions for the capture session is performed on a dedicated DispatchQueue. My question is this: Can I use an actor instead?
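Something like this is what I have in mind; just a sketch, and I don't know whether confining AVCaptureSession to an actor is supported the way a serial queue is:

    import AVFoundation

    // An actor that serializes capture-session state transitions, playing the
    // role the sample's sessionQueue plays today.
    actor CaptureService {
        private let session = AVCaptureSession()

        func start() {
            guard !session.isRunning else { return }
            session.startRunning()
        }

        func stop() {
            guard session.isRunning else { return }
            session.stopRunning()
        }
    }

Callers would then write await captureService.start() instead of dispatching work onto a queue.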
Replies: 3 · Boosts: 1 · Views: 1.6k · Activity: May ’24
AVFoundation: Strange error while trying to switch camera formats with the touch of a single button.
I'm getting the following output from my iOS app's debug console; note the error on the last line:

    Capture format keys: ["600x600@25", "1200x1200@5", "1200x1200@30", "1600x1200@2", "1600x1200@30", "3200x2400@15", "3200x2400@2", "600x600@30"]
    Start capture session for 1600x1200@30: <AVCaptureSession: 0x303c70190 [AVCaptureSessionPresetPhoto]>
    Stop capture session: <AVCaptureSession: 0x303c70190 [AVCaptureSessionPresetInputPriority]>
        <AVCaptureDeviceInput: 0x303ebb720 [Medwand S3 Camera]>[vide] -> <AVCaptureVideoDataOutput: 0x303edf1e0>
        <AVCaptureDeviceInput: 0x303ebb720 [Medwand S3 Camera]>[vide] -> <AVCapturePhotoOutput: 0x303ee3e20>
        <AVCaptureDeviceInput: 0x303ebb720 [Medwand S3 Camera]>[vide] -> <AVCaptureVideoPreviewLayer: 0x3030b33c0>
    Start capture session for 600x600@30: <AVCaptureSession: 0x303c70190 [AVCaptureSessionPresetInputPriority]>
        <AVCaptureDeviceInput: 0x303ebb720 [Medwand S3 Camera]>[vide] -> <AVCaptureVideoDataOutput: 0x303edf1e0>
        <AVCaptureDeviceInput: 0x303ebb720 [Medwand S3 Camera]>[vide] -> <AVCapturePhotoOutput: 0x303ee3e20>
        <AVCaptureDeviceInput: 0x303ebb720 [Medwand S3 Camera]>[vide] -> <AVCaptureVideoPreviewLayer: 0x3030b33c0>
    <<<< FigSharedMemPool >>>> Fig assert: "blkHdr->useCount > 0" at (FigSharedMemPool.c:591) - (err=0)

This happens in response to trying to switch capture formats between the two key modes that my application must use regularly. Below you will find the functions that I use to start and stop capturing frames to my preview layer.

I have a UI with three buttons: Off, Mode 1, and Mode 2. If I tap the Off button in between tapping Mode 1 or Mode 2, all is well; I can do this all day. However, if I attempt to jump between Mode 1 and Mode 2 directly, I run into issues. I added a layer of software between the UI and the underlying functions so that I could make sure to turn off the camera before turning it back on in the opposite mode, and I was surprised to get this output.

Can someone at Apple please tell me what is going on here? For the rest of you, if anyone knows the magic incantation to safely switch camera formats, please paste that code here. Thanks. I've included my code below.

    func start(for deviceFormat: String) {
        sessionQueue.async { [unowned self] in
            logger.debug("Start capture session for \(deviceFormat): \(self.captureSession)")
            do {
                guard let format = formatDict[deviceFormat] else { throw Error.captureFormatNotFound }
                captureSession.stopRunning()
                captureSession.beginConfiguration()       // May not be necessary.
                try captureDevice.lockForConfiguration()  // Without this we get an error.
                captureDevice.activeFormat = format
                captureDevice.unlockForConfiguration()    // Matching function: Necessary.
                captureSession.commitConfiguration()      // Matching function: May not be necessary.
                captureSession.startRunning()
            } catch {
                logger.fault("Failed to start camera: \(error.localizedDescription)")
                errorPublisher.send(error)
            }
        }
    }

    func stop() {
        sessionQueue.async { [unowned self] in
            logger.debug("Stop capture session: \(self.captureSession)")
            captureSession.stopRunning()
        }
    }
Replies: 0 · Boosts: 0 · Views: 902 · Activity: May ’24
Core Animation Layer in SwiftUI app does not update unless I rotate the device.
I am trying to create a near real-time drawing of waveform data from within a SwiftUI app. The data is streaming in from the hardware, and I've verified that the draw(in ctx: CGContext) override in my custom CALayer is getting called. I have added this custom CALayer class as a sublayer to a UIView instance that I am making available via the UIViewRepresentable protocol. The only time I see updated output from the CALayer is when I rotate the device and layout happens (I assume). How can I force SwiftUI to update every time I render new data in my CALayer?

More info: I'm porting an app from the Windows desktop. Previously, I tried to make this work by simply generating a new UIImage from a CGContext every time I wanted to update the display. I quickly exhausted memory with that technique because a new context is created every time I call UIGraphicsImageRenderer(size:).image { context in }. What I really wanted was something equivalent to a GDI WritableBitmap, which apparently lets a programmer continuously update and re-use its contents. I could not figure out how to do this in Swift without dropping down to the old CGBitmapContext stuff written in C, and even then I wasn't sure whether that would give me a reusable context that I could output in SwiftUI each time I refreshed it. CALayer seemed like the answer. I welcome any feedback on a better way to do what I'm trying to accomplish.
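For context, this is the shape of what I'm attempting. The types and the sample stream are placeholders for my actual code; the key call is setNeedsDisplay(), which I believe is what schedules another draw(in:) pass:

    import SwiftUI
    import Combine

    // Custom layer that strokes the most recent samples in draw(in:).
    final class WaveformLayer: CALayer {
        var samples: [Float] = []

        override func draw(in ctx: CGContext) {
            // ... stroke a path built from `samples` ...
        }
    }

    struct WaveformView: UIViewRepresentable {
        let samples: AnyPublisher<[Float], Never> // placeholder for the hardware stream

        func makeUIView(context: Context) -> UIView {
            let view = UIView()
            let waveformLayer = WaveformLayer()
            waveformLayer.frame = view.bounds
            view.layer.addSublayer(waveformLayer)

            // On every new buffer, mark the layer dirty; Core Animation then calls
            // draw(in:) on the next display cycle without any SwiftUI involvement.
            context.coordinator.cancellable = samples
                .receive(on: DispatchQueue.main)
                .sink { newSamples in
                    waveformLayer.samples = newSamples
                    waveformLayer.setNeedsDisplay()
                }
            return view
        }

        func updateUIView(_ uiView: UIView, context: Context) {}

        func makeCoordinator() -> Coordinator { Coordinator() }

        final class Coordinator {
            var cancellable: AnyCancellable?
        }
    }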
Replies: 2 · Boosts: 0 · Views: 1.3k · Activity: May ’24
How does one create a provisioning profile for embedded DEXT for iPhoneOS that is signed with a distribution cert?
I've been developing a solution that has an embedded USB driver. I can build and run my solution just fine, but I cannot pass verification when uploading to App Store Connect and TestFlight. The problem is that the provisioning profile I am using (for development) does not have the explicit Vendor ID (idVendor) but is using the development value of asterisk "*". I've created a release version of my entitlements file with the proper Vendor ID, and I have a distribution certificate for iOS. Further, I've created a provisioning profile for app-store distribution (not development) and imported it via Xcode. When I select this provisioning profile, I get the following errors from Xcode:

Xcode 14 and later requires a DriverKit development profile enabled for iOS and macOS. Visit the developer website to create or download a DriverKit profile.

Provisioning profile "MyProvisioningProfile - App Store" doesn't match the entitlements file's value for the com.apple.developer.driverkit.transport.usb entitlement.

If I create and use a DriverKit profile, the Xcode UI errors go away on the "Signing & Capabilities" page. However, these profiles seem to be for development only. I then get an error during compilation telling me that the app and extension have two different signers: one for development (DEXT) and one for distribution (app).

To sum up, using a DriverKit profile fails during the build process, and using a distribution profile is a non-starter for Xcode; I can't even build. What do I need to do to get this to work?
Replies: 2 · Boosts: 0 · Views: 1.2k · Activity: May ’24
AVCaptureVideoPreviewLayer transforms are not working. Need pan/zoom solution.
I am implementing pan and zoom features for an iPadOS app that uses a custom USB camera device. I am using an update function (shown below) to apply transforms for scale and translation, but they are not working. By re-enabling the animation I can see that the scale transform seems to initially take effect, but then the image animates back to its original scale. This all happens in a fraction of a second, but I can see it. The translation transform seems to have no effect at all. Printing out the value of AVCaptureVideoPreviewLayer.transform before and after does show that my values have been applied.

    private func updateTransform() {
    #if false
        // Disable default animation.
        CATransaction.begin()
        CATransaction.setDisableActions(true)
        defer { CATransaction.commit() }
    #endif

        // Apply the transform.
        logger.debug("\(String(describing: self.videoPreviewLayer.transform))")
        let transform = CATransform3DIdentity
        let translate = CATransform3DTranslate(transform, translationX, translationY, 0)
        let scale = CATransform3DScale(transform, scale, scale, 1)
        videoPreviewLayer.transform = CATransform3DConcat(translate, scale)
        logger.debug("\(String(describing: self.videoPreviewLayer.transform))")
    }

My question is this: how can I properly implement pan/zoom for an AVCaptureVideoPreviewLayer? Or even better, if you see a problem with my current approach or understand why the transforms I am applying do not work, please share that information.
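As an alternative I'm considering: host the preview in a UIScrollView and let it drive pan/zoom, rather than transforming the layer directly. A sketch under that assumption; PreviewView and PanZoomController are placeholder names:

    import UIKit
    import AVFoundation

    // A UIView whose backing layer is the preview layer, so it resizes correctly.
    final class PreviewView: UIView {
        override class var layerClass: AnyClass { AVCaptureVideoPreviewLayer.self }
        var videoPreviewLayer: AVCaptureVideoPreviewLayer { layer as! AVCaptureVideoPreviewLayer }
    }

    final class PanZoomController: UIViewController, UIScrollViewDelegate {
        let scrollView = UIScrollView()
        let previewView = PreviewView()

        override func viewDidLoad() {
            super.viewDidLoad()
            scrollView.frame = view.bounds
            scrollView.delegate = self
            scrollView.minimumZoomScale = 1
            scrollView.maximumZoomScale = 4
            previewView.frame = scrollView.bounds
            scrollView.addSubview(previewView)
            view.addSubview(scrollView)
        }

        // UIScrollView pans and zooms this view for us; no layer transforms needed.
        func viewForZooming(in scrollView: UIScrollView) -> UIView? { previewView }
    }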
Replies: 0 · Boosts: 0 · Views: 908 · Activity: May ’24