Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.

Posts under the Media Technologies topic. Each entry lists its reply count, boosts, views, and last activity.

How to access HDRGainMap from AVCapturePhoto
Hey, I'm building a camera app and I want to use the captured HDRGainMap alongside the photo to do some processing with a CIFilter chain. How can this be done? I can't find any documentation anywhere on this, only on how to access the HDRGainMap from an existing HEIC file, which I have done successfully. For that I'm doing something like the following:

let gainmap = CGImageSourceCopyAuxiliaryDataInfoAtIndex(source, 0, kCGImageAuxiliaryDataTypeHDRGainMap)
let gainDict = NSDictionary(dictionary: gainmap)
let gainData = gainDict[kCGImageAuxiliaryDataInfoData] as? Data
let gainDescription = gainDict[kCGImageAuxiliaryDataInfoDataDescription]
let gainMeta = gainDict[kCGImageAuxiliaryDataInfoMetadata]

However, I'm not sure what the approach is with an AVCapturePhoto output from an AVCaptureDevice. Thanks!
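A possible route, offered as a hedged sketch rather than a confirmed answer: AVCapturePhoto can hand back its container data via fileDataRepresentation(), and that data can be read through CGImageSource exactly like an existing HEIC file, so the same auxiliary-data call from the post should apply. The delegate class below is hypothetical and assumes the capture was configured for HEIC output.

import AVFoundation
import ImageIO

// Hypothetical delegate: reads the gain map straight from the captured photo's container data.
final class GainMapCaptureDelegate: NSObject, AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        guard error == nil,
              let data = photo.fileDataRepresentation(),        // full container, including auxiliary images
              let source = CGImageSourceCreateWithData(data as CFData, nil),
              let info = CGImageSourceCopyAuxiliaryDataInfoAtIndex(
                  source, 0, kCGImageAuxiliaryDataTypeHDRGainMap) else { return }

        let gainDict = NSDictionary(dictionary: info)
        let gainData = gainDict[kCGImageAuxiliaryDataInfoData] as? Data
        let gainDescription = gainDict[kCGImageAuxiliaryDataInfoDataDescription]
        let gainMeta = gainDict[kCGImageAuxiliaryDataInfoMetadata]
        // Feed gainData / gainDescription / gainMeta into the same CIFilter chain
        // used for gain maps read from HEIC files on disk.
        _ = (gainData, gainDescription, gainMeta)
    }
}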
Replies: 2 · Boosts: 0 · Views: 698 · Activity: Jan ’25
[iOS 26 bug] AVInputPickerInteraction selection immediately reverts on iOS 26
Hello everyone, I'm implementing the new AVInputPickerInteraction API on iOS 26 to allow users to select their microphone from a custom settings menu before recording. The implementation seems correct, but I'm encountering a strange issue where the input selection immediately reverts to the previous device.

The Situation: The picker is presented correctly via a manual call to .present(). I can see all available inputs (e.g., "iPhone Microphone" and "AirPods"). The current input is "iPhone Microphone". I tap on "AirPods". The UI updates to show "AirPods" as selected for a fraction of a second, then immediately jumps back to "iPhone Microphone". The same thing happens in reverse. It seems like the system is automatically reverting the audio route change requested by the picker.

My Implementation: My setup follows the standard pattern discussed in the WWDC sessions.

Setup Code: This setup is performed once before the user can trigger the picker.

@available(iOS 26.0, *)
var inputPickerInteraction: AVInputPickerInteraction?

// Note: The AVAudioSession is configured to .playAndRecord
// and set to active elsewhere in the code before this setup is called.
if #available(iOS 26.0, *) {
    // Setup the picker
    let picker = AVInputPickerInteraction()
    self.inputPickerInteraction = picker
    self.view.addInteraction(picker) // Added to establish context
}

Presentation Code: When a user selects "Change Input" from my custom settings menu, I call .present() on the main thread.

// In a delegate method from a custom menu
if #available(iOS 26.0, *) {
    DispatchQueue.main.async {
        self.inputPickerInteraction?.present(animated: true)
    }
}

What I've already checked:
- The AVAudioSession is active and its category is .playAndRecord.
- The inputPickerInteraction object is not nil.
- The .present() method is being called on the main thread.
- The picker is added to a view using view.addInteraction() in the setup phase.
- I've reviewed my code to ensure there is no other logic that could be manually resetting the AVAudioSession's preferred input.

Has anyone else experienced this behavior? I suspect this might be a bug in the new API, but I want to make sure I'm not missing a crucial step in managing the AVAudioSession state. Any insights or potential workarounds would be greatly appreciated. Thank you.
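Not a diagnosis, but one experiment that may help isolate the problem: skip the picker's own route change and set the preferred input directly when the user picks a device, then log the route a moment later to see whether something else reverts it. availableInputs and setPreferredInput are standard AVAudioSession API; where and when to call them here is an assumption.

import AVFoundation

// Minimal sketch: explicitly prefer a given input port and check whether it sticks.
// Assumes the session is already active with the .playAndRecord category.
func preferInput(named portName: String) throws {
    let session = AVAudioSession.sharedInstance()
    guard let port = session.availableInputs?.first(where: { $0.portName == portName }) else {
        return // requested input is not currently available
    }
    try session.setPreferredInput(port)
    // Log the route a moment later to see whether the system reverted it.
    DispatchQueue.main.asyncAfter(deadline: .now() + 1) {
        let current = session.currentRoute.inputs.map(\.portName)
        print("Current inputs after 1s:", current)
    }
}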
Replies: 2 · Boosts: 0 · Views: 198 · Activity: Sep ’25
Video freezing with FairPlay streaming on iOS 18
Since iOS/iPadOS/tvOS 18 we have run into a new problem with streaming of FairPlay-encrypted video. On the affected streams the audio plays perfectly, but the video freezes for periods of a few seconds: it will freeze for 5s or so, then be OK for a few seconds, then freeze again. It is entirely reproducible when all of the following are true:

- the video streams were produced by a particular encoder (or particular settings, we're not sure which)
- the video is encrypted
- the device is running some variety of iOS 18 (or iPadOS or tvOS)
- the device is an affected device

Known affected devices: Apple TV 4K 2nd gen, iPad Pro 11" 1st and 2nd gen. Devices known not to show the problem: all other Apple TV models, iPhone 13 Pro and 16 Pro.

If we stream the same content unencrypted, it plays perfectly, as does the encrypted stream on, say, tvOS 17. When the freezing occurs we can see repeating blocks of lines like the following in the console logs:

default 18:08:46.578582+0000 videocodecd AppleAVD: AppleAVDDecodeFrameResponse(): Frame# 5771 DecodeFrame failed with error 0x0000013c
default 18:08:46.578756+0000 videocodecd AppleAVD: AppleAVDDecodeFrameInternal(): failed - error: 316
default 18:08:46.579018+0000 videocodecd AppleAVD: AppleAVDDecodeFrameInternal(): avdDec - Frame# 5771, DecodeFrame failed with error: 0x13c
default 18:08:46.579169+0000 videocodecd AppleAVD: AppleAVDDisplayCallback(): Asking fig to drop frame # 5771 with err -12909 - internalStatus: 315

and also more relevant-looking lines:

default 18:17:39.122019+0000 kernel AppleAVD: avdOutbox0ISR(): FRM DONE (cid: 2.0, fno: 10970, codecT: 1) FAILED!!
default 18:17:39.122155+0000 videocodecd AppleAVD: AppleAVDDisplayCallback(): Asking fig to drop frame # 10970 with err -12909 - internalStatus: 315
default 18:17:39.122221+0000 kernel AppleAVD: ## client[ 2.0] @ frm 10970, errStatus: 0x10
default 18:17:39.122338+0000 kernel AppleAVD: decodeFailIdentify(): VP error bit 4 has EP3B0 error
default 18:17:39.122401+0000 kernel AppleAVD: processHWResponse(): clientID 2.0 frameNumber 10970 error 315, offsetIndex 10, isHwErr 1

So it would seem that one of the following must be happening: either (1) when these particular HLS files are encrypted, the data is being corrupted in some way that played back on iOS 17 and earlier but won't on 18+, or (2) there is a regression in iOS 18 that means this particular format of video data is corrupted on decryption.

If anyone has seen similar behaviour, or has any ideas how to identify which of the two scenarios it is, please say. Unfortunately we don't have control of the servers, so we can't make changes there unless we can identify that they are definitely the cause of the problem. Thanks, Simon.
Replies: 2 · Boosts: 0 · Views: 725 · Activity: Sep ’25
MusicKit in playgrounds not supported?
Has anyone been able to successfully use MusicCatalogSearchRequest in a Playgrounds app? I have configured my playground similarly to a regular app: an app ID with automatic music token generation turned on, and music access authorized within the app itself, but whenever I run a MusicCatalogSearchRequest I get an error thrown with .developerTokenRequestFailed. Considering MusicKit is restricted in the simulator, it would not surprise me if the same were true in Playgrounds, but it would be super helpful if I could prototype with MusicKit in Playgrounds 4!
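For reference, a minimal sketch of the kind of request involved, assuming a regular on-device app target where MusicKit token generation works; whether Playgrounds can mint the developer token at all is exactly the open question in this post.

import MusicKit

// Minimal catalog search; assumes the MusicKit capability is configured and the
// user grants authorization. Throws (e.g. developerTokenRequestFailed) if no token.
func searchCatalog(for term: String) async throws -> [Song] {
    guard await MusicAuthorization.request() == .authorized else { return [] }

    var request = MusicCatalogSearchRequest(term: term, types: [Song.self])
    request.limit = 5
    let response = try await request.response()
    return Array(response.songs)
}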
Replies: 2 · Boosts: 1 · Views: 1.3k · Activity: Apr ’25
The behavior of AVPlayerItem.didPlayToEndTimeNotification is not as expected in iOS 26.
Hello,

Environment: macOS 15.6.1 / Xcode 26 beta 7 / iOS 26 beta 9.

In a simple AVFoundation video-playback sample, I'm seeing different behavior between iOS 18 and iOS 26 regarding AVPlayerItem.didPlayToEndTimeNotification. I've attached a minimal sample below. Please replace videoURL with a valid short video URL.

Repro steps:
- Tap "Play" to start playback and let the video finish. The AVPlayerItem.didPlayToEndTimeNotification registered with NotificationCenter should fire, and you should see "Play finished." in the console.
- Without relaunching, tap "Play" again. This is where the issue arises.

Observed behavior:
- On iOS 18 and earlier: the video does not play again (it does not restart from the beginning), but AVPlayerItem.didPlayToEndTimeNotification is posted and "Play finished." appears in the console. The same happens every time you press "Play".
- On iOS 26: pressing "Play" does not post AVPlayerItem.didPlayToEndTimeNotification. The code path that prints "Play finished." is never called (the callback enclosing that line is not invoked again).

Building the same program with Xcode 16.4 and running it on an iOS 26 beta device shows the same phenomenon, which suggests there has been a behavioral change for AVPlayerItem.didPlayToEndTimeNotification on iOS 26. I couldn't find any mention of this in the release notes or API reference.

Because the semantics around AVPlayerItem.didPlayToEndTimeNotification appear to differ, we're forced to adjust our logic. If there is a way to achieve the iOS 18-style behavior on iOS 26, I would appreciate guidance. Alternatively, if this change is intentional, could you share the reasoning? Is iOS 26 the correct behavior from Apple's perspective, and is the iOS 18 (and earlier) behavior considered incorrect? Any official clarification would be extremely helpful.

import UIKit
import AVFoundation

final class ViewController: UIViewController {

    private let videoURL = URL(string: "https://......mp4")!
    private var player: AVPlayer?
    private var playerItem: AVPlayerItem?
    private var playerLayer: AVPlayerLayer?
    private var observeForComplete: NSObjectProtocol?

    // UI
    private let playerContainerView = UIView()
    private let playButton = UIButton(type: .system)
    private let stopButton = UIButton(type: .system)
    private let replayButton = UIButton(type: .system)

    deinit {
        if let observeForComplete {
            NotificationCenter.default.removeObserver(observeForComplete)
        }
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        view.backgroundColor = .systemBackground
        setupUI()
        setupPlayer()
    }

    override func viewDidLayoutSubviews() {
        super.viewDidLayoutSubviews()
        playerLayer?.frame = playerContainerView.bounds
    }

    // MARK: - Setup

    private func setupUI() {
        playerContainerView.translatesAutoresizingMaskIntoConstraints = false
        playerContainerView.backgroundColor = .black
        view.addSubview(playerContainerView)

        // Buttons
        playButton.setTitle("Play", for: .normal)
        stopButton.setTitle("Pause", for: .normal)
        replayButton.setTitle("RePlay", for: .normal)

        [playButton, stopButton, replayButton].forEach {
            $0.titleLabel?.font = .systemFont(ofSize: 16, weight: .semibold)
            $0.translatesAutoresizingMaskIntoConstraints = false
            $0.contentEdgeInsets = UIEdgeInsets(top: 10, left: 16, bottom: 10, right: 16)
        }

        let stack = UIStackView(arrangedSubviews: [playButton, stopButton, replayButton])
        stack.axis = .horizontal
        stack.spacing = 16
        stack.alignment = .center
        stack.distribution = .equalCentering
        stack.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(stack)

        NSLayoutConstraint.activate([
            playerContainerView.topAnchor.constraint(equalTo: view.safeAreaLayoutGuide.topAnchor, constant: 20),
            playerContainerView.leadingAnchor.constraint(equalTo: view.leadingAnchor),
            playerContainerView.trailingAnchor.constraint(equalTo: view.trailingAnchor),
            playerContainerView.heightAnchor.constraint(equalToConstant: 200),
            stack.topAnchor.constraint(equalTo: playerContainerView.bottomAnchor, constant: 20),
            stack.centerXAnchor.constraint(equalTo: view.centerXAnchor)
        ])

        // Action
        playButton.addTarget(self, action: #selector(didTapPlay), for: .touchUpInside)
        stopButton.addTarget(self, action: #selector(didTapStop), for: .touchUpInside)
        replayButton.addTarget(self, action: #selector(didTapReplayFromStart), for: .touchUpInside)
    }

    private func setupPlayer() {
        // AVURLAsset -> AVPlayerItem -> AVPlayer
        let asset = AVURLAsset(url: videoURL)
        let item = AVPlayerItem(asset: asset)
        self.playerItem = item

        let player = AVPlayer(playerItem: item)
        player.automaticallyWaitsToMinimizeStalling = true
        self.player = player

        let layer = AVPlayerLayer(player: player)
        layer.videoGravity = .resizeAspect
        playerContainerView.layer.addSublayer(layer)
        layer.frame = playerContainerView.bounds
        self.playerLayer = layer

        // Notification
        if let observeForComplete {
            NotificationCenter.default.removeObserver(observeForComplete)
        }
        if let playerItem {
            observeForComplete = NotificationCenter.default.addObserver(
                forName: AVPlayerItem.didPlayToEndTimeNotification,
                object: playerItem,
                queue: .main
            ) { [weak self] _ in
                guard self != nil else { return }
                Task { @MainActor in
                    print("Play finished.")
                }
            }
        }
    }

    // MARK: - Actions

    @objc private func didTapPlay() {
        player?.play()
    }

    @objc private func didTapStop() {
        player?.pause()
    }

    // RePlay
    @objc private func didTapReplayFromStart() {
        player?.seek(to: .zero, toleranceBefore: .zero, toleranceAfter: .zero) { [weak self] _ in
            self?.player?.play()
        }
    }
}

I would greatly appreciate an official response from Apple engineering on whether this is an intentional change, a regression, or an API contract clarification, and what the recommended approach is going forward. Thank you.
Replies: 2 · Boosts: 3 · Views: 692 · Activity: Sep ’25
save audio file in iOS 18 instead of iOS 12
I'm able to get text-to-speech audio into a file using the following code on iOS 12 (iPhone 8) to create a .caf file:

audioFile = try AVAudioFile(
    forWriting: saveToURL,
    settings: pcmBuffer.format.settings,
    commonFormat: .pcmFormatInt16,
    interleaved: false)

where pcmBuffer.format.settings is:

[AVAudioFileTypeKey: kAudioFileMP3Type,
 AVSampleRateKey: 48000,
 AVEncoderBitRateKey: 128000,
 AVNumberOfChannelsKey: 2,
 AVFormatIDKey: kAudioFormatLinearPCM]

However, this code does not work when I run the app on iOS 18 on an iPhone 13 Pro Max. The audio file is created, but it doesn't sound right: it has a lot of static and the speech seems very low-pitched. Can anyone give me a hint or an answer?
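As a point of comparison only, not a confirmed diagnosis: a sketch of a settings dictionary that stays internally consistent, where the format ID, bit depth, and common format all describe the same linear PCM data and the sample rate and channel count come from the buffer being written. The pcmBuffer and saveToURL names are carried over from the post; everything else is an assumption.

import AVFoundation

// Hypothetical rewrite: write linear PCM into a .caf container, with the file
// settings derived from the buffer that will actually be written.
func writeBuffer(_ pcmBuffer: AVAudioPCMBuffer, to saveToURL: URL) throws {
    let format = pcmBuffer.format
    let settings: [String: Any] = [
        AVFormatIDKey: kAudioFormatLinearPCM,
        AVSampleRateKey: format.sampleRate,
        AVNumberOfChannelsKey: format.channelCount,
        AVLinearPCMBitDepthKey: 16,
        AVLinearPCMIsFloatKey: false,
        AVLinearPCMIsNonInterleaved: true
    ]
    let audioFile = try AVAudioFile(
        forWriting: saveToURL,            // e.g. a URL ending in .caf
        settings: settings,
        commonFormat: .pcmFormatInt16,    // matches the bit depth above
        interleaved: false)               // matches AVLinearPCMIsNonInterleaved
    try audioFile.write(from: pcmBuffer)
}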
Replies: 2 · Boosts: 0 · Views: 108 · Activity: Mar ’25
AVCaptureMetadataOutput .face detection not working on iOS 26 Beta with high sessionPreset
In iOS 26 (Developer Beta), the AVCaptureMetadataOutputObjectsDelegate no longer receives callbacks when metadataOutput.metadataObjectTypes = [.face] is set. On earlier iOS versions the issue does not occur. Interestingly, face detection works if I set the sessionPreset to .medium, but not with .high — except on the iPhone 16 Pro Max, where it works regardless.
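For anyone trying to reproduce this, a hedged sketch of the configuration being described, with the session preset as the variable under test. The function and queue names are placeholders, not code from the post.

import AVFoundation

// Bare-bones face metadata setup; toggle sessionPreset between .high and .medium
// to compare behavior across devices and OS versions.
func makeFaceDetectionSession(delegate: AVCaptureMetadataOutputObjectsDelegate) throws -> AVCaptureSession {
    let session = AVCaptureSession()
    session.sessionPreset = .high   // reportedly works with .medium on the iOS 26 beta

    guard let camera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .front) else {
        throw NSError(domain: "FaceDemo", code: -1)
    }
    let input = try AVCaptureDeviceInput(device: camera)
    if session.canAddInput(input) { session.addInput(input) }

    let output = AVCaptureMetadataOutput()
    if session.canAddOutput(output) { session.addOutput(output) }

    // metadataObjectTypes must be set after the output is added to the session,
    // otherwise availableMetadataObjectTypes is empty.
    if output.availableMetadataObjectTypes.contains(.face) {
        output.metadataObjectTypes = [.face]
    }
    output.setMetadataObjectsDelegate(delegate, queue: DispatchQueue(label: "face.metadata"))
    return session
}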
Replies: 2 · Boosts: 1 · Views: 358 · Activity: Sep ’25
Threading guarantees with AVCaptureVideoDataOutput
I'm writing some camera functionality that uses AVCaptureVideoDataOutput. I've set it up so that it calls my AVCaptureVideoDataOutputSampleBufferDelegate on a background thread, by making my own dispatch_queue and configuring the AVCaptureVideoDataOutput.

My question is: if I configure my AVCaptureSession differently, or even stop it altogether, is this guaranteed to flush all pending jobs on my background thread? For example, does [AVCaptureSession stopRunning] imply a blocking call until all pending frame callbacks are done?

I have a more practical example below, showing how I am accessing something owned by the foreground thread from the background thread, and I wonder when/how it's safe to clean up that resource. I have a setup similar to the following:

// Foreground thread logic
dispatch_queue_t queue = dispatch_queue_create("qt_avf_camera_queue", nullptr);

AVCaptureSession *captureSession = [[AVCaptureSession alloc] init];
setupInputDevice(captureSession); // Connects the AVCaptureDevice...

// Store some arbitrary data to be attached to the frame, stored on the foreground thread
FrameMetaData frameMetaData = ...;

MySampleBufferDelegate *sampleBufferDelegate = [MySampleBufferDelegate alloc];

// Capture frameMetaData by reference in lambda
[sampleBufferDelegate setFrameMetaDataGetter: [&frameMetaData]() { return &frameMetaData; }];

AVCaptureVideoDataOutput *captureVideoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
[captureVideoDataOutput setSampleBufferDelegate:sampleBufferDelegate queue:queue];
[captureSession addOutput:captureVideoDataOutput];

[captureSession startRunning];
[captureSession stopRunning];
// Is it now safe to destroy frameMetaData, or do we need a manual barrier?

And then in MySampleBufferDelegate:

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    // Invokes the callback set above
    FrameMetaData *frameMetaData = frameMetaDataGetter();
    emitSampleBuffer(sampleBuffer, frameMetaData);
}
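One way to avoid relying on any implicit flush guarantee, sketched in Swift (the idea maps directly onto the Objective-C above): after stopRunning, dispatch an empty sync block onto the same queue the delegate was configured with. On a serial queue, the sync block cannot run until previously enqueued frame callbacks have finished, so once it returns the shared resource is no longer being read. This is an explicit barrier pattern, not documented behavior of stopRunning.

import AVFoundation

// Hypothetical cleanup helper: the sample-buffer delegate queue acts as the barrier.
final class CameraController {
    private let sampleBufferQueue = DispatchQueue(label: "qt_avf_camera_queue")
    private let session = AVCaptureSession()
    private var frameMetaData: [String: Any]? = ["example": true] // stand-in for FrameMetaData

    func shutDown() {
        session.stopRunning()
        // Drain the delegate queue: any captureOutput(_:didOutput:from:) calls
        // enqueued before this point finish before the empty block runs.
        sampleBufferQueue.sync { }
        // Now it is safe to tear down state the delegate was reading.
        frameMetaData = nil
    }
}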
Replies: 2 · Boosts: 0 · Views: 425 · Activity: Sep ’25
AVAudioSession.outputVolume not reporting correctly in iOS 18+ devices
I’m using the shared instance of AVAudioSession. After activating it with .setActive(true), I observe the outputVolume, and it correctly reports the device’s volume. However, after deactivating the session using .setActive(false), changing the volume, and then reactivating it again, the outputVolume returns the previous volume (before deactivation), not the current device volume. The correct volume is only reported after the user manually changes it again using physical buttons or Control Center, which triggers the observer.

What I need is a way to retrieve the actual current device volume immediately after reactivating the audio session, even on the second and subsequent activations. Disabling and re-enabling the audio session is essential to how my application functions.

I’ve tested this behavior with my colleagues, and the issue is consistently reproducible on iOS 18.0.1, iOS 18.1, iOS 18.3, iOS 18.5 and iOS 18.6.2. On devices running iOS 17.6.1 and iOS 16.0.3, outputVolume correctly reflects the current volume immediately after calling .setActive(true) multiple times.
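For anyone reproducing this, a minimal sketch of the reactivate-then-read pattern described above, plus a key-value observation; the reported symptom is that the direct read after setActive(true) returns a stale value, while the observation only fires on a manual volume change. The class and timing here are assumptions, not code from the post.

import AVFoundation

final class VolumeWatcher: NSObject {
    private let session = AVAudioSession.sharedInstance()
    private var observation: NSKeyValueObservation?

    func reactivateAndRead() throws {
        try session.setActive(false)
        // ...user changes the hardware volume while the session is inactive...
        try session.setActive(true)
        print("outputVolume right after reactivation:", session.outputVolume) // reportedly stale on iOS 18

        // outputVolume is documented as KVO-observable; this fires on manual changes.
        observation = session.observe(\.outputVolume, options: [.new]) { _, change in
            print("outputVolume changed:", change.newValue ?? -1)
        }
    }
}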
Replies: 2 · Boosts: 0 · Views: 185 · Activity: Sep ’25
AudioQueue Output fails playing audio almost immediately?
On macOS Sequoia, I'm having the hardest time getting this basic audio output to work correctly. I'm compiling in Xcode using C99, and when I run this, I get audio for a split second, and then nothing, indefinitely. Any ideas what could be going wrong? Here's a minimum code example to demonstrate:

#include <AudioToolbox/AudioToolbox.h>
#include <stdint.h>

#define RENDER_BUFFER_COUNT 2
#define RENDER_FRAMES_PER_BUFFER 128
// mono linear PCM audio data at 48kHz
#define RENDER_SAMPLE_RATE 48000
#define RENDER_CHANNEL_COUNT 1
#define RENDER_BUFFER_BYTE_COUNT (RENDER_FRAMES_PER_BUFFER * RENDER_CHANNEL_COUNT * sizeof(f32))

void RenderAudioSaw(float* outBuffer, uint32_t frameCount, uint32_t channelCount)
{
    static bool isInverted = false;
    float scalar = isInverted ? -1.f : 1.f;
    for (uint32_t frame = 0; frame < frameCount; ++frame)
    {
        for (uint32_t channel = 0; channel < channelCount; ++channel)
        {
            // series of ramps, alternating up and down.
            outBuffer[frame * channelCount + channel] = 0.1f * scalar * ((float)frame / frameCount);
        }
    }
    isInverted = !isInverted;
}

AudioStreamBasicDescription coreAudioDesc = { 0 };
AudioQueueRef coreAudioQueue = NULL;
AudioQueueBufferRef coreAudioBuffers[RENDER_BUFFER_COUNT] = { NULL };

void coreAudioCallback(void* unused, AudioQueueRef queue, AudioQueueBufferRef buffer)
{
    // 0's here indicate no fancy packet magic
    AudioQueueEnqueueBuffer(queue, buffer, 0, 0);
}

int main(void)
{
    const UInt32 BytesPerSample = sizeof(float);

    coreAudioDesc.mSampleRate = RENDER_SAMPLE_RATE;
    coreAudioDesc.mFormatID = kAudioFormatLinearPCM;
    coreAudioDesc.mFormatFlags = kLinearPCMFormatFlagIsFloat | kLinearPCMFormatFlagIsPacked;
    coreAudioDesc.mBytesPerPacket = RENDER_CHANNEL_COUNT * BytesPerSample;
    coreAudioDesc.mFramesPerPacket = 1;
    coreAudioDesc.mBytesPerFrame = RENDER_CHANNEL_COUNT * BytesPerSample;
    coreAudioDesc.mChannelsPerFrame = RENDER_CHANNEL_COUNT;
    coreAudioDesc.mBitsPerChannel = BytesPerSample * 8;

    coreAudioQueue = NULL;
    OSStatus result;

    // most of the 0 and NULL params here are for compressed sound formats etc.
    result = AudioQueueNewOutput(&coreAudioDesc, &coreAudioCallback, NULL, 0, 0, 0, &coreAudioQueue);
    if (result != noErr)
    {
        assert(false == "AudioQueueNewOutput failed!");
        abort();
    }

    for (int i = 0; i < RENDER_BUFFER_COUNT; ++i)
    {
        uint32_t bufferSize = coreAudioDesc.mBytesPerFrame * RENDER_FRAMES_PER_BUFFER;
        result = AudioQueueAllocateBuffer(coreAudioQueue, bufferSize, &(coreAudioBuffers[i]));
        if (result != noErr)
        {
            assert(false == "AudioQueueAllocateBuffer failed!");
            abort();
        }
    }

    for (int i = 0; i < RENDER_BUFFER_COUNT; ++i)
    {
        RenderAudioSaw(coreAudioBuffers[i]->mAudioData, RENDER_FRAMES_PER_BUFFER, RENDER_CHANNEL_COUNT);
        coreAudioBuffers[i]->mAudioDataByteSize = coreAudioBuffers[i]->mAudioDataBytesCapacity;
        AudioQueueEnqueueBuffer(coreAudioQueue, coreAudioBuffers[i], 0, 0);
    }

    AudioQueueStart(coreAudioQueue, NULL);

    sleep(10); // some time to hear the audio

    AudioQueueStop(coreAudioQueue, true);
    AudioQueueDispose(coreAudioQueue, true);
    return 0;
}
Replies: 2 · Boosts: 0 · Views: 432 · Activity: Sep ’25
iPad app on macOS not asking for microphone permission
Hello, I have an iOS app that records audio and works fine on iPads/iPhones: it asks for microphone permission, and after that recording works. I installed the same app on my M3 MacBook via TestFlight, since iPad apps are supposed to work there without changes. The app starts fine, but it never asks for microphone permission, so I can't record. Do I need to do something to make this happen? (This is not Mac Catalyst; it's the arm64 iPhone binary running on macOS.) Thanks.
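Not a confirmed fix, but one thing worth ruling out is whether the permission prompt is ever triggered explicitly rather than implicitly on first capture. A hedged sketch of an explicit status check and request follows, assuming NSMicrophoneUsageDescription is present in the Info.plist.

import AVFoundation

// Explicitly check and request microphone permission before recording.
func ensureMicrophoneAccess(_ completion: @escaping (Bool) -> Void) {
    switch AVCaptureDevice.authorizationStatus(for: .audio) {
    case .authorized:
        completion(true)
    case .notDetermined:
        AVCaptureDevice.requestAccess(for: .audio) { granted in
            DispatchQueue.main.async { completion(granted) }
        }
    default: // .denied, .restricted
        completion(false)
    }
}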
Replies: 2 · Boosts: 1 · Views: 788 · Activity: Mar ’25
dlsym cannot find symbol g_dwILResult when debugging an audio plugin
I am trying to debug the AAX version of my plugin (a MIDI effect) in Pro Tools, but I am getting the following error in the Mac console when attempting to load it: "dlsym cannot find symbol g_dwILResult in CFBundle", etc. I used Xcode 16.4 to build the plugin. Has anybody come across the same or a similar message? Best, Achillefs (Axart Labs)
Replies: 2 · Boosts: 0 · Views: 503 · Activity: Sep ’25
Questions about H.264 encoding settings with low-latency rate control
Hi everyone, I noticed that Apple recently added a few new beta sample-code projects related to video encoding: "Encoding video for low-latency conferencing" and "Encoding video for live streaming". While experimenting with H.264 encoding, I came across some questions regarding certain configurations:

- When I enable kVTVideoEncoderSpecification_EnableLowLatencyRateControl, is it still possible to use kVTCompressionPropertyKey_VariableBitRate? In my tests, I get an error.
- It also seems that kVTVideoEncoderSpecification_EnableLowLatencyRateControl cannot be used together with kVTCompressionPropertyKey_ConstantBitRate when encoding H.264. Is that expected?
- When using kVTCompressionPropertyKey_ConstantBitRate with kVTCompressionPropertyKey_MaxKeyFrameInterval set to 2, the encoder outputs only keyframes, and the frame size keeps increasing, which doesn't seem like the intended behavior.
- The DataRateLimits setting in the following code from the sample doesn't seem to have any effect in my tests; whether I set it or not, the values remain unchanged.

let byteLimit = (Double(bitrate) / 8) * 1.5 as CFNumber
let secLimit = Double(1.0) as CFNumber
let limitsArray = [ byteLimit, secLimit ] as CFArray // Each 1 second limit byte as bitrate
err = VTSessionSetProperty(session, key: kVTCompressionPropertyKey_DataRateLimits, value: limitsArray)

Since the documentation on developer.apple.com/documentation doesn't clearly explain these cases, I'd like to ask if anyone has insights or recommendations about the proper usage of these settings. Thanks in advance!
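For comparison, a hedged sketch of the combination that generally accompanies the low-latency path: enable the low-latency encoder specification at session creation, then drive the rate with kVTCompressionPropertyKey_AverageBitRate and the real-time hint, rather than the constant or variable bitrate keys asked about above. This is a sketch of that configuration, not a statement that the other combinations are unsupported.

import VideoToolbox

// Minimal low-latency H.264 session using average-bitrate rate control.
func makeLowLatencySession(width: Int32, height: Int32, bitrate: Int) -> VTCompressionSession? {
    let encoderSpec: [CFString: Any] = [
        kVTVideoEncoderSpecification_EnableLowLatencyRateControl: true
    ]
    var session: VTCompressionSession?
    let status = VTCompressionSessionCreate(
        allocator: nil,
        width: width,
        height: height,
        codecType: kCMVideoCodecType_H264,
        encoderSpecification: encoderSpec as CFDictionary,
        imageBufferAttributes: nil,
        compressedDataAllocator: nil,
        outputCallback: nil,            // deliver frames via the encode call's output handler instead
        refcon: nil,
        compressionSessionOut: &session)
    guard status == noErr, let session else { return nil }

    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_RealTime, value: kCFBooleanTrue)
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_AverageBitRate, value: bitrate as CFNumber)
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_MaxKeyFrameInterval, value: 60 as CFNumber)
    return session
}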
Replies: 2 · Boosts: 0 · Views: 517 · Activity: Sep ’25
"Baking together" two audio tracks into one for drag-and-drop
Hi all, with my app ScreenFloat, you can record your screen along with system and microphone audio. Those two audio feeds are recorded into separate audio tracks so they can be individually removed or edited later on. Now, these recordings you create with ScreenFloat can be drag-and-dropped to other apps instantly.

So far, so good, but some apps, like Slack or VLC, or even websites like YouTube, do not play back multiple audio tracks, just one. So what I'm trying to do is, on dragging the video recording file out of ScreenFloat, instantly bake together the two individual audio tracks into one and offer that new file as the drag-and-drop file, so that all audio is played in the target app.

But it's slow. I mean, it's actually quite fast, but for drag and drop, it's slow. My approach is this (see the sketch below):

1. "Bake together" the two audio tracks into a one-track m4a audio file using AVMutableAudioMix and AVAssetExportSession.
2. Take the video track, add the new audio file as an audio track to it, and render that out using AVAssetExportSession.

For a quick benchmark with a 3'40'' movie, step 1 takes ~1.7 seconds and step 2 adds another ~1.5 seconds, so we're at ~3.2 seconds. That's an eternity for a drag and drop, where the user might cancel if there's no immediate feedback.

I could also do it in one step, but then I couldn't use the AV*Passthrough preset, and that makes it take around 32 seconds, because I assume it touches the video data (which is unnecessary in this case, so I think the two-step approach here is the fastest).

So, my question is: is there a faster way?

The best idea I can come up with right now is, when initially recording the screen with system and microphone audio as separate tracks, to also record both of them into a third, muted, "hidden" track I could use later on, basically eliminating the need for step 1 and just ripping the two single audio tracks out of the movie, keeping only the video and the "hidden" track (then unmuted). But I'd still have a ~1.5 second delay there, and there's the processing and data overhead (basically doubling the movie's audio data).

All this would be fine for an export operation (where one expects it to take a little time), but for a drag-and-drop operation, it's not ideal. I've discarded the idea of doing a promise file drag, because many apps do not accept those, and I want to keep wide compatibility with all sorts of apps.

I'd appreciate any ideas or pointers. Thank you kindly, Matthias
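For context, a rough sketch of what step 1 above might look like, assuming the recording is a movie file whose audio tracks should all be mixed down into one m4a; the names and the async export call are assumptions, and the open question in the post is about making this faster, not how to write it.

import AVFoundation

// Step 1 sketch: mix the audio tracks of `movieURL` down to a single-track m4a.
func mixDownAudio(of movieURL: URL, to outputURL: URL) async throws {
    let asset = AVURLAsset(url: movieURL)
    guard let export = AVAssetExportSession(asset: asset, presetName: AVAssetExportPresetAppleM4A) else {
        throw CocoaError(.fileWriteUnknown)
    }
    export.outputURL = outputURL
    export.outputFileType = .m4a
    // An AVAudioMix could be attached here (export.audioMix = ...) to adjust
    // per-track volumes before the mixdown; omitted in this sketch.
    await export.export()
    if let error = export.error { throw error }
}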
Replies: 2 · Boosts: 0 · Views: 659 · Activity: Mar ’25
Convert CoreAudio AudioObjectID to IOUSB LocationID
Is there a recommended way on macOS 26 Tahoe to take a CoreAudio AudioObjectID and use it to look up the underlying USB LocationID?

I previously used the AudioObjectID to query the corresponding DeviceUID with kAudioDevicePropertyDeviceUID. Then I queried for the IOService matching kIOAudioEngineClassName with property kIOAudioEngineGlobalUniqueIDKey matching the DeviceUID, and I loaded kUSBDevicePropertyLocationID from the result.

This fails on macOS 26, because the IO Registry entry for the device is usbaudiod rather than AppleUSBAudioEngine, and usbaudiod does not include a kIOAudioEngineGlobalUniqueIDKey property (or any other property to map it to a CoreAudio DeviceUID).

My use case is a piece of audio recording software that allows configuring a set of supported audio devices via USB HID prior to recording. I present the user with a list of CoreAudio devices to use, but without a way to look up the underlying USB LocationID, I cannot guarantee that the configured device matches the selected device (e.g. if the user plugged in two identical microphones).
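For reference, a hedged sketch of the first step described above (AudioObjectID to DeviceUID) using the Core Audio property API; this is the half that still works, while the IORegistry half is what broke on macOS 26. The function is an illustration, not code from the post.

import CoreAudio
import Foundation

// Query kAudioDevicePropertyDeviceUID for a given AudioObjectID.
func deviceUID(for deviceID: AudioObjectID) -> String? {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyDeviceUID,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)
    var uid: CFString? = nil
    var size = UInt32(MemoryLayout<CFString?>.size)
    let status = withUnsafeMutablePointer(to: &uid) { ptr -> OSStatus in
        AudioObjectGetPropertyData(deviceID, &address, 0, nil, &size, ptr)
    }
    guard status == noErr else { return nil }
    return uid as String?
}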
Replies: 2 · Boosts: 0 · Views: 519 · Activity: Sep ’25
CheckError.swift:CheckError(_:):211:kAudioUnitErr_InvalidParameter (CheckError.swift:CheckError(_:):211)
I'm getting this error when I launch my application on an iPhone 14 Pro via Xcode. Everything builds OK. I'm using the AudioKit plugin and SoundpipeAudioKit. The error starts as soon as I start the app and keeps repeating. I have background processing turned on, as I'd like the sounds to keep playing through headphones when the phone is locked. I can't find anything online about this error, and none of my catches are printing anything in the logs either, so I don't know whether this is just something that pops up repeatedly or whether there is something fundamentally wrong.

private func setupAudioSession() {
    do {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.playback, mode: .default, options: [.mixWithOthers])
        try session.setActive(true, options: .notifyOthersOnDeactivation)
    } catch {
        errorMessage = "Failed to set up audio session: \(error.localizedDescription)"
        print(errorMessage ?? "")
    }
}

// MARK: - Background Task Handling
private func setupBackgroundTaskHandling() {
    // Handle app entering background
    notificationObservers.append(
        NotificationCenter.default.addObserver(
            forName: UIApplication.didEnterBackgroundNotification,
            object: nil,
            queue: .main,
            using: { [weak self] _ in
                // Safely unwrap self
                guard let self = self else { return }
                self.handleBackgroundTransition()
            }
        )
    )
}

I'm not sure if this is the code causing the issue. Any help would be gratefully appreciated. This is the first app I'm working on.
Replies: 2 · Boosts: 2 · Views: 154 · Activity: Apr ’25
Looking for solutions to build a video chat app (Omegle/Chatroulette style) on Vision Pro
Hi everyone, we are working on a prototype app for Apple Vision Pro that is similar in functionality to Omegle or Chatroulette, but exclusively for Vision Pro owners. The core idea is:

– a matching system where one user connects to another through a virtual persona;
– real-time video and audio transmission;
– time limits for sessions, with the ability to extend them;
– users can skip a match and move on to the next one.

We have explored WebRTC and Twilio, but unfortunately they don't fit our use case.

Question: What alternative services or SDKs are available for implementing real-time video/audio communication on Vision Pro that would work with this scenario? Has anyone encountered a similar challenge and can recommend which technologies or tools to use? Thanks in advance!
Replies: 2 · Boosts: 0 · Views: 262 · Activity: Aug ’25
MusicKit Web Playback States
In MusicKit Web the playback states are provided as numbers. For example, the playbackStateDidChange event listener will return:

{oldState: 2, state: 3, item: ...}

when the state changes from playing (2) to paused (3). Those are pretty easy to guess, but I'm having a hard time with some of the others: completed, ended, loading, none, paused, playing, seeking, stalled, stopped, waiting. I cannot find a mapping of states to numbers documented anywhere. I got the above states from an enum in a d.ts file that is often incorrect/incomplete. Can someone help out by pointing to the docs or providing a mapping? Thanks.
Replies: 2 · Boosts: 0 · Views: 400 · Activity: Feb ’25