Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.

Audio Documentation

Posts under Audio subtopic

MacOS system audio capture low volume with multichannel soundcards
I am building an app that uses system audio capture. This works well for 2-channel sound cards, but as soon as the interface has more than 2 outputs, the capture volume is very low. Does anyone have tips on where to look? Capturing either before or after the mix doesn't solve it.
Replies: 0 · Boosts: 0 · Views: 23 · Activity: 2h

AVAudioEngine startAndReturnError is now failing
I have a keyboard in my iOS Morse Code app that has always been able to play audio via AVAudioEngine. Recently it has been failing to produce audio. I see that startAndReturnError: is now failing with this error:

    Error Domain=com.apple.coreaudio.avfaudio Code=268435459 "(null)"
    UserInfo={failed call=err = PerformCommand(*outputNode, kAUInitialize, NULL, 0)}

What's going on? Have keyboards lost the ability to play audio? Here's how I set things up:

    _engine = [AVAudioEngine new];
    _prefs = [[NSUserDefaults alloc] initWithSuiteName:kSharedAppGroupID];
    AVAudioMixerNode* mainMixerNode = _engine.mainMixerNode;
    AVAudioOutputNode* outputNode = _engine.outputNode;
    AVAudioFormat* format = [outputNode inputFormatForBus:0];
    AVAudioFormat* inputFormat = [[AVAudioFormat alloc] initWithCommonFormat:AVAudioPCMFormatFloat32
                                                                  sampleRate:44100
                                                                    channels:1
                                                                 interleaved:NO];
    self.srcNode = [[AVAudioSourceNode alloc] initWithRenderBlock:^OSStatus(BOOL* _Nonnull isSilence,
                                                                            const AudioTimeStamp* _Nonnull timestamp,
                                                                            AVAudioFrameCount frameCount,
                                                                            AudioBufferList* _Nonnull outputData) {
        // This block builds the data, but is never called, so it is not the culprit.
        return noErr;
    }];
    [_engine attachNode:self.srcNode];
    [_engine connect:self.srcNode to:mainMixerNode format:inputFormat];
    [_engine connect:mainMixerNode to:_engine.outputNode format:nil];
    [_engine prepare];
Replies: 0 · Boosts: 0 · Views: 73 · Activity: 5d

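One diagnostic worth trying before blaming the engine graph is to configure and activate the shared audio session explicitly before starting the engine, so a session-level refusal surfaces as its own error instead of the opaque kAUInitialize failure. This is an editor's illustrative sketch, not from the original post; the category and options are assumptions for a keyboard extension that should not steal audio focus.

    import AVFoundation

    // Sketch: activate the session first so any session-level failure is
    // reported directly, rather than as AVAudioEngine's kAUInitialize error.
    func startEngine(_ engine: AVAudioEngine) {
        let session = AVAudioSession.sharedInstance()
        do {
            // .mixWithOthers is an assumption: a keyboard should not take audio focus.
            try session.setCategory(.playback, options: [.mixWithOthers])
            try session.setActive(true)
        } catch {
            print("Audio session activation failed: \(error)")
            return
        }
        do {
            try engine.start()
        } catch {
            print("Engine start failed: \(error)")
        }
    }
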
macOS 26 – NSSound/CoreAudio causes SIGILL crash in caulk allocator
Hi everyone, We are the engineering team behind an enterprise communications application for macOS. We are experiencing a critical crash on macOS 26 that did not occur on any previous macOS version. We are seeking clarification from Apple engineers or anyone who may have insight into this behaviour.

Environment:
    Architecture:   x86_64
    macOS:          26.4.1 (25E253)
    Hardware:       Mac15,13 (MacBook Pro)
    Exception:      SIGILL / ILL_ILLOPC
    Crashed Thread: Thread 0 (Main Thread)
    Trigger:        Playing a notification sound via NSSound during an incoming call

Crash Stack:
    0  caulk             consolidating_free_map::maybe_create_free_node + 119  ← SIGILL
    1  caulk             tiered_allocator + 1469
    2  caulk             exported_resource::do_allocate + 15
    3  AudioToolboxCore  EABLImpl::create + 204
    4  CoreAudio         AUNotQuiteSoSimpleTimeFactory + 33267
    8  AudioToolboxCore  AudioUnitInitialize + 189
    9  AudioToolbox      XAudioUnit::Initialize + 19
    10 AudioToolbox      MESubmixGraph::initialize + 125
    11 AudioToolbox      MESubmixGraph::connectInputChannel + 1172
    12 AudioToolbox      MEDeviceStreamClient::AddRunningClient + 509
    15 AudioToolbox      AudioQueueObject::StartRunning + 194
    16 AudioToolbox      AudioQueueObject::Start + 1447
    22 AudioToolbox      AQ::API::V2Impl::AudioQueueStartWithFlags + 805
    23 AVFAudio          AVAudioPlayerCpp::playQueue + 354
    24 AVFAudio          AVAudioPlayerCpp::DoAction + 134
    25 AVFAudio          -[AVAudioPlayer play] + 26
    26 AppKit            -[NSSound play] + 100
    27 Our App           -[AudioHelper tryToStartSound:ofType:] + 569
    28 Our App           block_invoke + 59

Behaviour Difference Between macOS Versions: The exact same code path that triggers this crash on macOS 26 works without any issue on macOS 14 and macOS 15 — no crash, no warning, no log output of any kind. The crash occurs inside Apple's private caulk memory allocator during CoreAudio audio engine initialisation, triggered by a call to [NSSound play]. The SIGILL / ILL_ILLOPC at maybe_create_free_node + 119 suggests a hard ud2 trap — an intentional abort guard inserted at compile time. This strongly suggests that something changed in macOS 26 within NSSound / CoreAudio / caulk that causes this code path to fail in a way it previously did not.

Questions: We have the following specific questions:
- Was there a deliberate threading policy change in NSSound / CoreAudio in macOS 26?
- Is the SIGILL in caulk::consolidating_free_map::maybe_create_free_node an intentional thread-affinity assertion introduced in macOS 26?
- Are there any other NSSound / AVAudioPlayer / AudioQueue APIs that have similarly tightened their requirements in macOS 26 that we should be aware of?
- Is there a migration guide, release note, or WWDC session that covers CoreAudio changes in macOS 26 that we may have missed?
- Has anyone else in the developer community encountered a similar SIGILL crash in caulk on macOS 26 during audio playback?
Replies: 3 · Boosts: 0 · Views: 581 · Activity: 5d

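Not an answer to the questions above, but a hedged workaround sketch while awaiting clarification: bypass NSSound and own the player lifetime directly with AVAudioPlayer, preparing it once at startup. Whether this actually avoids the caulk allocation path that traps is unverified.

    import AVFoundation

    // Sketch: create and prepare the player once, then reuse it for each
    // notification sound instead of constructing playback state per call.
    final class NotificationSound {
        private var player: AVAudioPlayer?

        init?(url: URL) {
            guard let p = try? AVAudioPlayer(contentsOf: url) else { return nil }
            p.prepareToPlay()   // pre-warms the audio queue up front
            player = p
        }

        func play() {
            player?.currentTime = 0
            player?.play()
        }
    }
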
Entitlement "com.apple.developer.carplay-driving-task" not allowing audio playback for voice controlled interaction
According to https://developer.apple.com/download/files/CarPlay-Developer-Guide.pdf , apps with the entitlement com.apple.developer.carplay-driving-task are allowed to use voice control. In my current implementation the voice recording works fine, but the voice response (an AVPlayer with the category set to "playback") does not output any audio. I suspect an entitlement limitation, because if I quickly tap to play music while the voice assistant's AVPlayer is "playing", then I can hear the response; without this trick it keeps playing but stays mute. In parallel I have now requested the com.apple.developer.carplay-voice-based-conversation entitlement, but I don't know whether, once approved, I will be able to use two entitlements in the same CarPlay app. Long story short: 1 - Should an app be able to play audio responses when its CarPlay entitlement is com.apple.developer.carplay-driving-task? 2 - If not, can I combine the entitlements com.apple.developer.carplay-driving-task and com.apple.developer.carplay-voice-based-conversation?
Replies: 0 · Boosts: 0 · Views: 245 · Activity: 1w

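For reference, a typical session setup for short spoken responses looks like the sketch below. Whether the driving-task entitlement gates this route is exactly the open question, so treat the mode and options as assumptions rather than a known fix.

    import AVFoundation

    // Sketch: configure the session for a spoken response before starting AVPlayer.
    // .voicePrompt and .duckOthers are assumptions for a short TTS reply that
    // should lower, not silence, other audio in the car.
    func configureForVoiceResponse() throws {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.playback, mode: .voicePrompt, options: [.duckOthers])
        try session.setActive(true)
    }
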
MusicKit developer token returns 401 on all catalog endpoints
My MusicKit developer token returns 401 (empty body) on every Apple Music API catalog endpoint. I've tried two different keys — both fail identically.

Setup:
- Team ID: K79RSBVM9G
- Key ID: URNQV5UDGB (MusicKit enabled, associated with Media ID media.audio.explore.musickit)
- Apple Developer Program License Agreement accepted April 14, 2026
- Token format (matches docs exactly):
    Header: {"alg":"ES256","kid":"URNQV5UDGB"}
    Payload: {"iss":"K79RSBVM9G","iat":<now>,"exp":<now+15777000>}

What works: /v1/storefronts/us returns 200

What fails: every catalog endpoint returns 401 with an empty body:
- /v1/catalog/us/search?types=artists&term=test
- /v1/catalog/us/artists/5920832
- /v1/catalog/us/genres
- /v1/test

The token self-verifies (the signature is valid). I've tried with and without typ:"JWT", with the origin claim, and with a manually signed JWT bypassing the jsonwebtoken library. Same 401 every time. What am I missing?
Replies: 0 · Boosts: 1 · Views: 168 · Activity: 1w

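One way to rule out library-side encoding issues is to mint the token by hand. A minimal Swift sketch using CryptoKit follows; the team ID, key ID, and PEM key are placeholders, and note that an ES256 JWT needs the raw 64-byte r||s signature, which rawRepresentation provides.

    import Foundation
    import CryptoKit

    // Base64url encoding as required by JWT (no padding, URL-safe alphabet).
    func base64url(_ data: Data) -> String {
        data.base64EncodedString()
            .replacingOccurrences(of: "+", with: "-")
            .replacingOccurrences(of: "/", with: "_")
            .replacingOccurrences(of: "=", with: "")
    }

    // Sketch: mint a MusicKit developer token. teamID/keyID/pem are placeholders.
    func makeDeveloperToken(teamID: String, keyID: String, pem: String) throws -> String {
        let now = Int(Date().timeIntervalSince1970)
        let header = #"{"alg":"ES256","kid":"\#(keyID)"}"#
        let payload = #"{"iss":"\#(teamID)","iat":\#(now),"exp":\#(now + 15_777_000)}"#
        let signingInput = base64url(Data(header.utf8)) + "." + base64url(Data(payload.utf8))
        let key = try P256.Signing.PrivateKey(pemRepresentation: pem)
        let signature = try key.signature(for: Data(signingInput.utf8))
        // JWT ES256 wants the raw 64-byte r||s form, not DER.
        return signingInput + "." + base64url(signature.rawRepresentation)
    }
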
Is Push to Talk appropriate for a voice-based interactive assistant (not a walkie-talkie app)?
Hello, Looking for guidance from Apple engineers or developers who have used Push to Talk in production. I am developing an iOS application called Companion AI / Theo Voice, designed for elderly users. The goal of the app is to provide a simple, voice-first interactive assistant that enables:
- natural voice interaction (no typing required)
- daily assistance (reminders, well-being, conversation)
- bidirectional voice communication (the user can immediately respond by voice)

How it works: the app operates in two main modes.
- Conversation mode: the user opens the app, the assistant speaks, and the user replies naturally by voice.
- Proactive mode: in specific useful situations (e.g. medication reminders, check-ins), the app initiates a voice interaction and the user can respond immediately.

Important constraints:
- there is no continuous listening
- the microphone is only active during interactions
- users can disable proactive interactions
- frequency is limited and user-controlled

Question: We are considering using the Push to Talk framework in order to allow the app to be awakened in the background, initiate a voice interaction, and enable an immediate voice response from the user. Would this usage be considered aligned with the intended use of Push to Talk? Are there any specific recommendations to ensure compliance with the App Store Review Guidelines? Thank you very much for your guidance.
Replies: 0 · Boosts: 0 · Views: 160 · Activity: 1w

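For context on what adoption involves, the basic join flow looks roughly like this sketch (the channel name, UUID, and delegate wiring are placeholders). Push to Talk also presents system UI and a status indicator whenever the channel is active, which may or may not suit an assistant-style app.

    import PushToTalk

    // Sketch: minimal channel setup. The delegate object is assumed to
    // implement both protocols elsewhere; the descriptor is a placeholder.
    final class AssistantChannel {
        var manager: PTChannelManager?
        let channelUUID = UUID()

        func join(delegate: PTChannelManagerDelegate & PTChannelRestorationDelegate) async throws {
            manager = try await PTChannelManager.channelManager(delegate: delegate,
                                                                restorationDelegate: delegate)
            let descriptor = PTChannelDescriptor(name: "Companion AI", image: nil)
            manager?.requestJoinChannel(channelUUID: channelUUID, descriptor: descriptor)
        }
    }
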
Bug: Channels erroneously populated when sending audio from an iPhone to a linux gadget audio device.
I have a device which is using linux gadget audio to receive audio input via USB, exposing 24 capture channels. This device works well with Mac, Windows, and Android phones. However, when sending audio from an iPhone (both USB-C iPhones and lightning iPhones using an official Apple lightning -> usb adaptor) I am seeing strange behaviour. Audio which is sent from the iPhone to any one of inputs 12, 19, 20, 21, or 22 appears in all of those channels, rather than only the channel to which audio is routed. I have confirmed on my linux device that these channels are not being erroneously populated by the software running on that device; the issue is visible in audio recorded directly from the gadget using arecord, meaning it is present in the audio being sent from the iPhone. I have confirmed that the gadget channel mask is correct for 24 channel audio (0xFFFFFF). As said above, audio routed to this device from any non-iPhone device (Mac, Windows, Android) works fine. The only sensible conclusion seems to be that the iPhone is populating the additional channels erroneously due to some bug in CoreAudio's handling of gadget audio devices. I would appreciate any insight on this from Apple developers, or from anyone else who has come across this issue and found a workaround.
Replies: 0 · Boosts: 0 · Views: 253 · Activity: 2w

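When filing this as a bug, it may help to capture what the iPhone itself believes the route looks like. A quick sketch for logging the output port's channel layout, to compare against the gadget's 24-channel descriptor:

    import AVFoundation

    // Sketch: log every output channel CoreAudio reports for the current route.
    func dumpOutputChannels() {
        let route = AVAudioSession.sharedInstance().currentRoute
        for port in route.outputs {
            print("Port: \(port.portName), type: \(port.portType.rawValue)")
            for channel in port.channels ?? [] {
                print("  channel \(channel.channelNumber): \(channel.channelName)")
            }
        }
    }
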
tvOS: Background audio + local caching works on Simulator but stops on real Apple TV device
Description: I’m developing a tvOS app using SwiftUI where we play background audio (music) on the Welcome screen, with support for offline playback via local caching.

Feature Overview:
- App fetches audio metadata from an API
- Starts streaming audio (HLS .m3u8) immediately
- In parallel, downloads the raw audio file (.mp3)
- Once the download completes, switches playback from streaming → local file
- On next launch (offline mode), the app plays audio from local storage

Issue: this flow works perfectly on the Simulator, but on a real Apple TV device:
- Audio plays for a few seconds (2–5 sec) and then stops, especially after switching from streaming → local file
- No explicit AVPlayer error is logged
- Playback sometimes stops after UI updates or a periodic API refresh

Implementation Details:
- Using AVPlayer with AVPlayerItem
- Background audio controlled via a shared manager (singleton)
- Files stored locally using FileManager (currently using .cachesDirectory)
- Switching playback using:
    player.replaceCurrentItem(with: AVPlayerItem(url: localURL))
    player.play()

Observations:
- Works reliably on the Simulator
- On device: playback stops silently; seems related to lifecycle, buffering, or file access
- No issues when continuously streaming (without switching to local)

Questions:
- Is there any limitation or known issue with AVPlayer when switching from streaming (HLS) to local file playback on tvOS?
- Are there specific requirements for playing locally cached media files on tvOS (e.g., file location, permissions, or sandbox behavior)?
- What is the recommended storage location and size limit for cached media files on tvOS? We understand tvOS has limited persistent storage. Is .cachesDirectory the correct approach for this use case?
- Are there known differences in AVPlayer behavior between the Simulator and real Apple TV devices (especially regarding buffering or lifecycle)?
- What is the recommended approach for implementing offline background audio in tvOS apps?

Goal: we want to implement a reliable system where audio streams initially, seamlessly switches to the local file after download, continues playing without interruption, and supports offline playback on subsequent launches. Any guidance or best practices would be greatly appreciated. Thank you!
Replies: 0 · Boosts: 0 · Views: 144 · Activity: 2w

tvOS: Background audio + local caching works on Simulator but stops on real Apple TV device
Description: I’m developing a tvOS app using SwiftUI where we play background audio (music) on the Welcome screen, with support for offline playback via local caching.

Feature Overview:
- App fetches audio metadata from an API
- Starts streaming audio (HLS .m3u8) immediately
- In parallel, downloads the raw audio file (.mp3)
- Once the download completes, switches playback from streaming → local file
- On next launch (offline mode), the app plays audio from local storage

Issue: this flow works perfectly on the Simulator, but on a real Apple TV device:
- Audio plays for a few seconds (2–5 sec) and then stops, especially after switching from streaming → local file
- No explicit AVPlayer error is logged
- Playback sometimes stops after UI updates or a periodic API refresh

Implementation Details:
- Using AVPlayer with AVPlayerItem
- Background audio controlled via a shared manager (singleton)
- Files stored locally using FileManager (currently using .cachesDirectory)
- Switching playback using:
    player.replaceCurrentItem(with: AVPlayerItem(url: localURL))
    player.play()

Observations:
- Works reliably on the Simulator
- On device: playback stops silently; seems related to lifecycle, buffering, or file access
- No issues when continuously streaming (without switching to local)

Questions:
- Is there any limitation or known issue with AVPlayer when switching from streaming (HLS) to local file playback on tvOS?
- Are there specific requirements for playing locally cached media files on tvOS (e.g., file location, permissions, or sandbox behavior)?
- What is the recommended storage location and size limit for cached media files on tvOS? We understand tvOS has limited persistent storage. Is .cachesDirectory the correct approach for this use case?
- Are there known differences in AVPlayer behavior between the Simulator and real Apple TV devices (especially regarding buffering or lifecycle)?
- What is the recommended approach for implementing offline background audio in tvOS apps?

Goal: we want to implement a reliable system where audio streams initially, seamlessly switches to the local file after download, continues playing without interruption, and supports offline playback on subsequent launches. Any guidance or best practices would be greatly appreciated. Thank you!
Replies: 1 · Boosts: 0 · Views: 221 · Activity: 2w

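A few of the failure modes described in the two posts above can be separated with checks like the sketch below before swapping items. The verification steps are editor suggestions, not a confirmed fix; in particular, files in Caches can be purged by tvOS at any time, so a missing file must be handled.

    import AVFoundation

    // Sketch: validate the downloaded file and swap on the main thread,
    // keeping a strong reference to the player in a long-lived object.
    func switchToLocalFile(player: AVPlayer, localURL: URL) {
        guard FileManager.default.fileExists(atPath: localURL.path) else {
            return // Caches may have been purged; keep streaming instead
        }
        DispatchQueue.main.async {
            let wasPlaying = player.rate > 0
            player.replaceCurrentItem(with: AVPlayerItem(url: localURL))
            if wasPlaying { player.play() }
        }
    }
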
Radio stations unable to play on Android with MusicKit SDK
Radio stations are currently not supported by the MusicKit SDK for Android. The SDK has not been updated for years now, and it lacks pretty big features of Apple Music.
Replies: 1 · Boosts: 0 · Views: 357 · Activity: 3w

Issues with monitoring and changing WebRTC audio output device in WKWebView
I am developing a VoIP app that uses WebRTC inside a WKWebView.

Question 1: How can I monitor which audio output device WebRTC is currently using? I want to display this information in the UI for the user.

Question 2: How can I change the current audio output device for WebRTC? I am using a JS bridge to Objective-C code, attempting to change the audio device with the following code:

    void set_speaker(int n) {
        AVAudioSession *session = [AVAudioSession sharedInstance];
        NSError *err = nil;
        if (n == 1) {
            [session overrideOutputAudioPort:AVAudioSessionPortOverrideSpeaker error:&err];
        } else {
            [session overrideOutputAudioPort:AVAudioSessionPortOverrideNone error:&err];
        }
    }

However, this approach does not work. I am testing on an iPhone with iOS 16.7. Is a higher iOS version required?
Replies: 1 · Boosts: 0 · Views: 230 · Activity: 3w

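The .speaker override is documented to apply only while the session category is .playAndRecord, so the native side likely needs to assert the category before overriding. A hedged Swift equivalent of the bridge function:

    import AVFoundation

    // Sketch: route to the built-in speaker (or back to the default route).
    // Assumes the category must be .playAndRecord for the override to apply.
    func setSpeaker(_ enabled: Bool) {
        let session = AVAudioSession.sharedInstance()
        do {
            try session.setCategory(.playAndRecord, mode: .voiceChat)
            try session.overrideOutputAudioPort(enabled ? .speaker : .none)
            try session.setActive(true)
        } catch {
            print("Route change failed: \(error)")
        }
    }

For the monitoring question, the usual pattern is to observe AVAudioSession.routeChangeNotification and read currentRoute.outputs, then forward the port name to the page over the same bridge.
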
CarPlay: Voice Conversational Entitlement Details
With the Voice Conversational Entitlement, can a CarPlay app establish a turn-based audio interface that operates in two modes?

- Speaking mode: audio session configured for playback; buffered audio.
- Listening mode: switch the audio session to .record or .playAndRecord and activate SFSpeechRecognizer.

And continue toggling back and forth: the app should listen for responses to questions or other audio cues, and assuming those answers are correct (based on analysis of results from SFSpeechRecognizer), continue this pattern of modes 1 and 2 alternating. This appears to be a valid use of this entitlement. Does this also require the Audio App Entitlement, or is the Voice Conversational Entitlement sufficient? Are there other obstacles to this type of app that I'm not seeing? Or perhaps this is technically possible, but unlikely to pass App Store review?
Replies: 0 · Boosts: 0 · Views: 198 · Activity: 4w

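The toggling itself is mechanically simple; a sketch of the two transitions is below. The categories follow the post, while the modes are assumptions.

    import AVFoundation

    // Sketch: alternate the session between a speaking and a listening
    // configuration for a turn-based voice interface.
    enum TurnState { case speaking, listening }

    func configureSession(for state: TurnState) throws {
        let session = AVAudioSession.sharedInstance()
        switch state {
        case .speaking:
            try session.setCategory(.playback, mode: .voicePrompt, options: [.duckOthers])
        case .listening:
            try session.setCategory(.playAndRecord, mode: .measurement)
        }
        try session.setActive(true)
    }
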
Android MusicKit canSetRadioLikeState and setRadioLikeState
The Android MusicKit documentation describes two functions that are not actually exposed in the SDK. https://developer.apple.com/musickit/android/com/apple/android/music/playback/controller/MediaPlayerController.html#canSetRadioLikeState-- https://developer.apple.com/musickit/android/com/apple/android/music/playback/controller/MediaPlayerController.html#setRadioLikeState-int- Is the documentation stale, or is the SDK out of date?
Replies: 0 · Boosts: 0 · Views: 148 · Activity: 4w

CarPlay outputs no audio
I have an application that includes custom artwork for the album cover and text details, set up with the MPRemoteCommandCenter.shared() reference. I need the user to have a full-featured "now playing" display to see all of this. My experience is that I cannot find a set of parameters for AVAudioSession.setCategory() that both routes audio successfully and displays the full-featured now-playing deck. If I use .playAndRecord, the audio I send out plays on the radio, but the now-playing deck is empty and nothing I do with the command center seems to change that. If I instead use .playback, I cannot use the .defaultToSpeaker option, which is the only way I've found to make the "now-playing" navigation button appear so that the full-featured deck will display. And of course setCategory() fails with an error that .defaultToSpeaker is only available with .playAndRecord, so some default or intermediate state is entered: I see the full-featured deck, but no audio goes out to the radio. What combination is supposed to be used here, and is this more likely a problem with thread use (@MainActor) and/or some ordering of operations that I've overlooked?
Replies: 3 · Boosts: 0 · Views: 198 · Activity: Apr ’26

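One detail that may matter here: the now-playing display is driven by MPNowPlayingInfoCenter and the remote command handlers rather than by the session category, so .playback plus a populated info dictionary is the usual combination. A sketch with placeholder metadata:

    import AVFoundation
    import MediaPlayer
    import UIKit

    // Sketch: use .playback for routing and publish metadata so the
    // now-playing deck has something to display.
    func publishNowPlaying(title: String, artwork: UIImage) throws {
        try AVAudioSession.sharedInstance().setCategory(.playback)
        try AVAudioSession.sharedInstance().setActive(true)

        MPNowPlayingInfoCenter.default().nowPlayingInfo = [
            MPMediaItemPropertyTitle: title,
            MPMediaItemPropertyArtwork: MPMediaItemArtwork(boundsSize: artwork.size) { _ in artwork },
        ]
        // Registering play/pause handlers on MPRemoteCommandCenter is also
        // needed for the system to treat the app as now-playable.
    }
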
Is iTunesTagging no longer supported?
I'm currently trying to develop iPod-control functionality on an IVI system for a vehicle. From previous experience I remember we needed to implement iTunes Tagging, but since I can't find it in the Accessory Firmware Specification R46, I'm wondering whether iTunesTagging is no longer supported. Thanks in advance for your support!
Replies: 0 · Boosts: 0 · Views: 75 · Activity: Apr ’26

How to hide route button `showsRouteButton = false` in `MPVolumeView` without deprecation warning?
MPVolumeView's showsRouteButton was deprecated (https://developer.apple.com/documentation/mediaplayer/mpvolumeview/showsroutebutton?language=objc). It's not clear how we can now hide this button without a deprecation warning. The documentation is lacking. Please advise. Thank you!
Replies: 4 · Boosts: 0 · Views: 382 · Activity: Mar ’26

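The deprecation notice points to AVRoutePickerView in AVKit as the replacement: keep MPVolumeView purely for volume, and add a route picker only where routing UI is actually wanted. A sketch (the frames are placeholders):

    import AVKit
    import MediaPlayer
    import UIKit

    // Sketch: a volume-only MPVolumeView, plus a separate AVRoutePickerView
    // added only when a route control is desired.
    func addVolumeControls(to parent: UIView, includeRoutePicker: Bool) {
        let volume = MPVolumeView(frame: CGRect(x: 16, y: 100, width: 280, height: 40))
        parent.addSubview(volume)

        if includeRoutePicker {
            let picker = AVRoutePickerView(frame: CGRect(x: 312, y: 100, width: 40, height: 40))
            parent.addSubview(picker)
        }
    }
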
After upgrade to iOS 26.4, averagePowerLevel and peakHoldLevel are stuck at -120
We have an application that captures audio and video. The app captures audio PCM from the internal or an external microphone and displays the audio level on screen. The app was working fine for many years, but after the iOS 26.4 upgrade, averagePowerLevel and peakHoldLevel are stuck at -120. Any suggestions?
Replies: 6 · Boosts: 5 · Views: 764 · Activity: Mar ’26

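For anyone comparing notes, the usual read pattern for these properties is below; if the connection's audioChannels array is empty, or the levels never move on 26.4, that narrows where the regression sits. The capture-session wiring is assumed to exist elsewhere.

    import AVFoundation

    // Sketch: poll per-channel levels from an audio capture connection.
    func logAudioLevels(from output: AVCaptureAudioDataOutput) {
        guard let connection = output.connections.first else { return }
        for channel in connection.audioChannels {
            print("avg: \(channel.averagePowerLevel) dB, peak: \(channel.peakHoldLevel) dB")
        }
    }
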
Does the OS have dedicated volume levels for each AVAudioSessionCategory?
We have a VoIP application that can be configured to amplify the PCM samples before feeding them to the player, to achieve volume gain at the receiver. To support this, we proceed as follows: if the user has configured this gain setting within the application, the application amplifies the samples to introduce the gain, and also sets the AVAudioSessionCategory to AVAudioSessionCategoryPlayback, provided the user has chosen the speaker as the output. This setup was working for us, but we see a difference in behaviour w.r.t. the volume level in System Settings between OS 26.3.1 and OS 26.4. When the user has chosen the earpiece as output, we set the AVAudioSessionCategory to AVAudioSessionCategoryPlayAndRecord; suppose the user has set the volume level to minimum. When the user then changes the output to the speaker, we set the AVAudioSessionCategory to AVAudioSessionCategoryPlayback. The expectation is that the volume level should be the one previously set for AVAudioSessionCategoryPlayback; instead we see the volume level stay at the minimum that was set under AVAudioSessionCategoryPlayAndRecord. Could you please explain this inconsistency w.r.t. the volume level?
Replies: 6 · Boosts: 0 · Views: 902 · Activity: Mar ’26

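A way to instrument the reported difference is to log the system volume around each category change, since outputVolume reflects the volume of the active route. Sketch:

    import AVFoundation

    // Sketch: log the session's output volume before and after a category
    // switch, to capture the 26.3.1 vs 26.4 behaviour difference.
    func setCategoryLoggingVolume(_ category: AVAudioSession.Category) throws {
        let session = AVAudioSession.sharedInstance()
        print("volume before: \(session.outputVolume)")
        try session.setCategory(category)
        try session.setActive(true)
        print("volume after: \(session.outputVolume)")
    }
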
CoreAudio: Specification of Private Aggregate or Tap
If a Tap or AggregateDevice with the Private property set is created, does it automatically disappear when the process ends? If not, how can I remove the Tap or AggregateDevice before the main process terminates?
Replies: 0 · Boosts: 0 · Views: 288 · Activity: Mar ’26

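On the second question, explicit teardown is available either way. A sketch of destroying both objects before exit, assuming the IDs came from the earlier create calls (these functions belong to the macOS 14.2+ process-tap API in CoreAudio):

    import CoreAudio

    // Sketch: tear down a process tap and an aggregate device created earlier.
    func tearDown(tapID: AudioObjectID, aggregateID: AudioObjectID) {
        var status = AudioHardwareDestroyProcessTap(tapID)
        if status != noErr { print("tap destroy failed: \(status)") }
        status = AudioHardwareDestroyAggregateDevice(aggregateID)
        if status != noErr { print("aggregate destroy failed: \(status)") }
    }
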
AVAudioFile.read extremely slow after seeking in FLAC and MP3 files
I'm developing an audio player app that uses AVAudioFile to read PCM data from various formats. I'm experiencing severe performance issues when seeking in FLAC, while other compressed formats (M4A/AAC) work correctly. I don't intend to use them in my app, but I also tested MP3 files out of curiosity, and they have this issue too.

Environment: macOS 26 (Tahoe), Xcode 26.3, Apple Silicon (M1)

The issue: after setting AVAudioFile.framePosition to a position mid-file, the subsequent call to AVAudioFile.read(into:frameCount:) blocks for an unreasonable amount of time for FLAC and MP3 files. The delay scales linearly with the seek target: seeking near the beginning is fast, seeking toward the end is proportionally slower, which suggests the decoder is decoding linearly from the beginning of the file rather than using any seek index. (My app deals with "images" of Audio CDs ripped as a single long audio file.) The issue is particularly severe when reading files from an SMB network share (server on Ethernet, client on Wi-Fi with the access point ~2 meters away in line of sight).

Quick benchmark results: I tested with the same 75-minute audio content (16-bit/44.1 kHz stereo, 200,502,708 frames) encoded in five formats, seeking to the midpoint.

Over SMB (local network, server on Ethernet, client on Wi-Fi):

    Format         | Seek + Read Time
    ---------------|-----------------
    WAV            | 0.007 s
    AIFF           | 0.009 s
    Apple Lossless | 0.015 s
    MP3            | 9.2 s
    FLAC           | 30.2 s

Locally (MacBook Air M1 SSD):

    Format         | Seek + Read Time
    ---------------|-----------------
    WAV            | 0.0005 s
    AIFF           | 0.0004 s
    Apple Lossless | 0.0011 s
    MP3            | 0.1958 s
    FLAC           | 0.7528 s

WAV, AIFF, and M4A all seek virtually instantly (< 15 ms). MP3 and FLAC exhibit linear-time behavior, with FLAC the worst affected. Note that M4A (AAC) is also a compressed format that requires decoding after seeking, yet it completes in 15 ms. This rules out any inherent limitation of compressed formats; the MP4 container's packet index (stts/stco) is clearly being used for fast random access. Both MP3 (Xing/LAME TOC) and FLAC (SEEKTABLE metadata block) have their own seek mechanisms that should provide similar performance.

Minimal CLI tool to reproduce:

    import AVFoundation
    import Foundation

    guard CommandLine.arguments.count > 1 else {
        print("Usage: FLACSpeed <audio-file-path>")
        exit(1)
    }

    let path = CommandLine.arguments[1]
    let fileURL = URL(fileURLWithPath: path)

    do {
        let file = try AVAudioFile(forReading: fileURL)
        let format = file.processingFormat
        let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: 8192)!
        let totalFrames = file.length
        let seekTarget = totalFrames / 2

        print("File: \(fileURL.lastPathComponent)")
        print("Format: \(format)")
        print("Total frames: \(totalFrames)")
        print("Seeking to frame: \(seekTarget)")

        file.framePosition = seekTarget

        let start = CFAbsoluteTimeGetCurrent()
        try file.read(into: buffer, frameCount: 8192)
        let elapsed = CFAbsoluteTimeGetCurrent() - start

        print("Read after seek took \(elapsed) seconds")
    } catch {
        print("Error: \(error.localizedDescription)")
        exit(1)
    }

Expected behavior: AVAudioFile.read(into:frameCount:) after setting framePosition should use the available seek mechanisms in FLAC and MP3 files for fast random access, as it already does for M4A (AAC). Even accounting for the fact that seek tables provide approximate (not sample-precise) positioning, the "jump to nearest index point + decode forward" approach should complete in milliseconds, not seconds.

Workaround: for FLAC, I've worked around this by using libFLAC directly, which provides instant seeking via FLAC__stream_decoder_seek_absolute(). For comparison, libFLAC performs the same seek + read on the same FLAC file in around 0.015 s, using the FLAC seek table to jump to the nearest preceding seek point and then decoding forward a small number of frames to the exact target sample.
Replies: 1 · Boosts: 1 · Views: 341 · Activity: Mar ’26