Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.

Audio Documentation

Posts under Audio subtopic

Post

Replies

Boosts

Views

Activity

AVAudioEngine Voice Processing Fails with Mismatched Input/Output Devices: AggregateDevice Channel Count Mismatch
I'm encountering errors while using AVAudioEngine with voice processing enabled (setVoiceProcessingEnabled(true)) whenever the input and output audio devices are not the same. With paired devices (e.g., MacBook Pro mic → MacBook Pro speakers) the setup works as expected. With mismatched devices (e.g., AirPods mic → MacBook Pro speakers), the AVAudioEngine setup fails during aggregate device construction and the error logs indicate a channel count mismatch. Here are partial logs (the full logs exceed the content limit):
AUVPAggregate.cpp:1000 client-side input and output formats do not match (err=-10875)
AUVPAggregate.cpp:1036 err=-10875
AVAEInternal.h:109 [AVAudioEngineGraph.mm:1344:Initialize: (err = PerformCommand(*outputNode, kAUInitialize, NULL, 0)): error -10875
AggregateDevice.mm:329 Failed expectation of constructed aggregate (312): mInput.streamChannelCounts == inputStreamChannelCounts
AggregateDevice.mm:331 Failed expectation of constructed aggregate (312): mInput.totalChannelCount == std::accumulate(inputStreamChannelCounts.begin(), inputStreamChannelCounts.end(), 0U)
AggregateDevice.mm:182 error fetching default pair
AggregateDevice.mm:329 Failed expectation of constructed aggregate (336): mInput.streamChannelCounts == inputStreamChannelCounts
AggregateDevice.mm:331 Failed expectation of constructed aggregate (336): mInput.totalChannelCount == std::accumulate(inputStreamChannelCounts.begin(), inputStreamChannelCounts.end(), 0U)
AUHAL.cpp:1782 ca_verify_noerr: [AudioDeviceSetProperty(mDeviceID, NULL, 0, isInput, kAudioDevicePropertyIOProcStreamUsage, theSize, theStreamUsage), 560227702]
AudioHardware-mac-imp.cpp:3484 AudioDeviceSetProperty: no device with given ID
AUHAL.cpp:1782 ca_verify_noerr: [AudioDeviceSetProperty(mDeviceID, NULL, 0, isInput, kAudioDevicePropertyIOProcStreamUsage, theSize, theStreamUsage), 560227702]
AggregateDevice.mm:182 error fetching default pair
AggregateDevice.mm:329 Failed expectation of constructed aggregate (348): mInput.streamChannelCounts == inputStreamChannelCounts
AggregateDevice.mm:331 Failed expectation of constructed aggregate (348): mInput.totalChannelCount == std::accumulate(inputStreamChannelCounts.begin(), inputStreamChannelCounts.end(), 0U)
Questions:
Is it possible to use voice processing with different input and output devices? If yes, are there any specific configurations required to handle mismatched devices?
How can we resolve the channel count mismatch errors during aggregate device construction? Are there settings or API adjustments to enforce compatibility between input and output devices?
Are there any workarounds or alternative approaches to achieve voice processing with mismatched devices? For instance, can we force an intermediate channel configuration or downmix the input/output formats?
0
0
44
11h
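For the voice-processing setup described in the post above, here is a minimal macOS sketch of the engine configuration in question, reproducing the scenario where the aggregate device is built from the current default input and output devices. It is a sketch of the setup being tested, not a fix for the -10875 error; the connection format choice (the input node's own hardware format) is an assumption.
import AVFoundation

func makeVoiceProcessingEngine() throws -> AVAudioEngine {
    let engine = AVAudioEngine()

    // Voice processing must be enabled before the engine is started. On macOS this
    // makes AVAudioEngine build an aggregate device from the current default input
    // and output devices (e.g. AirPods mic + MacBook Pro speakers).
    try engine.inputNode.setVoiceProcessingEnabled(true)

    // Connect using the input node's hardware format so no channel count or
    // sample rate is forced onto the aggregate device.
    let inputFormat = engine.inputNode.inputFormat(forBus: 0)
    engine.connect(engine.inputNode, to: engine.mainMixerNode, format: inputFormat)

    engine.prepare()
    try engine.start()   // fails with err = -10875 when the aggregate cannot be built
    return engine
}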
Sound not working on TestFlight / App Store
I have a Flutter iOS app that has some simple sound FX for button clicks, swipes, etc. In the simulator and on a real device the sound works fine, but when I upload the app to TestFlight (and the App Store) the sound FX don't play. When I install the app on my phone via Xcode I am using the release profile, so I don't see what the difference could be. I have also gone through the archive that I uploaded and verified that the sound files are indeed there. I have other Flutter apps that use sound, but none since the iOS 26 update. I've tried 3 different Flutter sound libraries and all face the same issue. Wondering if anyone else is seeing this issue, or if I'm missing a simple permission or something that has changed recently? Thanks in advance.
1
0
114
1d
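Two things worth ruling out for the post above, sketched here on the native iOS side rather than in Dart: whether the audio session category mutes playback under the ring/silent switch, and whether the sound asset actually resolves inside the release bundle at runtime. The category choice, file name, and extension are assumptions; in a Flutter app this would typically live in the runner or a plugin.
import AVFoundation

// 1. Some plugins default to a category that is silenced by the ring/silent switch;
//    .playback is not. (Assumption: the missing FX are simply being muted.)
func configurePlaybackSession() {
    do {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.playback, options: [.mixWithOthers])
        try session.setActive(true)
    } catch {
        print("Audio session setup failed: \(error)")
    }
}

// 2. Confirm the asset really resolves in the release bundle at runtime
//    ("click" and "mp3" are placeholder names).
func soundExists(named name: String, ext: String = "mp3") -> Bool {
    Bundle.main.url(forResource: name, withExtension: ext) != nil
}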
Video Audio + Speech To Text
Hello, I am wondering if it is possible to have the audio from my AirPods sent to my speech-to-text service while, at the same time, the built-in mic's input is used for recording a video. I ask because I want my users to be able to say "CAPTURE" so I start recording a video (with audio from the built-in mic), and then stop the recording when the user says "STOP".
1
0
281
2d
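One relevant detail for the post above: the shared AVAudioSession has a single active input route, so the sketch below only shows how to pin the built-in microphone as the session input (e.g. for the video recording while AirPods are connected); whether the AirPods mic can simultaneously feed a speech recognizer is exactly the open question and is not answered here. The category, mode, and options are assumptions.
import AVFoundation

// Prefer the built-in microphone even while a Bluetooth headset is connected.
func preferBuiltInMic() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .videoRecording, options: [.allowBluetooth])
    if let builtIn = session.availableInputs?.first(where: { $0.portType == .builtInMic }) {
        try session.setPreferredInput(builtIn)
    }
    try session.setActive(true)
}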
How can third-party iOS apps obtain real-time waveform / spectrogram data for Apple Music tracks (similar to djay & other DJ apps)?
Hi everyone, I’m working on an iOS MusicKit app that overlays a metronome on top of Apple Music playback. To line the clicks up perfectly I’d like access to low-level audio analysis data—ideally a waveform / spectrogram or beat grid—while the track is playing. I’ve noticed that several approved DJ apps (e.g. djay, Serato, rekordbox) can already:
• Display detailed scrolling waveforms of Apple Music songs
• Scratch, loop or time-stretch those tracks in real time
That implies they receive decoded PCM frames, or at least high-resolution analysis data, from Apple Music under a special entitlement. My questions:
1. Does MusicKit (or any public framework) expose real-time audio buffers, FFT bins, or beat markers for streaming Apple Music content?
2. If not, is there an Apple program or entitlement that developers can apply for—similar to the “DJ with Apple Music” initiative—to gain that deeper access?
3. Where can I find official documentation or a point of contact for this kind of request?
I’ve searched the docs and forums but only see standard MusicKit playback APIs, which don’t appear to expose raw audio for DRM-protected songs. Any guidance, links or insider tips on the proper application process would be hugely appreciated! Thanks in advance.
1
1
588
1w
AVAudioUnitSampler Bug with Consolidated Audio Files
Hello, I've discovered a buffer initialization bug in AVAudioUnitSampler that happens when loading presets with multiple zones referencing different regions of the same audio file (the monolith/concatenated-samples approach). Almost all zones output silence (i.e. zeros) at the beginning of playback instead of starting with the actual audio data.
The Problem
Setup: a single audio file (monolith) containing multiple concatenated samples, and multiple zones in an .aupreset, each with different sample start and sample end values pointing to different regions of the same file. All zones load successfully without errors.
Expected behavior: all zones should play their respective audio regions immediately, from the first sample.
Actual behavior: the last zone in the zone list works perfectly and plays audio immediately; all other zones output [0, 0, 0, 0, ..., audio_data] instead of [audio_data]. The number of zeros varies from event to event for each zone; it can be a couple of samples (<30) up to several buffers. After the initial zeros, the correct audio plays normally, so there is no shift in the audio, just missing samples at the beginning.
Minimal Reproduction
1. Create a test monolith audio file: a single WAV file with 3 concatenated 1-second samples (44.1 kHz):
Sample 1: frames 0-44099 (constant amplitude 0.3)
Sample 2: frames 44100-88199 (constant amplitude 0.6)
Sample 3: frames 88200-132299 (constant amplitude 0.9)
2. Create a test preset: an .aupreset with 3 zones all referencing the same file (pseudo code):
<Zone array>
<zone 1> start sample: 0, end sample: 44099, note: 60, waveform: ref_to_monolith.wav;
<zone 2> start sample: 44100, end sample: 88199, note: 62, waveform: ref_to_monolith.wav;
<zone 3> start sample: 88200, end sample: 132299, note: 64, waveform: ref_to_monolith.wav;
</Zone array>
3. Load and test:
// Load the preset into AVAudioUnitSampler
let sampler = AVAudioUnitSampler()
try sampler.loadInstrument(at: presetURL)
// Play each zone (MIDI notes C4=60, D4=62, E4=64)
sampler.startNote(60, withVelocity: 64, onChannel: 0) // Zone 1
sampler.startNote(62, withVelocity: 64, onChannel: 0) // Zone 2
sampler.startNote(64, withVelocity: 64, onChannel: 0) // Zone 3
4. Observed result:
Zone 1 (C4): [0, 0, 0, ..., 0.3, 0.3, 0.3] ❌ zeros at the beginning
Zone 2 (D4): [0, 0, 0, ..., 0.6, 0.6, 0.6] ❌ zeros at the beginning
Zone 3 (E4): [0.9, 0.9, 0.9, ...] ✅ works correctly (last zone)
What I've Extensively Tested
What DOES work: separate files per zone. Each zone references its own individual audio file and all zones play correctly without zeros. Problem: this is not viable for iOS apps with 500+ sample libraries due to file handle limitations.
What DOESN'T work (all tested):
1. Different audio formats: CAF (Float32 PCM, Int16 PCM, both interleaved and non-interleaved), M4A (AAC compressed), WAV (uncompressed), SF2 (SoundFont 2). The bug persists across all formats.
2. CAF region chunks: created CAF files with embedded region chunks defining zone boundaries and set zones with no sampleStart/sampleEnd in the preset (nil values). AVAudioUnitSampler completely ignores the CAF region metadata; the bug persists.
3. Unique waveform IDs: gave each zone a unique waveform ID (268435456, 268435457, 268435458), each with its own file reference entry (all pointing to the same physical file), hypothesizing this might trigger separate buffer initialization. The bug persists with no improvement.
4. Different sample rates: tested 44.1 kHz, 48 kHz, and 96 kHz; the bug occurs at all sample rates.
5. Mono vs stereo: the bug occurs with both mono and stereo files.
Environment
macOS: Sonoma 14.x (tested across multiple minor versions)
iOS: tested on iOS 17.x with the same results
Xcode: 16.x
Frameworks: AVFoundation, AudioToolbox
Reproducibility: 100% reproducible with the setup described above
Impact & Use Case
This bug severely impacts professional music applications that need:
Small file sizes: monolith files allow sharing compressed audio data (AAC/M4A)
iOS file handle limits: opening 400+ individual sample files is not viable on iOS
Performance: single-file loading is much faster than hundreds of individual files
Standard industry practice: monolith/concatenated samples are used by EXS24, Kontakt, and most professional samplers
Current impact: monolith files cannot be used with AVAudioUnitSampler on iOS; we are forced to choose between unusable audio (zeros at the start) or hitting iOS file limits. No viable workaround exists.
Root Cause Hypothesis
The bug appears to be in AVAudioUnitSampler's internal buffer initialization when multiple zones share the same source audio file and each zone specifies different sampleStart/sampleEnd offsets. Key observation: the last zone in the zone array always works correctly. This is NOT related to file permissions or security-scoped resources (separate files work fine), audio codec issues (it happens with uncompressed PCM too), or preset parsing (the preset loads correctly and all zones are valid).
Questions
Is this a known issue? I couldn't find any documentation, bug reports, or discussions about it.
Is there ANY workaround that allows monolith files to work with AVAudioUnitSampler?
Alternative APIs: is there a different API or approach for iOS that properly supports monolith sample files?
0
0
268
1w
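A small diagnostic sketch for the report above: render the sampler offline and count the leading zero frames for one note, which makes the dropout measurable per zone. It assumes the .aupreset from the minimal reproduction (presetURL) and a stereo 44.1 kHz render format; it quantifies the bug rather than working around it.
import AVFoundation

// Render the sampler offline and return the number of leading zero frames
// produced for a single note, using the preset from the repro above.
func leadingZeroFrames(presetURL: URL, note: UInt8) throws -> Int {
    let engine = AVAudioEngine()
    let sampler = AVAudioUnitSampler()
    engine.attach(sampler)
    engine.connect(sampler, to: engine.mainMixerNode, format: nil)
    try sampler.loadInstrument(at: presetURL)

    let format = AVAudioFormat(standardFormatWithSampleRate: 44100, channels: 2)!
    try engine.enableManualRenderingMode(.offline, format: format, maximumFrameCount: 4096)
    try engine.start()

    sampler.startNote(note, withVelocity: 64, onChannel: 0)

    let buffer = AVAudioPCMBuffer(pcmFormat: engine.manualRenderingFormat, frameCapacity: 4096)!
    var zeros = 0
    for _ in 0..<8 {                                   // inspect roughly the first 32k frames
        let status = try engine.renderOffline(4096, to: buffer)
        guard status == .success, let data = buffer.floatChannelData else { break }
        for frame in 0..<Int(buffer.frameLength) {
            if data[0][frame] != 0 { return zeros }    // first non-zero sample found
            zeros += 1
        }
    }
    return zeros
}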
Switching default input/output channels using Core Audio
I wrote a Swift macOS app to control a PCI audio device. The code switches between the default output and input channels. As soon as I launch the Audio MIDI Setup utility, channel switching stops working. The driver properties allow switching, but the system doesn't respond. I have to delete the contents of /Library/Preferences/Audio and reset Core Audio. What am I missing?
func setDefaultChannelsOutput() {
    guard let deviceID = getDeviceIDByName(deviceName: "PCI-424") else { return }
    let selectedIndex = DefaultChannelsOutput.indexOfSelectedItem
    if selectedIndex < 0 || selectedIndex >= 24 { return }
    let channel1 = UInt32(selectedIndex * 2 + 1)
    let channel2 = UInt32(selectedIndex * 2 + 2)
    var channels: [UInt32] = [channel1, channel2]
    var propertyAddress = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyPreferredChannelsForStereo,
        mScope: kAudioDevicePropertyScopeOutput,
        mElement: kAudioObjectPropertyElementWildcard
    )
    let dataSize = UInt32(MemoryLayout<UInt32>.size * channels.count)
    let status = AudioObjectSetPropertyData(deviceID, &propertyAddress, 0, nil, dataSize, &channels)
    if status != noErr {
        print("Error setting default output channels: \(status)")
    }
}
0
0
236
1w
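A small companion sketch for the post above: read the preferred stereo pair back from the device to see whether the earlier set actually stuck, or whether it is being overridden once Audio MIDI Setup touches the device. This is a diagnostic, not a fix; the device lookup is assumed to come from the poster's getDeviceIDByName helper, and kAudioObjectPropertyElementMain (macOS 12+) is used in place of the wildcard element.
import CoreAudio

// Read back the preferred stereo output channels for the device.
func preferredStereoPair(for deviceID: AudioObjectID) -> (UInt32, UInt32)? {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyPreferredChannelsForStereo,
        mScope: kAudioDevicePropertyScopeOutput,
        mElement: kAudioObjectPropertyElementMain
    )
    var channels: [UInt32] = [0, 0]
    var size = UInt32(MemoryLayout<UInt32>.size * channels.count)
    let status = AudioObjectGetPropertyData(deviceID, &address, 0, nil, &size, &channels)
    guard status == noErr else {
        print("Read-back failed: \(status)")
        return nil
    }
    return (channels[0], channels[1])
}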
watchOS longFormAudio cannot deactivate cleanly
My workout watch app supports audio playback during exercise sessions. When users carry an Apple Watch, an iPhone, and AirPods, with the AirPods connected to the iPhone, I want to route audio from the Apple Watch to the AirPods for playback. I've implemented this functionality using the following code:
try? session.setCategory(.playback, mode: .default, policy: .longFormAudio, options: [])
try await session.activate()
When users are playing music on the iPhone and trigger my code in the watch app, the Apple Watch correctly guides them to select the AirPods, pauses the iPhone's music, and plays my audio. However, when playback finishes and I end the session using the code below:
try session.setActive(false, options: [.notifyOthersOnDeactivation])
the iPhone doesn't automatically resume the previously interrupted music playback; it requires manual intervention. Is this expected behavior, or am I missing other important steps in my code?
1
0
252
2w
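For the deactivation issue above, a minimal sketch of one ordering worth verifying: stop all playback first, then deactivate off the main path, because setActive(false) can fail (and the .notifyOthersOnDeactivation hint is then never delivered) while audio I/O is still running. The player type and queue choice are assumptions, and even a successful deactivation does not guarantee the iPhone resumes, since resuming is ultimately up to the interrupted app.
import Foundation
import AVFoundation

// Stop playback completely before deactivating the session.
// `player` is a placeholder for whatever the app actually uses for playback.
func finishPlayback(_ player: AVAudioPlayer) {
    player.stop()
    DispatchQueue.global(qos: .userInitiated).async {
        do {
            try AVAudioSession.sharedInstance().setActive(false, options: [.notifyOthersOnDeactivation])
        } catch {
            print("Deactivation failed: \(error)")
        }
    }
}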
coreaudiod display sleep
Hi all, as soon as audio is played in any app, coreaudiod inserts a sleep-prevention assertion for both the system AND the display. Can I somehow stop the insertion of the display sleep assertion?
pid 223(coreaudiod): [0x00004e9e00058dc2] 00:03:18 PreventUserIdleDisplaySleep named: "com.apple.audio.AppleGFXHDAEngineOutputDP:10001:0:{B31A-08C6-00000000}.context.preventuseridledisplaysleep" Created for PID: 4145.
where PID 4145 is Spotify, but it doesn't matter which app is playing the audio. Any help would be appreciated, thanks.
0
0
49
2w
iOS Speech Error on Mobile Simulator (Error fetching voices)
I'm writing a simple app for iOS and I'd like to be able to do some text-to-speech in it. I have a basic audio manager class with a "speak" function:
import Foundation
import AVFoundation

class AudioManager {
    static let shared = AudioManager()
    var audioPlayer: AVAudioPlayer?
    var isPlaying: Bool {
        return audioPlayer?.isPlaying ?? false
    }
    var playbackPosition: TimeInterval = 0

    func playSound(named name: String) {
        guard let url = Bundle.main.url(forResource: name, withExtension: "mp3") else {
            print("Sound file not found")
            return
        }
        do {
            if audioPlayer == nil || !isPlaying {
                audioPlayer = try AVAudioPlayer(contentsOf: url)
                audioPlayer?.currentTime = playbackPosition
                audioPlayer?.prepareToPlay()
                audioPlayer?.play()
            } else {
                print("Sound is already playing")
            }
        } catch {
            print("Error playing sound: \(error.localizedDescription)")
        }
    }

    func stopSound() {
        if let player = audioPlayer {
            playbackPosition = player.currentTime
            player.stop()
        }
    }

    func speak(text: String) {
        let synthesizer = AVSpeechSynthesizer()
        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = AVSpeechSynthesisVoice(language: "en-GB")
        synthesizer.speak(utterance)
    }
}
And my app shows text in a ScrollView:
ScrollView {
    Text(self.description)
        .padding()
        .foregroundColor(.black)
        .font(.headline)
        .background(Color.gray.opacity(0))
}.onAppear {
    AudioManager.shared.speak(text: self.description)
}
However, the text doesn't get read out (in the simulator). I see some output in the console:
Error fetching voices: Swift.DecodingError.dataCorrupted(Swift.DecodingError.Context(codingPath: [], debugDescription: "Invalid container metadata for _UnkeyedDecodingContainer, found keyedGraphEncodingNodeID", underlyingError: nil)). Using fallback voices.
I'm probably doing something wrong here, but not sure what.
1
1
224
2w
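One likely cause for the post above, separate from the simulator's voice-fetching warning (which is often benign): the AVSpeechSynthesizer in speak(text:) is a local variable, so it can be deallocated before it ever produces audio. A minimal sketch of keeping it alive as a stored property; the class name here is illustrative rather than the poster's actual AudioManager.
import AVFoundation

class SpeechManager {
    static let shared = SpeechManager()

    // Keep the synthesizer alive for the duration of the utterance; a local
    // variable inside speak(text:) is released as soon as the method returns.
    private let synthesizer = AVSpeechSynthesizer()

    func speak(text: String) {
        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = AVSpeechSynthesisVoice(language: "en-GB")
        synthesizer.speak(utterance)
    }
}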
[AVPlayerItemVideoOutput initWithPixelBufferAttributes:] output attributes setting does not work
My app wants to convert iPhone 12 HDR video to SDR for editing, following the Apple HDR conversion documentation (Apple-HDR-Convert). My code sets the pixBuffAttributes:
[pixBuffAttributes setObject:(id)kCVImageBufferYCbCrMatrix_ITU_R_709_2 forKey:(id)kCVImageBufferYCbCrMatrixKey];
[pixBuffAttributes setObject:(id)kCVImageBufferColorPrimaries_ITU_R_709_2 forKey:(id)kCVImageBufferColorPrimariesKey];
[pixBuffAttributes setObject:(id)kCVImageBufferTransferFunction_ITU_R_709_2 forKey:(id)kCVImageBufferTransferFunctionKey];
playerItemOutput = [[AVPlayerItemVideoOutput alloc] initWithPixelBufferAttributes:pixBuffAttributes];
But when I inspect the pixel buffers from playerItemOutput:
CFTypeRef colorAttachments = CVBufferGetAttachment(pixelBuffer, kCVImageBufferYCbCrMatrixKey, NULL);
CFTypeRef colorPrimaries = CVBufferGetAttachment(pixelBuffer, kCVImageBufferColorPrimariesKey, NULL);
CFTypeRef colorTransFunc = CVBufferGetAttachment(pixelBuffer, kCVImageBufferTransferFunctionKey, NULL);
NSLog(@"colorAttachments = %@", colorAttachments);
NSLog(@"colorPrimaries = %@", colorPrimaries);
NSLog(@"colorTransFunc = %@", colorTransFunc);
the log output is:
colorAttachments = ITU_R_2020
colorPrimaries = ITU_R_2020
colorTransFunc = ITU_R_2100_HLG
So the pixBuffAttributes I set appear to have no effect on the output format. Please help!
1
1
795
3w
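For the HDR-to-SDR question above: the attributes passed to AVPlayerItemVideoOutput describe the pixel buffers you want, but colorimetric conversion is more commonly requested through a video composition's color properties. A minimal Swift sketch of that alternative, under the assumption that the playback item can carry an AVMutableVideoComposition; the URL is a placeholder and error handling is omitted.
import AVFoundation

// Request Rec. 709 (SDR) output via the video composition's color properties,
// instead of relying on AVPlayerItemVideoOutput's pixel buffer attributes.
func makeSDRPlayerItem(url: URL) -> AVPlayerItem {
    let asset = AVURLAsset(url: url)
    let composition = AVMutableVideoComposition(propertiesOf: asset)
    composition.colorPrimaries = AVVideoColorPrimaries_ITU_R_709_2
    composition.colorTransferFunction = AVVideoTransferFunction_ITU_R_709_2
    composition.colorYCbCrMatrix = AVVideoYCbCrMatrix_ITU_R_709_2

    let item = AVPlayerItem(asset: asset)
    item.videoComposition = composition
    return item
}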
[26] audioTimeRange would still be interesting for .volatileResults in SpeechTranscriber
So, experimenting with the new SpeechTranscriber, if I do:
let transcriber = SpeechTranscriber(
    locale: locale,
    transcriptionOptions: [],
    reportingOptions: [.volatileResults],
    attributeOptions: [.audioTimeRange]
)
only the final result has audio time ranges, not the volatile results. Is this a performance consideration? If there is no performance problem, it would be nice to have the option to also get speech time ranges for volatile results. I'm not presenting the volatile text in the UI at all; I was just trying to keep statistics about the non-speech and speech noise levels, so that I can determine when the noise level falls below the noise floor for a while. The goal is to finalize the recording automatically when the noise level indicates that the user has finished speaking.
6
0
679
3w
How to safely switch between mic configurations on iOS?
I have an iPadOS M-processor application with two different running configurations. In config1, the shared AVAudioSession is configured for .videoChat mode using the built-in microphone, the input/output nodes of the AVAudioEngine are configured with voice processing enabled, and the built-in mic is formatted for 1 channel at 48 kHz. In config2, the shared AVAudioSession is configured for .measurement mode using an external USB microphone, the input/output nodes of the AVAudioEngine are configured with voice processing disabled, and the external mic is formatted for 2 channels at 44.1 kHz. I've written a configuration manager designed to safely switch between these two configurations. It works by stopping AVAudioEngine and detaching all but the input and output nodes, updating the shared audio session for the desired mic and sample rates, and setting the appropriate voice-processing state (true or false) as required by the configuration. Finally, the new audio graph is constructed by attaching the appropriate nodes, connecting them, and restarting AVAudioEngine. I'm experiencing what I believe is a race condition between switching voice processing on or off and then trying to rebuild and start the new audio graph. Even though the notifications I dump to the console indicate that my requested input and sample-rate settings are in place, I crash when trying to start the audio engine because the sample rate is wrong. Investigating further, it looks like the switch from remote I/O to voice-processing I/O, or vice versa, has not yet actually completed. I introduced a 100 ms delay and that seems to help, but it is obviously not a reliable way to build software that must work consistently. How can I make sure that these apparently asynchronous configuration changes to the shared audio session and the input/output nodes have completed before I go on? I tried using route change notifications from the shared AVAudioSession, but these lie: they say my preferred mic input and sample-rate setting are in place, yet when I dump the AVAudioEngine graph to the debugger console, I still see the wrong sample rate assigned to the input/output nodes, and the wrong AU nodes as well; that is, VPIO is still in place when RIO should be, or vice versa. How can I make the switch reliable without arbitrary time delays? Is my configuration manager approach appropriate (question for Apple engineers)?
1
0
149
3w
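One bounded alternative to a fixed 100 ms delay for the switching problem above: after toggling voice processing and reconfiguring the session, re-read the input node's hardware format and only rebuild and start once it reports the expected rate, giving up after a budget. This is a sketch of a defensive workaround, not a documented completion signal; the expected rate, retry count, and interval are placeholders.
import AVFoundation

// Wait (bounded) until the input node actually reports the expected hardware
// sample rate before rebuilding and starting the graph.
func waitForInputSampleRate(_ expected: Double,
                            on engine: AVAudioEngine,
                            attempts: Int = 50) async -> Bool {
    for _ in 0..<attempts {
        let format = engine.inputNode.inputFormat(forBus: 0)
        if format.sampleRate == expected, format.channelCount > 0 {
            return true
        }
        try? await Task.sleep(nanoseconds: 10_000_000)   // 10 ms between checks
    }
    return false
}
Usage would be along the lines of: if await waitForInputSampleRate(44_100, on: engine) { rebuild the graph and start the engine }. It is still polling underneath, but it is tied to the engine's actual reported state rather than a blind delay.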
Mac Catalyst: AUv3 Extension no longer works on MacOS, still works on iOS
I have a Catalyst app ('container') which hosts an embedded AUv3 Audio Unit extension ('plugin'). This worked for years, and worked with this project until a few days ago.
It still works on iOS as expected.
On macOS the extension is never registered/installed and won't load.
The extension won't show up with auval.
It seems to have stopped working with the Xcode 26.1 update.
I'm fairly certain the problem is not code related (i.e. it is likely build settings, project settings, entitlements, signing, etc.). I have compared all settings with another still-working project and can't find any meaningful difference. (I can't request code-level support because even the minimal project vastly exceeds the 250-line code limit.) How can I debug the issue? I literally don't know where to start to fix this problem, short of rebuilding the entire thing and hoping that it magically starts working again.
0
0
124
3w
SpeechTranscriber not supported
I've tried SpeechTranscriber on a lot of my devices (from the iPhone 12 series through the iPhone 17 series) without issues. However, SpeechTranscriber.isAvailable is false on my iPhone 11 Pro. https://developer.apple.com/documentation/speech/speechtranscriber/isavailable I'm curious why the iPhone 11 Pro is not supported. Are all iPhone 11 series devices unsupported intentionally, or is there a problem with my specific device? I've also checked supportedLocales, and the value is an empty array. https://developer.apple.com/documentation/speech/speechtranscriber/supportedlocales
4
0
589
3w
SpeechTranscriber supported Devices
I have the new iOS 26 SpeechTranscriber working in my application. The issue I am facing is how to determine whether the device I am running on supports SpeechTranscriber. I was able to write code that tests whether the device supports transcription, but it takes a bit of time to run, so the results are not available when the app launches. What I am looking for is a list of iOS 26 devices it doesn't run on. I think it's safe to assume any new devices will support it, so a list of devices that can run iOS 26 but cannot do transcription would be much faster for the app. I have determined it doesn't work on an SE 2nd gen, and that it works on an iPhone 12, SE 3rd gen, iPhone 14 Pro, and 15 Pro. As SpeechTranscriber doesn't work in the simulator, I can't determine it that way. I have checked the docs and they don't list the devices it doesn't work on.
1
0
343
3w