According to the header file, the outputVolume property's supported range is 0.0-1.0:
/*! @property outputVolume
    @abstract The mixer's output volume.
    @discussion
        This accesses the mixer's output volume (0.0-1.0, inclusive).
*/
@property (nonatomic) float outputVolume;
However, when setting the volume to 2.0 the audio does indeed play louder. Is the header file out of date, and if so, what is the supported range for outputVolume?
Thanks
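For reference, a minimal defensive sketch (not from the post itself): clamp assignments to the documented 0.0-1.0 range regardless of what out-of-range values appear to do at run time.

import AVFoundation

// Clamp the requested volume to the documented 0.0-1.0 range before assigning.
func setMixerVolume(_ engine: AVAudioEngine, to requested: Float) {
    engine.mainMixerNode.outputVolume = min(max(requested, 0.0), 1.0)
}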
After updating, WeChat voice chat has no sound. Please help.
Two issues:
No matter what I set in
try audioSession.setPreferredSampleRate(x)
the sample rate on both iOS and macOS is always 48000 when the output goes through the speaker, and 24000 when my AirPods connect to an iPhone/iPad.
Now, I'm checking the current output loudness to animate a 3D character, using
mixerNode.installTap(onBus: 0, bufferSize: y, format: nil) { [weak self] buffer, time in
    Task { @MainActor in
        // calculate RMS and animate the character accordingly
    }
}
but any buffer size under 4800 is just ignored and the buffers I get are always 4800 frames.
This is OK when the sample rate is 48000, as 10 updates per second lead to decent visual results.
But when AirPods connect, the sample rate is 24000, which means only 5 updates per second, so the character animation looks lame.
My AVAudioEngine setup is the following:
audioEngine.connect(playerNode, to: pitchShiftEffect, format: format)
audioEngine.connect(pitchShiftEffect, to: mixerNode, format: format)
audioEngine.connect(mixerNode, to: audioEngine.outputNode, format: nil)
Now, I'd be fine if the outputNode runs at whatever rate it needs, as long as my tap gets at least 10 updates per second.
PS: Specifying my preferred format in the tap:
let format = AVAudioFormat(standardFormatWithSampleRate: 48_000, channels: 2)!
mixerNode.installTap(onBus: 0, bufferSize: y, format: format)
doesn't change anything either.
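One possible workaround, sketched below under assumptions: `mixerNode` is the node from the setup above, and `animate` is a hypothetical stand-in for the character animation. Each delivered buffer is split into sub-chunks so the animation still gets roughly 10 RMS updates per second even when the hardware runs at 24 kHz and hands back 4800-frame buffers.

import AVFoundation

// Sketch: derive several RMS updates from each (large) tap buffer instead of
// relying on the tap callback rate itself.
func installLoudnessTap(on mixerNode: AVAudioMixerNode,
                        updatesPerSecond: Double = 10,
                        animate: @escaping (Float) -> Void) {
    mixerNode.installTap(onBus: 0, bufferSize: 4096, format: nil) { buffer, _ in
        guard let samples = buffer.floatChannelData?[0] else { return }
        let chunk = max(1, Int(buffer.format.sampleRate / updatesPerSecond))
        let total = Int(buffer.frameLength)
        var start = 0
        while start < total {
            let count = min(chunk, total - start)
            var sumOfSquares: Float = 0
            for i in start..<(start + count) {
                sumOfSquares += samples[i] * samples[i]
            }
            let rms = (sumOfSquares / Float(count)).squareRoot()
            DispatchQueue.main.async {
                animate(rms)   // hypothetical animation hook
            }
            start += count
        }
    }
}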
I have a new Dell 2725QC monitor that connects to my iMac (2019, 27-inch) over USB-C through the back port. The problem is that the volume can currently only be controlled from the monitor's hardware buttons, not from software using the Apple keyboard's volume keys. What should I do in terms of writing code for this (Swift or Objective-C)? Is there a third-party solution for both Intel iMacs and Apple silicon Macs?
I am developing an app that uses MusicKit to play music; I then need to play spoken words to the user while ducking the audio coming from MusicKit (ApplicationMusicPlayer).
The built-in Siri voices are not of sufficient quality, so I am using an external service to create an MP3 file and then play it back using AVAudioSession.
Sample code below.
The problem I am having is that .duckOthers is not ducking the ApplicationMusicPlayer output.
Is this a bug or am I doing this wrong?
// Configure audio session for system-wide ducking
try AVAudioSession.sharedInstance().setCategory(.playback, mode: .spokenAudio, options: [.duckOthers, .mixWithOthers])
try AVAudioSession.sharedInstance().setActive(true)
// Prefer a short I/O buffer duration for responsive speech playback
try AVAudioSession.sharedInstance().setPreferredIOBufferDuration(0.005)
// Create and configure audio player
self.audioPlayer = try AVAudioPlayer(data: audioData)
self.audioPlayer?.delegate = self
self.audioPlayer?.volume = 1.0 // Ensure full volume for speech
self.audioPlayer?.prepareToPlay()
// Set the audio player's settings for maximum clarity
self.audioPlayer?.enableRate = false
self.audioPlayer?.pan = 0.0 // Center the audio
self.audioPlayer?.play()
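For completeness, a common companion step when ducking (sketched below as an assumption, not a confirmed fix) is to deactivate the session with .notifyOthersOnDeactivation once the spoken audio finishes, so ducked audio can return to full volume.

import AVFoundation

// Sketch: in the AVAudioPlayerDelegate conformance, release the session when
// the speech clip ends so other audio can come back up.
final class SpeechPlayerDelegate: NSObject, AVAudioPlayerDelegate {
    func audioPlayerDidFinishPlaying(_ player: AVAudioPlayer, successfully flag: Bool) {
        try? AVAudioSession.sharedInstance().setActive(false, options: .notifyOthersOnDeactivation)
    }
}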
I'm running the iOS 26.2 Public Beta and my album artwork is missing from the Music app (I'm not using Apple Music). I use Google to get my album artwork. Do I need to wait for a new update?
Is it possible to find an IDR frame (CMSampleBuffer) in an AVAsset H.264 video file?
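One possible approach, sketched under assumptions: AVAssetReader in pass-through mode delivers the compressed H.264 samples, and a sample without the kCMSampleAttachmentKey_NotSync attachment is a sync sample (keyframe); confirming that a sync sample is specifically an IDR frame would still require inspecting the NAL units in the sample data.

import AVFoundation
import CoreMedia

// Walk the video track and report sync samples via the sample attachments.
func printSyncSamples(of asset: AVAsset) throws {
    guard let track = asset.tracks(withMediaType: .video).first else { return }
    let reader = try AVAssetReader(asset: asset)
    let output = AVAssetReaderTrackOutput(track: track, outputSettings: nil) // pass-through
    reader.add(output)
    reader.startReading()

    while let sample = output.copyNextSampleBuffer() {
        let attachments = CMSampleBufferGetSampleAttachmentsArray(sample, createIfNecessary: false)
            as? [[CFString: Any]]
        let notSync = attachments?.first?[kCMSampleAttachmentKey_NotSync] as? Bool ?? false
        if !notSync {
            print("Sync sample at", CMSampleBufferGetPresentationTimeStamp(sample).seconds, "s")
        }
    }
}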
Hello, I'm working on a MusicKit based SwiftUI app. I've integrated AirPlay using the AVRoutePickerView like so:
struct UIKitAirPlayPickerView: UIViewRepresentable {
func makeUIView(context: Context) -> AVRoutePickerView {
let routePickerView = AVRoutePickerView()
routePickerView.prioritizesVideoDevices = false
return routePickerView
}
func updateUIView(_ uiView: AVRoutePickerView, context: Context) {}
}
The AirPlay menu appears as expected, and selecting an AirPlay device functions as expected. I'm currently sending audio from my app to a HomePod. However, the state of the AVRoutePickerView does not reflect the playback state. There is no cover art and it says "Not Playing". When my device is locked, my lock screen shows the album art, metadata and AirPlay routing as expected.
My app uses ApplicationMusicPlayer; however, I encounter the same behavior using SystemMusicPlayer.
Any guidance on how to troubleshoot this? Is there any other way to integrate the system AirPlay picker into my app, or is this my only option?
Thank you for reading.
I have an app under development - demo here - https://youtu.be/VbAfUk_eYl0?si=s6EDBx-4G6P_QbZO - which is sort of an audio player for AirDropped files - something useful to musicians who dump work in progress to their phone, make notes, revise and update.
I've been testing my handling of audio session interruption notifications, but there seems to be a lot of inconsistency in how, when and why iOS delivers them, and I'm wondering if there is some rhyme or reason to it that I'm just not detecting.
For example, I am playing a song in my app. Switch to Apple Music and start playing a song there. My app gets an interruption began notification - this is consistent.
Switch back to my app, and about half the time I will get an interruption ended notification (often coupled with a blast of the tail of whatever audio buffer was partially played when the interruption started, even though the engine was stopped - and followed by a call to my AVAudioPlayerNodeCompletionCallback - is there some way to avoid this?). Half the time I don't get an interruption ended notification; my app can (as expected) end the interruption by activating the AVAudioSession and playing something.
I have not been able to determine any pattern to this behavior, other than that if my app started playing using AVAudioPlayerNode.scheduleSegment rather than scheduleFile I think the notification will be consistently delivered on app activation rather than when I activate the session programmatically.
I would like my app to behave deterministically, and would appreciate any help in deciphering what causes the inconsistent behavior in notifications from iOS.
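For reference, a minimal sketch of the notification handling being described; `resumePlayback` is a hypothetical hook for whatever the app does to restart its player node.

import AVFoundation

// Observe interruptions and resume only when the system says resumption is appropriate.
func observeInterruptions(resumePlayback: @escaping () -> Void) -> NSObjectProtocol {
    NotificationCenter.default.addObserver(
        forName: AVAudioSession.interruptionNotification,
        object: AVAudioSession.sharedInstance(),
        queue: .main
    ) { note in
        guard let rawType = note.userInfo?[AVAudioSessionInterruptionTypeKey] as? UInt,
              let type = AVAudioSession.InterruptionType(rawValue: rawType) else { return }
        switch type {
        case .began:
            // Playback has already been stopped by the system; just note the state.
            break
        case .ended:
            let rawOptions = note.userInfo?[AVAudioSessionInterruptionOptionKey] as? UInt ?? 0
            if AVAudioSession.InterruptionOptions(rawValue: rawOptions).contains(.shouldResume) {
                try? AVAudioSession.sharedInstance().setActive(true)
                resumePlayback()
            }
        @unknown default:
            break
        }
    }
}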
{
    "aps": { "content-available": 1 },
    "audio_file_name": "ding.caf",
    "audio_url": "https://example.com/audio.mp3"
}
When the app is in the background or killed, it receives a remote APNs push. The data format is roughly as shown above. How can I play the MP3 audio file at the specified "audio_url"? The user does not need to interact with the device when the push arrives. How can I play the audio file immediately after receiving it?
Hi, in my project I am using AVFoundation for recording audio. We are using the AVAudioMixerNode method below to record the audio packets:
func installTap(
    onBus bus: AVAudioNodeBus,
    bufferSize: AVAudioFrameCount,
    format: AVAudioFormat?,
    block tapBlock: @escaping AVAudioNodeTapBlock
)
It works perfectly fine.
But in production, for a small percentage of users, recording stops automatically after a few packets without the audio engine being stopped. Can anyone help explain why this happens? I have also observed mediaServicesWereResetNotification and added a log on receiving this notification, but when this issue happens I don't see any occurrence of that log. Also, is there any callback when the engine stops?
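A hedged sketch of one callback worth watching: the engine has no "stopped" delegate as such, but it does post a configuration-change notification (for example on route or hardware changes), after which isRunning can be checked and the engine restarted if appropriate.

import AVFoundation

// Observe configuration changes as a proxy for "the engine stopped on its own".
func observeEngineConfigurationChanges(for engine: AVAudioEngine) -> NSObjectProtocol {
    NotificationCenter.default.addObserver(
        forName: .AVAudioEngineConfigurationChange,
        object: engine,
        queue: .main
    ) { _ in
        if !engine.isRunning {
            // Reconnect nodes / reinstall the tap as needed, then restart.
            try? engine.start()
        }
    }
}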
Currently, I have successfully used ChannelMap to map hardware input channels and obtained audio data from the hardware device's MIC and OTG inputs. Additionally, I have used ChannelMap to map output channels to freely feed data for playback to each output channel. However, I now have a problem.
I have a hardware device that only has output channels (no input channels), and the system has set this hardware device as the default playback device. In this case, how can I obtain the audio data being played to the output channels for modification?
Overview
We are producing audio in real time from an editing application and are trying to put that on an HLS stream. We attempt to submit PCM samples through an audio writer but are getting a crash after a certain number of samples have been appended.
Depending on the number of audio frames in the PCM buffer, we might get more iterations before the crash but it always has the same traceback (see below).
Code
The setup is rather simple. We took inspiration from a few sources around the web.
NSMutableDictionary *audio = [[NSMutableDictionary alloc] init];
[audio setObject:@(kAudioFormatMPEG4AAC) forKey:AVFormatIDKey];
[audio setObject:[NSNumber numberWithInt:config.audioSampleRate] // 48000
forKey:AVSampleRateKey];
[audio setObject:[NSNumber numberWithInt:config.audioChannels] // 2
forKey:AVNumberOfChannelsKey];
[audio setObject:@160000 forKey:AVEncoderBitRateKey];
m_audioConfig = [[NSDictionary alloc] initWithDictionary:audio];
m_audio = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeAudio
outputSettings:m_audioConfig];
AVAudioFrameCount audioFrames = BUFFER_SAMPLES * bCount;
AVAudioPCMBuffer *pcmBuffer = [[AVAudioPCMBuffer alloc] initWithPCMFormat:m_full.pcmFormat
frameCapacity:audioFrames];
pcmBuffer.frameLength = pcmBuffer.frameCapacity;
AudioChannelLayout layout;
memset(&layout, 0, sizeof(layout));
layout.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo;
CMFormatDescriptionRef format;
OSStatus stats = CMAudioFormatDescriptionCreate(
    kCFAllocatorDefault,
    pcmBuffer.format.streamDescription,
    sizeof(layout),
    &layout,
    0,
    nil,
    nil,
    &format
);
for (int i = 0; i < bCount; i++)
{
    AudioPCM pcm;
    audioCallback->callback(pcm);
    memcpy(*(pcmBuffer.int16ChannelData) + (bufferSize * i), pcm.data, bufferSize);
}
size_t samplesConsumed = BUFFER_SAMPLES * bCount;
CMSampleBufferRef sampleBuffer;
CMSampleTimingInfo timing;
timing.duration = CMTimeMake(1, config.audioSampleRate);
timing.presentationTimeStamp = presentationTime;
timing.decodeTimeStamp = kCMTimeInvalid;
OSStatus ostatus = CMSampleBufferCreate(
    kCFAllocatorDefault,
    nil,
    false,
    nil,
    nil,
    format,
    (CMItemCount)pcmBuffer.frameLength,
    1,
    &timing,
    0,
    nil,
    &sampleBuffer
);
////
ostatus = CMSampleBufferSetDataBufferFromAudioBufferList(
    sampleBuffer,
    kCFAllocatorDefault,
    kCFAllocatorDefault,
    kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
    pcmBuffer.audioBufferList
);
if (ostatus != noErr)
{
    NSLog(@"fill audio sample from buffer list failed: %s", logAudioError(ostatus));
    return;
}
ostatus = CMSampleBufferSetDataReady(sampleBuffer);
if (ostatus != noErr)
{
    NSLog(@"set sample buffer ready failed: %s", logAudioError(ostatus));
    return;
}
// Finally we can attach it, then shove the presentation time forward
[m_audio appendSampleBuffer:sampleBuffer];
The Crash
The crash points towards some level of deallocation when the conversion tooling is done or has enough samples to process an output packet? It's hard to say.
0 caulk 0x1a1e9532c caulk::alloc::tiered_allocator<caulk::alloc::size_range_tier<0ul, 1008ul, caulk::alloc::tree_allocator<caulk::alloc::chunk_allocator<caulk::alloc::page_allocator, caulk::alloc::bitmap_allocator, caulk::alloc::embed_block_memory, 16384ul, 16ul, 6ul>>>, caulk::alloc::size_range_tier<1009ul, 256000ul, caulk::alloc::guarded_edges_allocator<caulk::alloc::consolidating_free_map<caulk::alloc::page_allocator, 10485760ul>, 4ul>>, caulk::alloc::tracking_allocator<caulk::alloc::page_allocator>>::deallocate(caulk::alloc::block, unsigned long) + 636
1 AudioToolboxCore 0x1993fbfe4 ExtendedAudioBufferList_Destroy + 112
2 AudioToolboxCore 0x1993d5fe0 std::__1::__optional_destruct_base<ACCodecOutputBuffer, false>::~__optional_destruct_base[abi:ne180100]() + 68
3 AudioToolboxCore 0x1993d5f48 acv2::CodecConverter::~CodecConverter() + 196
4 AudioToolboxCore 0x1993d5e5c acv2::CodecConverter::~CodecConverter() + 16
5 AudioToolboxCore 0x1992574d8 std::__1::vector<std::__1::unique_ptr<acv2::AudioConverterBase, std::__1::default_delete<acv2::AudioConverterBase>>, std::__1::allocator<std::__1::unique_ptr<acv2::AudioConverterBase, std::__1::default_delete<acv2::AudioConverterBase>>>>::__clear[abi:ne180100]() + 84
6 AudioToolboxCore 0x199259acc acv2::AudioConverterChain::RebuildConverterChain(acv2::ChainBuildSettings const&) + 116
7 AudioToolboxCore 0x1992596ec acv2::AudioConverterChain::SetProperty(unsigned int, unsigned int, void const*) + 1808
8 AudioToolboxCore 0x199324acc acv2::AudioConverterV2::setProperty(unsigned int, unsigned int, void const*) + 84
9 AudioToolboxCore 0x199327f08 with_resolved(OpaqueAudioConverter*, caulk::function_ref<int (AudioConverterAPI*)>) + 60
10 AudioToolboxCore 0x1993281e4 AudioConverterSetProperty + 72
11 MediaToolbox 0x1a7566c2c FigSampleBufferProcessorCreateWithAudioCompression + 2296
12 MediaToolbox 0x1a754db08 0x1a70b5000 + 4819720
13 MediaToolbox 0x1a754dab4 FigMediaProcessorCreateForAudioCompressionWithFormatWriter + 100
14 MediaToolbox 0x1a77ebb98 0x1a70b5000 + 7564184
15 MediaToolbox 0x1a7804158 0x1a70b5000 + 7663960
16 MediaToolbox 0x1a7801da0 0x1a70b5000 + 7654816
17 AVFCore 0x1ada530c4 -[AVFigAssetWriterTrack addSampleBuffer:error:] + 192
18 AVFCore 0x1ada55164 -[AVFigAssetWriterAudioTrack _flushPendingSampleBuffersReturningError:] + 500
19 AVFCore 0x1ada55354 -[AVFigAssetWriterAudioTrack addSampleBuffer:error:] + 472
20 AVFCore 0x1ada4ebf0 -[AVAssetWriterInputWritingHelper appendSampleBuffer:error:] + 128
21 AVFCore 0x1ada4c354 -[AVAssetWriterInput appendSampleBuffer:] + 168
22 lib_devapple_hls.dylib 0x115d2c7cc detail::AppleHLSImplementation::audioRuntime() + 1052
23 lib_devapple_hls.dylib 0x115d2d094 void* std::__1::__thread_proxy[abi:ne180100]<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, void (detail::AppleHLSImplementation::*)(), detail::AppleHLSImplementation*>>(void*) + 72
24 libsystem_pthread.dylib 0x196e5b2e4 _pthread_start + 136
Any insight would be welcome!
I am developing a VOD playback app, but when I stream video to an external monitor connected via HDMI through a Lightning adapter on iOS 18 or later, the screen goes dark and I cannot confirm playback.
The app I am developing does not detect the HDMI connection and present the player separately; it simply mirrors the video.
We have confirmed that the same phenomenon occurs with other services as well, although playback does work with some services such as Apple TV.
Please let us know if there are any other necessary settings such as video certificates required for video playback.
We would also like to know whether this problem is specific to iOS 18 and later.
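One area sometimes involved with HDMI adapters is AVPlayer's external-playback flags; whether they relate to the dark screen described above is not confirmed here, but a minimal sketch of checking them looks like this.

import AVFoundation

// AVPlayer distinguishes screen mirroring from "external playback" when an
// HDMI adapter or external screen is present; these flags control that behavior.
func makeExternalPlaybackPlayer(url: URL) -> AVPlayer {
    let player = AVPlayer(url: url)
    player.allowsExternalPlayback = true
    player.usesExternalPlaybackWhileExternalScreenIsActive = true
    return player
}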
Session player regions populate blank, with no sound media when tracks or regions are created.
I am a graduate student conducting research in speech/audio signal processing and multimodal interaction.
Apple Vision Pro is widely recognized as a multimodal interactive system supporting voice, eye, and gesture inputs. However, I could not find detailed specifications or documentation about the audio input sampling rate used by the device’s built-in microphone array when capturing user audio.
Specifically, I would like to understand:
What is the default audio input sampling rate (e.g., 16 kHz, 44.1 kHz, 48 kHz, etc.) for the Vision Pro’s microphones?
When developing with visionOS / AVAudioSession / AVAudioEngine, is there a documented or recommended sampling rate for audio capture?
Are there any best practices or settings for enabling high-quality voice capture on Vision Pro (especially for voice research tasks)?
For context, my work involves voice processing, analysis, and possibly on-device real-time speech recognition. Any pointers to relevant APIs, documentation or examples (especially regarding audio capture buffer size or available formats on visionOS) would be very helpful.
Thank you in advance!
Best regards.
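For reference, rather than assuming a fixed rate, the actual capture format can be queried at run time with the same AVAudioSession / AVAudioEngine approach used on iOS; a minimal sketch:

import AVFoundation

// Report the session sample rate and the input node's hardware format.
func logCaptureFormat() {
    do {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.record, mode: .measurement)
        try session.setActive(true)

        let engine = AVAudioEngine()
        let format = engine.inputNode.inputFormat(forBus: 0)
        print("Session sample rate: \(session.sampleRate) Hz")
        print("Input format: \(format.sampleRate) Hz, \(format.channelCount) channel(s)")
    } catch {
        print("Audio session setup failed: \(error)")
    }
}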
I’m seeing what appears to be an iOS audio-session issue that occurs only when a phone call happens while the app is in the background.
API: AVAudioSession, AVAudioRecorder
Background Modes: Audio enabled (UIBackgroundModes = audio)
Category: .playAndRecord
Microphone permission: granted
Expected Behavior
If the app is recording audio in the background and a phone call interrupts it:
AVAudioSession.interruptionNotification(.began) fires
Call ends
AVAudioSession.interruptionNotification(.ended) fires
App should be able to re-activate its audio session and resume or restart recording
Apple documentation suggests this should be supported for background audio apps.
Actual Behavior
When the app is in the background and the phone call ends:
AVAudioSession.interruptionNotification(.ended) does fire
Attempting to reactivate the audio session always fails:
Error Domain=NSOSStatusErrorDomain
Code=560557684 ("!int")
"Session activation failed"
The session appears to remain permanently “interrupted”
Retrying activation (with delays) does not help
Recreating AVAudioRecorder does not help
Reactivation works only after the app is opened again
I've filed this as FB21446798 but figured I'd post here too.
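A sketch of the workaround implied by the last observation above (re-activation only succeeds once the app is opened again): defer re-activation until the app returns to the foreground rather than retrying immediately after .ended. `restartRecording` is a hypothetical hook.

import AVFoundation
import UIKit

// Retry session activation when the app becomes active again.
func reactivateOnForeground(restartRecording: @escaping () -> Void) -> NSObjectProtocol {
    NotificationCenter.default.addObserver(
        forName: UIApplication.didBecomeActiveNotification,
        object: nil,
        queue: .main
    ) { _ in
        do {
            try AVAudioSession.sharedInstance().setActive(true)
            restartRecording()
        } catch {
            print("Re-activation still failing: \(error)")
        }
    }
}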
In the first build of macOS 26.3, playback via ApplicationMusicPlayer is completely broken. When starting playback of anything at all, the console shows the following error:
applicationController: xpc service connection interrupted
Failed to obtain remoteObject: Error Domain=NSCocoaErrorDomain Code=4099 "The connection to service created from an endpoint was invalidated from this process." UserInfo={NSDebugDescription=The connection to service created from an endpoint was invalidated from this process.}
Failed to prepareToPlay with error: Error Domain=MPMusicPlayerControllerErrorDomain Code=10 "(null)" UserInfo={NSUnderlyingError=0xc92910ff0 {Error Domain=NSCocoaErrorDomain Code=4099 "The connection to service created from an endpoint was invalidated from this process." UserInfo={NSDebugDescription=The connection to service created from an endpoint was invalidated from this process.}}}
In addition, several crash logs for RemotePlayerService are generated, showing my app as the parent process.
This issue is 100% repeatable. No matter how I load the queue, whether it’s catalog or library content, any variation I can think of all fails like this.
I really hope this can be fixed before 26.3 comes out, otherwise my app will be totally unusable. 😅
After upgrading to watchOS 26, users report that when playing music on Apple Watch, if a fitness reminder is received, the music automatically pauses and users need to manually tap the play button to resume music playback. This phenomenon occurs with multiple music and podcast apps.
This issue did not exist before the upgrade. We would like to know whether this is an Apple bug or whether any special development configuration is needed.
Hello everyone,
I'm implementing the new AVInputPickerInteraction API on iOS 26 to allow users to select their microphone from a custom settings menu before recording.
The implementation seems correct, but I'm encountering a strange issue where the input selection immediately reverts to the previous device.
The Situation:
The picker is presented correctly via a manual call to .present(). I can see all available inputs (e.g., "iPhone Microphone" and "AirPods").
The current input is "iPhone Microphone".
I tap on "AirPods".
The UI updates to show "AirPods" as selected for a fraction of a second, then immediately jumps back to "iPhone Microphone".
The same thing happens in reverse.
It seems like the system is automatically reverting the audio route change requested by the picker.
My Implementation:
My setup follows the standard pattern discussed in the WWDC sessions.
Setup Code:
This setup is performed once before the user can trigger the picker.
@available(iOS 26.0, *)
var inputPickerInteraction: AVInputPickerInteraction?
// Note: The AVAudioSession is configured to .playAndRecord
// and set to active elsewhere in the code before this setup is called.
if #available(iOS 26.0, *) {
// Setup the picker
let picker = AVInputPickerInteraction()
self.inputPickerInteraction = picker
self.view.addInteraction(picker) // Added to establish context
}
Presentation Code:
When a user selects "Change Input" from my custom settings menu, I call .present() on the main thread.
// In a delegate method from a custom menu
if #available(iOS 26.0, *) {
DispatchQueue.main.async {
self.inputPickerInteraction?.present(animated: true)
}
}
What I've already checked:
The AVAudioSession is active and its category is .playAndRecord.
The inputPickerInteraction object is not nil.
The .present() method is being called on the main thread.
The picker is added to a view using view.addInteraction() in the setup phase.
I've reviewed my code to ensure there is no other logic that could be manually resetting the AVAudioSession's preferred input.
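As an additional diagnostic (not part of the original post, and not a fix): bypass the picker and set the preferred input directly with the long-standing AVAudioSession API, to see whether the route change itself is what gets reverted.

import AVFoundation

// Try to force the Bluetooth input and log the resulting route.
func forcePreferredInput() {
    let session = AVAudioSession.sharedInstance()
    if let airPods = session.availableInputs?.first(where: { $0.portType == .bluetoothHFP }) {
        try? session.setPreferredInput(airPods)
    }
    print("Current inputs:", session.currentRoute.inputs.map(\.portName))
}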
Has anyone else experienced this behavior? I suspect this might be a bug in the new API, but I want to make sure I'm not missing a crucial step in managing the AVAudioSession state.
Any insights or potential workarounds would be greatly appreciated.
Thank you.