I’m using the shared instance of AVAudioSession. After activating it with .setActive(true), I observe the outputVolume, and it correctly reports the device’s volume.
However, after deactivating the session using .setActive(false), changing the volume, and then reactivating it again, the outputVolume returns the previous volume (before deactivation), not the current device volume. The correct volume is only reported after the user manually changes it again using physical buttons or Control Center, which triggers the observer.
What I need is a way to retrieve the actual current device volume immediately after reactivating the audio session, even on the second and subsequent activations.
Disabling and re-enabling the audio session is essential to how my application functions.
I’ve tested this behavior with my colleagues, and the issue is consistently reproducible on iOS 18.0.1, iOS 18.1, iOS 18.3, iOS 18.5 and iOS 18.6.2. On devices running iOS 17.6.1 and iOS 16.0.3, outputVolume correctly reflects the current volume immediately after calling .setActive(true) multiple times.
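For reference, this is a minimal sketch of the setup being described (KVO on outputVolume around the setActive calls); the type and method names are mine, but the APIs are the standard ones:

import AVFoundation

final class VolumeMonitor {
    private let session = AVAudioSession.sharedInstance()
    private var volumeObservation: NSKeyValueObservation?

    func activateAndObserve() throws {
        try session.setActive(true)
        // Correct on the first activation; after a deactivate/reactivate cycle
        // this value can be stale until the user changes the volume again.
        print("outputVolume after activation:", session.outputVolume)
        volumeObservation = session.observe(\.outputVolume, options: [.new]) { _, change in
            print("outputVolume changed:", change.newValue ?? 0)
        }
    }

    func deactivate() throws {
        volumeObservation = nil
        try session.setActive(false)
    }
}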
Hi everyone,
I’m testing audio recording on an iPhone 15 Plus using AVFoundation.
Here’s a simplified version of my setup:
let settings: [String: Any] = [
    AVFormatIDKey: Int(kAudioFormatLinearPCM),
    AVSampleRateKey: 8000,
    AVNumberOfChannelsKey: 1,
    AVLinearPCMBitDepthKey: 16,
    AVLinearPCMIsFloatKey: false
]
audioRecorder = try AVAudioRecorder(url: fileURL, settings: settings)
audioRecorder?.record()
When I check the recorded file’s sample rate, it logs:
Actual sample rate: 8000.0
However, when I inspect the hardware sample rate:
try session.setCategory(.playAndRecord, mode: .default)
try session.setActive(true)
print("Hardware sample rate:", session.sampleRate)
I consistently get:
Hardware sample rate: 48000.0
My questions are:
Is the iPhone mic actually capturing at 8 kHz, or is it recording at 48 kHz and then downsampling to 8 kHz internally?
Is there any way to force the hardware to record natively at 8 kHz?
If not, what’s the recommended approach for telephony-quality audio (true 8 kHz) on iOS devices?
Thanks in advance for your guidance!
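As far as I know, the iPhone's audio hardware runs at a fixed rate chosen by the current route (typically 44.1 or 48 kHz), and AVAudioRecorder resamples to the 8 kHz you request when writing the file, so forcing the hardware itself to 8 kHz is generally not possible. If you need explicit control over that conversion, here is a minimal sketch using AVAudioConverter; the function name and formats are my assumptions, not part of the original post:

import AVFoundation

// Convert a mono buffer captured at the hardware rate (e.g. 48 kHz from an input tap)
// to 8 kHz Int16, roughly telephony quality.
func downsampleTo8k(_ inputBuffer: AVAudioPCMBuffer, inputFormat: AVAudioFormat) -> AVAudioPCMBuffer? {
    guard let outputFormat = AVAudioFormat(commonFormat: .pcmFormatInt16,
                                           sampleRate: 8_000,
                                           channels: 1,
                                           interleaved: true),
          let converter = AVAudioConverter(from: inputFormat, to: outputFormat) else { return nil }

    let ratio = outputFormat.sampleRate / inputFormat.sampleRate
    let capacity = AVAudioFrameCount(Double(inputBuffer.frameLength) * ratio) + 1
    guard let outputBuffer = AVAudioPCMBuffer(pcmFormat: outputFormat, frameCapacity: capacity) else { return nil }

    var fed = false
    var error: NSError?
    converter.convert(to: outputBuffer, error: &error) { _, outStatus in
        // Feed the single input buffer once, then report that no more data is available.
        if fed { outStatus.pointee = .noDataNow; return nil }
        fed = true
        outStatus.pointee = .haveData
        return inputBuffer
    }
    return error == nil ? outputBuffer : nil
}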
Hello,
I’m new here. I'm developing an iOS app and I’d like to know whether it is possible to detect if a phone call is being recorded by another app running in the background.
I’ve already reviewed the documentation for CallKit and AVAudioSession, but I couldn’t find anything related. My expectation was that iOS might provide some callback or API to indicate if a call is being recorded (third-party apps), but so far I haven’t found a way.
My questions are:
Does iOS expose any API to detect if a call is being recorded?
If not, is there any indirect, policy-compliant method (e.g., microphone usage events) that can be relied upon?
Or is this something that iOS explicitly prevents for privacy reasons?
I'm looking for solutions that align with Apple’s policies and would be accepted under the App Store Review Guidelines.
Thanks in advance for any guidance.
In iOS 26 beta 5, AVAudioSessionCategoryOptionAllowBluetooth is marked as deprecated in iOS 8, even though this option was not deprecated in iOS 18.6. I think this is a mistake and the deprecation actually happens in iOS 26. Am I right?
It seems that the substitute for this option is "AVAudioSessionCategoryOptionAllowBluetoothHFP". The documentation does not make clear whether the behaviour is exactly the same or whether any difference should be expected. Has anyone used this option in iOS 26? Should I expect any difference from the current behaviour of "AVAudioSessionCategoryOptionAllowBluetooth"?
Thank you.
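For what it's worth, a small sketch of adopting the new option name while keeping the old one for earlier systems; the Swift spelling (.allowBluetoothHFP) and the iOS 26 availability are taken from the beta SDK described above, so treat them as assumptions and build against the iOS 26 SDK:

import AVFoundation

func configureSession() throws {
    let session = AVAudioSession.sharedInstance()
    if #available(iOS 26.0, *) {
        // Renamed option in the iOS 26 SDK; behaviour is expected to match the old HFP routing.
        try session.setCategory(.playAndRecord, mode: .voiceChat, options: [.allowBluetoothHFP])
    } else {
        try session.setCategory(.playAndRecord, mode: .voiceChat, options: [.allowBluetooth])
    }
    try session.setActive(true)
}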
Two issues:
No matter what I set in
try audioSession.setPreferredSampleRate(x)
the sample rate on both iOS and macOS is always 48000 when the output goes through the speaker, and 24000 when my AirPods connect to an iPhone/iPad.
Now, I'm checking the current output loudness to animate a 3D character, using
mixerNode.installTap(onBus: 0, bufferSize: y, format: nil) { [weak self] buffer, time in
    Task { @MainActor in
        // calculate rms and animate character accordingly
    }
}
but any buffer size under 4800 is just ignored; the buffers I get are always 4800 frames.
This is fine while the sample rate is 48000, since ten buffers per second give decent visual results.
But when AirPods connect, the sample rate is 24000, which means only five buffers per second, so the character animation looks lame.
My AVAudioEngine setup is the following:
audioEngine.connect(playerNode, to: pitchShiftEffect, format: format)
audioEngine.connect(pitchShiftEffect, to: mixerNode, format: format)
audioEngine.connect(mixerNode, to: audioEngine.outputNode, format: nil)
Now, I'd be fine with the outputNode running at whatever rate it needs, as long as my tap gets at least 10 buffers per second.
PS: Specifying my preferred format in the tap doesn't change anything either:
let format = AVAudioFormat(standardFormatWithSampleRate: 48_000, channels: 2)!
mixerNode.installTap(onBus: 0, bufferSize: y, format: format)
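One workaround (my sketch, not from the original post) is to decouple the animation rate from the tap's buffer size by splitting whatever buffer arrives into roughly 0.1 s chunks and computing an RMS value per chunk; at 24 kHz a 4800-frame buffer then still yields two updates, arriving as a short burst that can be spread over the following 0.2 s:

import AVFoundation
import Accelerate

// Split a tapped buffer into ~0.1 s chunks and compute RMS for each.
func rmsValues(from buffer: AVAudioPCMBuffer, updatesPerSecond: Double = 10) -> [Float] {
    guard let samples = buffer.floatChannelData?[0] else { return [] }
    let frameLength = Int(buffer.frameLength)
    let chunkSize = max(1, Int(buffer.format.sampleRate / updatesPerSecond))
    var results: [Float] = []
    var start = 0
    while start < frameLength {
        let count = min(chunkSize, frameLength - start)
        var rms: Float = 0
        vDSP_rmsqv(samples + start, 1, &rms, vDSP_Length(count))
        results.append(rms)
        start += count
    }
    return results
}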
After updating to iOS 18.5, we’ve observed that outgoing audio from our app intermittently stops being transmitted during VoIP calls using AVAudioSession configured with .playAndRecord and .voiceChat. The session is set active without errors, and interruptions are handled correctly, yet audio capture suddenly ceases mid-call. This was not observed in earlier iOS versions (≤ 18.4). We’d like to confirm if there have been any recent changes in AVAudioSession, CallKit, or related media handling that could affect audio input behavior during long-running calls.
func configureForVoIPCall() throws {
    try setCategory(
        .playAndRecord, mode: .voiceChat,
        options: [.allowBluetooth, .allowBluetoothA2DP, .defaultToSpeaker])
    try setActive(true)
}
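Nothing in that snippet looks wrong to me; one thing worth ruling out (purely a diagnostic sketch, not a confirmed cause) is a media-services reset, which silently tears down audio I/O mid-call and requires reactivating the session and rebuilding any AVAudioEngine:

import AVFoundation

final class AudioSessionDiagnostics {
    private var tokens: [NSObjectProtocol] = []

    func start() {
        let center = NotificationCenter.default
        let session = AVAudioSession.sharedInstance()

        tokens.append(center.addObserver(forName: AVAudioSession.interruptionNotification,
                                         object: session, queue: .main) { note in
            if let raw = note.userInfo?[AVAudioSessionInterruptionTypeKey] as? UInt,
               let type = AVAudioSession.InterruptionType(rawValue: raw) {
                print("Interruption:", type == .began ? "began" : "ended")
            }
        })

        tokens.append(center.addObserver(forName: AVAudioSession.mediaServicesWereResetNotification,
                                         object: session, queue: .main) { _ in
            // After a reset the session and any engines must be reconfigured
            // and reactivated, otherwise capture stays silent.
            print("Media services were reset")
        })
    }
}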
Since iOS 18, the system setting “Allow Audio Playback” (enabled by default) allows third-party app audio to continue playing while the user is recording video with the Camera app. This has created a problem for the app I’m developing.
➡️ The problem:
My app plays continuous audio in both the foreground and background. If the user starts recording video with the iOS Camera app, the app’s audio, which is still playing in the background, gets captured in the video, which is obviously unintended.
Yes, the user could stop the app manually before starting the video recording, but that can’t be guaranteed. As a developer, I need a way to stop the app’s audio before the video recording begins.
So far, I haven’t found a reliable way to detect when video recording starts if ‘Allow Audio Playback’ is ON.
➡️ What I’ve tried:
— AVAudioSession.interruptionNotification → doesn’t fire
— devicesChangedEventStream → not triggered
I don’t want to request mic permission (the app doesn’t use the mic). Also, disabling the app from playing audio in the background isn’t an option, as it is a crucial part of the user experience.
➡️ What I need:
A reliable, supported way to detect when the Camera app begins video recording, without requiring mic access — so I can stop audio and avoid unintentional overlap with the user’s recordings.
Any official guidance, workarounds, or AVFoundation techniques would be greatly appreciated.
Thanks.
Context:
I am currently developing an app using the Push-to-Talk (PTT) framework. I have reviewed both the PTT framework documentation and the CallKit demo project to better understand how to properly manage audio session activation and AVAudioEngine setup.
I am not activating the audio session manually. The audio session configuration is handled in the incomingPushResult or didBeginTransmitting callbacks from the PTChannelManagerDelegate.
I am using a single AVAudioEngine instance for both input and playback. The engine is started in the didActivate callback from the PTChannelManagerDelegate. When I receive a push in full duplex mode, I set the active participant to the user who is speaking.
Issue
When I attempt to talk while the other participant is already speaking, my input tap on the input node takes a few seconds to return valid PCM audio data. Initially, it returns an empty PCM audio block.
Details:
The audio session is already active and configured with .playAndRecord.
The input tap is already installed when the engine is started.
When I talk from a neutral state (no one is speaking), the system plays the standard "microphone activation" tone, which covers this initial delay. However, this does not happen when I am already receiving audio.
Assumptions / Current Setup
Because the audio session is active in play and record, I assumed that microphone input would be available immediately, even while receiving audio.
However, there seems to be a delay before valid input is delivered to the tap, only occurring when switching from a receive state to simultaneously talking.
Questions
Is this expected behavior when using the PTT framework in full duplex mode with a shared AVAudioEngine?
Should I be restarting or reconfiguring the engine or audio session when beginning to talk while receiving audio?
Is there a recommended pattern for managing microphone readiness in this scenario to avoid the initial empty PCM buffer?
Would using separate engines for input and output improve responsiveness?
I would like to confirm the correct approach to handling simultaneous talk and receive in full duplex mode using PTT framework and AVAudioEngine. Specifically, I need guidance on ensuring the microphone is ready to capture audio immediately without the delay seen in my current implementation.
Relevant Code Snippets
Engine Setup
func setup() {
    let input = audioEngine.inputNode
    do {
        try input.setVoiceProcessingEnabled(true)
    } catch {
        print("Could not enable voice processing \(error)")
        return
    }
    input.isVoiceProcessingAGCEnabled = false
    let output = audioEngine.outputNode
    let mainMixer = audioEngine.mainMixerNode
    audioEngine.connect(pttPlayerNode, to: mainMixer, format: outputFormat)
    audioEngine.connect(beepNode, to: mainMixer, format: outputFormat)
    audioEngine.connect(mainMixer, to: output, format: outputFormat)
    // Initialize converters
    converter = AVAudioConverter(from: inputFormat, to: outputFormat)!
    f32ToInt16Converter = AVAudioConverter(from: outputFormat, to: inputFormat)!
    audioEngine.prepare()
}
Input Tap Installation
func installTap() {
    guard AudioHandler.shared.checkMicrophonePermission() else {
        print("Microphone not granted for recording")
        return
    }
    guard !isInputTapped else {
        print("[AudioEngine] Input is already tapped!")
        return
    }
    let input = audioEngine.inputNode
    let microphoneFormat = input.inputFormat(forBus: 0)
    let microphoneDownsampler = AVAudioConverter(from: microphoneFormat, to: outputFormat)!
    let desiredFormat = outputFormat
    let inputFramesNeeded = AVAudioFrameCount((Double(OpusCodec.DECODED_PACKET_NUM_SAMPLES) * microphoneFormat.sampleRate) / desiredFormat.sampleRate)
    input.installTap(onBus: 0, bufferSize: inputFramesNeeded, format: input.inputFormat(forBus: 0)) { [weak self] buffer, when in
        guard let self = self else { return }
        // Output buffer: 1920 frames at 16kHz
        guard let outputBuffer = AVAudioPCMBuffer(pcmFormat: desiredFormat, frameCapacity: AVAudioFrameCount(OpusCodec.DECODED_PACKET_NUM_SAMPLES)) else { return }
        outputBuffer.frameLength = outputBuffer.frameCapacity
        let inputBlock: AVAudioConverterInputBlock = { inNumPackets, outStatus in
            outStatus.pointee = .haveData
            return buffer
        }
        var error: NSError?
        let converterResult = microphoneDownsampler.convert(to: outputBuffer, error: &error, withInputFrom: inputBlock)
        if converterResult != .haveData {
            DebugLogger.shared.print("Downsample error \(converterResult)")
        } else {
            self.handleDownsampledBuffer(outputBuffer)
        }
    }
    isInputTapped = true
}
I'm developing a safety-critical monitoring app that needs to fetch data from government APIs every 30 minutes and trigger emergency audio alerts for threshold violations.
The app must work reliably in background since users depend on it for safety alerts even while sleeping.
Main Challenge: iOS background limitations seem to prevent consistent 30-minute intervals. Standard BGTaskScheduler and timers get suspended after a few minutes in background.
Question: What's the most reliable approach to ensure consistent 30-minute background monitoring for a safety-critical app where missed alerts could have serious consequences?
Are there special entitlements or frameworks for emergency/safety applications?
The app needs to function like an alarm clock - working reliably even when backgrounded with emergency audio override capabilities.
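For context, the supported background-refresh path looks roughly like the sketch below, but BGAppRefreshTask runs are opportunistic and the system decides the actual cadence, which is exactly the limitation described above. The identifier and helper names are mine, for illustration only (expiration handling omitted):

import BackgroundTasks

// Assumed identifier; it must also appear under BGTaskSchedulerPermittedIdentifiers in Info.plist.
let refreshTaskID = "com.example.safety.refresh"

func registerBackgroundRefresh() {
    BGTaskScheduler.shared.register(forTaskWithIdentifier: refreshTaskID, using: nil) { task in
        scheduleNextRefresh() // queue the next run immediately
        fetchThresholds { violated in
            // Post a time-sensitive/critical notification here if a threshold is violated.
            task.setTaskCompleted(success: true)
        }
    }
}

func scheduleNextRefresh() {
    let request = BGAppRefreshTaskRequest(identifier: refreshTaskID)
    request.earliestBeginDate = Date(timeIntervalSinceNow: 30 * 60) // a request, not a guarantee
    try? BGTaskScheduler.shared.submit(request)
}

func fetchThresholds(completion: @escaping (Bool) -> Void) {
    // Placeholder for the government-API fetch described in the post.
    completion(false)
}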
Hello all,
I have an application that uses the broadcast extension of ReplayKit to record the entire screen and the microphone sound to a file on iOS. Up until some months ago, everything was smooth. Currently I am facing the following issue.
If another app uses the microphone, I lose the sound and it never comes back. In order to debug this issue I added a log to processSampleBuffer that prints a message each time it receives a buffer of type .audioMic. I start the recording and everything works as expected. Later on, I go to an app that uses the microphone, and from then on I get no log for audioMic. I stop using the microphone in that app, but the sound never comes back. As a result, my video file has no sound at all.
In the same context I also noticed that even with the Photos app broadcast extension, if you start recording a video and then use the Speech to Text feature of the keyboard, the sound gets spliced: while STT is on there is no sound, and the audio from after STT stops is joined directly to the audio from before STT started, so I guess this is a general behavior.
Also, in the same research, I saw that the Google Meet app does not allow another app to use the microphone while you are in a meeting (even STT is grayed out).
I would like to know my options here. What can I do to get a valid video file with sound? How can I prevent other apps from using the microphone while my app is recording? Is there an entitlement for this? How does Google Meet do that?
P.S. I have added an observer for the session's interruption notification; the .began case fires, but .ended does not, so I cannot set the AVAudioSession active again.
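For what it's worth, a minimal sketch of the interruption observer mentioned in the P.S., with an .ended branch that attempts to reactivate the session; whether .ended is ever delivered to a broadcast extension in this scenario is exactly the open question:

import AVFoundation

var interruptionObserver: NSObjectProtocol?

func observeInterruptions() {
    interruptionObserver = NotificationCenter.default.addObserver(
        forName: AVAudioSession.interruptionNotification,
        object: AVAudioSession.sharedInstance(),
        queue: .main
    ) { note in
        guard let raw = note.userInfo?[AVAudioSessionInterruptionTypeKey] as? UInt,
              let type = AVAudioSession.InterruptionType(rawValue: raw) else { return }
        switch type {
        case .began:
            break // mic samples stop arriving here, matching the missing .audioMic buffers
        case .ended:
            // Reported as never firing in the extension; if it did, this is where
            // the session could be reactivated to resume capture.
            try? AVAudioSession.sharedInstance().setActive(true)
        @unknown default:
            break
        }
    }
}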
Hello everyone,
I’m new to Swift development and have been working on an audio module that plays a specific sound at regular intervals - similar to a workout timer that signals switching exercises every few minutes.
Following AVFoundation documentation, I’m configuring my audio session like this:
let session = AVAudioSession.sharedInstance()
try session.setCategory(
    .playback,
    mode: .default,
    options: [.interruptSpokenAudioAndMixWithOthers, .duckOthers]
)
self.engine.attach(self.player)
self.engine.connect(self.player, to: self.engine.outputNode, format: self.audioFormat)
try? session.setActive(true)
When it’s time to play cues, I schedule playback on a DispatchQueue:
// scheduleAudio uses DispatchQueue
self.scheduleAudio(at: interval.start) {
    do {
        try audio.engine.start()
        audio.node.play()
        for sample in interval.samples {
            audio.node.scheduleBuffer(sample.buffer, at: AVAudioTime(hostTime: sample.hostTime))
        }
    } catch {
        print("Audio activation failed: \(error)")
    }
}
This works perfectly in the foreground. But once the app goes into the background, the scheduled callback runs, yet the audio engine fails to start, resulting in an error with code 561015905.
Interestingly, if the app is already playing audio before going to the background, the scheduled sounds continue to play as expected.
I have added the required background audio mode to my Info.plist by including the key UIBackgroundModes with the value audio.
Is there anything else I should configure? What is the best practice to play periodic audio when the app runs in the background? How do apps like turn-by-turn navigation handle continuous audio playback in the background?
Any advice or pointers would be greatly appreciated!
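Error 561015905 is the four-character code '!pla', i.e. the session refused to start playing, which is what you typically see when audio is started from scratch while the app is already in the background. A sketch of one common mitigation, reusing the names from the snippet above: reactivate the session right before starting the engine, and avoid fully stopping the engine between cues. The more reliable approach is to have the session active and the engine running before the app is backgrounded, which matches the observation that playback keeps working when audio was already playing:

self.scheduleAudio(at: interval.start) {
    do {
        // Re-assert the session before starting the engine; starting from a fully
        // stopped engine in the background can be rejected with '!pla' (561015905).
        try AVAudioSession.sharedInstance().setActive(true)
        if !audio.engine.isRunning {
            try audio.engine.start()
        }
        audio.node.play()
        for sample in interval.samples {
            audio.node.scheduleBuffer(sample.buffer, at: AVAudioTime(hostTime: sample.hostTime))
        }
    } catch {
        print("Audio activation failed: \(error)")
    }
}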
In my navigation CarPlay app I am needing to capture voice input and process that into text. Is there a built in way to do this in CarPlay?
I did not find one, so I used the following, but I am running into issues where the AVAudioSession will throw an error when I am trying to set active to false after I have captured the audio.
public func startRecording(completionHandler: @escaping (_ completion: String?) -> ()) throws {
    // Cancel the previous task if it's running.
    if let recognitionTask = self.recognitionTask {
        recognitionTask.cancel()
        self.recognitionTask = nil
    }

    // Configure the audio session for the app.
    let audioSession = AVAudioSession.sharedInstance()
    try audioSession.setCategory(.record, mode: .default, options: [.duckOthers, .interruptSpokenAudioAndMixWithOthers])
    try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
    let inputNode = self.audioEngine.inputNode

    // Create and configure the speech recognition request.
    self.recognitionRequest = SFSpeechAudioBufferRecognitionRequest()
    guard let recognitionRequest = self.recognitionRequest else { fatalError("Unable to create a SFSpeechAudioBufferRecognitionRequest object") }
    recognitionRequest.shouldReportPartialResults = true

    // Keep speech recognition data on device
    recognitionRequest.requiresOnDeviceRecognition = true

    // Create a recognition task for the speech recognition session.
    // Keep a reference to the task so that it can be canceled.
    self.recognitionTask = self.speechRecognizer.recognitionTask(with: recognitionRequest) { result, error in
        var isFinal = false

        if let result = result {
            // Update the text view with the results.
            let textResult = result.bestTranscription.formattedString
            isFinal = result.isFinal
            let confidence = result.bestTranscription.segments[0].confidence
            if confidence > 0.0 {
                isFinal = true
                completionHandler(textResult)
            }
        }

        if error != nil || isFinal {
            // Stop recognizing speech if there is a problem.
            self.audioEngine.stop()
            do {
                try audioSession.setActive(false, options: .notifyOthersOnDeactivation)
            } catch {
                print(error)
            }
            inputNode.removeTap(onBus: 0)
            self.recognitionRequest = nil
            self.recognitionTask = nil
            if error != nil {
                completionHandler(nil)
            }
        }
    }

    // Configure the microphone input.
    let recordingFormat = inputNode.outputFormat(forBus: 0)
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer: AVAudioPCMBuffer, when: AVAudioTime) in
        self.recognitionRequest?.append(buffer)
    }

    self.audioEngine.prepare()
    try self.audioEngine.start()
}
Again, is there a built-in method to capture audio in CarPlay? If not, why would the AVAudioSession throw this error?
Error Domain=NSOSStatusErrorDomain Code=560030580 "Session deactivation failed" UserInfo={NSLocalizedDescription=Session deactivation failed}
Hello.
My team and I think we have an issue where our app is asked to gracefully shutdown with a following SIGTERM. As we’ve learned, this is normally not an issue. However, it seems to also be happening while our app (an audio streamer) is actively playing in the background.
From our perspective, starting playback is indicating strong user intent. We understand that there can be extreme circumstances where the background audio needs to be killed, but should it be considered part of normal operation? We hope that’s not the case.
All we see in the logs is the graceful shutdown request. We can say with high certainty that it’s happening though, as we know that playback is running within 0.5 seconds of the crash, without any other tracked user interaction.
Can you verify whether this is intended behavior, and whether there’s something we can do about it from our end? From our logs it doesn’t look to be related to memory usage, either within the app or in the system as a whole.
Best,
John
We’ve encountered a reproducible issue where the iPhone fails to reconnect to a Wi-Fi access point under the following conditions:
The device is connected to a 2.4GHz Wi-Fi network.
A Bluetooth audio accessory is connected (e.g. headset).
AVAudioSession is active (such as during a voice call or when using the Voice Memos app).
The user moves away from the access point, causing a disconnect.
Upon returning within range, the access point is no longer recognized or reconnected while AVAudioSession remains active.
However, if the Bluetooth device is disconnected or the AVAudioSession is deactivated, the Wi-Fi access point is immediately recognized again.
We confirmed this behavior not only in my app but also using Apple's built-in Voice Memos app, suggesting this is not specific to our implementation.
It appears that the Wi-Fi system deprioritizes reconnection while AVAudioSession is engaged. Could this be by design? Or is this a known issue or limitation with Wi-Fi and AVAudioSession interaction?
Test Environment:
Device: iPhone 13 mini
iOS: 17.5.1
Wi-Fi: 2.4GHz band
Accessories: Bluetooth headset
We’d appreciate clarification on whether this is expected behavior or a bug. Thank you!
I would like to inquire about the feasibility of developing an iOS application with the following requirements:
The app must support real-time audio communication based on UDP.
It needs to maintain a TCP signaling connection, even when the device is locked.
The app will run only on selected devices within a controlled (closed) environment, such as company-managed iPads or iPhones.
Could you please clarify the following:
Is it technically possible to maintain an active TCP connection when the device is locked?
What are the current iOS restrictions or limitations for background execution, particularly related to networking and audio?
Are there any recommended APIs or frameworks (such as VoIP, PushKit, or Background Modes) suitable for this type of application?
I am currently developing an app for the Apple Watch. In RTPController.swift, I handle the sending, receiving, and playback of audio, and the specific processes are as follows:
Overview of the current implementation:
Audio processing: Audio processing is performed by setting the AVAudioSession to the playAndRecord category and voiceChat mode within RTPController, and by activating the AVAudioEngine.
Audio reception: RTP packets (audio data) are received over the network within the setupConnection() method of RTPController.
Audio playback: The received audio data is passed to the playSound(data:) method and played back through the AVAudioEngine and AVAudioPlayerNode.
Xcode Capabilities settings:
Signing & Capabilities > Background Modes:
Audio, AirPlay, and Picture in Picture
Voice over IP
Workout processing
Privacy descriptions in Info.plist:
Privacy - Health Share Usage Description
Privacy - Health Update Usage Description
Privacy - Health Records Usage Description
Question 1: When the Digital Crown is pressed during a call, a message appears on the screen stating, "End Call to Continue," and the call cannot be moved to the background. As a result, it is not possible to operate other apps while on a call. Is this behavior due to the specifications of CallKit?
Question 2: Our app stops communication when it goes into the background, but the Walkie-Talkie app on the Apple Watch can transition to the background by pressing the Digital Crown during a call, allowing it to continue receiving and playing the other party's audio while in the background. To achieve background transition during a call, with audio reception and playback in the background, is the current implementation of RTPController and the enabled background modes insufficient?
Best regards.
I have an audio issue on iPhone 15 Pro (iOS 18.5).
After certain steps, a new call ends up in a one-way-audio state, but tapping mute and then unmute brings the audio back to normal.
See the attached video and check the "yellow dot indicator" for the audio status.
Video link:
https://youtube.com/shorts/DqYIIIqtMKI?feature=share
I had a similar issue on iOS 15 and iOS 16 and no issue on iOS 17, but now I have this issue on iOS 18 with Dynamic Island devices.
Please check. Thanks.
I am implementing the new Push to Talk framework and I found an issue where channelManager(_:didActivate:) is not called even though I immediately return a non-nil activeRemoteParticipant from incomingPushResult.
I have tested it and it could play the PTT audio in foreground and background.
This issue is only occurring when I join the PTT Channel from the app foreground, then kill the app.
The channel gets restored via channelDescriptor(restoredChannelUUID:).
After the channel gets restored, I send PTT push.
I can see that my device is receiving the incomingPushResult, returning the activeRemoteParticipant, and the notification panel shows that A is speaking - but channelManager(_:didActivate:) never gets called, resulting in no audio being played.
Rejoining the channel fixes the issue. And reopening the app also seems to fix the issue.
My app inputs electrical waveforms from an IV485B39 2-channel USB device using an AVAudioSession. Before attempting to acquire data I make sure the input device is available as follows:
AVAudioSession *audioSession = [AVAudioSession sharedInstance];
[audioSession setCategory:AVAudioSessionCategoryRecord error:&err];
NSArray *inputs = [audioSession availableInputs];
I have been using this code for about 10 years.
My app is scriptable, so a user can acquire data from the IV485B39 multiple times with various parameter settings (sampling rates and sample durations). Recently the scripts have been failing to complete, and what I have noticed is that when they fail, the list of available inputs is missing the USBAudio input. While debugging I noticed that when things work properly the list of inputs includes both the internal microphone and the USBAudio device, as shown below.
VIB_TimeSeriesViewController:***Available inputs = (
"<AVAudioSessionPortDescription: 0x11584c7d0, type = MicrophoneBuiltIn; name = iPad Microphone; UID = Built-In Microphone; selectedDataSource = Front>",
"<AVAudioSessionPortDescription: 0x11584cae0, type = USBAudio; name = 485B39 200095708064650803073200616; UID = AppleUSBAudioEngine:Digiducer.com :485B39 200095708064650803073200616:000957 200095708064650803073200616:1; selectedDataSource = (null)>"
)
But when it fails I only see the built-in microphone.
VIB_TimeSeriesViewController:***Available inputs = (
"<AVAudioSessionPortDescription: 0x11584cef0, type = MicrophoneBuiltIn; name = iPad Microphone; UID = Built-In Microphone; selectedDataSource = Front>"
)
If I only see the built-in microphone, I immediately repeat the three lines of code, and most of the time "inputs" then contains both the internal microphone and the USBAudio device:
AVAudioSession *audioSession = [AVAudioSession sharedInstance];
[audioSession setCategory:AVAudioSessionCategoryRecord error:&err];
NSArray *inputs = [audioSession availableInputs];
This fix always works on my M2 iPad Pro and my iPhone 14, but some of my customers have older devices, and even with 3 tries they still get faults about 1 in 10 tries.
I rolled back my code to a released version from about 12 months ago where I know we never had this problem and compiled it against the current libraries and the problem still exists. I assume this is a problem caused by a change in the AVAudioSession framework libraries. I need to find a way to work around the issue or get the library fixed.
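The retry described above can also be written as a small bounded poll; here is a Swift sketch of the same idea (illustrative only, and a dispatch-based retry would avoid blocking the calling thread):

import AVFoundation

// Poll for the USB audio input for up to `attempts` tries, about 100 ms apart.
func waitForUSBInput(attempts: Int = 10) -> AVAudioSessionPortDescription? {
    let session = AVAudioSession.sharedInstance()
    try? session.setCategory(.record, mode: .default, options: [])
    for _ in 0..<attempts {
        if let usb = session.availableInputs?.first(where: { $0.portType == .usbAudio }) {
            return usb
        }
        Thread.sleep(forTimeInterval: 0.1)
    }
    return nil
}

Observing AVAudioSession.routeChangeNotification is another way to learn when the USB device becomes available, instead of polling.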
Environment: Device: iPad (10th generation); OS: iOS 18.3.2
We're using AVAudioPlayer to play a sound when a button is tapped. In our use case, this button can be tapped very frequently — roughly every 0.1 to 0.2 seconds. Each tap triggers the following function:
var audioPlayer: AVAudioPlayer?

func soundPlay(resource: String, type: String) {
    guard let path = Bundle.main.path(forResource: resource, ofType: type) else {
        return
    }
    do {
        audioPlayer = try AVAudioPlayer(contentsOf: URL(fileURLWithPath: path))
        audioPlayer!.delegate = self
        try audioSession.setCategory(.playback)
    } catch {
        return
    }
    self.audioPlayer!.play()
}
The issue is that under high-frequency tapping (especially around 0.1–0.15s intervals), the app occasionally crashes. The crash does not occur every time, but it happens randomly — sometimes within 30 seconds, within 1 minute, or even 3 minutes of continuous tapping.
Interestingly, adding a delay of 0.2 seconds between button taps seems to prevent the crash entirely. Delays shorter than 0.2 seconds (e.g., 0.15 s or 0.18 s) still result in occasional crashes.
My questions are:
Is this expected behavior from AVAudioPlayer or AVAudioSession?
Could this be a known issue or a limitation in AVFoundation?
Is there any documentation or guidance on handling frequent sound playback safely?
Any insights or recommendations on how to handle rapid, repeated audio playback more reliably would be appreciated.
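One pattern that avoids re-creating an AVAudioPlayer (and re-reading the file) on every tap is to load a small pool of players once, call prepareToPlay up front, and rotate through them; this is a sketch of that idea, not a confirmed fix for the crash:

import AVFoundation

final class TapSoundPool {
    private var players: [AVAudioPlayer] = []
    private var index = 0

    init?(resource: String, type: String, poolSize: Int = 4) {
        guard let url = Bundle.main.url(forResource: resource, withExtension: type) else { return nil }
        try? AVAudioSession.sharedInstance().setCategory(.playback, mode: .default, options: [])
        for _ in 0..<poolSize {
            guard let player = try? AVAudioPlayer(contentsOf: url) else { return nil }
            player.prepareToPlay()
            players.append(player)
        }
    }

    func play() {
        // Reuse the next pre-loaded player instead of allocating a new one per tap.
        let player = players[index]
        index = (index + 1) % players.count
        player.currentTime = 0
        player.play()
    }
}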