Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.

Audio Documentation

Posts under Audio subtopic

Post | Replies | Boosts | Views | Activity

MusicKit initialisation: Uncaught TypeError: Cannot read properties of undefined (reading 'node')
I'm trying to load MusicKit on the server with SolidJS. I can confirm that my implementation is sufficient to return authentication tokens and for MusicKit.isAuthorized to return true. My issue is that if I reload the page, it only succeeds intermittently (perhaps 25% of the time). My question is: what is wrong with my implementation? Removing the async keyword makes it load every time, but playing and queuing music then no longer works. I'm currently assuming this is an SSR issue, but the docs haven't explicitly said this isn't possible. I have the following boilerplate:

export default createHandler(() => (
  <StartServer
    document={({ assets, children, scripts }) => {
      return (
        <html lang="en">
          <head>
            <meta name="apple-music-developer-token" content={authResult.token} />
            <meta name="apple-music-app-name" content="app name" />
            <meta name="apple-music-app-build" content="1978.4.1" />
            {assets}
            <script src="https://js-cdn.music.apple.com/musickit/v3/musickit.js" async />
          </head>
          <body>
            <div id="app">{children}</div>
            {scripts}
          </body>
        </html>
      )
    }}
  />
))

When I first load my app, I'll encounter:

musickit.js:13 Uncaught TypeError: Cannot read properties of undefined (reading 'node')
    at musickit.js:13:10194
    at musickit.js:13:140
    at musickit.js:13:209

The intermittence suggests an issue related to the async keyword. An expansion on this issue can be found here.
0
0
554
Dec ’24
Help Needed: How to Make iOS Timer More Stable?
I’m currently developing an iOS metronome app using DispatchSourceTimer as the timer. The interval is set very small, around 50 milliseconds, and I’m using CFAbsoluteTimeGetCurrent to calculate the elapsed time to ensure the beat is played within a ±0.003-second margin. The problem is that once the app goes to the background, the timing becomes unstable—it slows down noticeably, then recovers after 1–2 seconds. When coming back to the foreground, it suddenly speeds up, and again, it takes 1–2 seconds to return to normal. It feels like the app is randomly “powering off” and then “overclocking.” It’s super frustrating. I’ve noticed that some metronome apps in the App Store have similar issues, but there’s one called “Professional Metronome” that’s rock solid with no such problems. What kind of magic are they using? Any experts out there who can help? Thanks in advance! P.S. I’ve already enabled background audio permissions. The professional metronome that has no issues: https://link.zhihu.com/?target=https%3A//apps.apple.com/cn/app/pro-metronome-%25E4%25B8%2593%25E4%25B8%259A%25E8%258A%2582%25E6%258B%258D%25E5%2599%25A8/id477960671
2
0
555
Jan ’25
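For the metronome question above, the usual way to get timing that survives backgrounding is to stop driving clicks from a CPU timer (DispatchSourceTimer is throttled when the app is not frontmost) and instead schedule them ahead of time on the audio render clock. The following is only a minimal Swift sketch of that idea, assuming a short click sound has already been loaded into clickBuffer; the class name, the eight-beat lookahead, and the BPM value are placeholders, not anything from the original post.

import AVFoundation

final class ClickScheduler {
    private let engine = AVAudioEngine()
    private let player = AVAudioPlayerNode()
    private let clickBuffer: AVAudioPCMBuffer   // a preloaded, one-beat click sound
    private var nextBeatSampleTime: AVAudioFramePosition = 0

    init(clickBuffer: AVAudioPCMBuffer) {
        self.clickBuffer = clickBuffer
        engine.attach(player)
        engine.connect(player, to: engine.mainMixerNode, format: clickBuffer.format)
    }

    func start(bpm: Double) throws {
        try engine.start()
        let sampleRate = clickBuffer.format.sampleRate
        let framesPerBeat = AVAudioFramePosition(sampleRate * 60.0 / bpm)

        // Queue several beats before starting, then keep topping up from the
        // completion handlers, so short scheduling hiccups (e.g. backgrounding)
        // never produce an audible gap.
        for _ in 0..<8 {
            scheduleNextBeat(framesPerBeat: framesPerBeat)
        }
        player.play()
    }

    private func scheduleNextBeat(framesPerBeat: AVAudioFramePosition) {
        // Sample-accurate start time on the player's own render timeline.
        let when = AVAudioTime(sampleTime: nextBeatSampleTime,
                               atRate: clickBuffer.format.sampleRate)
        player.scheduleBuffer(clickBuffer, at: when, options: []) { [weak self] in
            self?.scheduleNextBeat(framesPerBeat: framesPerBeat)
        }
        nextBeatSampleTime += framesPerBeat
    }
}

Because the beat positions are expressed in sample frames rather than wall-clock callbacks, they are immune to the timer coalescing that makes a 50 ms DispatchSourceTimer drift in the background; background audio permission is still required, as the poster already has.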
Changing instrument with AVMIDIControlChangeEvent bankSelect
I've been trying to use AVMIDIControlChangeEvent with a bankSelect message type to change the instrument the sequencer uses on an AVMusicTrack, with no luck. I started with the Apple AVAEMixerSample, converting the initial setup/loading and the portions dealing with the sequencer to Swift. I got that working and playing the "bluesyRiff", and then modified it to play individual notes. So my createAndSetupSequencer looked like

func createAndSetupSequencer() {
    sequencer = AVAudioSequencer(audioEngine: engine)
    // guard let midiFileURL = Bundle.main.url(forResource: "bluesyRiff", withExtension: "mid") else {
    //     print (" failed guard trying to get URL for bluesyRiff")
    //     return
    // }
    let track = sequencer.createAndAppendTrack()
    var currTime = 1.0
    for i: UInt32 in 0...8 {
        let newNoteEvent = AVMIDINoteEvent(channel: 0, key: 60+i, velocity: 64, duration: 2.0)
        track.addEvent(newNoteEvent, at: AVMusicTimeStamp(currTime))
        currTime += 2.0
    }

The notes played, so then I also replaced the gs_instruments sound bank with GeneralUser GS MuseScore v1.442, first by trying

guard let soundBankURL = Bundle.main.url(forResource: "GeneralUser GS MuseScore v1.442", withExtension: "sf2") else { return }
do {
    try sampler.loadSoundBankInstrument(at: soundBankURL, program: 0x001C, bankMSB: 0x79, bankLSB: 0x08)
} catch { ....
}

This appears to work: the instrument (8, which is "Funk Guitar") plays. If I change to bankLSB: 0x00 I get the "Palm Muted" guitar. So I know that the SoundFont has these instruments.

Things go off the rails when I try to change the instruments in createAndSetupSequencer. Putting

let programChange = AVMIDIProgramChangeEvent(channel: 0, programNumber: 0x001C)
let bankChange = AVMIDIControlChangeEvent(channel: 0, messageType: AVMIDIControlChangeEvent.MessageType.bankSelect, value: 0x00)
track.addEvent(programChange, at: AVMusicTimeStamp(1.0))
track.addEvent(bankChange, at: AVMusicTimeStamp(1.0))

just before my add-note loop doesn't produce any change. Loading bankLSB 8 (Funk) in sampler.loadSoundBankInstrument and trying to change with bankSelect 0 (Palm Muted) in createAndSetupSequencer results in instrument 8 (Funk) playing, not Palm Muted. Loading bankLSB 0 (Palm Muted) and trying to change with bankSelect 8 (Funk) doesn't work either: 0 (Palm Muted) plays.

I also tried sampler.loadInstrument(at: soundBankURL), and then I always get the first instrument in the SoundFont file (piano) no matter what values I put in my programChange/bankChange. I've also changed the time in the track.addEvent calls to 0, 1.0, 3.0, etc., without success.

The sampler.loadSoundBankInstrument call specifies two UInt8 parameters, bankMSB and bankLSB, while the AVMIDIControlChangeEvent bankSelect value is UInt32, suggesting it might be some combination of bankMSB and bankLSB. But the documentation makes no mention of what this should look like. I tried various combinations such as 0x7908 and 0x0879 to no avail.

I will also point out that I am able to successfully execute other control change events. For example, adding

if i == 1 {
    let portamentoOnEvent = AVMIDIControlChangeEvent(channel: 0, messageType: AVMIDIControlChangeEvent.MessageType.portamento, value: 0xFF)
    track.addEvent(portamentoOnEvent, at: AVMusicTimeStamp(currTime))
    let portamentoRateEvent = AVMIDIControlChangeEvent(channel: 0, messageType: AVMIDIControlChangeEvent.MessageType.portamentoTime, value: 64)
    track.addEvent(portamentoRateEvent, at: AVMusicTimeStamp(currTime))
}

does produce a change in the sound.

(As an aside, a definition of what portamento time is, other than "the rate of portamento", would be welcome. Is it notes/second? freq/minute? beats/hour?) I was able to get the instrument to change in a different program using MusicPlayer and a series of MusicTrackNewMIDIChannelEvent calls on a track, but those operate on a MusicTrack, not the AVMusicTrack which the sequencer uses. Has anyone been successful in switching instruments through an AVMIDIControlChangeEvent, or have any feedback on how to do this?
0
0
347
Mar ’25
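For the bank-select question above, one workaround (not an answer for AVMIDIControlChangeEvent itself) is to send the bank select and program change straight to the sampler's underlying Audio Unit with MusicDeviceMIDIEvent, where bank MSB and LSB are ordinary MIDI controllers 0 and 32. The sketch below assumes sampler is the AVAudioUnitSampler already loaded with the SoundFont; the 0x79/0x08/0x1C values only mirror the numbers in the post and are not a recommendation.

import AVFoundation
import AudioToolbox

/// Sends bank select (CC 0 = MSB, CC 32 = LSB) followed by a program change
/// directly to the sampler's Audio Unit, bypassing the AVAudioSequencer track.
func selectInstrument(on sampler: AVAudioUnitSampler,
                      channel: UInt32,
                      bankMSB: UInt32,
                      bankLSB: UInt32,
                      program: UInt32) {
    let controlChange: UInt32 = 0xB0 | channel   // status byte: control change on this channel
    let programChange: UInt32 = 0xC0 | channel   // status byte: program change on this channel

    MusicDeviceMIDIEvent(sampler.audioUnit, controlChange, 0, bankMSB, 0)    // CC 0: bank MSB
    MusicDeviceMIDIEvent(sampler.audioUnit, controlChange, 32, bankLSB, 0)   // CC 32: bank LSB
    MusicDeviceMIDIEvent(sampler.audioUnit, programChange, program, 0, 0)    // program change
}

// Mirroring the values from the post (bank MSB 0x79, LSB 0x08, program 0x1C):
// selectInstrument(on: sampler, channel: 0, bankMSB: 0x79, bankLSB: 0x08, program: 0x1C)

This sidesteps the question of how the single UInt32 bankSelect value maps onto MSB/LSB, since the two controllers are sent explicitly; whether an AVMusicTrack event can achieve the same thing remains open.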
Music Keeps cutting off
Every time I put my AirPods in and connect them to my phone, my Mac, or my iPad since the iOS 18.3 update, they've been disconnecting without reason, pausing songs I'm in the middle of playing, and only partially reconnecting in one pod. It's getting really frustrating.
1
0
316
Feb ’25
Unable to match music with ShazamKit for Android
Hello, I can successfully match music using ShazamKit on Apple platforms with SwiftUI (a simple app that lets the user load an audio file and extracts the relevant match), but I am unable to match music using ShazamKit on Android. I am trying to make the same simple app, but I cannot match music, as I get MATCH_ATTEMPT_FAILED every time I try. I don't know what I am doing wrong, but the Shazam part of the Kotlin Android code is in this method:

suspend fun processAudioFileInBackground(
    filePath: String,
    developerTokenProvider: DeveloperTokenProvider
) = withContext(Dispatchers.IO) {
    val bufferSize = 1024 * 1024
    val audioFile = FileInputStream(filePath)
    val byteBuffer = ByteBuffer.allocate(bufferSize)
    byteBuffer.order(ByteOrder.LITTLE_ENDIAN)
    var bytesRead: Int
    while (audioFile.read(byteBuffer.array()).also { bytesRead = it } != -1) {
        val signatureGenerator = (ShazamKit.createSignatureGenerator(AudioSampleRateInHz.SAMPLE_RATE_44100) as ShazamKitResult.Success).data
        signatureGenerator.append(byteBuffer.array(), bytesRead, System.currentTimeMillis())
        val signature = signatureGenerator.generateSignature()
        println("Signature: ${signature.durationInMs}")
        val catalog = ShazamKit.createShazamCatalog(developerTokenProvider, Locale.ENGLISH)
        val session = (ShazamKit.createSession(catalog) as ShazamKitResult.Success).data
        val matchResult = session.match(signature)
        println("MatchResult : $matchResult")
        setMatchResult(matchResult)
        byteBuffer.clear()
    }
    audioFile.close()
}

I noticed that changing the Locale in the catalog creation gives a different result: I get NoMatch without an exception. Can you please help me with this? Do I need to create a custom catalog?
0
0
109
May ’25
AVMIDIPlayer.play() function crashes on iOS 18
It only occurs on iOS 18+. Backtrace attached below.

Exception Codes: 0x0000000000000000, 0x0000000000000000
Termination Reason: SIGNAL 6 Abort trap: 6
Terminating Process: NoteKeys [24384]
Triggered by Thread: 0

Last Exception Backtrace:
0   CoreFoundation      0x1a2d4c7cc __exceptionPreprocess + 164 (NSException.m:249)
1   libobjc.A.dylib     0x1a001f2e4 objc_exception_throw + 88 (objc-exception.mm:356)
2   CoreFoundation      0x1a2e47748 +[NSException raise:format:] + 128 (NSException.m:0)
3   AVFAudio            0x1bd41f4c8 -[AVMIDIPlayer play:] + 300 (AVMIDIPlayer.mm:145)
4   NoteKeys            0x1023c0670 SoundGenerator.playData() + 20 (SoundGenerator.swift:170)
5   NoteKeys            0x1023c0670 EditViewController.playBtnTapped(startIndex:) + 940 (EditViewController.swift:2034)
6   NoteKeys            0x1024497fc specialized Keyboard.playBtnTapped(sender:) + 1904 (Keyboard.swift:1249)
7   NoteKeys            0x10244631c Keyboard.playBtnTapped(sender:) + 4 (<compiler-generated>:0)
8   NoteKeys            0x10244631c @objc Keyboard.playBtnTapped(sender:) + 48
9   UIKitCore           0x1a58739cc -[UIApplication sendAction:to:from:forEvent:] + 100 (UIApplication.m:5816)
10  UIKitCore           0x1a58738a4 -[UIControl sendAction:to:forEvent:] + 112 (UIControl.m:942)
11  UIKitCore           0x1a58736f4 -[UIControl _sendActionsForEvents:withEvent:] + 324 (UIControl.m:1013)
12  UIKitCore           0x1a5fe8d8c -[UIButton _sendActionsForEvents:withEvent:] + 124 (UIButton.m:4198)
13  UIKitCore           0x1a5fea5a0 -[UIControl touchesEnded:withEvent:] + 400 (UIControl.m:692)
14  UIKitCore           0x1a57bb9ac -[UIWindow _sendTouchesForEvent:] + 852 (UIWindow.m:3318)
15  UIKitCore           0x1a57bb3d8 -[UIWindow sendEvent:] + 2964 (UIWindow.m:3641)
16  UIKitCore           0x1a564fb70 -[UIApplication sendEvent:] + 376 (UIApplication.m:12972)
17  UIKitCore           0x1a565009c __dispatchPreprocessedEventFromEventQueue + 1048 (UIEventDispatcher.m:2686)
18  UIKitCore           0x1a5659f3c __processEventQueue + 5696 (UIEventDispatcher.m:3044)
19  UIKitCore           0x1a5552c60 updateCycleEntry + 160 (UIEventDispatcher.m:133)
20  UIKitCore           0x1a55509d8 _UIUpdateSequenceRun + 84 (_UIUpdateSequence.mm:136)
21  UIKitCore           0x1a5550628 schedulerStepScheduledMainSection + 172 (_UIUpdateScheduler.m:1171)
22  UIKitCore           0x1a555159c runloopSourceCallback + 92 (_UIUpdateScheduler.m:1334)
23  CoreFoundation      0x1a2d20328 __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__ + 28 (CFRunLoop.c:1970)
24  CoreFoundation      0x1a2d202bc __CFRunLoopDoSource0 + 176 (CFRunLoop.c:2014)
25  CoreFoundation      0x1a2d1ddc0 __CFRunLoopDoSources0 + 244 (CFRunLoop.c:2051)
26  CoreFoundation      0x1a2d1cfbc __CFRunLoopRun + 840 (CFRunLoop.c:2969)
27  CoreFoundation      0x1a2d1c830 CFRunLoopRunSpecific + 588 (CFRunLoop.c:3434)
28  GraphicsServices    0x1eecfc1c4 GSEventRunModal + 164 (GSEvent.c:2196)
29  UIKitCore           0x1a5882eb0 -[UIApplication _run] + 816 (UIApplication.m:3844)
30  UIKitCore           0x1a59315b4 UIApplicationMain + 340 (UIApplication.m:5496)
31  NoteKeys            0x10254bc10 main + 68 (AppDelegate.swift:15)
32  dyld                0x1c870aec8 start + 2724 (dyldMain.cpp:1334)

Thanks very much for any help :)
1
0
295
Feb ’25
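This is not a confirmed fix for the exception above (the backtrace shows the NSException being raised inside -[AVMIDIPlayer play:] itself), but a defensive pattern worth trying while it is investigated is to create the player once, call prepareToPlay() before the play call, and keep both URLs valid for the player's lifetime. A minimal sketch, with placeholder resource names:

import AVFoundation

func makeAndStartPlayer() throws -> AVMIDIPlayer {
    // Placeholder resources; substitute the app's actual MIDI data and SoundFont.
    guard let midiURL = Bundle.main.url(forResource: "song", withExtension: "mid"),
          let bankURL = Bundle.main.url(forResource: "bank", withExtension: "sf2") else {
        throw CocoaError(.fileNoSuchFile)
    }

    let player = try AVMIDIPlayer(contentsOf: midiURL, soundBankURL: bankURL)
    player.prepareToPlay()          // allocate playback resources before play
    player.play {
        // completion handler runs when the sequence finishes
    }
    return player
}

If the crash still reproduces with this ordering on iOS 18, a bug report with the attached backtrace is probably the right next step.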
AVAudioEngine: Split a 1x4-channel bus into 4x1-channel buses?
I'm using a 4-channel USB audio interface with 4 microphones, and want to process them through 4 independent effect chains. However, the output from AVAudioInputNode is a single 4-channel bus. How can I split this into 4 mono buses? The following code splits the input into 4 copies and routes them through the effects, but each bus contains all four channels. How can I remap the channels to remove the unwanted channels from each bus? I tried using channelMap on the mixer node, but that had no effect. I'm currently using this code primarily on iOS, but it should be portable between iOS and macOS. It would be possible to do this through a matrix mixer node, but that seems complete overkill for such a basic operation. I'm already using a matrix mixer to combine the inputs, and it's not well supported in AVAudioEngine.

AVAudioInputNode *inputNode = [engine inputNode];
[inputNode setVoiceProcessingEnabled:NO error:nil];
NSMutableArray *micDestinations = [NSMutableArray arrayWithCapacity:trackCount];
for(i = 0; i < trackCount; i++) {
    fixMicFormat[i] = [AVAudioMixerNode new];
    [engine attachNode:fixMicFormat[i]];
    // And create reverb/compressor and eq the same way...
    [engine connect:reverb[i] to:matrixMixerNode fromBus:0 toBus:i format:nil];
    [engine connect:eq[i] to:reverb[i] fromBus:0 toBus:0 format:nil];
    [engine connect:compressor[i] to:eq[i] fromBus:0 toBus:0 format:nil];
    [engine connect:fixMicFormat[i] to:compressor[i] fromBus:0 toBus:0 format:nil];
    [micDestinations addObject:[[AVAudioConnectionPoint alloc] initWithNode:fixMicFormat[i] bus:0]];
}
AVAudioFormat *inputFormat = [inputNode outputFormatForBus:1];
[engine connect:inputNode toConnectionPoints:micDestinations fromBus:1 format:inputFormat];
2
0
231
Oct ’25
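For the channel-splitting question above, one workaround (offered only as a sketch, and in Swift rather than the Objective-C of the post) is to deinterleave the input node's multichannel buffers manually in a tap and feed each channel to its own AVAudioPlayerNode, with one player at the head of each effect chain. The makeMonoCopy and installSplitterTap names are illustrative, not an existing API.

import AVFoundation

/// Copies a single channel out of a multichannel, non-interleaved float buffer.
func makeMonoCopy(of buffer: AVAudioPCMBuffer, channel: Int) -> AVAudioPCMBuffer? {
    guard let src = buffer.floatChannelData,
          channel < Int(buffer.format.channelCount),
          let monoFormat = AVAudioFormat(standardFormatWithSampleRate: buffer.format.sampleRate,
                                         channels: 1),
          let mono = AVAudioPCMBuffer(pcmFormat: monoFormat,
                                      frameCapacity: buffer.frameLength) else { return nil }
    mono.frameLength = buffer.frameLength
    mono.floatChannelData?[0].update(from: src[channel], count: Int(buffer.frameLength))
    return mono
}

/// Taps the multichannel input and pushes one mono buffer per channel into its own
/// player node, each player feeding its own effect chain (player[i] -> eq[i] -> ...).
func installSplitterTap(on engine: AVAudioEngine, players: [AVAudioPlayerNode]) {
    let input = engine.inputNode
    let format = input.outputFormat(forBus: 0)
    input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
        for (index, player) in players.enumerated() {
            if let mono = makeMonoCopy(of: buffer, channel: index) {
                player.scheduleBuffer(mono, completionHandler: nil)
            }
        }
    }
}

The trade-off is that a tap adds a buffer of latency and the split streams are no longer rendered in lockstep with the live input connection, so this suits monitoring and recording chains better than ultra-low-latency monitoring.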
AVSpeechSynthesisVoices available on device
Hello there! Is there any list of voices that are always available on iOS/iPadOS devices? It seems that AVSpeechSynthesisVoice(identifier: "com.apple.voice.compact.en-US.Samantha") is always available on all devices. I thought that AVSpeechSynthesisVoice(identifier: "com.apple.ttsbundle.siri_Nicky_en-US_compact") and AVSpeechSynthesisVoice(identifier: "com.apple.ttsbundle.siri_Aaron_en-US_compact") were available by default on certain newer devices. Is this true? I also noticed that on the same iPad where I was using those 2 voices (Nicky and Aaron) - when I updated to the iPadOS 26 beta, those voices were no longer available. Any information you can share about which voices should be reliably available on which devices would be extremely helpful for our development. Thanks so much!
0
0
143
Jun ’25
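There doesn't appear to be a published guarantee of which voice identifiers exist on a given device or OS build, so rather than hard-coding "com.apple.ttsbundle.siri_Nicky_en-US_compact" and friends, a defensive approach is to enumerate what is actually installed and fall back to a language-based default. A small Swift sketch of that idea:

import AVFoundation

/// Returns a voice for the given identifier if it is installed,
/// otherwise falls back to the system default voice for the language.
func resolveVoice(preferredIdentifier: String, language: String = "en-US") -> AVSpeechSynthesisVoice? {
    let installed = AVSpeechSynthesisVoice.speechVoices()

    // Log what is actually available on this device/OS build.
    for voice in installed where voice.language == language {
        print("available:", voice.identifier, voice.name, voice.quality.rawValue)
    }

    if installed.contains(where: { $0.identifier == preferredIdentifier }) {
        return AVSpeechSynthesisVoice(identifier: preferredIdentifier)
    }
    // Fallback: let the system pick its default voice for the language.
    return AVSpeechSynthesisVoice(language: language)
}

Logging the speechVoices() list on the iPadOS 26 beta versus the previous release would also show directly whether the Nicky/Aaron identifiers were removed or merely renamed.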
AudioUnit may experience silent capture issues on iPadOS 18.4.1 or 18.5.
Among the millions of users of our online product, we have identified through data metrics that the silent audio data capture rate on iPadOS 18.4.1 or 18.5 has increased abnormally. However, we are unable to reproduce the issue. Has anyone encountered a similar issue? The parameters we used are as follows:

AudioSession:
    category: AVAudioSessionCategoryPlayAndRecord
    mode: AVAudioSessionModeDefault
    option: 77
    preferredSampleRate: 48000.000000
    preferredIOBufferDuration: 0.010000

AudioUnit:
    format.mFormatID = kAudioFormatLinearPCM;
    format.mSampleRate = 48000.0;
    format.mChannelsPerFrame = 2;
    format.mBitsPerChannel = 16;
    format.mFramesPerPacket = 1;
    format.mBytesPerFrame = format.mChannelsPerFrame * 16 / 8;
    format.mBytesPerPacket = format.mBytesPerFrame * format.mFramesPerPacket;
    format.mFormatFlags = kAudioFormatFlagsNativeEndian | kLinearPCMFormatFlagIsPacked | kLinearPCMFormatFlagIsSignedInteger;

    component.componentType = kAudioUnitType_Output;
    component.componentSubType = kAudioUnitSubType_RemoteIO;
    component.componentManufacturer = kAudioUnitManufacturer_Apple;
    component.componentFlags = 0;
    component.componentFlagsMask = 0;
0
0
128
Jun ’25
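We can't speak to whether this is an OS-side regression, but one mitigation pattern, sketched below in Swift, is to watch for session interruptions and media-services resets and rebuild the capture unit when they fire, while also measuring the captured level so silent capture can be detected and reported from the field. The restartCapture()/reportSilence() hooks are placeholders for the app's own logic.

import AVFoundation

final class CaptureHealthMonitor {
    var restartCapture: () -> Void = {}   // placeholder: tear down and rebuild the RemoteIO unit
    var reportSilence: () -> Void = {}    // placeholder: send a metric/log upstream
    private var tokens: [NSObjectProtocol] = []

    func startObserving() {
        let nc = NotificationCenter.default
        tokens.append(nc.addObserver(forName: AVAudioSession.interruptionNotification,
                                     object: nil, queue: .main) { [weak self] note in
            let raw = note.userInfo?[AVAudioSessionInterruptionTypeKey] as? UInt
            if raw == AVAudioSession.InterruptionType.ended.rawValue {
                self?.restartCapture()   // interruption over: restart the input unit
            }
        })
        tokens.append(nc.addObserver(forName: AVAudioSession.mediaServicesWereResetNotification,
                                     object: nil, queue: .main) { [weak self] _ in
            self?.restartCapture()       // media server restarted: all audio units must be rebuilt
        })
    }

    /// Call from the input callback with each captured 16-bit buffer.
    func inspect(samples: UnsafePointer<Int16>, count: Int) {
        var allZero = true
        for i in 0..<count where samples[i] != 0 { allZero = false; break }
        if allZero && count > 0 { reportSilence() }   // an all-zero buffer is a candidate "silent capture"
    }
}

Counting how often reportSilence() fires per session on 18.4.1/18.5 versus earlier builds would at least confirm whether the metric spike corresponds to genuinely zeroed buffers or to something downstream.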
Wi-Fi Access Point Not Reconnecting While AVAudioSession Is Active
We’ve encountered a reproducible issue where the iPhone fails to reconnect to a Wi-Fi access point under the following conditions:

- The device is connected to a 2.4GHz Wi-Fi network.
- A Bluetooth audio accessory is connected (e.g. a headset).
- AVAudioSession is active (such as during a voice call or when using the Voice Memos app).
- The user moves away from the access point, causing a disconnect.

Upon returning within range, the access point is no longer recognized or reconnected while AVAudioSession remains active. However, if the Bluetooth device is disconnected or the AVAudioSession is deactivated, the Wi-Fi access point is immediately recognized again. We confirmed this behavior not only in our app but also using Apple's built-in Voice Memos app, suggesting it is not specific to our implementation. It appears that the Wi-Fi system deprioritizes reconnection while AVAudioSession is engaged. Could this be by design? Or is this a known issue or limitation in the interaction between Wi-Fi and AVAudioSession?

Test environment:
- Device: iPhone 13 mini
- iOS: 17.5.1
- Wi-Fi: 2.4GHz band
- Accessories: Bluetooth headset

We’d appreciate clarification on whether this is expected behavior or a bug. Thank you!
0
0
137
Jun ’25
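Whether the reconnection behaviour above is intentional is something only Apple can answer (a Feedback report seems warranted), but a practical mitigation is to hold the audio session active only while actually recording or in a call, and to release it with notifyOthersOnDeactivation as soon as the work finishes. A small sketch of that pattern, with placeholder function names:

import AVFoundation

func beginVoiceWork() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .voiceChat, options: [.allowBluetooth])
    try session.setActive(true)
}

func endVoiceWork() {
    // Release the session as soon as recording/call work is done so the system
    // is no longer balancing Bluetooth audio against the 2.4 GHz Wi-Fi radio.
    try? AVAudioSession.sharedInstance().setActive(false, options: .notifyOthersOnDeactivation)
}

This only narrows the window in which the reported Wi-Fi behaviour can occur; it does not change it while a call or recording is genuinely in progress.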
Background recording app getting killed by the watchdog... how to avoid?
We have the necessary background recording entitlements, and many users do not run into any issues. However, there is a subset of users whose recordings routinely end early. We have narrowed this down and believe it to be the work of the watchdog. First we removed the entire view hierarchy when the app is backgrounded; there is just Text("Recording"). This got the CPU usage in the profiler down to 0%, and we saw massive improvements to the recording success rate. We walked away assuming that was enough. However, we are still seeing the same sort of crashes, all in the background. We're using Observation to drive audio state changes to a Live Activity. Are those Observations causing the problem? Why doesn't Apple provide a better API for background audio? The internet is full of similar issues:
https://stackoverflow.com/questions/76010213/why-is-my-react-native-app-sometimes-terminated-in-the-background-while-tracking
https://stackoverflow.com/questions/71656047/why-is-my-react-native-app-terminating-in-the-background-while-recording-ios-r
https://github.com/expo/expo/issues/16807
This is a terrible user experience, and we have very little visibility into what is happening and why. Nowhere does Apple's documentation state that for background recording to work the app can only be Text("Recording"), and it does not outline a CPU or memory threshold. It just kills us.
2
0
393
Mar ’25
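It's hard to say from the outside whether the Observation-driven Live Activity updates are what trips the watchdog, but a common defensive pattern is to gate all non-audio periodic work on the app state, so that while backgrounded only the recording pipeline runs. A Swift sketch of that gating; how it is wired into the app's own Live Activity code is left as a placeholder.

import UIKit

final class BackgroundGate {
    private(set) var isBackgrounded = false
    private var tokens: [NSObjectProtocol] = []

    init() {
        let nc = NotificationCenter.default
        tokens.append(nc.addObserver(forName: UIApplication.didEnterBackgroundNotification,
                                     object: nil, queue: .main) { [weak self] _ in
            self?.isBackgrounded = true     // suspend Live Activity / UI observation work
        })
        tokens.append(nc.addObserver(forName: UIApplication.willEnterForegroundNotification,
                                     object: nil, queue: .main) { [weak self] _ in
            self?.isBackgrounded = false    // resume normal UI updates
        })
    }

    /// Wrap every non-audio periodic task in this check so the only work done
    /// in the background is the recording itself.
    func performIfForeground(_ work: () -> Void) {
        guard !isBackgrounded else { return }
        work()
    }
}

Pairing this with MetricKit crash/hang diagnostics from the affected users would also show whether the terminations are CPU-watchdog exceptions or something else entirely.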
How to capture audio from the stream that's playing on the speakers?
Good day, ladies and gents. I have an application that reads audio from the microphone. I'd like it to also be able to read from the Mac's audio output stream. (A bonus would be if it could detect when the Mac is playing music.) I'd eventually be able to figure it out by reading the docs, but if someone can give a hint, I'd be very grateful, and would owe you the libation of your choice. Here's the code used to set up the AudioUnit:

-(NSString*) configureAU
{
    AudioComponent component = NULL;
    AudioComponentDescription description;
    OSStatus err = noErr;
    UInt32 param;
    AURenderCallbackStruct callback;

    if( audioUnit ) {
        AudioComponentInstanceDispose( audioUnit );   // was CloseComponent
        audioUnit = NULL;
    }

    // Open the AudioOutputUnit
    description.componentType = kAudioUnitType_Output;
    description.componentSubType = kAudioUnitSubType_HALOutput;
    description.componentManufacturer = kAudioUnitManufacturer_Apple;
    description.componentFlags = 0;
    description.componentFlagsMask = 0;

    if( component = AudioComponentFindNext( NULL, &description ) ) {
        err = AudioComponentInstanceNew( component, &audioUnit );
        if( err != noErr ) {
            audioUnit = NULL;
            return [NSString stringWithFormat: @"Couldn't open AudioUnit component (ID=%d)", err];
        }
    }

    // Configure the AudioOutputUnit:
    // You must enable the Audio Unit (AUHAL) for input and output for the same device.
    // When using AudioUnitSetProperty the 4th parameter in the method refers to an AudioUnitElement.
    // When using an AudioOutputUnit for input the element will be '1' and the output element will be '0'.

    param = 1;   // Enable input on the AUHAL
    err = AudioUnitSetProperty( audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, 1, &param, sizeof(UInt32) );
    chkerr("Couldn't set first EnableIO prop (enable input) (ID=%d)");

    param = 0;   // Disable output on the AUHAL
    err = AudioUnitSetProperty( audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output, 0, &param, sizeof(UInt32) );
    chkerr("Couldn't set second EnableIO property on the audio unit (disable output) (ID=%d)");

    param = sizeof(AudioDeviceID);   // Select the default input device
    AudioObjectPropertyAddress OutputAddr = { kAudioHardwarePropertyDefaultInputDevice, kAudioObjectPropertyScopeGlobal, kAudioObjectPropertyElementMaster };
    err = AudioObjectGetPropertyData( kAudioObjectSystemObject, &OutputAddr, 0, NULL, &param, &inputDeviceID );
    chkerr("Couldn't get default input device (ID=%d)");

    // Set the current device to the default input unit
    err = AudioUnitSetProperty( audioUnit, kAudioOutputUnitProperty_CurrentDevice, kAudioUnitScope_Global, 0, &inputDeviceID, sizeof(AudioDeviceID) );
    chkerr("Failed to hook up input device to our AudioUnit (ID=%d)");

    // Setup render callback, to be called when the AUHAL has input data
    callback.inputProc = AudioInputProc;
    callback.inputProcRefCon = self;
    err = AudioUnitSetProperty( audioUnit, kAudioOutputUnitProperty_SetInputCallback, kAudioUnitScope_Global, 0, &callback, sizeof(AURenderCallbackStruct) );
    chkerr("Could not install render callback on our AudioUnit (ID=%d)");

    param = sizeof(AudioStreamBasicDescription);   // get hardware device format
    err = AudioUnitGetProperty( audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 1, &deviceFormat, &param );
    chkerr("Could not install render callback on our AudioUnit (ID=%d)");

    // Twiddle the format to our liking
    audioChannels = MAX( deviceFormat.mChannelsPerFrame, 2 );
    actualOutputFormat.mChannelsPerFrame = audioChannels;
    actualOutputFormat.mSampleRate = deviceFormat.mSampleRate;
    actualOutputFormat.mFormatID = kAudioFormatLinearPCM;
    actualOutputFormat.mFormatFlags = kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked | kAudioFormatFlagIsNonInterleaved;
    if( actualOutputFormat.mFormatID == kAudioFormatLinearPCM && audioChannels == 1 )
        actualOutputFormat.mFormatFlags &= ~kLinearPCMFormatFlagIsNonInterleaved;
#if __BIG_ENDIAN__
    actualOutputFormat.mFormatFlags |= kAudioFormatFlagIsBigEndian;
#endif
    actualOutputFormat.mBitsPerChannel = sizeof(Float32) * 8;
    actualOutputFormat.mBytesPerFrame = actualOutputFormat.mBitsPerChannel / 8;
    actualOutputFormat.mFramesPerPacket = 1;
    actualOutputFormat.mBytesPerPacket = actualOutputFormat.mBytesPerFrame;

    // Set the AudioOutputUnit output data format
    err = AudioUnitSetProperty( audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 1, &actualOutputFormat, sizeof(AudioStreamBasicDescription) );
    chkerr("Could not change the stream format of the output device (ID=%d)");

    param = sizeof(UInt32);   // Get the number of frames in the IO buffer(s)
    err = AudioUnitGetProperty( audioUnit, kAudioDevicePropertyBufferFrameSize, kAudioUnitScope_Global, 0, &audioSamples, &param );
    chkerr("Could not determine audio sample size (ID=%d)");

    err = AudioUnitInitialize( audioUnit );   // Initialize the AU
    chkerr("Could not initialize the AudioUnit (ID=%d)");

    // Allocate our audio buffers
    audioBuffer = [self allocateAudioBufferListWithNumChannels: actualOutputFormat.mChannelsPerFrame size: audioSamples * actualOutputFormat.mBytesPerFrame];
    if( audioBuffer == NULL ) {
        [self cleanUp];
        return [NSString stringWithFormat: @"Could not allocate buffers for recording (ID=%d)", err];
    }

    return nil;
}

(...again, it would be nice to know if audio output is active and thereby choose the clean output stream over the noisy mic, but that would be a different chunk of code, and my main question may just be a quick edit to this chunk.)

Thanks for your attention!
==Dave

[p.s. if I get more than one useful answer, can I "Accept" more than one, to spread the credit around?]
{pps: of course, the code lines up prettier in a monospaced font!}
1
0
161
Jun ’25
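For the question above, on recent macOS versions the supported route is not an AUHAL input trick but ScreenCaptureKit, which can deliver the system's output audio as sample buffers (and, by implication, tells you when something is playing). The following Swift sketch is offered as a starting point under the assumption of macOS 13 or later with the Screen Recording permission granted, not as a drop-in replacement for the Objective-C code above; older systems generally rely on a loopback audio driver instead.

import Foundation
import CoreMedia
import ScreenCaptureKit

final class SystemAudioGrabber: NSObject, SCStreamOutput {
    private var stream: SCStream?

    func start() async throws {
        // Capture audio for the main display, excluding this app's own output.
        let content = try await SCShareableContent.excludingDesktopWindows(false, onScreenWindowsOnly: true)
        guard let display = content.displays.first else { return }
        let filter = SCContentFilter(display: display, excludingWindows: [])

        let config = SCStreamConfiguration()
        config.capturesAudio = true                  // requires macOS 13+
        config.excludesCurrentProcessAudio = true

        let stream = SCStream(filter: filter, configuration: config, delegate: nil)
        try stream.addStreamOutput(self, type: .audio,
                                   sampleHandlerQueue: DispatchQueue(label: "system-audio"))
        try await stream.startCapture()
        self.stream = stream
    }

    // Called with CMSampleBuffers containing the system's output audio.
    func stream(_ stream: SCStream, didOutputSampleBuffer sampleBuffer: CMSampleBuffer,
                of type: SCStreamOutputType) {
        guard type == .audio else { return }
        // Non-silent buffers arriving here also answer "is the Mac playing audio right now?"
        // Convert or inspect the buffer here before mixing it with the microphone path.
    }
}

The existing AUHAL microphone path can stay as it is; this runs alongside it and simply provides the clean output stream the post asks about.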