Reply to How to safely switch between mic configurations on iOS?
The call to setVoiceProcessingEnabled(_:) is synchronous and triggers a configuration change in the engine, so you must access the formats from the input and output nodes after changing the voice-processing state to ensure you're configuring your graph with the correct formats. Additionally, there is some sample code available as part of the AVEchoTouch project - https://developer.apple.com/documentation/avfaudio/using-voice-processing?language=objc
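For example, here is a minimal sketch of that ordering, assuming a plain AVAudioEngine with the default input and output nodes (the graph connections are only illustrative):

```swift
import AVFAudio

let engine = AVAudioEngine()

do {
    // Enabling voice processing triggers an engine configuration change,
    // so do it before reading any node formats.
    try engine.inputNode.setVoiceProcessingEnabled(true)
} catch {
    print("Failed to enable voice processing: \(error)")
}

// Read the formats only *after* the voice-processing state has changed;
// they may differ from the formats reported beforehand.
let inputFormat = engine.inputNode.outputFormat(forBus: 0)
let outputFormat = engine.outputNode.inputFormat(forBus: 0)

// Build the graph with the post-change formats (connections shown are illustrative).
engine.connect(engine.inputNode, to: engine.mainMixerNode, format: inputFormat)
engine.connect(engine.mainMixerNode, to: engine.outputNode, format: outputFormat)
```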
Topic: Media Technologies SubTopic: Audio Tags:
3w
Reply to downsampling in AVAudioEngine graph
From what I have observed, downsampling or upsampling is automatic if you use a mixer. Specifically, one end of the mixer was attached to the input node, which was running at 44.1 kHz, and the other end of the mixer was connected to an AVAudioPlayerNode running at 48 kHz. I could hear the network-transmitted audio coming out of the speaker. This confused me at first until I looked at the documentation for AVAudioMixerNode: "The mixer accepts input at any sample rate and efficiently combines sample rate conversions. It also accepts any channel count and correctly upmixes or downmixes to the output channel count."
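Here is a rough sketch of that topology; the 48 kHz mono player format is an assumption for illustration, and the hardware rate comes from the output node itself:

```swift
import AVFAudio

let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
let mixer = AVAudioMixerNode()

engine.attach(player)
engine.attach(mixer)

// Buffers scheduled on the player use this assumed 48 kHz mono format.
let playerFormat = AVAudioFormat(standardFormatWithSampleRate: 48_000, channels: 1)

// The mixer accepts the 48 kHz player input...
engine.connect(player, to: mixer, format: playerFormat)

// ...and passing nil here lets the engine convert from the mixer's output
// to whatever rate the output hardware is running (44.1 kHz in my case).
engine.connect(mixer, to: engine.outputNode, format: nil)
```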
Topic: Media Technologies SubTopic: Audio Tags:
Oct ’25
Reply to Handling AVAudioEngine Configuration Change
@milesegan Were there any memory management issues switching from AVAudioPlayerNode to AVAudioSourceNode? I'm using the player node now and am having issues when the audio engine goes through a configuration change. When this happens I stop the engine, remove the player node(s), re-attach and re-connect the player nodes, and then restart the engine. I wrote this code before realizing source nodes were a thing. I'm hoping that using a source node makes things simpler and requires less dynamic coordination. My thinking is that I can have the requisite number of source nodes connected to a mixer and just leave that configuration in place for the duration of my app. Then, when one of my two or three dedicated inputs comes online, I can feed buffers into the source nodes and not worry about adding and removing player nodes. From your experience, does this sound like it would work? Would you be willing to share some code showing how you configure your engine with the source node?
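In sketch form, what I'm picturing is something like this (a single silence-producing source node and an assumed 48 kHz mono format; untested):

```swift
import AVFAudio

let engine = AVAudioEngine()
let format = AVAudioFormat(standardFormatWithSampleRate: 48_000, channels: 1)!

// Placeholder render block: output silence until a dedicated input comes
// online and real buffers are available to copy in.
let sourceNode = AVAudioSourceNode(format: format) { isSilence, _, _, audioBufferList -> OSStatus in
    let buffers = UnsafeMutableAudioBufferListPointer(audioBufferList)
    for buffer in buffers {
        memset(buffer.mData, 0, Int(buffer.mDataByteSize))
    }
    isSilence.pointee = true
    return noErr
}

// Attach once, connect to the mixer, and leave the graph in place.
engine.attach(sourceNode)
engine.connect(sourceNode, to: engine.mainMixerNode, format: format)
try? engine.start()
```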
Topic: Media Technologies SubTopic: Audio Tags:
Oct ’25
Reply to Is AVAudioPCMFormatFloat32 required for playing a buffer with AVAudioEngine / AVAudioPlayerNode
In my experience, things only consistently work when using Float32 non-interleaved samples. This seems to be the requirement for the audio engine's input and output nodes, as well as for playing back audio with the player node. I am also recording data to disk in this format. Any time I tried to use Int16 interleaved data, the results were negative. I had to perform my own conversions between these two formats because the third-party library I was using for remote-conference audio only accepted Int16 interleaved data in both directions.
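For reference, this is roughly the Float32 non-interleaved setup that has worked for me; the sample rate, channel count, and buffer length here are placeholders:

```swift
import AVFAudio

// Float32, non-interleaved (deinterleaved) format for the whole graph.
let format = AVAudioFormat(commonFormat: .pcmFormatFloat32,
                           sampleRate: 48_000,
                           channels: 2,
                           interleaved: false)!

let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
engine.attach(player)
engine.connect(player, to: engine.mainMixerNode, format: format)

// In practice the buffer would be filled with real samples before scheduling.
let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: 4800)!
buffer.frameLength = buffer.frameCapacity

try? engine.start()
player.scheduleBuffer(buffer, completionHandler: nil)
player.play()
```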
Topic: Media Technologies SubTopic: Audio Tags:
Oct ’25
Reply to When to set AVAudioSession's preferredInput?
Probably way too late, but perhaps someone else will benefit from the discussion. I have observed that the system will always jump to the most recently plugged-in microphone. I assume this is because a person who just plugged in a microphone presumably wants to use that microphone immediately. My suggestion is to monitor the route change notifications and re-assert your choice by calling setPreferredInput again. I have not tested this, but give it a try.
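Untested, but this is roughly what I mean; desiredInput would be the AVAudioSessionPortDescription you previously chose, and the returned observer token needs to be kept alive for as long as you want the behavior:

```swift
import AVFAudio

func reassertPreferredInput(_ desiredInput: AVAudioSessionPortDescription) -> NSObjectProtocol {
    return NotificationCenter.default.addObserver(
        forName: AVAudioSession.routeChangeNotification,
        object: nil,
        queue: .main
    ) { _ in
        do {
            // If the system jumped to a newly plugged-in mic, re-assert our choice.
            try AVAudioSession.sharedInstance().setPreferredInput(desiredInput)
        } catch {
            print("Failed to re-set preferred input: \(error)")
        }
    }
}
```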
Topic: Media Technologies SubTopic: Audio Tags:
Oct ’25
Reply to DriverKit driver doesn't appear in Settings when installed with iPad app
The Apple defect for iOS was solved last year. If you are still having issues on iOS, make sure you are running the latest version of iOS as a first step. If that requirement has been met, you have some other technical issue preventing your driver from being recognized by the system. If you are using macOS, there may be other issues, such as your Xcode configuration or not having your system properly set up for driver development. I don't have all the details for macOS, but the documentation provided by Apple should help in this instance. It might help if you created a separate issue, perhaps linked to this one, where you completely describe your situation and the problem you're experiencing. Be sure to include the code and configuration snippets so we can see exactly what you are doing in order to register and activate your driver.
Topic: App & System Services SubTopic: Drivers Tags:
Apr ’25
Reply to DriverKit: Check that driver is enabled on iPadOS
I too have the same requirement. With my USB driver I cannot tell the difference between the device being unplugged and the driver not being activated via the Settings app. I need to be able to direct the user to flip the switch when I know for certain that it has not yet been flipped. Neither the IOKit nor the SystemExtensions framework is available on iOS. Does anyone know of a workaround for this issue?
Topic: App & System Services SubTopic: Core OS Tags:
Jan ’25
Reply to IOServiceOpen fails with -308 Error (smUnExBusError)
"If I do what I think I need to be doing to unmap the memory, when I try to open the service again, it fails. If I skip the step where I do that unmapping, the service opens successfully." Are you saying everything works if you don't unmap the memory? That is, when you open the device again without attempting to unmap memory, can you communicate successfully with the device and proceed as normal? The way this is worded, it is unclear to me.
Topic: App & System Services SubTopic: Drivers Tags:
Nov ’24
Reply to AVAudioPCMBuffer Memory Management
Did you ever figure this out? I've been doing the same thing. I don't get any crashes, but when I hand the buffers off to LiveSwitch for playback, there is no audio signal. The pipeline looks like this:

1. I receive the buffer from a tap on bus zero.
2. I send the buffer to the publisher.
3. The buffer is received, potentially by up to two consumers (currently one).
4. The buffer has to be converted using AVAudioConverter from Float32 to Int16 (see the sketch below), which is required for consumption by the LiveSwitch APIs.
5. The buffer memory is converted to NSMutableData (required by LiveSwitch).
6. The buffer is wrapped / converted into an FMLiveSwitchAudioFrame.
7. The buffer is raised to LiveSwitch for processing.

Result: no signal.
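Step 4 looks roughly like this; the output format is derived from the input buffer, and error handling is simplified, so treat it as a sketch rather than the exact code:

```swift
import AVFAudio

// Convert a Float32 (non-interleaved) buffer to Int16 interleaved for LiveSwitch.
func convertToInt16Interleaved(_ input: AVAudioPCMBuffer) -> AVAudioPCMBuffer? {
    guard let outFormat = AVAudioFormat(commonFormat: .pcmFormatInt16,
                                        sampleRate: input.format.sampleRate,
                                        channels: input.format.channelCount,
                                        interleaved: true),
          let converter = AVAudioConverter(from: input.format, to: outFormat),
          let output = AVAudioPCMBuffer(pcmFormat: outFormat,
                                        frameCapacity: input.frameLength) else {
        return nil
    }

    var error: NSError?
    var consumed = false
    let status = converter.convert(to: output, error: &error) { _, inputStatus in
        // Hand the single input buffer to the converter exactly once.
        if consumed {
            inputStatus.pointee = .noDataNow
            return nil
        }
        consumed = true
        inputStatus.pointee = .haveData
        return input
    }
    return (status != .error && error == nil) ? output : nil
}
```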
Topic: Media Technologies SubTopic: Audio Tags:
Aug ’24
Reply to AVCam Example: Can I use a Actor instead of a DispatchQueue for capture session activity?
@enodev Thank you for your response. However, the answer seems to skirt the question. I don't want to use both an Actor and a DispatchQueue, perhaps in some sort of wrapped or nested form; I want to use one in place of the other. Your answer seems to imply that this is not possible, and that using an Actor with AVFoundation APIs that must run off the main thread, for blocking reasons, would still require a DispatchQueue.
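To make the question concrete, this is the shape I'm after, with all AVCaptureSession work isolated to an actor instead of a serial queue; the names here are made up and not from the AVCam sample:

```swift
import AVFoundation

enum CaptureError: Error { case setupFailed }

actor CaptureService {
    private let session = AVCaptureSession()

    func configure() throws {
        session.beginConfiguration()
        defer { session.commitConfiguration() }

        // Hypothetical device selection; outputs would be added the same way.
        guard let device = AVCaptureDevice.default(for: .video),
              let input = try? AVCaptureDeviceInput(device: device),
              session.canAddInput(input) else {
            throw CaptureError.setupFailed
        }
        session.addInput(input)
    }

    func start() { session.startRunning() }
}
```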
May ’24