AVAudioEngine & channels

We're using AVAudioEngine and trying to keep things simple in our app; since we don't need stereo sound, we limit audio processing to a single channel. That works fine, because the microphone (engine.inputNode) is single channel, and the audio files we provide with the app are all single channel as well.


Unfortunately, the user can choose to import audio into the app that is 2 channel. That causes the app to crash with 'required condition is false: _outputFormat.channelCount == buffer.format.channelCount'.


I'm curious if there is an easy fix for this, or whether we have to detect this change in channels, and disconnect nodes and reconnect with the new format. I was hoping that AVAudioEngine could detect the incoming format of an AVAudioPCMBuffer, and handle it appropriately, without much help from me.


If I have to disconnect and reconnect with the new format, how extensively do I need to do this? I have a PCM buffer going through an AVAudioPlayerNode to an AVAudioUnitVarispeed, which then goes to the mixerNode (a single simple case; I have more complex ones where multiple AudioUnits are chained together). Do all the nodes need to be reset to the new format? Only certain ones?
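For context, the simple case is wired up roughly like this (a sketch with names simplified; monoFormat stands in for the 1-channel format we use throughout, and the sample rate is just an example):


  AVAudioFormat *monoFormat = [[AVAudioFormat alloc] initStandardFormatWithSampleRate:44100.0
                                                                             channels:1];

  [engine attachNode:audioFilePlayer];
  [engine attachNode:_varispeed];

  // Player feeds the varispeed, which feeds the engine's main mixer, all in mono.
  [engine connect:audioFilePlayer to:_varispeed format:monoFormat];
  [engine connect:_varispeed to:engine.mainMixerNode format:monoFormat];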


Doing the following after the graph has already been set up and after the new 2-channel file is loaded, and before scheduling the audioFilePlayer:


  [engine connect:audioFilePlayer to:_varispeed format:audioBuffer.format];

fails with a

kAudioUnitErr_FormatNotSupported = -10868


If I don't have that line in there, and load the 2-channel file after the graph has already been set up for 1-channel operation, I get:


required condition is false: _outputFormat.channelCount == buffer.format.channelCount


If I have to tear down the graph and rebuild it to get a 2-channel sound working, would it be better to convert the audio on the fly instead?


Thank you in advance for any tips and pointers.


mz

I think the most efficient way to do what you're discussing is to insert a Mixer node before your Varispeed and use it to convert from x to mono. The mixer can have any number of inputs at any sample rate and channel count and will up or downmix to the output channel count. So that should make life super easy.


So, instead of going from your AVAudioPlayerNode directly into the AVAudioUnitVarispeed, insert an AVAudioMixerNode in front of it:


AVAudioPlayerNode (1 channel) --\
AVAudioPlayerNode (1 channel) ---> AVAudioMixerNode (converts from x format to mono) -> AVAudioUnitVarispeed -> etc.
AVAudioPlayerNode (2 channel) --/
etc. etc.


This way there is no need to disconnect/reconnect any node even if the file/buffer format at the player level changes. We support as many mixer nodes in the graph as required; there are no restrictions.
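A rough sketch of that wiring, using the node names from your post (engine, audioFilePlayer, _varispeed, audioBuffer) and assuming a 44.1 kHz mono format for everything downstream of the submixer:


  AVAudioMixerNode *submixer = [[AVAudioMixerNode alloc] init];
  [engine attachNode:submixer];

  // The player connects to the submixer in whatever format the buffer/file has;
  // the submixer up- or downmixes to the format of its output connection.
  [engine connect:audioFilePlayer to:submixer format:audioBuffer.format];

  // Everything after the submixer stays mono, no matter what the player feeds in.
  AVAudioFormat *monoFormat = [[AVAudioFormat alloc] initStandardFormatWithSampleRate:44100.0
                                                                             channels:1];
  [engine connect:submixer to:_varispeed format:monoFormat];
  [engine connect:_varispeed to:engine.mainMixerNode format:monoFormat];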


Hope that helps!

Please correct me if I am wrong, but I was under the assumption that after the mixer node, the audio would hit the speaker and be heard. Are you saying that I can have mixer nodes that don't play out the speaker?

You may be thinking of the mainMixerNode / outputNode properties and how the nodes are then connected in the AVAudioEngine object.


While the mainMixerNode is an AVAudioMixerNode and the engine will configure and connect it to the outputNode as described in the API reference, it doesn't have to be the only AVAudioMixerNode in your setup. You can have multiple mixers, only one of which will be the mainMixerNode created for you by the engine.

The error went away when I converted the mono audio file to stereo.
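For reference, if the conversion ever needs to happen in code rather than by re-exporting the file, AVAudioConverter can handle a channel-count change when the sample rate stays the same. A minimal sketch, with monoBuffer standing in for a hypothetical loaded 1-channel AVAudioPCMBuffer:


  AVAudioFormat *stereoFormat = [[AVAudioFormat alloc] initStandardFormatWithSampleRate:monoBuffer.format.sampleRate
                                                                               channels:2];
  AVAudioConverter *converter = [[AVAudioConverter alloc] initFromFormat:monoBuffer.format toFormat:stereoFormat];
  AVAudioPCMBuffer *stereoBuffer = [[AVAudioPCMBuffer alloc] initWithPCMFormat:stereoFormat
                                                                 frameCapacity:monoBuffer.frameLength];

  NSError *error = nil;
  // convertToBuffer:fromBuffer:error: is enough here because only the channel
  // count changes, not the sample rate.
  if (![converter convertToBuffer:stereoBuffer fromBuffer:monoBuffer error:&error]) {
      NSLog(@"Conversion failed: %@", error);
  }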
