My app samples the various inputs available on the iPhone and iPad and performs a frequency analysis. In addition to using the internal accelerometer and gyroscope, I can also sample the microphone and USB input devices, such as accelerometers, through the audio input subsystem. The highest sample rate I use with the microphone and USB devices is the 48 kHz of the audio sampling subsystem, which provides a bandwidth of 24 kHz (the Nyquist frequency) on the sampled signal. This has worked for many generations of iPhone and iPad until now. When I use my iPhone 14 Pro there is a sharp frequency cutoff at about 8 kHz, and I see an artifact at the same frequency when I use the simulators. But when I use my 11" iPad Pro or my current-generation iPhone SE I do not see this effect and get good data out to 24 kHz. The iPad Pro does show some rolloff near 24 kHz, which is noticeable but not a problem for most applications.
The rolloff at 8 kHz is a serious problem for my customers who are testing equipment vibration and noise. I am wondering if this is related to the new microphone modes "Standard", "Voice Isolation", and "Wide Spectrum", but if so, why only on the iPhone 14 Pro and the simulators? I have searched the documentation, but apparently it is not possible to change the microphone mode programmatically, and Apple's documentation on how to use this new feature is lacking.
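From what I can find, the mode can only be read and the system picker presented, not set in code. A minimal sketch of what I mean (iOS 15+, AVCaptureDevice class properties; checkMicrophoneMode is just an illustrative name):

#import <AVFoundation/AVFoundation.h>

// Sketch: log the current microphone mode and let the user change it.
// The active mode is only meaningful while audio capture is running.
static void checkMicrophoneMode(void)
{
    if (@available(iOS 15.0, *)) {
        AVCaptureMicrophoneMode active = AVCaptureDevice.activeMicrophoneMode;
        if (active == AVCaptureMicrophoneModeStandard) {
            NSLog(@"mic mode: Standard");
        } else if (active == AVCaptureMicrophoneModeVoiceIsolation) {
            NSLog(@"mic mode: Voice Isolation");
        } else if (active == AVCaptureMicrophoneModeWideSpectrum) {
            NSLog(@"mic mode: Wide Spectrum");
        }
        // Opens the Control Center mic-mode picker; the app cannot set the mode itself.
        [AVCaptureDevice showSystemUserInterface:AVCaptureSystemUserInterfaceMicrophoneModes];
    }
}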
I am using AVAudioSession and AVAudioRecorder methods to acquire the data through the audio capture hardware. This code has been working well for me for over 10 years, so I do not think it is a code problem, but it could be a configuration problem caused by new hardware in the iPhone 14, although I have not found anything in the documentation.
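For what it is worth, the kind of session setup I mean looks roughly like this (a sketch, not my exact code; AVAudioSessionModeMeasurement is something I have been experimenting with to rule out system input processing):

NSError *err = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];

// Measurement mode asks the system to minimize its own input signal processing,
// which seems like the first thing to rule out for an unexpected 8 kHz cutoff.
[session setCategory:AVAudioSessionCategoryRecord
                mode:AVAudioSessionModeMeasurement
             options:0
               error:&err];
[session setPreferredSampleRate:48000.0 error:&err];
[session setActive:YES error:&err];

// Confirm what the hardware actually granted.
NSLog(@"sampleRate=%.0f Hz, IOBufferDuration=%.4f s", session.sampleRate, session.IOBufferDuration);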
Examples from various devices and a simulator are shown below for the microphone. Does anyone have an idea what may be causing this problem?
iPhone SE 3rd Gen
iPad 9th Gen
iPad Pro 11"
iPhone 14 Pro
iPad 10th Generation Simulator
My app inputs electrical waveforms from an IV485B39 2-channel USB device using an AVAudioSession. Before attempting to acquire data, I make sure the input device is available as follows:
NSError *err = nil;
AVAudioSession *audioSession = [AVAudioSession sharedInstance];
[audioSession setCategory:AVAudioSessionCategoryRecord error:&err];
NSArray *inputs = [audioSession availableInputs];
I have been using this code for about 10 years.
My app is scriptable, so a user can acquire data from the IV485B39 multiple times with various parameter settings (sampling rates and sample durations). Recently the scripts have been failing to complete, and what I have noticed is that when they fail the list of available inputs is missing the USBAudio input. While debugging I have seen that when things are working properly the list of inputs includes both the internal microphone and the USBAudio device, as shown below.
VIB_TimeSeriesViewController:***Available inputs = (
"<AVAudioSessionPortDescription: 0x11584c7d0, type = MicrophoneBuiltIn; name = iPad Microphone; UID = Built-In Microphone; selectedDataSource = Front>",
"<AVAudioSessionPortDescription: 0x11584cae0, type = USBAudio; name = 485B39 200095708064650803073200616; UID = AppleUSBAudioEngine:Digiducer.com :485B39 200095708064650803073200616:000957 200095708064650803073200616:1; selectedDataSource = (null)>"
)
But when it fails I only see the built-in microphone.
VIB_TimeSeriesViewController:***Available inputs = (
"<AVAudioSessionPortDescription: 0x11584cef0, type = MicrophoneBuiltIn; name = iPad Microphone; UID = Built-In Microphone; selectedDataSource = Front>"
)
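The check I am doing amounts to something like this (a sketch; hasUSBInput is just an illustrative name):

// Look for the USB port by type rather than by name.
BOOL hasUSBInput = NO;
for (AVAudioSessionPortDescription *port in [audioSession availableInputs]) {
    if ([port.portType isEqualToString:AVAudioSessionPortUSBAudio]) {
        hasUSBInput = YES;
        break;
    }
}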
If I only see the built-in microphone, I immediately repeat the same lines of code, and most of the time "inputs" then contains both the internal microphone and the USBAudio device:
NSError *err = nil;
AVAudioSession *audioSession = [AVAudioSession sharedInstance];
[audioSession setCategory:AVAudioSessionCategoryRecord error:&err];
NSArray *inputs = [audioSession availableInputs];
This fix always works on my M2 iPad Pro and my iPhone 14, but some of my customers have older devices, and even with three tries they still get faults about 1 time in 10.
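One workaround I am considering is replacing the fixed number of back-to-back tries with a bounded retry loop that waits briefly between attempts (a sketch only, with illustrative names; I have not verified that it helps on the older devices):

AVAudioSession *audioSession = [AVAudioSession sharedInstance];
NSError *err = nil;
BOOL foundUSB = NO;

// Retry for up to ~2.5 seconds, giving the audio route time to settle.
// The blocking sleep is only for illustration; my real code would not block the main thread.
for (int attempt = 0; attempt < 10 && !foundUSB; attempt++) {
    [audioSession setCategory:AVAudioSessionCategoryRecord error:&err];
    for (AVAudioSessionPortDescription *port in [audioSession availableInputs]) {
        if ([port.portType isEqualToString:AVAudioSessionPortUSBAudio]) {
            foundUSB = YES;
            break;
        }
    }
    if (!foundUSB) {
        [NSThread sleepForTimeInterval:0.25];
    }
}

I have also wondered about observing AVAudioSessionRouteChangeNotification instead of polling, but I have not confirmed that it fires when the USB device reappears in this situation.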
I rolled back my code to a released version from about 12 months ago, where I know we never had this problem, and compiled it against the current libraries; the problem still exists. I assume this is a problem caused by a change in the AVAudioSession framework. I need to find a way to work around the issue or get the library fixed.