With a 4-channel audio input, I get an error while recording a movie using AVAssetWriter with the LPCM codec. Are 4 audio channels not supported by AVAssetWriterInput? Here are my compression settings:
var aclSize: size_t = 0
var currentChannelLayout: UnsafePointer<AudioChannelLayout>? = nil
/*
 * outputAudioFormat = CMSampleBufferGetFormatDescription(sampleBuffer)
 * for the latest sample buffer received in the captureOutput sampleBufferDelegate
 */
if let outputFormat = outputAudioFormat {
    currentChannelLayout = CMAudioFormatDescriptionGetChannelLayout(outputFormat, sizeOut: &aclSize)
}

var currentChannelLayoutData = Data()
if let currentChannelLayout = currentChannelLayout, aclSize > 0 {
    currentChannelLayoutData = Data(bytes: currentChannelLayout, count: aclSize)
}

let numChannels = AVAudioSession.sharedInstance().inputNumberOfChannels

// Settings dictionary passed to AVAssetWriterInput
var audioSettings: [String: Any] = [:]
audioSettings[AVSampleRateKey] = 48000.0
audioSettings[AVFormatIDKey] = kAudioFormatLinearPCM
audioSettings[AVLinearPCMIsBigEndianKey] = false
audioSettings[AVLinearPCMBitDepthKey] = 16
audioSettings[AVNumberOfChannelsKey] = numChannels
audioSettings[AVLinearPCMIsFloatKey] = false
audioSettings[AVLinearPCMIsNonInterleaved] = false
audioSettings[AVChannelLayoutKey] = currentChannelLayoutData
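For context, this is roughly how the settings are validated before building the input; a minimal sketch, assuming assetWriter is an already-created AVAssetWriter:

// Sketch: validate the 4-channel LPCM settings before creating the writer input.
if assetWriter.canApply(outputSettings: audioSettings, forMediaType: .audio) {
    let audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: audioSettings)
    audioInput.expectsMediaDataInRealTime = true
    if assetWriter.canAdd(audioInput) {
        assetWriter.add(audioInput)
    }
} else {
    NSLog("Writer rejected the 4-channel LPCM settings")
}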
It's very hard to tell from AVFoundation error codes what the exact error is. For instance, I see the following message and don't know what the error code means.
We are getting this error on iOS 16.1 beta 5; we never saw it in any earlier iOS version.
[as_client] AVAudioSession_iOS.mm:2374 Failed to set category, error: 'what'
I wonder if there is any known workaround. iOS 16 has been a nightmare: a lot of AVFoundation code breaks or becomes unpredictable in behaviour. This is a new issue introduced in iOS 16.1.
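For reference, a minimal sketch of how more detail could be surfaced from these failures, by catching the error from setCategory and logging the NSError domain and code (the category, mode and options below are placeholders, not necessarily the ones that fail):

do {
    try AVAudioSession.sharedInstance().setCategory(.playAndRecord,
                                                    mode: .videoRecording,
                                                    options: [.mixWithOthers])
} catch {
    // The underlying NSError usually carries a numeric code that is more
    // useful than the bare log line.
    let nsError = error as NSError
    NSLog("setCategory failed: domain=\(nsError.domain) code=\(nsError.code) \(nsError.localizedDescription)")
}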
AVFoundation has serious issues in iOS 16.1 beta 5. None of these issues are seen in iOS 16.0 or earlier.
The following code fails regularly when switching between AVCaptureMultiCamSession & AVCaptureSession. It turns out that assetWriter.canApply(outputSettings:forMediaType:) returns false for no apparent reason.
if assetWriter?.canApply(outputSettings: audioSettings!, forMediaType: AVMediaType.audio) ?? false {
}
I dumped the audioSettings dictionary and here it is:
It looks like the number of input channels in AVAudioSession is 3, and that is the issue. But how did that happen? Probably there is a bug, and AVCaptureMultiCamSession teardown and deallocation is causing it.
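A possible workaround I am considering (a sketch, not verified): clamp the requested channel count to something the writer is known to accept before building the settings dictionary.

let session = AVAudioSession.sharedInstance()
// Ask for at most 2 input channels so AVNumberOfChannelsKey stays in a
// configuration AVAssetWriter reliably accepts.
let desiredChannels = min(2, session.maximumInputNumberOfChannels)
try? session.setPreferredInputNumberOfChannels(desiredChannels)
audioSettings[AVNumberOfChannelsKey] = session.inputNumberOfChannels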
Using AVAssetWriter with AVCaptureMultiCamSession, many times no audio is recorded in the video under the same audio settings dictionary dumped above. There is an audio track in the video, but it seems to be entirely silent. The same code works perfectly in all other iOS versions. I checked that audio sample buffers are indeed vended during recording, but it's very likely they are silent buffers.
Is anyone aware of these issues?
This seems like a new bug in iOS 16.1 (beta 5) where AVCaptureMultiCamSession outputs silent audio frames when both the back & front mics have been added to it. This issue is not seen in iOS 16.0.3 or earlier. I can't reproduce it with the AVMultiCamPiP sample code, so I believe some AVAudioSession or AVCaptureMultiCamSession configuration in my code is causing this. Setting captureSession.usesApplicationAudioSession = true also fixes the issue, but then I do not get audio samples from both microphones.
Here is the code:
public func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection)
{
    if let videoDataOutput = output as? AVCaptureVideoDataOutput {
        processVideoSampleBuffer(sampleBuffer, fromOutput: videoDataOutput)
    } else if let audioDataOutput = output as? AVCaptureAudioDataOutput {
        processAudioSampleBuffer(sampleBuffer, fromOutput: audioDataOutput)
    }
}

private var lastDumpTime: TimeInterval?

private func processAudioSampleBuffer(_ sampleBuffer: CMSampleBuffer, fromOutput audioDataOutput: AVCaptureAudioDataOutput) {
    if lastDumpTime == nil {
        lastDumpTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer).seconds
    }
    let time = CMSampleBufferGetPresentationTimeStamp(sampleBuffer).seconds
    // Dump samples roughly once per second
    if time - lastDumpTime! >= 1.0 {
        dumpAudioSampleBuffer(sampleBuffer)
        lastDumpTime = time
    }
}
private func dumpAudioSampleBuffer(_ sampleBuffer: CMSampleBuffer) {
    NSLog("Dumping audio sample buffer")
    var audioBufferList = AudioBufferList(mNumberBuffers: 1,
                                          mBuffers: AudioBuffer(mNumberChannels: 0, mDataByteSize: 0, mData: nil))
    var buffer: CMBlockBuffer? = nil
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer,
                                                            bufferListSizeNeededOut: nil,
                                                            bufferListOut: &audioBufferList,
                                                            bufferListSize: MemoryLayout.size(ofValue: audioBufferList),
                                                            blockBufferAllocator: nil,
                                                            blockBufferMemoryAllocator: nil,
                                                            flags: UInt32(kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment),
                                                            blockBufferOut: &buffer)
    // Create an UnsafeBufferPointer over the variable-length array starting at audioBufferList.mBuffers
    withUnsafePointer(to: &audioBufferList.mBuffers) { ptr in
        let buffers = UnsafeBufferPointer<AudioBuffer>(start: ptr, count: Int(audioBufferList.mNumberBuffers))
        for buf in buffers {
            // Reinterpret the buffer's raw data as 16-bit signed samples
            let numSamples = Int(buf.mDataByteSize) / MemoryLayout<Int16>.stride
            let samples = buf.mData!.bindMemory(to: Int16.self, capacity: numSamples)
            for i in 0..<numSamples {
                NSLog("Sample \(samples[i])")
            }
        }
    }
}
And here is the output:
Dump Audio Samples
I have the following audio compression settings, which fail with AVAssetWriter (QuickTime .mov container, HEVC video codec, kAudioFormatMPEG4AAC audio format ID):
["AVSampleRateKey": 48000, "AVFormatIDKey": 1633772320, "AVNumberOfChannelsKey": 1, "AVEncoderBitRatePerChannelKey": 128000, "AVChannelLayoutKey": <02006500 00000000 00000000 00000000 00000000 00000000 00000000 00000000>]
Here is the code line that fails:
if _assetWriter?.canApply(outputSettings: audioSettings!, forMediaType: AVMediaType.audio) ?? false {
} else {
    /* Failure */
}
I want to understand what is wrong. I cannot reproduce it at my end (it is only reproducible on a user's device with a particular microphone).
I need to know whether it is mandatory to provide a value for AVChannelLayoutKey in the dictionary when using kAudioFormatMPEG4AAC. That could be a possible culprit.
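For comparison, here is a sketch of the same AAC settings with an explicitly mono AVChannelLayoutKey (layout tag kAudioChannelLayoutTag_Mono); whether omitting the key entirely is also valid is exactly what I want to confirm:

var monoLayout = AudioChannelLayout()
monoLayout.mChannelLayoutTag = kAudioChannelLayoutTag_Mono
let monoLayoutData = Data(bytes: &monoLayout, count: MemoryLayout<AudioChannelLayout>.size)

let aacSettings: [String: Any] = [
    AVFormatIDKey: kAudioFormatMPEG4AAC,
    AVSampleRateKey: 48000,
    AVNumberOfChannelsKey: 1,
    AVEncoderBitRatePerChannelKey: 128000,
    AVChannelLayoutKey: monoLayoutData
]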
I want to know under what conditions -[AVCaptureSession synchronizationClock] can be nil. Some of my app users on iOS 16.1 are hitting this error (synchronizationClock == nil), which is not reproducible on my side.
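A hedged fallback I am considering in the meantime, assuming captureSession is the running AVCaptureSession: fall back to the host time clock when synchronizationClock is nil so recording can still start.

// synchronizationClock is optional on AVCaptureSession (iOS 15.4+); use the
// host time clock if it happens to be nil.
let referenceClock: CMClock = captureSession.synchronizationClock ?? CMClockGetHostTimeClock()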
I have seen the 24-hour report in App Store Connect Sales & Trends completely broken and unreliable for more than a month. When I submit a complaint, all they say is to clear cookies and caches and restart the browser. Either the complaint is not forwarded to engineering, or engineering is not acknowledging the issue.
I have the following code to determine ProRes and HDR support on iOS devices.
extension AVCaptureDevice.Format {
    var supports10bitHDR: Bool {
        let mediaType = CMFormatDescriptionGetMediaType(formatDescription)
        let mediaSubtype = CMFormatDescriptionGetMediaSubType(formatDescription)
        return mediaType == kCMMediaType_Video && mediaSubtype == kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange
    }

    var supportsProRes422: Bool {
        let mediaType = CMFormatDescriptionGetMediaType(formatDescription)
        let mediaSubtype = CMFormatDescriptionGetMediaSubType(formatDescription)
        return mediaType == kCMMediaType_Video && mediaSubtype == kCVPixelFormatType_422YpCbCr10BiPlanarVideoRange
    }
}
On iPad Pro M2, supportsProRes422 returns false for all device formats (for the wide-angle camera). Is this a bug, or is it intentional?
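A minimal sketch of how the formats are probed with the extension above (builtInWideAngleCamera / back position are just the configuration being tested, not the only one that matters):

// List every format of the back wide-angle camera that advertises the
// x422 (ProRes 422) pixel format according to the extension above.
if let device = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back) {
    for format in device.formats where format.supportsProRes422 {
        print("ProRes-capable format:", format)
    }
}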
I have an AVAssetWriter, and I validate the audio compression settings dictionary using the canApply(outputSettings: audioCompressionSettings, forMediaType: .audio) API before setting it.
One of the fields in the compression settings is the audio sample rate, set with AVSampleRateKey. My question: if the sample rate I set in this key is different from the sample rate of the audio sample buffers that are appended, can this cause the audio to drift away from the video? Is setting an arbitrary sample rate in the asset writer settings not recommended?
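One safeguard I have been considering (a sketch, not verified): derive AVSampleRateKey from the format description of the incoming audio sample buffers instead of hard-coding it, so the writer settings always match what is actually appended.

// sampleBuffer is assumed to be the first audio CMSampleBuffer received from
// the capture output; read its ASBD and reuse the sample rate in the settings.
if let formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer),
   let asbd = CMAudioFormatDescriptionGetStreamBasicDescription(formatDescription)?.pointee {
    audioCompressionSettings[AVSampleRateKey] = asbd.mSampleRate
}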
I have an Xcode project that has multiple targets. In one of the targets, I want to add Apple Watch support (but not in the other targets). How do I do that?
I am getting CMSampleBuffers in kCVPixelFormatType_422YpCbCr10BiPlanarVideoRange format from the camera. The pixel buffers are 10-bit HDR. I need to record using the ProRes 422 codec, but in a non-HDR format. I am not sure what a reliable way of doing this is, so I am reaching out here. What I did is simply set the AVAssetWriter compression dictionary as follows:
compressionSettings[AVVideoTransferFunctionKey] = AVVideoTransferFunction_ITU_R_709_2
compressionSettings[AVVideoColorPrimariesKey] = AVVideoColorPrimaries_ITU_R_709_2
compressionSettings[AVVideoYCbCrMatrixKey] = AVVideoYCbCrMatrix_ITU_R_709_2
It works, and the resulting video recording shows the color space as HD (1-1-1) with the Apple ProRes codec. But I am not sure whether AVAssetWriter has actually performed a colorspace conversion from 10-bit HDR to BT.709 or has simply clipped the out-of-range colors.
I need to know a definitive way to achieve this. I see Apple's native Camera app doing this, but I am not sure how.
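For reference, the documented place for these tags in the output settings is a nested dictionary under AVVideoColorPropertiesKey; a sketch of that variant, assuming compressionSettings is the AVAssetWriterInput output settings dictionary that already specifies the ProRes 422 codec and dimensions:

// Nested form using AVVideoColorPropertiesKey; whether this makes the writer
// actually convert the pixels to BT.709 (rather than just tagging them) is
// unclear to me and is part of the question.
compressionSettings[AVVideoColorPropertiesKey] = [
    AVVideoColorPrimariesKey: AVVideoColorPrimaries_ITU_R_709_2,
    AVVideoTransferFunctionKey: AVVideoTransferFunction_ITU_R_709_2,
    AVVideoYCbCrMatrixKey: AVVideoYCbCrMatrix_ITU_R_709_2
]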
I understand that CVBufferSetAttachment simply adds a metadata attachment to the buffer's attachment dictionary. But I see that there are no errors when attaching metadata that is contradictory in nature. For instance, for the pixel buffers received from the camera in HDR mode, which are in 10-bit biplanar YUV 4:2:2 format, both of the following succeed:
CVBufferSetAttachment(testPixelBuffer!, kCVImageBufferColorPrimariesKey, kCVImageBufferColorPrimaries_ITU_R_2020, .shouldPropagate)
CVBufferSetAttachment(testPixelBuffer!, kCVImageBufferTransferFunctionKey, kCVImageBufferTransferFunction_ITU_R_2100_HLG, .shouldPropagate)
CVBufferSetAttachment(testPixelBuffer!, kCVImageBufferYCbCrMatrixKey, kCVImageBufferYCbCrMatrix_ITU_R_2020, .shouldPropagate)
Or
CVBufferSetAttachment(testPixelBuffer!, kCVImageBufferColorPrimariesKey, kCVImageBufferColorPrimaries_ITU_R_709_2, .shouldPropagate)
CVBufferSetAttachment(testPixelBuffer!, kCVImageBufferTransferFunctionKey, kCVImageBufferTransferFunction_ITU_R_709_2, .shouldPropagate)
CVBufferSetAttachment(testPixelBuffer!, kCVImageBufferYCbCrMatrixKey, kCVImageBufferYCbCrMatrix_ITU_R_709_2, .shouldPropagate)
So one could set the color primaries and transfer function to BT.709 on buffers that are actually 10-bit HDR. I see no errors when either buffer is appended to AVAssetWriter. I am wondering how attachments actually work and how AVFoundation resolves such contradictions.
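For what it's worth, a small sketch of reading an attachment back after setting it, to at least confirm which value ends up on the buffer (testPixelBuffer as above; CVBufferCopyAttachment requires iOS 15+):

// Copy the color-primaries attachment back from the pixel buffer to see
// which of the two contradictory values is actually stored.
var mode = CVAttachmentMode.shouldPropagate
if let primaries = CVBufferCopyAttachment(testPixelBuffer!, kCVImageBufferColorPrimariesKey, &mode) {
    NSLog("Color primaries attachment: \(primaries), mode: \(mode)")
}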
Dear AVKit Engineers,
I see a strange bug in AVKit. Even after the view controller hosting AVPlayerViewController is dismissed, I see the CPU spiking to over 100%, caused by animation code that is still running after AVPlayerViewController no longer exists ([AVMobileChromelessControlsViewController__animateSliderToTintState:duration:completionHandler:]).
How does this code continue to run even after AVPlayerViewController no longer exists? And what can I do to fix it?
I am planning to convert a paid app to freemium. I would like existing paid users to remain unaffected in this process. In this question, I am focusing on volume purchase users (both existing and future). The info on the Apple developer website advises using the original StoreKit API if one needs to support Volume Purchase users:
You may need to use the Original API for in-app purchase for the following features, if your app supports them:
The Volume Purchase Program (VPP). For more information, see Device Management.
Does that mean (a) I can't use StoreKit 2 to verify receipts of volume purchases made before the app went freemium (to get the original purchase version and date), or (b) the API cannot be used to make in-app volume purchases and perhaps users will not be able to make volume purchases from the App Store, or (c) both?
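For concreteness, this is the kind of StoreKit 2 call I am hoping to use to read the original purchase version and date; whether it is valid for apps originally bought through VPP is exactly the open question.

import StoreKit

// StoreKit 2 sketch: read the original app purchase version/date.
// Unclear to me whether this works for VPP purchases.
func originalPurchaseInfo() async throws {
    let result = try await AppTransaction.shared
    switch result {
    case .verified(let appTransaction):
        print("Original version:", appTransaction.originalAppVersion,
              "date:", appTransaction.originalPurchaseDate)
    case .unverified(_, let verificationError):
        print("Could not verify app transaction:", verificationError)
    }
}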