
iOS 16.1 beta 5 AVAudioSession Error
We are seeing an error on iOS 16.1 beta 5 that never appeared in any earlier iOS version:

[as_client] AVAudioSession_iOS.mm:2374 Failed to set category, error: 'what'

Is there any known workaround? iOS 16 has been a nightmare: a lot of AVFoundation code breaks or becomes unpredictable in behaviour, and this particular issue is new in iOS 16.1.
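For context, the category is set with a standard AVAudioSession call along these lines. This is only a rough sketch; the exact category, mode, and options in the app may differ:

import AVFoundation

// Minimal sketch of the kind of call that triggers the log above.
// The specific category/mode/options here are illustrative, not the exact configuration.
func configureAudioSession() {
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.playAndRecord,
                                mode: .videoRecording,
                                options: [.allowBluetooth, .defaultToSpeaker])
        try session.setActive(true)
    } catch {
        // On iOS 16.1 beta 5 this is where the failure surfaces.
        print("Failed to configure AVAudioSession: \(error)")
    }
}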
Replies: 2 · Boosts: 0 · Views: 2.4k · Created: Oct ’22
AVFoundation serious bugs in iOS 16.1 beta 5
AVFoundation has serious issues in iOS 16.1 beta 5; none of them appear in iOS 16.0 or earlier.

The following code fails regularly when switching between AVCaptureMultiCamSession and AVCaptureSession. It turns out that assetWriter.canApply(outputSettings:) returns false for no apparent reason:

if assetWriter?.canApply(outputSettings: audioSettings!, forMediaType: AVMediaType.audio) ?? false {
}

I dumped the audioSettings dictionary, and the number of channels in AVAudioSession turns out to be 3, which appears to be the issue. But how did that happen? There is probably a bug where AVCaptureMultiCamSession teardown and deallocation leaves the audio session in a bad state.

Also, when using AVAssetWriter with AVCaptureMultiCamSession, the recorded video often has no audible audio under the same audio settings dictionary. An audio track is present in the file, but it appears to be silent. The same code works perfectly in all other iOS versions. I checked that audio sample buffers are indeed vended during recording, but it is very likely they are silent buffers. Is anyone aware of these issues?
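A quick diagnostic, which is my addition rather than code from the project, is to dump the shared AVAudioSession state right before building the writer settings, to see where the 3-channel configuration comes from:

import AVFoundation

// Diagnostic sketch: log the audio session state after tearing down the
// multi-cam session and before creating the AVAssetWriter settings.
func logAudioSessionState() {
    let session = AVAudioSession.sharedInstance()
    print("Input channels:  \(session.inputNumberOfChannels)")
    print("Output channels: \(session.outputNumberOfChannels)")
    print("Category: \(session.category.rawValue), mode: \(session.mode.rawValue)")
    print("Route inputs: \(session.currentRoute.inputs.map { $0.portType.rawValue })")
}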
Replies: 1 · Boosts: 0 · Views: 886 · Created: Oct ’22
iOS 16.1(b5) - AVMultiCamSession emitting silent audio frames
This seems like a new bug in iOS 16.1(b5) where AVCaptureMultiCamSession outputs silent audio frames when both the back and front microphones have been added to it. The issue is not seen in iOS 16.0.3 or earlier. I can't reproduce it with the AVMultiCamPIP sample code, so I believe some AVAudioSession or AVCaptureMultiCamSession configuration in my code is causing it. Setting captureSession.usesApplicationAudioSession = true also fixes the issue, but then I do not get audio samples from both microphones. Here is the code:

public func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    if let videoDataOutput = output as? AVCaptureVideoDataOutput {
        processVideoSampleBuffer(sampleBuffer, fromOutput: videoDataOutput)
    } else if let audioDataOutput = output as? AVCaptureAudioDataOutput {
        processsAudioSampleBuffer(sampleBuffer, fromOutput: audioDataOutput)
    }
}

private var lastDumpTime: TimeInterval?

private func processsAudioSampleBuffer(_ sampleBuffer: CMSampleBuffer, fromOutput audioDataOutput: AVCaptureAudioDataOutput) {
    if lastDumpTime == nil {
        lastDumpTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer).seconds
    }
    let time = CMSampleBufferGetPresentationTimeStamp(sampleBuffer).seconds
    if time - lastDumpTime! >= 1.0 {
        dumpAudioSampleBuffer(sampleBuffer)
        lastDumpTime = time
    }
}

private func dumpAudioSampleBuffer(_ sampleBuffer: CMSampleBuffer) {
    NSLog("Dumping audio sample buffer")
    var audioBufferList = AudioBufferList(mNumberBuffers: 1,
                                          mBuffers: AudioBuffer(mNumberChannels: 0, mDataByteSize: 0, mData: nil))
    var buffer: CMBlockBuffer? = nil
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer,
                                                            bufferListSizeNeededOut: nil,
                                                            bufferListOut: &audioBufferList,
                                                            bufferListSize: MemoryLayout.size(ofValue: audioBufferList),
                                                            blockBufferAllocator: nil,
                                                            blockBufferMemoryAllocator: nil,
                                                            flags: UInt32(kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment),
                                                            blockBufferOut: &buffer)
    // Create an UnsafeBufferPointer over the variable-length array starting at audioBufferList.mBuffers
    withUnsafePointer(to: &audioBufferList.mBuffers) { ptr in
        let buffers = UnsafeBufferPointer<AudioBuffer>(start: ptr, count: Int(audioBufferList.mNumberBuffers))
        for buf in buffers {
            // Reinterpret the buffer data as Int16 samples and log each one
            let numSamples = Int(buf.mDataByteSize) / MemoryLayout<Int16>.stride
            let samples = buf.mData!.bindMemory(to: Int16.self, capacity: numSamples)
            for i in 0..<numSamples {
                NSLog("Sample \(samples[i])")
            }
        }
    }
}

And here is the output: Dump Audio Samples
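As an aside, rather than logging every sample, a more compact way to check for silence is to reduce each buffer to its peak amplitude. The helper below is only a diagnostic sketch (it assumes 16-bit integer PCM in a contiguous block buffer, matching the dump above), not code from the project:

import CoreMedia

// Sketch: return the peak absolute Int16 sample in a buffer. A peak of 0
// across many consecutive buffers strongly suggests the frames are silent.
func peakSample(in sampleBuffer: CMSampleBuffer) -> Int16 {
    guard let blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer) else { return 0 }
    var totalLength = 0
    var dataPointer: UnsafeMutablePointer<Int8>? = nil
    let status = CMBlockBufferGetDataPointer(blockBuffer,
                                             atOffset: 0,
                                             lengthAtOffsetOut: nil,
                                             totalLengthOut: &totalLength,
                                             dataPointerOut: &dataPointer)
    guard status == kCMBlockBufferNoErr, let data = dataPointer else { return 0 }
    var peak: Int16 = 0
    data.withMemoryRebound(to: Int16.self, capacity: totalLength / 2) { samples in
        for i in 0..<(totalLength / 2) {
            let value = samples[i]
            let magnitude = value == Int16.min ? Int16.max : abs(value)
            peak = max(peak, magnitude)
        }
    }
    return peak
}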
Replies: 2 · Boosts: 0 · Views: 863 · Created: Oct ’22
iOS 16.1 CDPurgeableResultCache logs
I keep getting this log repeated on iOS 16.1:

[client] 114 CDPurgeableResultCache _recentPurgeableTotals no result for /private/var/wireless/baseband_data, setting to zero

The messages flood the console and make it hard to find the useful log output coming from my own code. What can be done to disable these logs?
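If the noise is coming from the unified logging system rather than from print/stdout, one blunt workaround is to disable OS activity logging for the debug session via the scheme's environment variables (Product > Scheme > Edit Scheme > Run > Arguments), with the caveat that it also hides all other os_log output, including your own:

OS_ACTIVITY_MODE = disable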
Replies: 0 · Boosts: 3 · Views: 1.5k · Created: Oct ’22
AVAssetWriter audio settings failure with compression settings
I have the following audio compression settings which fail with AVAssetWriter (mov container, HEVC video codec, kAudioFormatMPEG4AAC format ID):

["AVSampleRateKey": 48000, "AVFormatIDKey": 1633772320, "AVNumberOfChannelsKey": 1, "AVEncoderBitRatePerChannelKey": 128000, "AVChannelLayoutKey": <02006500 00000000 00000000 00000000 00000000 00000000 00000000 00000000>]

Here is the line that fails:

if _assetWriter?.canApply(outputSettings: audioSettings!, forMediaType: AVMediaType.audio) ?? false {
} else {
    /* Failure */
}

I want to understand what is wrong. I cannot reproduce it on my end; it is only reproducible on a user's device with a particular microphone. Is it mandatory to provide a value for AVChannelLayoutKey in the dictionary when using kAudioFormatMPEG4AAC? That could be a possible culprit.
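For reference, this is roughly how the settings dictionary is built; the mono channel layout construction is my sketch of the usual pattern, not necessarily the exact code in the app:

import AVFoundation

// Sketch of building AAC settings with an explicit mono channel layout.
var channelLayout = AudioChannelLayout()
channelLayout.mChannelLayoutTag = kAudioChannelLayoutTag_Mono

let audioSettings: [String: Any] = [
    AVFormatIDKey: kAudioFormatMPEG4AAC,
    AVSampleRateKey: 48_000,
    AVNumberOfChannelsKey: 1,
    AVEncoderBitRatePerChannelKey: 128_000,
    AVChannelLayoutKey: Data(bytes: &channelLayout,
                             count: MemoryLayout<AudioChannelLayout>.size)
]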
Replies: 1 · Boosts: 0 · Views: 1.1k · Created: Oct ’22
'User Assigned Device Name' in Xcode Capabilities list
I got approval for the User Assigned Device Name entitlement, but now I'm stuck on how to add this capability in Xcode. I updated my App ID in the developer portal to reflect the permission, but I can't find anything in Xcode to import it. I opened Editor > Add Capability, but User Assigned Device Name is not in the list.
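For what it's worth, one approach commonly suggested for entitlements that don't show up in the Capabilities list is to add the key directly to the app's .entitlements file (assuming the regenerated provisioning profile already includes it). The key below is the documented identifier for this entitlement:

<key>com.apple.developer.device-information.user-assigned-device-name</key>
<true/>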
Replies: 17 · Boosts: 1 · Views: 6.8k · Created: Nov ’22
iPad Pro M2 ProRes unavailable
I have the following code to determine ProRes and HDR support on iOS devices:

extension AVCaptureDevice.Format {
    var supports10bitHDR: Bool {
        let mediaType = CMFormatDescriptionGetMediaType(formatDescription)
        let mediaSubtype = CMFormatDescriptionGetMediaSubType(formatDescription)
        return mediaType == kCMMediaType_Video && mediaSubtype == kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange
    }

    var supportsProRes422: Bool {
        let mediaType = CMFormatDescriptionGetMediaType(formatDescription)
        let mediaSubtype = CMFormatDescriptionGetMediaSubType(formatDescription)
        return mediaType == kCMMediaType_Video && mediaSubtype == kCVPixelFormatType_422YpCbCr10BiPlanarVideoRange
    }
}

On iPad Pro M2, supportsProRes422 returns false for all device formats (for the wide-angle camera). Is it a bug or intentional?
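As a cross-check (my suggestion, not something the code above does), ProRes availability for movie recording can also be probed through AVCaptureMovieFileOutput once it has been added to a configured session:

import AVFoundation

// Sketch: after an AVCaptureMovieFileOutput is attached to a running session,
// its availableVideoCodecTypes lists the codecs the current device/format
// combination can record, including ProRes variants when supported.
func logAvailableCodecs(for movieOutput: AVCaptureMovieFileOutput) {
    let codecs = movieOutput.availableVideoCodecTypes
    print("Available recording codecs: \(codecs)")
    print("ProRes 422 available: \(codecs.contains(.proRes422))")
}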
Replies: 1 · Boosts: 0 · Views: 1.1k · Created: Dec ’22
AVAssetWriter sample rate AV drift
I have an AVAssetWriter, and I validate the audio compression settings dictionary using the canApply(outputSettings: audioCompressionSettings, forMediaType: .audio) API. One of the fields in the compression settings is the audio sample rate, set via AVSampleRateKey. My question: if the sample rate I set in this key differs from the sample rate of the audio sample buffers that are appended, can this cause the audio to drift away from the video? Is setting an arbitrary sample rate in the asset writer settings not recommended?
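One way to avoid the mismatch altogether, sketched below as a common pattern rather than code from my project, is to derive AVSampleRateKey from the first audio sample buffer's format description instead of hard-coding it:

import AVFoundation
import CoreMedia

// Sketch: read the native sample rate and channel count from an incoming audio
// CMSampleBuffer so the writer settings match what is actually appended.
func nativeAudioFormat(of sampleBuffer: CMSampleBuffer) -> (sampleRate: Double, channels: Int)? {
    guard let formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer),
          let asbd = CMAudioFormatDescriptionGetStreamBasicDescription(formatDescription)?.pointee
    else { return nil }
    return (asbd.mSampleRate, Int(asbd.mChannelsPerFrame))
}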
Replies: 1 · Boosts: 0 · Views: 942 · Created: Dec ’22
CVPixelBuffer HDR10 to BT.709 conversion using AVAssetWriter
I am getting CMSampleBuffers from the camera in kCVPixelFormatType_422YpCbCr10BiPlanarVideoRange format; the pixel buffers are 10-bit HDR. I need to record using the ProRes 422 codec but in a non-HDR format, and I am not sure what a reliable way of doing this is, so I'm reaching out here. What I did is simply set the AVAssetWriter compression dictionary as follows:

compressionSettings[AVVideoTransferFunctionKey] = AVVideoTransferFunction_ITU_R_709_2
compressionSettings[AVVideoColorPrimariesKey] = AVVideoColorPrimaries_ITU_R_709_2
compressionSettings[AVVideoYCbCrMatrixKey] = AVVideoYCbCrMatrix_ITU_R_709_2

It works, and the resulting recording reports its color space as HD 1-1-1 with the Apple ProRes codec. But I am not sure whether AVAssetWriter has actually performed a color space conversion from HDR10 to BT.709, or has simply clipped the out-of-range colors. I need a definitive way to achieve this. I see Apple's native camera app doing this, but I'm not sure how.
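For reference, this is roughly how those color keys end up nested in the writer input's output settings in my setup; the dimensions here are illustrative, and the color properties sit under AVVideoColorPropertiesKey:

import AVFoundation

// Sketch of the full video output settings, assuming ProRes 422 and 1920x1080.
let videoSettings: [String: Any] = [
    AVVideoCodecKey: AVVideoCodecType.proRes422,
    AVVideoWidthKey: 1920,
    AVVideoHeightKey: 1080,
    AVVideoColorPropertiesKey: [
        AVVideoColorPrimariesKey: AVVideoColorPrimaries_ITU_R_709_2,
        AVVideoTransferFunctionKey: AVVideoTransferFunction_ITU_R_709_2,
        AVVideoYCbCrMatrixKey: AVVideoYCbCrMatrix_ITU_R_709_2
    ]
]

let writerInput = AVAssetWriterInput(mediaType: .video, outputSettings: videoSettings)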
Replies: 0 · Boosts: 0 · Views: 724 · Created: Mar ’23
What exactly does CVBufferSetAttachment do?
I understand that CVBufferSetAttachment simply attaches metadata to the pixel buffer in a dictionary. But I see no errors when attaching metadata that is contradictory. For instance, for sample buffers received from the camera in HDR mode, which are in the YUV 4:2:2 10-bit biplanar format, both of the following succeed:

CVBufferSetAttachment(testPixelBuffer!, kCVImageBufferColorPrimariesKey, kCVImageBufferColorPrimaries_ITU_R_2020, .shouldPropagate)
CVBufferSetAttachment(testPixelBuffer!, kCVImageBufferTransferFunctionKey, kCVImageBufferTransferFunction_ITU_R_2100_HLG, .shouldPropagate)
CVBufferSetAttachment(testPixelBuffer!, kCVImageBufferYCbCrMatrixKey, kCVImageBufferYCbCrMatrix_ITU_R_2020, .shouldPropagate)

Or:

CVBufferSetAttachment(testPixelBuffer!, kCVImageBufferColorPrimariesKey, kCVImageBufferColorPrimaries_ITU_R_709_2, .shouldPropagate)
CVBufferSetAttachment(testPixelBuffer!, kCVImageBufferTransferFunctionKey, kCVImageBufferTransferFunction_ITU_R_709_2, .shouldPropagate)
CVBufferSetAttachment(testPixelBuffer!, kCVImageBufferYCbCrMatrixKey, kCVImageBufferYCbCrMatrix_ITU_R_709_2, .shouldPropagate)

So one could mark pixel buffers that are actually 10-bit HDR as having BT.709 primaries and transfer function. I see no errors when either buffer is appended to AVAssetWriter. How do attachments actually work, and how does AVFoundation resolve such contradictions?
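As an aside, it can be worth inspecting what actually ends up on the buffer after such calls; setting the same key twice is not a conflict the system detects, the later value simply replaces the earlier one. A small diagnostic sketch of my own, not from the post:

import CoreVideo

// Sketch: dump the propagated color attachments currently set on a pixel buffer.
func dumpColorAttachments(of pixelBuffer: CVPixelBuffer) {
    guard let cfAttachments = CVBufferCopyAttachments(pixelBuffer, .shouldPropagate) else { return }
    let attachments = cfAttachments as NSDictionary
    print("Primaries: \(attachments[kCVImageBufferColorPrimariesKey] ?? "nil")")
    print("Transfer:  \(attachments[kCVImageBufferTransferFunctionKey] ?? "nil")")
    print("Matrix:    \(attachments[kCVImageBufferYCbCrMatrixKey] ?? "nil")")
}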
Replies: 1 · Boosts: 0 · Views: 633 · Created: Mar ’23
ShareSheet errors on console when presenting UIActivityViewController
I see these errors when presenting UIActivityViewController with a video file:

[ShareSheet] Failed to request default share mode for fileURL:file:///var/mobile/Containers/Data/Application/B0EB55D3-4BF1-430A-92D8-2231AFFD9499/Documents/IMG-0155.mov error:Error Domain=NSOSStatusErrorDomain Code=-10814 "(null)" UserInfo={_LSLine=1538, _LSFunction=runEvaluator}

I don't understand whether I am doing something wrong or what the error means. The share sheet shows up anyway.
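For context, the presentation itself follows the standard pattern; this is a minimal sketch of how the controller is shown, with names like videoURL being illustrative:

import UIKit

// Minimal sketch of presenting the share sheet for a recorded video file.
// 'videoURL' points at a .mov inside the app's Documents directory.
func shareVideo(_ videoURL: URL, from viewController: UIViewController, sourceView: UIView) {
    let activityController = UIActivityViewController(activityItems: [videoURL],
                                                      applicationActivities: nil)
    // Required on iPad so the popover has an anchor.
    activityController.popoverPresentationController?.sourceView = sourceView
    viewController.present(activityController, animated: true)
}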
Topic: UI Frameworks · SubTopic: UIKit
Replies: 5 · Boosts: 3 · Views: 2.4k · Created: Apr ’23
High CPU usage with CoreImage vs Metal
I am processing CVPixelBuffers received from the camera using both Metal and Core Image and comparing the performance. The only processing done is taking a source pixel buffer, applying crop and affine transforms, and saving the result to another pixel buffer. What I notice is that CPU usage is as high as 50% when using Core Image and only 20% when using Metal. The profiler shows most of the time is spent in CIContext render:

let cropRect = AVMakeRect(aspectRatio: CGSize(width: dstWidth, height: dstHeight), insideRect: srcImage.extent)
var dstImage = srcImage.cropped(to: cropRect)

let translationTransform = CGAffineTransform(translationX: -cropRect.minX, y: -cropRect.minY)

var transform = CGAffineTransform.identity
transform = transform.concatenating(CGAffineTransform(translationX: -(dstImage.extent.origin.x + dstImage.extent.width/2),
                                                       y: -(dstImage.extent.origin.y + dstImage.extent.height/2)))
transform = transform.concatenating(translationTransform)
transform = transform.concatenating(CGAffineTransform(translationX: (dstImage.extent.origin.x + dstImage.extent.width/2),
                                                       y: (dstImage.extent.origin.y + dstImage.extent.height/2)))
dstImage = dstImage.transformed(by: translationTransform)

let scale = max(dstWidth/(dstImage.extent.width), CGFloat(dstHeight/dstImage.extent.height))
let scalingTransform = CGAffineTransform(scaleX: scale, y: scale)
transform = CGAffineTransform.identity
transform = transform.concatenating(scalingTransform)
dstImage = dstImage.transformed(by: transform)

if flipVertical {
    dstImage = dstImage.transformed(by: CGAffineTransform(scaleX: 1, y: -1))
    dstImage = dstImage.transformed(by: CGAffineTransform(translationX: 0, y: dstImage.extent.size.height))
}
if flipHorizontal {
    dstImage = dstImage.transformed(by: CGAffineTransform(scaleX: -1, y: 1))
    dstImage = dstImage.transformed(by: CGAffineTransform(translationX: dstImage.extent.size.width, y: 0))
}

var dstBounds = CGRect.zero
dstBounds.size = dstImage.extent.size
_ciContext.render(dstImage, to: dstPixelBuffer!, bounds: dstImage.extent, colorSpace: srcImage.colorSpace)

Here is how the CIContext was created:

_ciContext = CIContext(mtlDevice: MTLCreateSystemDefaultDevice()!, options: [CIContextOption.cacheIntermediates: false])

I want to know whether I am doing anything wrong and what could be done to lower the CPU usage with Core Image.
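One thing worth trying, which is my suggestion rather than something the code above already does, is rendering through a CIRenderDestination. startTask(toRender:to:) enqueues the work and returns immediately instead of blocking the calling thread on the render:

import CoreImage

// Sketch: asynchronous Core Image render into a destination pixel buffer.
// Assumes 'ciContext' was created with a Metal device, as above, and that
// 'dstPixelBuffer' is compatible with the image being rendered.
func render(_ image: CIImage, into dstPixelBuffer: CVPixelBuffer, using ciContext: CIContext) {
    let destination = CIRenderDestination(pixelBuffer: dstPixelBuffer)
    do {
        // Returns as soon as the work is enqueued; the GPU finishes it
        // without the CPU waiting on the result.
        let task = try ciContext.startTask(toRender: image, to: destination)
        _ = task // only wait on the returned task if you must block
    } catch {
        print("Core Image render failed: \(error)")
    }
}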
Replies: 4 · Boosts: 1 · Views: 2k · Created: Jul ’23