I am wondering whether the new AVCam sample code was tested before release. It hangs on startup on an iPhone 14 Pro running the iOS 26 beta, with the following logs in the console:
<<<< FigAudioSession(AV) >>>> audioSessionAVAudioSession_CopyMXSessionProperty signalled err=-19224 (kFigAudioSessionError_UnsupportedOperation) (getMXSessionProperty unsupported) at FigAudioSession_AVAudioSession.m:606
<<<< FigAudioSession(AV) >>>> audioSessionAVAudioSession_CopyMXSessionProperty signalled err=-19224 (kFigAudioSessionError_UnsupportedOperation) (getMXSessionProperty unsupported) at FigAudioSession_AVAudioSession.m:606
App is being debugged, do not track this hang
Hang detected: 8.04s (debugger attached, not reporting)
I have doubts about the Core Image coordinate system, the way transforms are applied, and the way the image extent is determined. I couldn't find much in the documentation or elsewhere, so I tried the following code to rotate a CIImage and display it in a UIImageView. As I understand it, there is no absolute coordinate system in Core Image; the bottom-left corner of an image is supposed to be (0,0). But my experiments show something else.
I created a prototype to rotate a CIImage by pi/10 radians on each button click. Here is the code I wrote.
override func viewDidLoad() {
    super.viewDidLoad()
    // Do any additional setup after loading the view.
    imageView.contentMode = .scaleAspectFit
    let uiImage = UIImage(contentsOfFile: imagePath)
    ciImage = CIImage(cgImage: (uiImage?.cgImage)!)
    imageView.image = uiImage
}
private var currentAngle = CGFloat(0)
private var ciImage: CIImage!
private var ciContext = CIContext()

@IBAction func rotateImage() {
    let extent = ciImage.extent
    let translate = CGAffineTransform(translationX: extent.midX, y: extent.midY)
    let uiImage = UIImage(contentsOfFile: imagePath)
    currentAngle = currentAngle + CGFloat.pi/10
    let rotate = CGAffineTransform(rotationAngle: currentAngle)
    let translateBack = CGAffineTransform(translationX: -extent.midX, y: -extent.midY)
    let transform = translateBack.concatenating(rotate.concatenating(translate))
    ciImage = CIImage(cgImage: (uiImage?.cgImage)!)
    ciImage = ciImage.transformed(by: transform)
    NSLog("Extent \(ciImage.extent), Angle \(currentAngle)")
    let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent)
    imageView.image = UIImage(cgImage: cgImage!)
}
But in the logs, I see that the extent of the rotated images has a negative origin.x and origin.y. What does that mean? Relative to what is it negative, and where exactly is (0,0) then? What exactly is the image extent, and how does the Core Image coordinate system work?
2021-09-24 14:43:29.280393+0400 CoreImagePrototypes[65817:5175194] Metal API Validation Enabled
2021-09-24 14:43:31.094877+0400 CoreImagePrototypes[65817:5175194] Extent (-105.0, -105.0, 1010.0, 1010.0), Angle 0.3141592653589793
2021-09-24 14:43:41.426371+0400 CoreImagePrototypes[65817:5175194] Extent (-159.0, -159.0, 1118.0, 1118.0), Angle 0.6283185307179586
2021-09-24 14:43:42.244703+0400 CoreImagePrototypes[65817:5175194] Extent (-159.0, -159.0, 1118.0, 1118.0), Angle 0.9424777960769379
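To make the question concrete, here is a small illustration (my own addition, reusing the transform from rotateImage above) that shifts the rotated image so its extent origin goes back to (0, 0) before rendering:

// Illustration only: apply the same rotation, then translate so the extent
// origin returns to (0, 0) before rendering.
let rotated = CIImage(cgImage: (uiImage?.cgImage)!).transformed(by: transform)
let recentered = rotated.transformed(
    by: CGAffineTransform(translationX: -rotated.extent.origin.x,
                          y: -rotated.extent.origin.y))
NSLog("Recentered extent \(recentered.extent)") // origin is (0, 0), size unchanged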
In WWDC 2021 video 10047, it was mentioned that you should check for the availability of the lossless CVPixelBuffer format and fall back to the normal BGRA32 format if it is not available. But the updated AVMultiCamPiP sample code first looks for the lossy format, then the lossless one. Why is that, and what exact difference does selecting lossy vs. lossless make?
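For reference, this is the selection order I expected based on the session (my own sketch, not the sample's actual code): prefer lossless, then lossy, then plain BGRA.

let videoDataOutput = AVCaptureVideoDataOutput()
let available = videoDataOutput.availableVideoPixelFormatTypes
// The order below is an assumption on my part: lossless first, then lossy, then uncompressed.
let preferred: [OSType] = [
    kCVPixelFormatType_Lossless_32BGRA,
    kCVPixelFormatType_Lossy_32BGRA,
    kCVPixelFormatType_32BGRA
]
if let format = preferred.first(where: { available.contains($0) }) {
    videoDataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: format]
}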
Is it possible to pass an MTLTexture to a Metal Core Image kernel? How can Metal resources be shared with Core Image?
I wrote the following Metal Core Image kernel to produce a constant red color.
extern "C" float4 redKernel(coreimage::sampler inputImage, coreimage::destination dest)
{
return float4(1.0, 0.0, 0.0, 1.0);
}
And then I have this in Swift code:
class CIMetalRedColorKernel: CIFilter {
    var inputImage: CIImage?

    static var kernel: CIKernel = { () -> CIKernel in
        let bundle = Bundle.main
        let url = bundle.url(forResource: "Kernels", withExtension: "ci.metallib")!
        let data = try! Data(contentsOf: url)
        return try! CIKernel(functionName: "redKernel", fromMetalLibraryData: data)
    }()

    override var outputImage: CIImage? {
        guard let inputImage = inputImage else {
            return nil
        }
        let dod = inputImage.extent
        return CIMetalRedColorKernel.kernel.apply(extent: dod, roiCallback: { index, rect in
            return rect
        }, arguments: [inputImage])
    }
}
As you can see, the DOD is given as the extent of the input image. But when I run the filter, I get a whole red image that extends beyond the extent of the input image (the DOD). Why? I have multiple filters chained together, and the overall size is 1920x1080. Isn't the red kernel supposed to run only for the DOD rectangle passed to it and produce clear pixels for anything outside the DOD?
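Coming back to the resource-sharing part of the question, this is a minimal sketch of what I have in mind (metalTexture is a placeholder produced elsewhere in the Metal pipeline, not defined above): wrap the texture in a CIImage and feed it to the kernel as the sampler argument.

// metalTexture is a placeholder MTLTexture produced elsewhere.
let textureImage = CIImage(mtlTexture: metalTexture,
                           options: [.colorSpace: CGColorSpaceCreateDeviceRGB()])!
    .oriented(.downMirrored) // Metal textures are top-left origin; Core Image is bottom-left
let filter = CIMetalRedColorKernel()
filter.inputImage = textureImage
let output = filter.outputImage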
I see there is a deprecation warning when using detailTextLabel of UITableViewCell.
@available(iOS, introduced: 3.0, deprecated: 100000, message: "Use UIListContentConfiguration instead, this property will be deprecated in a future release.")
open var detailTextLabel: UILabel? { get } // default is nil. label will be created if necessary (and the current style supports a detail label).
But it is not clear how to use UIListContentConfiguration to replace a detailTextLabel that sits on the right side of the cell. I only see secondaryText in UIListContentConfiguration, and it is always displayed as a subtitle. How does one use UIListContentConfiguration as a replacement?
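This is roughly what I tried in tableView(_:cellForRowAt:) (a sketch, not the exact code):

// cell is the UITableViewCell dequeued in tableView(_:cellForRowAt:).
var content = cell.defaultContentConfiguration()
content.text = "Title"
content.secondaryText = "Detail" // rendered below the title, not trailing like detailTextLabel
cell.contentConfiguration = content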
I am unable to understand the sentence "Default is YES for all apps linked on or after iOS 15.4" found in Apple's documentation. What does it actually mean? Does it mean that the default value is YES for apps running on iOS 15.4 or later? Or does it mean it is YES if the app is built with the iOS 15.4 SDK or later? Or is it something else?
I have CVPixelBuffers in YCbCr (4:2:2 or 4:2:0 biplanar, 10-bit video range) coming from the camera. I see that the vImage framework is sophisticated enough to handle a variety of image formats (including pixel buffers in various YCbCr formats). I want to compute histograms for both Y (luma) and RGB. For 8-bit YCbCr samples, I could use this code to compute the histogram of the Y component.
CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
let bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0)
let baseAddress = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0)
let height = CVPixelBufferGetHeightOfPlane(pixelBuffer, 0)
let width = CVPixelBufferGetWidthOfPlane(pixelBuffer, 0)

var buffer = vImage_Buffer(data: baseAddress, height: vImagePixelCount(height), width: vImagePixelCount(width), rowBytes: bytesPerRow)
var alphaBin = [vImagePixelCount](repeating: 0, count: 256) // 256 bins for 8-bit luma
alphaBin.withUnsafeMutableBufferPointer { alphaPtr in
    let error = vImageHistogramCalculation_Planar8(&buffer, alphaPtr.baseAddress!, UInt32(kvImageNoFlags))
    guard error == kvImageNoError else {
        fatalError("Error calculating histogram luma: \(error)")
    }
}

// Unlock only after vImage has finished reading the plane.
CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly)
How does one implement the same for 10-bit HDR pixel buffers, preferably using the new iOS 16 vImage APIs, which provide a lot more flexibility (for instance, getting an RGB histogram from a YCbCr sample without explicitly performing a pixel format conversion)?
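The closest I have come is a fallback sketch that uses the existing C-style vImage calls rather than the new iOS 16 API, and assumes the 10-bit luma samples are stored left-aligned in 16-bit containers so dividing by 65535 normalizes them:

import Accelerate
import CoreVideo

// Fallback sketch (assumptions noted above), computing a 1024-bin luma histogram.
func lumaHistogram10Bit(_ pixelBuffer: CVPixelBuffer) throws -> [vImagePixelCount] {
    CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }

    var srcBuffer = vImage_Buffer(
        data: CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0),
        height: vImagePixelCount(CVPixelBufferGetHeightOfPlane(pixelBuffer, 0)),
        width: vImagePixelCount(CVPixelBufferGetWidthOfPlane(pixelBuffer, 0)),
        rowBytes: CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0))

    // Convert the 16-bit-container luma samples to float in [0, 1].
    var floatBuffer = try vImage_Buffer(width: Int(srcBuffer.width),
                                        height: Int(srcBuffer.height),
                                        bitsPerPixel: 32)
    defer { floatBuffer.free() }
    _ = vImageConvert_16UToF(&srcBuffer, &floatBuffer, 0, 1.0 / 65535.0,
                             vImage_Flags(kvImageNoFlags))

    // 1024 bins cover the full 10-bit code range.
    var histogram = [vImagePixelCount](repeating: 0, count: 1024)
    histogram.withUnsafeMutableBufferPointer { ptr in
        _ = vImageHistogramCalculation_PlanarF(&floatBuffer, ptr.baseAddress!,
                                               UInt32(ptr.count), 0, 1,
                                               vImage_Flags(kvImageNoFlags))
    }
    return histogram
}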
It looks like [[stitchable]] Metal Core Image kernels fail to get added to the default Metal library. Here is my code:
class FilterTwo: CIFilter {
    var inputImage: CIImage?
    var inputParam: Float = 0.0

    static var kernel: CIKernel = { () -> CIKernel in
        let url = Bundle.main.url(forResource: "default",
                                  withExtension: "metallib")!
        let data = try! Data(contentsOf: url)
        let kernelNames = CIKernel.kernelNames(fromMetalLibraryData: data)
        NSLog("Kernels \(kernelNames)")
        return try! CIKernel(functionName: "secondFilter", fromMetalLibraryData: data) // <-- This fails!
    }()

    override var outputImage: CIImage? {
        guard let inputImage = inputImage else {
            return nil
        }
        return FilterTwo.kernel.apply(extent: inputImage.extent, roiCallback: { (index, rect) in
            return rect
        }, arguments: [inputImage])
    }
}
Here is the Metal code:
#include <metal_stdlib>
#include <CoreImage/CoreImage.h> // for the coreimage::sampler / destination types
using namespace metal;

[[ stitchable ]] half4 secondFilter(coreimage::sampler inputImage, coreimage::destination dest)
{
    float2 srcCoord = inputImage.coord();
    half4 color = half4(inputImage.sample(srcCoord));
    return color;
}
And here is the usage:
class ViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view.
        let filter = FilterTwo()
        filter.inputImage = CIImage(color: CIColor.red)
        let outputImage = filter.outputImage!
        NSLog("Output \(outputImage)")
    }
}
And the output:
StitchableKernelsTesting/FilterTwo.swift:15: Fatal error: 'try!' expression unexpectedly raised an error: Error Domain=CIKernel Code=1 "(null)" UserInfo={CINonLocalizedDescriptionKey=Function does not exist in library data. …}
Kernels []
reflect Function 'secondFilter' does not exist.
Adding multiple AVCaptureVideoDataOutputs is officially supported in iOS 16 and works well, except for certain configurations such as ProRes (the YCbCr422 pixel format), where the session fails to start if two video data outputs are added. Is this a known limitation or a bug? Here is the code:
// device.lockForConfiguration() is assumed to have been called before this snippet.
device.activeFormat = device.findFormat(targetFPS, resolution: targetResolution, pixelFormat: kCVPixelFormatType_422YpCbCr10BiPlanarVideoRange)!
NSLog("Device supports tone mapping \(device.activeFormat.isGlobalToneMappingSupported)")
device.activeColorSpace = .HLG_BT2020
device.activeVideoMinFrameDuration = CMTime(value: 1, timescale: CMTimeScale(targetFPS))
device.activeVideoMaxFrameDuration = CMTime(value: 1, timescale: CMTimeScale(targetFPS))
device.unlockForConfiguration()

self.session?.addInput(input)

let output = AVCaptureVideoDataOutput()
output.alwaysDiscardsLateVideoFrames = true
output.setSampleBufferDelegate(self, queue: self.samplesQueue)
if self.session!.canAddOutput(output) {
    self.session?.addOutput(output)
}

let previewVideoOut = AVCaptureVideoDataOutput()
previewVideoOut.alwaysDiscardsLateVideoFrames = true
previewVideoOut.automaticallyConfiguresOutputBufferDimensions = false
previewVideoOut.deliversPreviewSizedOutputBuffers = true
previewVideoOut.setSampleBufferDelegate(self, queue: self.previewQueue)
if self.session!.canAddOutput(previewVideoOut) {
    self.session?.addOutput(previewVideoOut)
}

self.vdo = output
self.previewVDO = previewVideoOut
self.session?.startRunning()
It works for other formats such as 10-bit YCbCr video-range HDR sample buffers, but there are a lot of frame drops when recording with AVAssetWriter at 4K@60 fps. Are these known limitations, or am I misusing the API?
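For context, the asset-writer video input is set up roughly like this (a reconstructed sketch with placeholder settings, not the exact code from the project):

// assetWriter is the AVAssetWriter created elsewhere; the settings are illustrative.
let writerInput = AVAssetWriterInput(mediaType: .video, outputSettings: [
    AVVideoCodecKey: AVVideoCodecType.hevc,
    AVVideoWidthKey: 3840,
    AVVideoHeightKey: 2160
])
writerInput.expectsMediaDataInRealTime = true // live capture input, so the writer doesn't stall
assetWriter.add(writerInput)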
Xcode 16 seems to have an issue with stitchable kernels in Core Image, which gives build errors as stated in this question. As a workaround, I selected Metal 3.2 as the Metal Language Revision in the Xcode project. That works on newer devices like the iPhone 13 Pro and above, but Metal texture creation fails on older devices like the iPhone 11 Pro. Is this a known issue, and is there a workaround? I tried setting the Metal Language Revision to 2.4, but the same build errors occur as reported in this question. Here is the code where the assertion failure happens on the iPhone 11 Pro.
let vertexShader = library.makeFunction(name: "vertexShaderPassthru")
let fragmentShaderYUV = library.makeFunction(name: "fragmentShaderYUV")

let pipelineDescriptorYUV = MTLRenderPipelineDescriptor()
pipelineDescriptorYUV.rasterSampleCount = 1
pipelineDescriptorYUV.colorAttachments[0].pixelFormat = .bgra8Unorm
pipelineDescriptorYUV.depthAttachmentPixelFormat = .invalid
pipelineDescriptorYUV.vertexFunction = vertexShader
pipelineDescriptorYUV.fragmentFunction = fragmentShaderYUV

do {
    try pipelineStateYUV = metalDevice?.makeRenderPipelineState(descriptor: pipelineDescriptorYUV)
}
catch {
    assertionFailure("Failed creating a render state pipeline. Can't render the texture without one.")
    return
}
I have the following two views in SwiftUI. The first view, GestureTestView, has a drag gesture defined on its overlay view (call it the indicator view) and has a subview, ContentTestView, that has a tap gesture attached to it. The problem is that the tap gesture in ContentTestView blocks the drag gesture on the indicator view. I have tried everything, including simultaneous gestures, but it doesn't seem to work, as the gestures are on different views. It's easy to test by simply copying the code and running it in an Xcode preview.
import SwiftUI

struct GestureTestView: View {
    @State var indicatorOffset: CGFloat = 10.0

    var body: some View {
        ContentTestView()
            .overlay(alignment: .leading, content: {
                Capsule()
                    .fill(Color.mint.gradient)
                    .frame(width: 8, height: 60)
                    .offset(x: indicatorOffset)
                    .gesture(
                        DragGesture(minimumDistance: 0)
                            .onChanged({ value in
                                indicatorOffset = min(max(0, 10 + value.translation.width), 340)
                            })
                            .onEnded { value in
                            }
                    )
            })
    }
}

#Preview {
    GestureTestView()
}
struct ContentTestView: View {
    @State var isSelected = false

    var body: some View {
        HStack(spacing: 0) {
            ForEach(0..<8) { index in
                Rectangle()
                    .fill(index % 2 == 0 ? Color.blue : Color.red)
                    .frame(width: 40, height: 40)
            }
            .overlay {
                if isSelected {
                    RoundedRectangle(cornerRadius: 5)
                        .stroke(.yellow, lineWidth: 3.0)
                }
            }
        }
        .onTapGesture {
            isSelected.toggle()
        }
    }
}

#Preview {
    ContentTestView()
}
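For completeness, this is the kind of simultaneous-gesture variant I tried on the indicator (a reconstructed sketch; it behaves the same for me):

Capsule()
    .fill(Color.mint.gradient)
    .frame(width: 8, height: 60)
    .offset(x: indicatorOffset)
    .simultaneousGesture(
        DragGesture(minimumDistance: 0)
            .onChanged { value in
                indicatorOffset = min(max(0, 10 + value.translation.width), 340)
            }
    )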
I see in most of the older sample code from Apple that when using AVAssetWriter to append audio, video, and metadata samples in a real-time camera recording setup, calls to .append(sampleBuffer) are either synchronized with an NSLock or all samples are sent to the asset writer on the same dispatch queue, thereby preventing concurrent writes. However, I can't find any documentation saying that calls to assetWriterInput.append(sampleBuffer) for different media, such as audio and video, must not be made concurrently. Is it not valid for these methods to execute in parallel, for instance:
`videoSamplesAssetWriterInput.append(videoSampleBuffer)` from DispatchQueue 1
`audioSamplesAssetWriterInput.append(audioSampleBuffer)` from DispatchQueue 2
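For contrast, the serialized pattern from the sample code would look roughly like this (my own sketch of that pattern, using the same input names):

// Single serial queue so the asset writer never sees concurrent appends.
let writerQueue = DispatchQueue(label: "asset.writer.queue")

func appendVideo(_ sampleBuffer: CMSampleBuffer) {
    writerQueue.async {
        if videoSamplesAssetWriterInput.isReadyForMoreMediaData {
            videoSamplesAssetWriterInput.append(sampleBuffer)
        }
    }
}

func appendAudio(_ sampleBuffer: CMSampleBuffer) {
    writerQueue.async {
        if audioSamplesAssetWriterInput.isReadyForMoreMediaData {
            audioSamplesAssetWriterInput.append(sampleBuffer)
        }
    }
}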
The AVCam sample code by Apple fails to build with the Swift 6 language mode due to failed concurrency checks (the only modification made to that code was adding @preconcurrency to the AVFoundation import).
Here is a minimal reproducible sample for one of the errors:
import Foundation
import Combine // needed for @Published

final class Recorder {
    var writer = Writer()
    var isRecording = false

    func startRecording() {
        Task { [writer] in
            await writer.startRecording()
            print("started recording")
        }
    }

    func stopRecording() {
        Task { [writer] in
            await writer.stopRecording()
            print("stopped recording")
        }
    }

    func observeValues() {
        Task {
            for await value in await writer.$isRecording.values {
                isRecording = value
            }
        }
    }
}

actor Writer {
    @Published private(set) public var isRecording = false

    func startRecording() {
        isRecording = true
    }

    func stopRecording() {
        isRecording = false
    }
}
The function observeValues gives an error:
Non-sendable type 'Published<Bool>.Publisher' in implicitly asynchronous access to actor-isolated property '$isRecording' cannot cross actor boundary
I have tried everything to fix it, but all in vain. Can someone please point out whether the architecture of the AVCam sample code is flawed, or whether there is an easy fix?
The documentation for AVCaptureDeviceRotationCoordinator says it is designed to work with any CALayer, but it seems to be designed to work only with AVCaptureVideoPreviewLayer. Can someone confirm whether it is possible to make it work with other layers, such as CAMetalLayer or AVSampleBufferDisplayLayer?
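This is the kind of setup I have in mind (an untested sketch; videoDevice and metalLayer are placeholders created elsewhere):

// videoDevice: AVCaptureDevice, metalLayer: CAMetalLayer — both created elsewhere.
let coordinator = AVCaptureDevice.RotationCoordinator(device: videoDevice,
                                                      previewLayer: metalLayer)
// Keep strong references to the coordinator and the observation for as long as updates are needed.
let observation = coordinator.observe(\.videoRotationAngleForHorizonLevelPreview,
                                      options: [.initial, .new]) { _, change in
    guard let angle = change.newValue else { return }
    // Apply the angle to the Metal drawing code or the layer's transform here.
    print("Preview rotation angle: \(angle)")
}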