Post | Replies | Boosts | Views | Activity

What exactly is the use of CIKernel DOD?
I wrote the following Metal Core Image kernel to produce a constant red color:

extern "C" float4 redKernel(coreimage::sampler inputImage, coreimage::destination dest)
{
    return float4(1.0, 0.0, 0.0, 1.0);
}

And then I have this in Swift code:

class CIMetalRedColorKernel: CIFilter {
    var inputImage: CIImage?

    static var kernel: CIKernel = { () -> CIKernel in
        let bundle = Bundle.main
        let url = bundle.url(forResource: "Kernels", withExtension: "ci.metallib")!
        let data = try! Data(contentsOf: url)
        return try! CIKernel(functionName: "redKernel", fromMetalLibraryData: data)
    }()

    override var outputImage: CIImage? {
        guard let inputImage = inputImage else {
            return nil
        }
        let dod = inputImage.extent
        return CIMetalRedColorKernel.kernel.apply(extent: dod, roiCallback: { index, rect in
            return rect
        }, arguments: [inputImage])
    }
}

As you can see, the DOD is given as the extent of the input image. But when I run the filter, I get a whole red image extending beyond the extent of the input image (the DOD). Why? I have multiple filters chained together and the overall size is 1920x1080. Isn't the red filter supposed to run only for the DOD rectangle passed to it and produce clear pixels for anything outside the DOD?
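For what it's worth, here is a sketch (an assumption about intent, not a statement of how apply is supposed to behave) showing how the result can be explicitly limited to the DOD by cropping the kernel output, based on the code above:

// Sketch: crop the kernel output to the DOD so downstream filters never
// sample red pixels outside the input image's extent.
override var outputImage: CIImage? {
    guard let inputImage = inputImage else { return nil }
    let dod = inputImage.extent
    let output = CIMetalRedColorKernel.kernel.apply(extent: dod,
                                                    roiCallback: { _, rect in rect },
                                                    arguments: [inputImage])
    return output?.cropped(to: dod)
}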
4
0
1.6k
Feb ’22
UITableView detailTextLabel equivalent in UIListContentConfiguration
I see there is a deprecation warning when using detailTextLabel of UITableViewCell:

@available(iOS, introduced: 3.0, deprecated: 100000, message: "Use UIListContentConfiguration instead, this property will be deprecated in a future release.")
open var detailTextLabel: UILabel? { get } // default is nil. label will be created if necessary (and the current style supports a detail label).

But it is not clear how to use UIListContentConfiguration to reproduce a detail text label on the right side of the cell. I only see secondaryText in UIListContentConfiguration, and it is always displayed as a subtitle. How does one use UIListContentConfiguration as a replacement?
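For reference, the configuration-based setup under discussion looks roughly like this (a sketch inside tableView(_:cellForRowAt:), with illustrative text values; .valueCell() is the variant that lays out its secondary text on the trailing side, similar to the old .value1 style):

// Sketch: configure a cell so the secondary text appears on the trailing edge.
var content = UIListContentConfiguration.valueCell()
content.text = "Title"
content.secondaryText = "Detail" // laid out trailing in the valueCell variant
cell.contentConfiguration = content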
2
0
1.3k
Apr ’22
Compute histogram for 10 bit YCbCr samples
I have CVPixelBuffers in YCbCr format (422 or 420 biplanar, 10-bit video range) coming from the camera. I see the vImage framework is sophisticated enough to handle a variety of image formats, including pixel buffers in various YCbCr formats. I am looking to compute histograms for both Y (luma) and RGB. For 8-bit YCbCr samples, I can use this code to compute a histogram of the Y component:

CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)

let bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0)
let baseAddress = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0)
let height = CVPixelBufferGetHeightOfPlane(pixelBuffer, 0)
let width = CVPixelBufferGetWidthOfPlane(pixelBuffer, 0)

var buffer = vImage_Buffer(data: baseAddress,
                           height: vImagePixelCount(height),
                           width: vImagePixelCount(width),
                           rowBytes: bytesPerRow)

alphaBin.withUnsafeMutableBufferPointer { alphaPtr in
    let error = vImageHistogramCalculation_Planar8(&buffer, alphaPtr.baseAddress!, UInt32(kvImageNoFlags))
    guard error == kvImageNoError else {
        fatalError("Error calculating histogram luma: \(error)")
    }
}

CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly)

How does one implement the same for 10-bit HDR pixel buffers, preferably using the new iOS 16 vImage APIs that provide a lot more flexibility (for instance, getting an RGB histogram from a YCbCr sample without explicitly performing a pixel format conversion)?
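In the meantime, a naive CPU-side sketch (plain Swift, not vImage) that bins the 10-bit luma samples directly would look like the following, assuming the luma plane of the 10-bit biplanar format stores one left-justified 16-bit value per pixel; this only illustrates the binning, not the vImage route being asked about:

// Naive 10-bit luma histogram; assumes the pixel buffer is locked and the
// samples sit in the most significant bits of each 16-bit word.
var lumaHistogram = [UInt](repeating: 0, count: 1024)
let lumaRowBytes = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0)
let lumaBase = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0)!

for row in 0..<CVPixelBufferGetHeightOfPlane(pixelBuffer, 0) {
    let rowPtr = (lumaBase + row * lumaRowBytes).assumingMemoryBound(to: UInt16.self)
    for column in 0..<CVPixelBufferGetWidthOfPlane(pixelBuffer, 0) {
        let value = Int(rowPtr[column] >> 6) // shift down to the 0...1023 range
        lumaHistogram[min(value, 1023)] += 1
    }
}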
1
0
953
Oct ’23
[[stitchable]] Metal core image CIKernel fails to load
It looks like [[stitchable]] Metal Core Image kernels fail to get added to the default Metal library. Here is my code:

class FilterTwo: CIFilter {
    var inputImage: CIImage?
    var inputParam: Float = 0.0

    static var kernel: CIKernel = { () -> CIKernel in
        let url = Bundle.main.url(forResource: "default", withExtension: "metallib")!
        let data = try! Data(contentsOf: url)
        let kernelNames = CIKernel.kernelNames(fromMetalLibraryData: data)
        NSLog("Kernels \(kernelNames)")
        return try! CIKernel(functionName: "secondFilter", fromMetalLibraryData: data) // <-- This fails!
    }()

    override var outputImage: CIImage? {
        guard let inputImage = inputImage else { return nil }
        return FilterTwo.kernel.apply(extent: inputImage.extent, roiCallback: { (index, rect) in
            return rect
        }, arguments: [inputImage])
    }
}

Here is the Metal code:

#include <CoreImage/CoreImage.h>
using namespace metal;

[[ stitchable ]] half4 secondFilter(coreimage::sampler inputImage, coreimage::destination dest)
{
    float2 srcCoord = inputImage.coord();
    half4 color = half4(inputImage.sample(srcCoord));
    return color;
}

And here is the usage:

class ViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view.
        let filter = FilterTwo()
        filter.inputImage = CIImage(color: CIColor.red)
        let outputImage = filter.outputImage!
        NSLog("Output \(outputImage)")
    }
}

And the output:

StitchableKernelsTesting/FilterTwo.swift:15: Fatal error: 'try!' expression unexpectedly raised an error: Error Domain=CIKernel Code=1 "(null)" UserInfo={CINonLocalizedDescriptionKey=Function does not exist in library data.}
Kernels []
Function 'secondFilter' does not exist.
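As a diagnostic (a sketch pointing at where to look, not a fix), the same library can also be inspected through the Metal device to see which functions actually ended up in default.metallib:

import Metal

// List every function compiled into the app's default Metal library;
// if secondFilter is missing here, the kernel never made it into default.metallib.
if let device = MTLCreateSystemDefaultDevice(),
   let library = device.makeDefaultLibrary() {
    NSLog("Functions in default.metallib: \(library.functionNames)")
}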
2
0
1.3k
Nov ’23
Adding multiple AVCaptureVideoDataOutput stalls captureSession
Adding multiple AVCaptureVideoDataOutputs is officially supported in iOS 16 and works well, except for certain configurations such as ProRes (YCbCr422 pixel format), where the session fails to start if two video data outputs are added. Is this a known limitation or a bug? Here is the code:

device.activeFormat = device.findFormat(targetFPS, resolution: targetResolution, pixelFormat: kCVPixelFormatType_422YpCbCr10BiPlanarVideoRange)!
NSLog("Device supports tone mapping \(device.activeFormat.isGlobalToneMappingSupported)")
device.activeColorSpace = .HLG_BT2020
device.activeVideoMinFrameDuration = CMTime(value: 1, timescale: CMTimeScale(targetFPS))
device.activeVideoMaxFrameDuration = CMTime(value: 1, timescale: CMTimeScale(targetFPS))
device.unlockForConfiguration()

self.session?.addInput(input)

let output = AVCaptureVideoDataOutput()
output.alwaysDiscardsLateVideoFrames = true
output.setSampleBufferDelegate(self, queue: self.samplesQueue)
if self.session!.canAddOutput(output) {
    self.session?.addOutput(output)
}

let previewVideoOut = AVCaptureVideoDataOutput()
previewVideoOut.alwaysDiscardsLateVideoFrames = true
previewVideoOut.automaticallyConfiguresOutputBufferDimensions = false
previewVideoOut.deliversPreviewSizedOutputBuffers = true
previewVideoOut.setSampleBufferDelegate(self, queue: self.previewQueue)
if self.session!.canAddOutput(previewVideoOut) {
    self.session?.addOutput(previewVideoOut)
}

self.vdo = output
self.previewVDO = previewVideoOut
self.session?.startRunning()

It works for other formats, such as 10-bit YCbCr video-range HDR sample buffers, but there are a lot of frame drops when recording with AVAssetWriter at 4K@60 fps. Are these known limitations or a bad use of the API?
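For reference, here is a sketch of how the shared delegate can tell the two outputs apart (the callback signature is the standard AVCaptureVideoDataOutputSampleBufferDelegate method; the CameraController type name and stored properties are assumed from the snippet above):

// Hypothetical delegate: route preview-sized and full-resolution buffers separately.
extension CameraController: AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        if output === self.previewVDO {
            // preview-sized buffers for on-screen rendering
        } else {
            // full-resolution buffers for AVAssetWriter
        }
    }
}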
1
0
852
Nov ’23
Selecting Metal 3.2 as language causes crash on iPhone 11 Pro (iOS 17.1.1)
Xcode 16 seems to have an issue with stitchable kernels in Core Image, which gives build errors as stated in this question. As a workaround, I selected Metal 3.2 as the Metal Language Revision in the Xcode project. It works on newer devices such as the iPhone 13 Pro and above, but Metal texture creation fails on older devices such as the iPhone 11 Pro. Is this a known issue, and is there a workaround? I tried setting the Metal Language Revision to 2.4, but the same build errors occur as reported in this question. Here is the code where the assertion failure happens on the iPhone 11 Pro:

let vertexShader = library.makeFunction(name: "vertexShaderPassthru")
let fragmentShaderYUV = library.makeFunction(name: "fragmentShaderYUV")

let pipelineDescriptorYUV = MTLRenderPipelineDescriptor()
pipelineDescriptorYUV.rasterSampleCount = 1
pipelineDescriptorYUV.colorAttachments[0].pixelFormat = .bgra8Unorm
pipelineDescriptorYUV.depthAttachmentPixelFormat = .invalid
pipelineDescriptorYUV.vertexFunction = vertexShader
pipelineDescriptorYUV.fragmentFunction = fragmentShaderYUV

do {
    try pipelineStateYUV = metalDevice?.makeRenderPipelineState(descriptor: pipelineDescriptorYUV)
} catch {
    assertionFailure("Failed creating a render state pipeline. Can't render the texture without one.")
    return
}
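As a side note, since makeFunction(name:) returns an optional, guarding before building the descriptor makes it clearer whether the failure is in the library lookup or in pipeline creation (a defensive sketch, not a confirmed fix):

// Fail early if the shader functions are missing from the library (for
// example, if the metallib did not load on this device).
guard let vertexShader = library.makeFunction(name: "vertexShaderPassthru"),
      let fragmentShaderYUV = library.makeFunction(name: "fragmentShaderYUV") else {
    assertionFailure("Shader functions not found in the Metal library")
    return
}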
3
0
677
Oct ’24
Tap Gesture on Subview disables drag gesture on superview
I have the following two views in SwiftUI. The first view, GestureTestView, has a drag gesture defined on its overlay view (call it the indicator view) and contains a subview called ContentTestView that has a tap gesture attached to it. The problem is that the tap gesture in ContentTestView blocks the drag gesture on the indicator view. I have tried everything, including simultaneous gestures, but nothing seems to work since the gestures are on different views. It's easy to test by simply copying and pasting the code and running it in an Xcode preview.

import SwiftUI

struct GestureTestView: View {
    @State var indicatorOffset: CGFloat = 10.0

    var body: some View {
        ContentTestView()
            .overlay(alignment: .leading, content: {
                Capsule()
                    .fill(Color.mint.gradient)
                    .frame(width: 8, height: 60)
                    .offset(x: indicatorOffset)
                    .gesture(
                        DragGesture(minimumDistance: 0)
                            .onChanged({ value in
                                indicatorOffset = min(max(0, 10 + value.translation.width), 340)
                            })
                            .onEnded { value in
                            }
                    )
            })
    }
}

#Preview {
    GestureTestView()
}

struct ContentTestView: View {
    @State var isSelected = false

    var body: some View {
        HStack(spacing: 0) {
            ForEach(0..<8) { index in
                Rectangle()
                    .fill(index % 2 == 0 ? Color.blue : Color.red)
                    .frame(width: 40, height: 40)
            }
            .overlay {
                if isSelected {
                    RoundedRectangle(cornerRadius: 5)
                        .stroke(.yellow, lineWidth: 3.0)
                }
            }
        }
        .onTapGesture {
            isSelected.toggle()
        }
    }
}

#Preview {
    ContentTestView()
}
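For reference, the "simultaneous gestures" form mentioned above, applied to the tap inside ContentTestView, would look like this (shown only to illustrate the API the question refers to, not as a confirmed fix):

// Hypothetical variant of ContentTestView's tap, declared as simultaneous so it
// does not claim exclusivity over other gestures in the hierarchy.
.simultaneousGesture(
    TapGesture()
        .onEnded {
            isSelected.toggle()
        }
)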
1
0
674
Oct ’24
AVAssetWriter append audio/video streams concurrently in Real time recording setup
I see in most of the old sample code from Apple that when AVAssetWriter is used to append audio, video, and metadata samples in a real-time camera recording setup, calls to .append(sampleBuffer) are either synchronised using an NSLock, or all the samples are sent to the asset writer on the same dispatch queue, thereby preventing concurrent writes. However, I can't find any documentation stating that calls to assetWriterInput.append(sampleBuffer) for different media samples, such as audio and video, must not be made concurrently. Is it valid for these methods to be executed in parallel, for instance:

`videoSamplesAssetWriterInput.append(videoSampleBuffer)` from DispatchQueue 1
`audioSamplesAssetWriterInput.append(audioSampleBuffer)` from DispatchQueue 2
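For reference, the single-queue pattern described above (all appends funnelled through one serial queue) looks roughly like this; the queue label and the helper function are illustrative, not from the sample code:

import AVFoundation

// Sketch: both media types are appended on one serial queue, so the writer
// never sees concurrent append calls.
let writerQueue = DispatchQueue(label: "asset.writer.appends")

func append(_ sampleBuffer: CMSampleBuffer, to input: AVAssetWriterInput) {
    writerQueue.async {
        if input.isReadyForMoreMediaData {
            input.append(sampleBuffer)
        }
    }
}

// Usage from the capture callbacks:
// append(videoSampleBuffer, to: videoSamplesAssetWriterInput)
// append(audioSampleBuffer, to: audioSamplesAssetWriterInput)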
1
0
637
Jan ’25
AVCam sample code build errors in Swift 6
The AVCam sample code by Apple fails to build with the Swift 6 language setting due to failed concurrency checks (the only modification made to that code is appending @preconcurrency to import AVFoundation). Here is a minimally reproducible sample for one of the errors:

import Foundation

final class Recorder {
    var writer = Writer()
    var isRecording = false

    func startRecording() {
        Task { [writer] in
            await writer.startRecording()
            print("started recording")
        }
    }

    func stopRecording() {
        Task { [writer] in
            await writer.stopRecording()
            print("stopped recording")
        }
    }

    func observeValues() {
        Task {
            for await value in await writer.$isRecording.values {
                isRecording = value
            }
        }
    }
}

actor Writer {
    @Published private(set) public var isRecording = false

    func startRecording() {
        isRecording = true
    }

    func stopRecording() {
        isRecording = false
    }
}

The function observeValues gives this error:

Non-sendable type 'Published<Bool>.Publisher' in implicitly asynchronous access to actor-isolated property '$isRecording' cannot cross actor boundary

I have tried everything to fix it, but to no avail. Can someone please point out whether the architecture of the AVCam sample code is flawed or there is an easy fix?
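One workaround that is sometimes suggested (an assumption on my part, not a verified fix for the AVCam architecture) is to avoid sending the non-Sendable Published publisher across the actor boundary and have the actor expose an AsyncStream instead, for example:

// Hypothetical variant of Writer that publishes state through an AsyncStream.
actor Writer {
    private(set) var isRecording = false

    let recordingStates: AsyncStream<Bool>
    private let recordingContinuation: AsyncStream<Bool>.Continuation

    init() {
        let (stream, continuation) = AsyncStream.makeStream(of: Bool.self)
        recordingStates = stream
        recordingContinuation = continuation
    }

    func startRecording() {
        isRecording = true
        recordingContinuation.yield(true)
    }

    func stopRecording() {
        isRecording = false
        recordingContinuation.yield(false)
    }
}

// The observer side then iterates: for await value in await writer.recordingStates { ... }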
3
0
492
Jan ’25
AVCaptureDevice rotationCoordinator modifying CALayer on switching devices
I am trying to use the AVCaptureDevice.RotationCoordinator API to observe rotation angles for preview and capture, and there seems to be an issue with the API when it is used with an arbitrary CALayer (one that is not an AVCaptureVideoPreviewLayer) while switching cameras. Here is my setup. The function below is defined in an actor class called CameraManager that sets up the rotation coordinator.

func updateRotationCoordinator(_ callback: @escaping @MainActor (CGFloat) -> Void) {
    guard let device = sessionConfiguration.activeVideoInput?.device, let displayLayer = displayLayer else { return }

    cancellables.removeAll()

    rotationCoordinator = AVCaptureDevice.RotationCoordinator(device: device, previewLayer: displayLayer)
    guard let coordinator = rotationCoordinator else { return }

    coordinator.publisher(for: \.videoRotationAngleForHorizonLevelPreview)
        .receive(on: DispatchQueue.main)
        .sink { degrees in
            let radians = degrees * .pi / 180
            MainActor.assumeIsolated {
                callback(radians)
            }
        }
        .store(in: &cancellables)
}

This works the very first time, but when I switch cameras and call this function again, it throws a runtime error saying the view's layer was modified from a non-main thread. This happens at the very line where the rotation coordinator is recreated. It's not clear why initialising the rotation coordinator should modify CALayer properties right in its init method.

Modifying properties of a view's layer off the main thread is not allowed: view <MyApp.DisplayLayerView: 0x102ffaf40> with nearest ancestor view controller <_TtGC7SwiftUI19UIHostingControllerGVS_15ModifiedContentVS_7AnyViewVS_12RootModifier__: 0x101f7fb80>; backtrace: (
0 UIKitCore 0x0000000194a977b4 575E5140-FA6A-37C2-B00B-A4EACEDFDA53 + 22509492
1 UIKitCore 0x000000019358594c 575E5140-FA6A-37C2-B00B-A4EACEDFDA53 + 416076
2 QuartzCore 0x00000001927f5bd8 D8E8E86D-85AC-3C90-B2E1-940235ECAA18 + 43992
3 QuartzCore 0x00000001927f5a4c D8E8E86D-85AC-3C90-B2E1-940235ECAA18 + 43596
4 QuartzCore 0x000000019283a41c D8E8E86D-85AC-3C90-B2E1-940235ECAA18 + 324636
5 QuartzCore 0x000000019283a0a8 D8E8E86D-85AC-3C90-B2E1-940235ECAA18 + 323752
6 AVFCapture 0x00000001af072a18 09192166-E0B6-346C-B1C2-7C95C3EFF7F7 + 420376
7 MyApp.debug.dylib 0x0000000105fa3914 $s10MyApp15CapturePipelineC25updateRotationCoordinatoryyy12CoreGraphics7CGFloatVScMYccF + 972
8 MyApp.debug.dylib 0x00000001063ade40 $s10MyApp11CameraModelC18switchVideoDevicesyyYaFTY3_ + 72
9 MyApp.debug.dylib 0x0000000105fe3cbd $s10MyApp11ContentViewV4bodyQrvg7SwiftUI6VStackVyAE05TupleE0VyAE6HStackVyAIyAE6SpacerV_AE6ButtonVyAE0E0PAEE5frame5width6height9alignmentQr12CoreGraphics7CGFloatVSg_AyE9AlignmentVtFQOyAqEE11scaledToFitQryFQOyAqEE10imageScaleyQrAE5ImageV0Z0OFQOyA3__Qo__Qo__Qo_GtGG_AmKyAIyAKyAIyAqEE7paddingyQrAE4EdgeO3SetV_AYtFQOyAA07CaptureM0V_Qo__AOyAE4TextVGAmKyAIyA9__AqEEArstUQrAY_AYA_tFQOyAM_Qo_A9_tGGtGG_AmqEE10background_AUQrqd___A_tAePRd__lFQOyAqEEArstUQrAY_AYA_tFQOyA21__Qo__AqEEArstUQrAY_AYA_tFQOyAE06_ShapeE0VyAE9RectangleVAE5ColorVG_Qo_Qo_SgtGGtGGyXEfU0_A42_yXEfU_A10_yXEfU_yyScMYccfU_yyYacfU_TQ1_ + 1
10 MyApp.debug.dylib 0x0000000105ff06d9
$s10MyApp11ContentViewV4bodyQrvg7SwiftUI6VStackVyAE05TupleE0VyAE6HStackVyAIyAE6SpacerV_AE6ButtonVyAE0E0PAEE5frame5width6height9alignmentQr12CoreGraphics7CGFloatVSg_AyE9AlignmentVtFQOyAqEE11scaledToFitQryFQOyAqEE10imageScaleyQrAE5ImageV0Z0OFQOyA3__Qo__Qo__Qo_GtGG_AmKyAIyAKyAIyAqEE7paddingyQrAE4EdgeO3SetV_AYtFQOyAA07CaptureM0V_Qo__AOyAE4TextVGAmKyAIyA9__AqEEArstUQrAY_AYA_tFQOyAM_Qo_A9_tGGtGG_AmqEE10background_AUQrqd___A_tAePRd__lFQOyAqEEArstUQrAY_AYA_tFQOyA21__Qo__AqEEArstUQrAY_AYA_tFQOyAE06_ShapeE0VyAE9RectangleVAE5ColorVG_Qo_Qo_SgtGGtGGyXEfU0_A42_yXEfU_A10_yXEfU_yyScMYccfU_yyYacfU_TATQ0_ + 1 11 MyApp.debug.dylib 0x0000000105f9c595 $sxIeAgHr_xs5Error_pIegHrzo_s8SendableRzs5NeverORs_r0_lTRTQ0_ + 1 12 MyApp.debug.dylib 0x0000000105f9fb3d $sxIeAgHr_xs5Error_pIegHrzo_s8SendableRzs5NeverORs_r0_lTRTATQ0_ + 1 13 libswift_Concurrency.dylib 0x000000019c49fe39 E15CC6EE-9354-3CE5-AF91-F641CA8283E0 + 433721 )
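A detail worth noting from this backtrace (an assumption about behavior, not documented API semantics): the coordinator's initializer appears to touch the supplied layer, so one option is to construct it on the main actor, for example:

import AVFoundation
import QuartzCore

// Hypothetical helper: build the rotation coordinator on the main actor since
// its init seems to modify the supplied preview layer (inferred from the
// backtrace above, not from documentation).
@MainActor
func makeRotationCoordinator(for device: AVCaptureDevice,
                             previewLayer: CALayer?) -> AVCaptureDevice.RotationCoordinator {
    AVCaptureDevice.RotationCoordinator(device: device, previewLayer: previewLayer)
}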
2
0
569
Feb ’25
SwiftUI infinite loop issue with @Environment(\.verticalSizeClass)
I see the SwiftUI body being called repeatedly in an infinite loop in the presence of environment values like horizontalSizeClass or verticalSizeClass. This happens after the device is rotated from portrait to landscape and then back to portrait. The deinit method of TestPlayerVM is called repeatedly. Minimally reproducible sample code is pasted below. The infinite loop is not seen if I remove the size class environment references, OR if I skip the addPlayerObservers call in the TestPlayerVM initialiser.

import SwiftUI
import AVKit
import Combine

struct InfiniteLoopView: View {
    @Environment(\.verticalSizeClass) var verticalSizeClass
    @Environment(\.horizontalSizeClass) var horizontalSizeClass

    @State private var openPlayer = false
    @State var playerURL: URL = URL(fileURLWithPath: Bundle.main.path(forResource: "Test_Video", ofType: ".mov")!)

    var body: some View {
        PlayerView(playerURL: playerURL)
            .ignoresSafeArea()
    }
}

struct PlayerView: View {
    @Environment(\.dismiss) var dismiss

    var playerURL: URL
    @State var playerVM = TestPlayerVM()

    var body: some View {
        VideoPlayer(player: playerVM.player)
            .ignoresSafeArea()
            .background {
                Color.black
            }
            .task {
                let playerItem = AVPlayerItem(url: playerURL)
                playerVM.playerItem = playerItem
            }
    }
}

@Observable
class TestPlayerVM {
    private(set) public var player: AVPlayer = AVPlayer()

    var playerItem: AVPlayerItem? {
        didSet {
            player.replaceCurrentItem(with: playerItem)
        }
    }

    private var cancellable = Set<AnyCancellable>()

    init() {
        addPlayerObservers()
    }

    deinit {
        print("Deinit Video player manager")
        removeAllObservers()
    }

    private func removeAllObservers() {
        cancellable.removeAll()
    }

    private func addPlayerObservers() {
        player.publisher(for: \.timeControlStatus, options: [.initial, .new])
            .receive(on: DispatchQueue.main)
            .sink { timeControlStatus in
                print("Player time control status \(timeControlStatus)")
            }
            .store(in: &cancellable)
    }
}
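One pattern sometimes used to avoid constructing a new model every time the view value is rebuilt (this is an assumption about a contributing factor, not a confirmed diagnosis of the loop) is to create the model lazily once the view appears, for example:

// Hypothetical variation of PlayerView: the model is created once in .task
// instead of in the @State default-value expression.
struct PlayerView: View {
    var playerURL: URL
    @State private var playerVM: TestPlayerVM?

    var body: some View {
        VideoPlayer(player: playerVM?.player ?? AVPlayer())
            .ignoresSafeArea()
            .task {
                if playerVM == nil {
                    let model = TestPlayerVM()
                    model.playerItem = AVPlayerItem(url: playerURL)
                    playerVM = model
                }
            }
    }
}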
2
0
437
Feb ’25