
'User Assigned Device Name' in Xcode Capabilities list
I got approval for User Assigned Device Name, but now I am stuck on how to add this capability in Xcode. I successfully updated my App ID in the membership portal to reflect this permission, but I can't find anything in Xcode to import it. I opened Editor -> Add Capability, but there is no option for User Assigned Device Name in the list!
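Since there is no tile for it in the capabilities list, I assume the manual route would be adding the entitlement key directly to the target's .entitlements file; a minimal sketch, assuming the key name Apple documents for this entitlement:

```xml
<!-- MyApp.entitlements (file name illustrative), inside the top-level <dict> -->
<key>com.apple.developer.device-information.user-assigned-device-name</key>
<true/>
```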
17 replies · 1 boost · 6.8k views · Dec ’22
Stereo Audio API broken on iOS 15
The AVAudioSession API for setting stereo orientation on supported devices (iPhone XS and above) is completely broken on the iOS 15 betas. Even the 'Stereo Audio Capture' sample code no longer works on iOS 15, and AVCaptureAudioDataOutput also fails when stereo orientation is set on AVAudioSession. I am wondering if Apple engineers are aware of this issue and whether it will be fixed in upcoming betas and, most importantly, in the iOS 15 mainline.
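For reference, the configuration path that breaks is the stereo polar-pattern and input-orientation setup; a minimal sketch of that flow, assuming the front built-in data source (variable names are mine, not the sample's):

```swift
import AVFoundation

func configureStereoCapture() throws {
    let session = AVAudioSession.sharedInstance()

    // Pick the built-in mic and a data source that supports the stereo polar pattern.
    guard let builtInMic = session.availableInputs?.first(where: { $0.portType == .builtInMic }),
          let frontSource = builtInMic.dataSources?.first(where: { $0.orientation == .front }),
          frontSource.supportedPolarPatterns?.contains(.stereo) == true else {
        return
    }

    try frontSource.setPreferredPolarPattern(.stereo)
    try builtInMic.setPreferredDataSource(frontSource)
    try session.setPreferredInput(builtInMic)

    // This is the stereo-orientation call that appears broken on the iOS 15 betas.
    try session.setPreferredInputOrientation(.portrait)
}
```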
6 replies · 0 boosts · 1.7k views · Aug ’21
ShareSheet errors on console when presenting UIActivityViewController
I see these errors when presenting UIActivityViewController with a video file:

```
[ShareSheet] Failed to request default share mode for fileURL:file:///var/mobile/Containers/Data/Application/B0EB55D3-4BF1-430A-92D8-2231AFFD9499/Documents/IMG-0155.mov error:Error Domain=NSOSStatusErrorDomain Code=-10814 "(null)" UserInfo={_LSLine=1538, _LSFunction=runEvaluator}
```

I don't understand whether I am doing something wrong or what the error means. The share sheet shows anyway.
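For context, the presentation itself is nothing unusual; a minimal sketch of how a share sheet like this is typically presented (the helper and names are illustrative, not my exact code):

```swift
import UIKit

// Present the system share sheet for a local video file URL.
func shareVideo(at videoURL: URL, from viewController: UIViewController) {
    let activityVC = UIActivityViewController(activityItems: [videoURL],
                                              applicationActivities: nil)
    // On iPad the share sheet is shown as a popover and needs an anchor.
    activityVC.popoverPresentationController?.sourceView = viewController.view
    viewController.present(activityVC, animated: true)
}
```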
Topic: UI Frameworks · SubTopic: UIKit
5 replies · 3 boosts · 2.3k views · Aug ’23
What exactly is CIImage extent?
I have doubts about the Core Image coordinate system, the way transforms are applied, and the way the image extent is determined. I couldn't find much in the documentation or on the internet, so I tried the following code to rotate a CIImage and display it in a UIImageView. As I understand it, there is no absolute coordinate system in Core Image: the bottom-left corner of an image is supposed to be (0,0). But my experiments show something else. I created a prototype that rotates a CIImage by pi/10 radians on each button click. Here is the code I wrote:

```swift
override func viewDidLoad() {
    super.viewDidLoad()
    imageView.contentMode = .scaleAspectFit
    let uiImage = UIImage(contentsOfFile: imagePath)
    ciImage = CIImage(cgImage: (uiImage?.cgImage)!)
    imageView.image = uiImage
}

private var currentAngle = CGFloat(0)
private var ciImage: CIImage!
private var ciContext = CIContext()

@IBAction func rotateImage() {
    let extent = ciImage.extent
    let translate = CGAffineTransform(translationX: extent.midX, y: extent.midY)
    let uiImage = UIImage(contentsOfFile: imagePath)
    currentAngle = currentAngle + CGFloat.pi / 10
    let rotate = CGAffineTransform(rotationAngle: currentAngle)
    let translateBack = CGAffineTransform(translationX: -extent.midX, y: -extent.midY)
    let transform = translateBack.concatenating(rotate.concatenating(translate))
    ciImage = CIImage(cgImage: (uiImage?.cgImage)!)
    ciImage = ciImage.transformed(by: transform)
    NSLog("Extent \(ciImage.extent), Angle \(currentAngle)")
    let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent)
    imageView.image = UIImage(cgImage: cgImage!)
}
```

But in the logs, I see that the extent of the image has negative origin.x and origin.y. What does that mean? Negative relative to what, and where exactly is (0,0) then? What exactly is the image extent, and how does the Core Image coordinate system work?

```
2021-09-24 14:43:29.280393+0400 CoreImagePrototypes[65817:5175194] Metal API Validation Enabled
2021-09-24 14:43:31.094877+0400 CoreImagePrototypes[65817:5175194] Extent (-105.0, -105.0, 1010.0, 1010.0), Angle 0.3141592653589793
2021-09-24 14:43:41.426371+0400 CoreImagePrototypes[65817:5175194] Extent (-159.0, -159.0, 1118.0, 1118.0), Angle 0.6283185307179586
2021-09-24 14:43:42.244703+0400 CoreImagePrototypes[65817:5175194] Extent (-159.0, -159.0, 1118.0, 1118.0), Angle 0.9424777960769379
```
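For what it's worth, the logged extents match the axis-aligned bounding box of the source extent pushed through the same transform; a small CoreGraphics check of that, assuming a roughly 800x800 source (my inference from the logged numbers):

```swift
import CoreGraphics

// The extent of a transformed CIImage is the axis-aligned bounding box of the
// original extent under that transform, rounded outward to whole pixels.
let original = CGRect(x: 0, y: 0, width: 800, height: 800)
let angle = CGFloat.pi / 10

// Rotate about the center, exactly as rotateImage() composes it above.
let toOrigin = CGAffineTransform(translationX: -original.midX, y: -original.midY)
let rotate = CGAffineTransform(rotationAngle: angle)
let back = CGAffineTransform(translationX: original.midX, y: original.midY)
let transform = toOrigin.concatenating(rotate).concatenating(back)

// Corners swing below (0,0), so the bounding box picks up a negative origin:
// roughly (-104.1, -104.1, 1008.1, 1008.1), which Core Image rounds out to
// (-105.0, -105.0, 1010.0, 1010.0), the logged extent.
print(original.applying(transform))
```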
4 replies · 0 boosts · 2.5k views · Sep ’21
What exactly is the use of CIKernel DOD?
I wrote the following Metal Core Image kernel to produce a constant red color:

```metal
extern "C" float4 redKernel(coreimage::sampler inputImage, coreimage::destination dest) {
    return float4(1.0, 0.0, 0.0, 1.0);
}
```

And then I have this in Swift code:

```swift
class CIMetalRedColorKernel: CIFilter {
    var inputImage: CIImage?

    static var kernel: CIKernel = { () -> CIKernel in
        let bundle = Bundle.main
        let url = bundle.url(forResource: "Kernels", withExtension: "ci.metallib")!
        let data = try! Data(contentsOf: url)
        return try! CIKernel(functionName: "redKernel", fromMetalLibraryData: data)
    }()

    override var outputImage: CIImage? {
        guard let inputImage = inputImage else {
            return nil
        }
        let dod = inputImage.extent
        return CIMetalRedColorKernel.kernel.apply(extent: dod, roiCallback: { index, rect in
            return rect
        }, arguments: [inputImage])
    }
}
```

As you can see, the DOD is given as the extent of the input image. But when I run the filter, I get a red image extending beyond the extent of the input image (the DOD). Why? I have multiple filters chained together, and the overall size is 1920x1080. Isn't the red kernel supposed to run only for the DOD rectangle passed to it and produce clear pixels for anything outside the DOD?
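For comparison, the only way I have found to guarantee nothing appears outside the DOD is an explicit crop after applying the kernel; a short sketch of that workaround (which may just be papering over the real answer):

```swift
// Clamp the kernel output to the intended domain of definition.
// CIImage.cropped(to:) guarantees no pixels outside this rect survive,
// regardless of how the renderer tiles or enlarges the kernel's extent.
let dod = inputImage.extent
let raw = CIMetalRedColorKernel.kernel.apply(extent: dod,
                                             roiCallback: { _, rect in rect },
                                             arguments: [inputImage])
let clamped = raw?.cropped(to: dod)
```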
4 replies · 0 boosts · 1.6k views · Feb ’22
Strange autorotation issue on iOS 16
I am seeing a strange issue with autorotation on iOS 16 that is not seen on other iOS versions. And worse, the issue is NOT seen if I connect the device to Xcode and debug. It is ONLY seen when I launch the app directly on the device after installing it, which is why I am unable to identify a fix. Here is a summary of the issue: I disable autorotation in the app until the camera session starts running. Once the camera session starts running, I fire a notification to force autorotation of the device to the current orientation.

```swift
var disableAutoRotation: Bool {
    if !cameraSessionRunning {
        return true
    }
    return false
}

override var supportedInterfaceOrientations: UIInterfaceOrientationMask {
    var orientations: UIInterfaceOrientationMask = .landscapeRight
    if !self.disableAutoRotation {
        orientations = .all
    }
    return orientations
}

func cameraSessionStartedRunning(_ session: AVCaptureSession?) {
    DispatchQueue.main.asyncAfter(deadline: .now(), execute: {
        /*
         * HELP::: This code does something only when debugging directly from Xcode,
         * not when launching the app directly on the device!!!!
         */
        self.cameraSessionRunning = true
        if #available(iOS 16.0, *) {
            UIView.performWithoutAnimation {
                self.setNeedsUpdateOfSupportedInterfaceOrientations()
            }
        } else {
            // Fallback on earlier versions
            UIViewController.attemptRotationToDeviceOrientation()
        }
        self.layoutInterfaceForOrientation(self.windowOrientation)
    })
}
```
Topic: UI Frameworks · SubTopic: UIKit
4 replies · 2 boosts · 2.6k views · Sep ’22
High CPU usage with CoreImage vs Metal
I am processing CVPixelBuffers received from the camera using both Metal and Core Image and comparing the performance. The only processing done is taking a source pixel buffer, applying crop and affine transforms, and saving the result to another pixel buffer. What I notice is that CPU usage is as high as 50% when using Core Image but only 20% when using Metal. The profiler shows most of the time is spent in the CIContext render call:

```swift
let cropRect = AVMakeRect(aspectRatio: CGSize(width: dstWidth, height: dstHeight), insideRect: srcImage.extent)
var dstImage = srcImage.cropped(to: cropRect)

let translationTransform = CGAffineTransform(translationX: -cropRect.minX, y: -cropRect.minY)

var transform = CGAffineTransform.identity
transform = transform.concatenating(CGAffineTransform(translationX: -(dstImage.extent.origin.x + dstImage.extent.width / 2),
                                                      y: -(dstImage.extent.origin.y + dstImage.extent.height / 2)))
transform = transform.concatenating(translationTransform)
transform = transform.concatenating(CGAffineTransform(translationX: (dstImage.extent.origin.x + dstImage.extent.width / 2),
                                                      y: (dstImage.extent.origin.y + dstImage.extent.height / 2)))
dstImage = dstImage.transformed(by: translationTransform)

let scale = max(dstWidth / (dstImage.extent.width), CGFloat(dstHeight / dstImage.extent.height))
let scalingTransform = CGAffineTransform(scaleX: scale, y: scale)
transform = CGAffineTransform.identity
transform = transform.concatenating(scalingTransform)
dstImage = dstImage.transformed(by: transform)

if flipVertical {
    dstImage = dstImage.transformed(by: CGAffineTransform(scaleX: 1, y: -1))
    dstImage = dstImage.transformed(by: CGAffineTransform(translationX: 0, y: dstImage.extent.size.height))
}

if flipHorizontal {
    dstImage = dstImage.transformed(by: CGAffineTransform(scaleX: -1, y: 1))
    dstImage = dstImage.transformed(by: CGAffineTransform(translationX: dstImage.extent.size.width, y: 0))
}

var dstBounds = CGRect.zero
dstBounds.size = dstImage.extent.size

_ciContext.render(dstImage, to: dstPixelBuffer!, bounds: dstImage.extent, colorSpace: srcImage.colorSpace)
```

Here is how the CIContext was created:

```swift
_ciContext = CIContext(mtlDevice: MTLCreateSystemDefaultDevice()!, options: [CIContextOption.cacheIntermediates: false])
```

I want to know if I am doing anything wrong and what could be done to lower CPU usage with Core Image.
4 replies · 1 boost · 2.0k views · Oct ’23
iOS 15 RemoteIO unit silent microphone samples with built-in mic
On an iPhone 12 mini running iOS 15 beta 3, the microphone samples from the built-in microphone are all silent in RemoteIO unit callbacks. This happens when the AVAudioSession preferred input data source is set to the back or front mic. It is NOT seen when the preferred input source is set to the bottom mic, or with external microphones. Is this a known bug in iOS 15?
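For context, this is roughly how the preferred input source is selected (a minimal sketch; error handling and the session category setup are omitted):

```swift
import AVFoundation

// Select a specific built-in microphone (front/back/bottom) as preferred input.
func selectBuiltInMic(orientation: AVAudioSession.Orientation) throws {
    let session = AVAudioSession.sharedInstance()
    guard let builtInMic = session.availableInputs?.first(where: { $0.portType == .builtInMic }),
          let source = builtInMic.dataSources?.first(where: { $0.orientation == orientation }) else {
        return
    }
    try builtInMic.setPreferredDataSource(source)
    try session.setPreferredInput(builtInMic)
}

// .front and .back produce silent samples on iOS 15 beta 3; .bottom does not.
```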
3 replies · 0 boosts · 1.2k views · Jul ’21
Selecting Metal 3.2 as language causes crash on iPhone 11 Pro (iOS 17.1.1)
Xcode 16 seems to have an issue with stitchable kernels in Core Image, which produces build errors as stated in this question. As a workaround, I selected Metal 3.2 as the Metal Language Revision in the Xcode project. That works on newer devices such as iPhone 13 Pro and above, but Metal texture creation fails on older devices such as iPhone 11 Pro. Is this a known issue, and is there a workaround? I tried setting the Metal language revision to 2.4, but the same build errors occur as reported in that question. Here is the code where the assertion failure happens on iPhone 11 Pro:

```swift
let vertexShader = library.makeFunction(name: "vertexShaderPassthru")
let fragmentShaderYUV = library.makeFunction(name: "fragmentShaderYUV")

let pipelineDescriptorYUV = MTLRenderPipelineDescriptor()
pipelineDescriptorYUV.rasterSampleCount = 1
pipelineDescriptorYUV.colorAttachments[0].pixelFormat = .bgra8Unorm
pipelineDescriptorYUV.depthAttachmentPixelFormat = .invalid
pipelineDescriptorYUV.vertexFunction = vertexShader
pipelineDescriptorYUV.fragmentFunction = fragmentShaderYUV

do {
    try pipelineStateYUV = metalDevice?.makeRenderPipelineState(descriptor: pipelineDescriptorYUV)
} catch {
    assertionFailure("Failed creating a render state pipeline. Can't render the texture without one.")
    return
}
```
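As a first diagnostic, it may be worth confirming whether the pipeline descriptor is really at fault or whether makeFunction is already returning nil for the shaders on the affected device (a nil function would doom makeRenderPipelineState anyway). A small sketch of that check, using only the variables from the snippet above:

```swift
// If either function is nil on iPhone 11 Pro, the metallib (and thus the
// selected Metal language revision) is the problem, not the pipeline setup.
guard let vertexShader = library.makeFunction(name: "vertexShaderPassthru"),
      let fragmentShaderYUV = library.makeFunction(name: "fragmentShaderYUV") else {
    print("Missing shader functions; library vends: \(library.functionNames)")
    return
}
```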
3 replies · 0 boosts · 676 views · Oct ’24
AVCam sample code build errors in Swift 6
The AVCam sample code by Apple fails to build with the Swift 6 language setting due to failed concurrency checks (the only modification made to that code is appending @preconcurrency to import AVFoundation). Here is a minimal reproducible example of one of the errors:

```swift
import Combine
import Foundation

final class Recorder {
    var writer = Writer()
    var isRecording = false

    func startRecording() {
        Task { [writer] in
            await writer.startRecording()
            print("started recording")
        }
    }

    func stopRecording() {
        Task { [writer] in
            await writer.stopRecording()
            print("stopped recording")
        }
    }

    func observeValues() {
        Task {
            for await value in await writer.$isRecording.values {
                isRecording = value
            }
        }
    }
}

actor Writer {
    @Published private(set) public var isRecording = false

    func startRecording() {
        isRecording = true
    }

    func stopRecording() {
        isRecording = false
    }
}
```

The function observeValues produces this error:

```
Non-sendable type 'Published<Bool>.Publisher' in implicitly asynchronous access to actor-isolated property '$isRecording' cannot cross actor boundary
```

I have tried everything to fix it, all in vain. Can someone please point out whether the architecture of the AVCam sample code is flawed, or whether there is an easy fix?
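One direction that seems to sidestep the error is dropping the Combine publisher and exposing the state changes as an AsyncStream from the actor, so nothing non-Sendable crosses the boundary. A sketch of that idea, my own workaround rather than anything from AVCam:

```swift
actor StreamingWriter {
    private(set) var isRecording = false
    private var continuations: [AsyncStream<Bool>.Continuation] = []

    // Each observer gets its own stream; yields replace the @Published publisher.
    var recordingStates: AsyncStream<Bool> {
        AsyncStream { continuation in
            continuation.yield(self.isRecording) // deliver the current value first
            self.continuations.append(continuation)
        }
    }

    func startRecording() {
        isRecording = true
        for c in continuations { c.yield(true) }
    }

    func stopRecording() {
        isRecording = false
        for c in continuations { c.yield(false) }
    }
}

// Consumer side, mirroring observeValues() above:
// Task {
//     for await state in await writer.recordingStates { isRecording = state }
// }
```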
3 replies · 0 boosts · 492 views · Jan ’25
SwiftUI scroll position targeting buggy with viewAligned scrollTargetBehavior
I have a discrete scrubber implementation (range 0-100) using ScrollView in SwiftUI that fails at the end points. For instance, scrolling it all the way to the bottom shows a value of 87 instead of 100. If I instead scroll down by tapping the + button incrementally until it reaches the end, it shows the correct value of 100 at the end. But then tapping the minus button doesn't scroll the scrubber back until the minus button has been clicked three times. I understand this has to do with the .viewAligned scroll target behavior, but I don't understand what exactly the issue is, or whether it's a bug in SwiftUI.

```swift
import SwiftUI

struct VerticalScrubber: View {
    var config: ScrubberConfig
    @Binding var value: CGFloat

    @State private var scrollPosition: Int?

    var body: some View {
        GeometryReader { geometry in
            let verticalPadding = geometry.size.height / 2 - 8

            ZStack(alignment: .trailing) {
                ScrollView(.vertical, showsIndicators: false) {
                    VStack(spacing: config.spacing) {
                        ForEach(0...(config.steps * config.count), id: \.self) { index in
                            horizontalTickMark(for: index)
                                .id(index)
                        }
                    }
                    .frame(width: 80)
                    .scrollTargetLayout()
                    .safeAreaPadding(.vertical, verticalPadding)
                }
                .scrollTargetBehavior(.viewAligned)
                .scrollPosition(id: $scrollPosition, anchor: .top)

                Capsule()
                    .frame(width: 32, height: 3)
                    .foregroundColor(.accentColor)
                    .shadow(color: .accentColor.opacity(0.3), radius: 3, x: 0, y: 1)
            }
            .frame(width: 100)
            .onAppear {
                DispatchQueue.main.async {
                    scrollPosition = Int(value * CGFloat(config.steps))
                }
            }
            .onChange(of: value, { oldValue, newValue in
                let newIndex = Int(newValue * CGFloat(config.steps))
                print("New index \(newIndex)")
                if scrollPosition != newIndex {
                    withAnimation {
                        scrollPosition = newIndex
                        print("\(scrollPosition)")
                    }
                }
            })
            .onChange(of: scrollPosition, { oldIndex, newIndex in
                guard let pos = newIndex else { return }
                let newValue = CGFloat(pos) / CGFloat(config.steps)
                if abs(value - newValue) > 0.001 {
                    value = newValue
                }
            })
        }
    }

    private func horizontalTickMark(for index: Int) -> some View {
        let isMajorTick = index % config.steps == 0
        let tickValue = index / config.steps

        return HStack(spacing: 8) {
            Rectangle()
                .fill(isMajorTick ? Color.accentColor : Color.gray.opacity(0.5))
                .frame(width: isMajorTick ? 24 : 12, height: isMajorTick ? 2 : 1)

            if isMajorTick {
                Text("\(tickValue * 5)")
                    .font(.system(size: 12, weight: .medium))
                    .foregroundColor(.primary)
                    .fixedSize()
            }
        }
        .frame(maxWidth: .infinity, alignment: .trailing)
        .padding(.trailing, 8)
    }
}

#Preview("Vertical Scrubber") {
    struct VerticalScrubberPreview: View {
        @State private var value: CGFloat = 0
        private let config = ScrubberConfig(count: 20, steps: 5, spacing: 8)

        var body: some View {
            VStack {
                Text("Vertical Scrubber (0–100 in steps of 5)")
                    .font(.title2)
                    .padding()

                HStack(spacing: 30) {
                    VerticalScrubber(config: config, value: $value)
                        .frame(width: 120, height: 300)
                        .background(Color(.systemBackground))
                        .border(Color.gray.opacity(0.3))

                    VStack {
                        Text("Current Value:")
                            .font(.headline)
                        Text("\(value * 5, specifier: "%.0f")")
                            .font(.system(size: 36, weight: .bold))
                            .padding()

                        HStack {
                            Button("−5") {
                                let newValue = max(0, value - 1)
                                if value != newValue {
                                    value = newValue
                                    UISelectionFeedbackGenerator().selectionChanged()
                                }
                                print("Value \(newValue), \(value)")
                            }
                            .disabled(value <= 0)

                            Button("+5") {
                                let newValue = min(CGFloat(config.count), value + 1)
                                if value != newValue {
                                    value = newValue
                                    UISelectionFeedbackGenerator().selectionChanged()
                                }
                                print("Value \(newValue), \(value)")
                            }
                            .disabled(value >= CGFloat(config.count))
                        }
                        .buttonStyle(.bordered)
                    }
                }

                Spacer()
            }
            .padding()
        }
    }

    return VerticalScrubberPreview()
}
```
3 replies · 0 boosts · 166 views · Jul ’25