I have an iOS application view that contains an AVCaptureSession, an AVCaptureVideoPreviewLayer (created with that session), and a UIImageView (behind the scenes, the app takes the output of the AVCaptureSession, runs it through a semantic segmentation model, and displays the result in the UIImageView).
When I pause the app and run “Debug View Hierarchy”, it shows the UIImageView and the relevant buttons and labels.
However, it does not show the AVCaptureVideoPreviewLayer that I have set up in my application.
Is there some special setup needed to be able to inspect camera-related features?
The following is the part of the view code that renders the AVCaptureVideoPreviewLayer (not sure if this is enough, please let me know if it's not):
import AVFoundation
import SwiftUI
import UIKit

class CameraViewController: UIViewController {
    var session: AVCaptureSession?
    var frameRect: CGRect = CGRect()
    var rootLayer: CALayer! = nil

    private var previewLayer: AVCaptureVideoPreviewLayer! = nil

    init(session: AVCaptureSession) {
        self.session = session
        super.init(nibName: nil, bundle: nil)
    }

    required init?(coder: NSCoder) {
        super.init(coder: coder)
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        setUp(session: session!)
    }

    private func setUp(session: AVCaptureSession) {
        // Create the preview layer from the shared capture session and size it to the supplied frame.
        previewLayer = AVCaptureVideoPreviewLayer(session: session)
        previewLayer.videoGravity = .resizeAspectFill
        previewLayer.frame = self.frameRect

        // Attach the preview layer to the view's layer on the main thread.
        DispatchQueue.main.async { [weak self] in
            guard let self = self else { return }
            self.view.layer.addSublayer(self.previewLayer)
            //self.view.layer.addSublayer(self.detectionLayer)
        }
    }
}

struct HostedCameraViewController: UIViewControllerRepresentable {
    var session: AVCaptureSession!
    var frameRect: CGRect

    func makeUIViewController(context: Context) -> CameraViewController {
        let viewController = CameraViewController(session: session)
        viewController.frameRect = frameRect
        return viewController
    }

    func updateUIViewController(_ uiView: CameraViewController, context: Context) {
    }
}
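For completeness, this is roughly how the component is embedded from SwiftUI; the surrounding view here is a simplified, hypothetical stand-in for my actual screen:

struct CameraScreen: View {
    let session: AVCaptureSession

    var body: some View {
        GeometryReader { proxy in
            // Pass the shared capture session and the available frame down to the hosted controller.
            HostedCameraViewController(
                session: session,
                frameRect: CGRect(origin: .zero, size: proxy.size)
            )
        }
    }
}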
So it seems that there is a contradiction between how ARKit defines UIDeviceOrientation.landscapeRight and the actual definition of UIDeviceOrientation.landscapeRight in the UIKit documentation.
In the ARKit documentation for ARCamera.transform, it says the following:
This transform creates a local coordinate space for the camera that is constant with respect to device orientation. In camera space, the x-axis points to the right when the device is in UIDeviceOrientation.landscapeRight orientation—that is, the x-axis always points along the long axis of the device, from the front-facing camera toward the Home button. The y-axis points upward (with respect to UIDeviceOrientation.landscapeRight orientation), and the z-axis points away from the device on the screen side.
Going through the same link, we see the definition of UIDeviceOrientation.landscapeRight given as:
The device is in landscape mode, with the device held upright and the front-facing camera on the right side.
There seems to be a conflict between the two definitions, which has already been asked about and visualized in this StackOverflow thread.
The resolution given in that answer is that ARKit's landscapeRight, unlike what is stated for UIDeviceOrientation.landscapeRight, has the Home button on the right, as described in the ARCamera.transform documentation.
It says that more details are given in this StackOverflow thread, but that thread discusses the discrepancy between the definitions of landscapeRight in UIDeviceOrientation and UIInterfaceOrientation, and nothing related to ARKit.
So I am wondering: why does ARKit's definition of landscapeRight contradict that of UIDeviceOrientation, despite explicitly referencing it? Is it just a mistake by Apple's developers that hasn't been resolved even after so long?
Hi all,
I'm working on an ARKit-based iOS app where I need to accurately determine the direction the device is facing to localize objects in the real world. I'm using:
let config = ARWorldTrackingConfiguration()
config.worldAlignment = .gravityAndHeading
Thus, I would expect the world alignment to behave as described on the gravityAndHeading documentation page.
The AR session is started after verifying that CLLocationManager.headingAccuracy <= 20, and the compass appears to be calibrated.
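For context, the calibration check before starting the session looks roughly like this (a minimal sketch with illustrative names; the real code lives inside a larger location-manager wrapper):

import CoreLocation

final class HeadingGate: NSObject, CLLocationManagerDelegate {
    private let locationManager = CLLocationManager()
    /// Called once the compass reports an acceptable accuracy, so the AR session can start.
    var onCalibratedHeading: (() -> Void)?

    func start() {
        locationManager.delegate = self
        locationManager.startUpdatingHeading()
    }

    func locationManager(_ manager: CLLocationManager, didUpdateHeading newHeading: CLHeading) {
        // A negative accuracy means the heading is invalid; otherwise require <= 20 degrees of error.
        if newHeading.headingAccuracy >= 0 && newHeading.headingAccuracy <= 20 {
            onCalibratedHeading?()
        }
    }
}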
However, I'm seeing a major inconsistency:
When the rear camera is physically pointed toward true North, I would expect:
cameraTransform.columns.2.z ≈ -1 // (i.e. ARKit's -Z pointing North)
But instead, I'm consistently seeing:
cameraTransform.columns.2.z ≈ +0.97 // Implies camera is facing South
Meanwhile, the translation vector behaves as expected:
As I physically move North, cameraTransform.columns.3.z becomes more negative, matching the world’s +Z = South assumption.
For example, let's say I have the device in landscapeRight (or landscapeLeft for UIDeviceOrientation). Let's say the device rear camera is pointing towards True North, and I start moving towards True North. I get something like this:
Camera Transform = simd_float4x4(
[
[0.98446155, -0.030119859, 0.172998, 0.0],
[0.023979114, 0.9990097, 0.037477385, 0.0],
[-0.17395553, -0.032746706, 0.98420894, 0.0],
[0.024039675, -0.037087332, -0.22780673, 0.99999994]
])
As you can see, cameraTransform.columns.2.z is positive even though the rear camera is pointing towards True North, while cameraTransform.columns.3.z is correctly negative as the device moves towards True North.
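For reference, this is roughly how I'm reading those values out of each frame (a minimal sketch; session setup and the rest of the delegate are elided, and the function name is just illustrative):

import ARKit

func logHeading(for frame: ARFrame) {
    let transform = frame.camera.transform        // simd_float4x4, column-major
    let zAxis = transform.columns.2               // camera z-axis expressed in world space
    let position = transform.columns.3            // camera position in world space
    print("columns.2.z =", zAxis.z)               // ≈ +0.97 while the rear camera faces True North
    print("columns.3.z =", position.z)            // grows more negative as I move North
}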
So here is my question:
Why is cameraTransform.columns.2.z positive when the rear camera is physically facing North?
Any clarity would be deeply appreciated. I've read the documentation and tested with different heading accuracies and AR session resets, but I keep running into this orientation mismatch.
Thanks in advance!
Hello All,
I’m working on a computer-vision–heavy iOS application that uses the camera, LiDAR depth maps, and semantic segmentation to reason about the environment (object identification, localization and measurement - not just visualization).
Current architecture
I initially built the image pipeline around CIImage as a unifying abstraction. It seemed like a good idea because:
CIImage integrates cleanly with Vision, ARKit, AVFoundation, Metal, Core Graphics, etc.
It provides a rich set of out-of-the-box transforms and filters.
It is immutable and thread-safe, which significantly simplified concurrency in a multi-queue pipeline.
The LiDAR depth maps, semantic segmentation masks, etc. were treated as CIImages, with conversion to CVPixelBuffer or MTLTexture only at the edges when required.
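Roughly, the current pattern looks like this (a simplified sketch; the function names are just illustrative):

import CoreImage
import CoreVideo
import Metal

// Everything enters the pipeline as CIImage…
func wrap(_ depthBuffer: CVPixelBuffer) -> CIImage {
    CIImage(cvPixelBuffer: depthBuffer)
}

// …and is only converted back to a GPU texture where a Metal stage needs it.
func render(_ image: CIImage,
            into texture: MTLTexture,
            using context: CIContext,
            commandBuffer: MTLCommandBuffer) {
    context.render(image,
                   to: texture,
                   commandBuffer: commandBuffer,
                   bounds: image.extent,
                   colorSpace: CGColorSpaceCreateDeviceRGB())
}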
Problem
I’ve run into cases where Core Image transformations do not preserve numeric fidelity for non-visual data.
Example:
Rendering a CIImage-backed segmentation mask into a larger CVPixelBuffer can cause label values to change in predictable but incorrect ways.
This occurs even when:
using nearest-neighbor sampling
disabling color management (workingColorSpace / outputColorSpace = NSNull)
applying identity or simple affine transforms
I’ve confirmed via controlled tests that:
Metal → CVPixelBuffer paths preserve values correctly
CIImage → CVPixelBuffer paths can introduce value changes when resampling or expanding the render target
This makes CIImage unsafe as a source of numeric truth for segmentation masks and depth-based logic, even though it works well for visualization, and I should have realized this much sooner.
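For reference, the CIImage → CVPixelBuffer path in question looks roughly like this (a minimal sketch; buffer allocation and pixel formats are elided):

import CoreImage
import CoreVideo

// Color management disabled, so values should in principle pass through untouched.
let maskContext = CIContext(options: [
    .workingColorSpace: NSNull(),
    .outputColorSpace: NSNull()
])

func render(mask: CIImage, into target: CVPixelBuffer) {
    // Nearest-neighbor sampling and an identity transform only.
    let resampled = mask
        .samplingNearest()
        .transformed(by: .identity)
    // Rendering into a larger target buffer is where label values drift.
    maskContext.render(resampled, to: target)
}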
Direction I’m considering
I’m now considering refactoring toward more intent-based abstractions instead of a single image type, for example:
Visual images: CIImage (camera frames, overlays, debugging, UI)
Scalar fields: depth / confidence maps backed by CVPixelBuffer + Metal
Label maps: segmentation masks backed by integer-preserving buffers (no interpolation, no transforms)
In this model, CIImage would still be used extensively — but primarily for visualization and perceptual processing, not as the container for numerically sensitive data.
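Concretely, I'm imagining a separation along these lines (the type names are just placeholders):

import CoreImage
import CoreVideo

/// Perceptual image data: camera frames, overlays, debug visualization.
struct VisualImage {
    let image: CIImage
}

/// Continuous scalar data (depth, confidence): no color management, no filters.
struct ScalarField {
    let buffer: CVPixelBuffer   // e.g. kCVPixelFormatType_DepthFloat32
}

/// Categorical data (segmentation labels): integer-preserving, never resampled.
struct LabelMap {
    let buffer: CVPixelBuffer   // e.g. kCVPixelFormatType_OneComponent8
}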
Thread safety concern
One of the original advantages of CIImage was that it is thread-safe by design; that was my biggest incentive for choosing it.
For CVPixelBuffer / MTLTexture–backed data, I’m considering enforcing thread safety explicitly via:
Swift Concurrency (actor-owned data, explicit ownership)
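For example, something along these lines (a sketch only; it ignores Sendable strictness around CVPixelBuffer and omits error handling):

import CoreVideo

// An actor that owns the most recent depth buffer and serializes all access to it.
actor DepthStore {
    private var latest: CVPixelBuffer?

    func update(_ buffer: CVPixelBuffer) {
        latest = buffer
    }

    // Reads a single Float32 sample, assuming a kCVPixelFormatType_DepthFloat32 buffer.
    func depth(atColumn x: Int, row y: Int) -> Float? {
        guard let buffer = latest else { return nil }
        CVPixelBufferLockBaseAddress(buffer, .readOnly)
        defer { CVPixelBufferUnlockBaseAddress(buffer, .readOnly) }
        guard x >= 0, y >= 0,
              x < CVPixelBufferGetWidth(buffer),
              y < CVPixelBufferGetHeight(buffer),
              let base = CVPixelBufferGetBaseAddress(buffer) else { return nil }
        let bytesPerRow = CVPixelBufferGetBytesPerRow(buffer)
        return base.advanced(by: y * bytesPerRow)
            .assumingMemoryBound(to: Float32.self)[x]
    }
}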
Questions
For those who may have experience with CV / AR / imaging-heavy iOS apps, I was hoping to learn the following:
Is this separation of image intent (visual vs numeric vs categorical) a reasonable architectural direction?
Do you generally keep CIImage at the heart of your pipeline, or push it to the edges (visualization only)?
How do you manage thread safety and ownership when working heavily with CVPixelBuffer and Metal? Do you use actor-based abstractions, GCD, or something more ad hoc?
Are there any best practices or gotchas around using Core Image with depth maps or segmentation masks that I should be aware of?
I’d really appreciate any guidance or experience-based advice. I suspect I’ve hit a boundary of Core Image’s design, and I’m trying to refactor in a way that doesn’t introduce too much immediate tech debt and remains robust and maintainable long-term.
Thank you in advance!