I want to know: are the depth map and the RGB image perfectly aligned (do both have the same principal point)? If yes, how is the depth map created?
The depth map on iPhone 12 has 256x192 resolution, as opposed to the RGB image (1920x1440). I am interested in exact pixel-wise depth. Is it possible to get a raw depth map at 1920x1440 resolution?
How is the depth map created at 256x192 resolution? Behind the scenes, does the pipeline capture it at 1920x1440 and then resize it to 256x192?
I have so many questions because no intrinsics, extrinsics, or calibration data are given for the LiDAR sensor.
I would greatly appreciate it if someone could explain the steps from a computer-vision perspective.
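For reference, this is roughly how I read the depth and intrinsics today (a minimal sketch, assuming a LiDAR device and a configuration with the .sceneDepth frame semantic enabled; scaling the RGB intrinsics down to 256x192 is exactly the assumption I would like to verify):
import ARKit
import simd

// Sketch: pull the low-resolution depth map and the RGB intrinsics from one ARFrame.
// Scaling fx, fy, cx, cy by the resolution ratio assumes the depth map and the RGB
// image share the same principal point (which is the open question above).
func inspect(frame: ARFrame) {
    guard let sceneDepth = frame.sceneDepth else { return }

    let depthMap = sceneDepth.depthMap                  // 256x192, Float32 metres
    let rgbResolution = frame.camera.imageResolution    // 1920x1440 on iPhone 12
    let K = frame.camera.intrinsics                     // intrinsics of the RGB image

    let scaleX = Float(CVPixelBufferGetWidth(depthMap)) / Float(rgbResolution.width)
    let scaleY = Float(CVPixelBufferGetHeight(depthMap)) / Float(rgbResolution.height)
    let depthK = simd_float3x3(rows: [
        SIMD3(K[0][0] * scaleX, 0,                K[2][0] * scaleX),
        SIMD3(0,                K[1][1] * scaleY, K[2][1] * scaleY),
        SIMD3(0,                0,                1)
    ])
    _ = depthK
}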
Many Thanks
ARKit
Integrate iOS device camera and motion features to produce augmented reality experiences in your app or game using ARKit.
Hi,
since iOS 15 I've repeatedly noticed the console warning »ARSessionDelegate is retaining X ARFrames. This can lead to future camera frames being dropped«, even for rather simple projects using RealityKit and ARKit. Could someone from the ARKit team please elaborate on what causes this warning and what can be done to avoid it?
If I remember correctly, I didn't even assign an ARSessionDelegate.
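For context, when I do use a delegate I only copy values out of the frames, roughly like this (a sketch; my assumption is that storing the ARFrame objects themselves is what triggers the warning, since they keep the camera buffers alive):
import ARKit

final class FrameHandler: NSObject, ARSessionDelegate {
    // Copy out only what is needed instead of storing ARFrame objects.
    private var latestCameraTransform: simd_float4x4?

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        latestCameraTransform = frame.camera.transform
        // Avoid something like: self.storedFrames.append(frame)
    }
}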
Thank you!
Hi everyone! I am working on an AR app and wanted to implement object occlusion because it pretty much removes drift from the object. This works great with the RealityKit sample, but I am unable to replicate that behaviour with SceneKit, because SceneKit does not offer object occlusion. Can we say SceneKit is getting deprecated, and should we rewrite the app in RealityKit (which is obviously a big task)?
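For reference, this is roughly how I enable occlusion in RealityKit today (a sketch of my current setup on a LiDAR device); I could not find an equivalent switch in SceneKit:
import ARKit
import RealityKit

// Mesh-based scene understanding plus the .occlusion option gives RealityKit occlusion.
func enableOcclusion(on arView: ARView) {
    let configuration = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
        configuration.sceneReconstruction = .mesh
    }
    arView.environment.sceneUnderstanding.options.insert(.occlusion)
    arView.session.run(configuration)
}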
I’d like to use ARKit world tracking and display both the back camera feed and the front camera feed, using the front feed as a PIP. This would work great for an internet streaming use case.
However, it’s impossible. As soon as ARKit is told to use one mode, the camera for the other side freezes/doesn’t work. This page also says you have to pick one camera to show: https://developer.apple.com/documentation/arkit/arkit_in_ios/choosing_which_camera_feed_to_augment?language=objc
A question to the developers: why is this limitation in-place? Are there any work-arounds for the use case of ARKit world tracking + displaying the back camera feed + displaying the front camera feed as an overlay?
It’s possible to do this with plain camera initialization without ARKit. (There’s an official example.) With ARKit, it no longer works.
It’s strange that I cannot access the front feed via one of the other frameworks, but I guess that ARKit blocks that.
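For comparison, this is the non-ARKit path I mean (a sketch along the lines of the official multi-cam sample, reduced to the capability check and device lookup; names and error handling are simplified):
import AVFoundation

// AVCaptureMultiCamSession can run the front and back cameras at the same time
// on supported hardware, without ARKit involved.
func makeMultiCamSession() -> AVCaptureMultiCamSession? {
    guard AVCaptureMultiCamSession.isMultiCamSupported else { return nil }
    let session = AVCaptureMultiCamSession()
    for position in [AVCaptureDevice.Position.back, .front] {
        guard let device = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: position),
              let input = try? AVCaptureDeviceInput(device: device),
              session.canAddInput(input) else { continue }
        session.addInput(input)
    }
    return session
}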
How do we author a Reality File like the ones with animations under Examples at https://developer.apple.com/augmented-reality/quick-look/ ?
For example, "The Hab" : https://developer.apple.com/augmented-reality/quick-look/models/hab/hab_en.reality
Tapping on various buttons in this experience triggers various complex animations. I don't see any way to accomplish this in Reality Composer.
And I don't see any way to export/compile to a "reality file" from within Xcode.
How can I use multiple animations within a single GLTF file?
How can I set up multiple "tap targets" on a single object, where each one triggers a different action?
How do we author something similar? What tools do we use?
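To make the question more concrete, this is roughly what I can already do in code with RealityKit (a sketch; the class name and the per-entity branching are just placeholders), as a stand-in for what I would like to author visually:
import UIKit
import RealityKit

final class QuickLookLikeController: UIViewController {
    @IBOutlet var arView: ARView!

    func configure(root: Entity) {
        // A single USDZ/Reality file can carry several animations.
        if let intro = root.availableAnimations.first {
            root.playAnimation(intro.repeat())
        }
        // One collision shape per model child lets each child act as its own tap target.
        for case let model as ModelEntity in root.children {
            model.generateCollisionShapes(recursive: true)
        }
        arView.addGestureRecognizer(UITapGestureRecognizer(target: self, action: #selector(handleTap(_:))))
    }

    @objc private func handleTap(_ recognizer: UITapGestureRecognizer) {
        guard let tapped = arView.entity(at: recognizer.location(in: arView)) else { return }
        print("Tapped:", tapped.name) // branch here: trigger a different animation per entity
    }
}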
Thanks
I am working with MeshAnchors, and I am having trouble getting to the classification of the triangles/faces.
This post references the MeshAnchor.Geometry, and that struct does have a property named "classifications", but it is of type GeometrySource. I cannot find any classification information in GeometrySource. Am I missing something there?
I think I am looking for something of type MeshAnchor.MeshClassification, but I cannot find any structs with this as a property.
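This is what I am currently trying (a sketch that reads the raw values out of the classifications GeometrySource by hand; the one-8-bit-value-per-face layout is my own assumption, not something I found documented):
import ARKit

// Returns the raw per-face classification bytes from a MeshAnchor, assuming the
// classifications source stores one UInt8 per face. Mapping these raw values to
// MeshAnchor.MeshClassification cases is exactly the part I am unsure about.
func rawClassifications(of meshAnchor: MeshAnchor) -> [UInt8] {
    guard let source = meshAnchor.geometry.classifications else { return [] }
    let base = source.buffer.contents().advanced(by: source.offset)
    return (0..<source.count).map { index in
        base.advanced(by: index * source.stride)
            .assumingMemoryBound(to: UInt8.self).pointee
    }
}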
Hello Community,
I'm encountering an issue with the latest iOS 17 update, specifically related to RoomPlan version-2. In iOS 16, when using RoomPlan version-1, we were able to display stairs in our app. However, after upgrading to iOS 17 and implementing RoomPlan version-2, the stairs are no longer visible.
Despite thorough investigation, I couldn't find any option within the code to show or hide stairs, or any other objects for that matter. It seems like a specific issue with the update rather than a coding error on our part.
Has anyone else encountered a similar problem? If so, I would greatly appreciate any insights or solutions you might have. It's crucial for our app functionality to have stairs displayed accurately, and we're currently at a loss on how to address this issue.
Thank you in advance for any assistance you can provide.
Best regards
I am planning to build a VisionOS app and need to get access to the persona (avatar). I have not found any information regarding integration possibilities in the docs. Does anyone know if and how I can access the user's persona?
Other applications like Zoom and Teams for VisionOS use the persona, so I think it is basically possible. Apparently (if it's not fake) there is also a chess game with integrated persona: https://www.youtube.com/watch?v=mMzK8C3t14I
Any help is very welcome, thanks.
I tested the new visionOS object tracking and it worked really well.
I created a reference object using Create ML, and the object was indeed detected.
My question is: does this also work on iOS and, if not right now, is it planned to come to iOS in the future?
When I wanted to load the Reality Composer Pro scene containing object tracking, I tried the following code:
RealityView { content in
    if let model = try? await Entity(named: "Scene", in: realityKitContentBundle) {
        content.add(model)
    }
}
Obviously, this alone is not enough. We need to add some configuration to the RealityView that enables object tracking. What do we need to add?
Note: I have seen https://developer.apple.com/videos/play/wwdc2024/10101/, but I don't understand much of it yet.
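My current guess at the missing piece, based on that session, looks something like this (an untested sketch; "MyObject.referenceobject" is a placeholder for the file exported from Create ML, and I am not sure this is how the scene is supposed to be wired up):
import ARKit
import RealityKit

// Object tracking seems to need its own ARKitSession + ObjectTrackingProvider;
// the entity loaded from the Reality Composer Pro bundle then follows the anchor.
func runObjectTracking(root: Entity) async throws {
    guard let url = Bundle.main.url(forResource: "MyObject", withExtension: "referenceobject") else { return }
    let referenceObject = try await ReferenceObject(from: url)

    let session = ARKitSession()
    let provider = ObjectTrackingProvider(referenceObjects: [referenceObject])
    try await session.run([provider])

    for await update in provider.anchorUpdates {
        // Follow the tracked object with the loaded scene content.
        root.transform = Transform(matrix: update.anchor.originFromAnchorTransform)
    }
}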
Hello All,
I'm desperate to find a solution and I need your help, please.
I've created a simple cube in visionOS. I can grab it with my hand (close my hand on it) and move it pretty much wherever I want. But I would like to throw it (for example, like a basketball): not push it, but hold it in my hand and throw it away from me with a velocity and direction matching my hand movement (fingers opening to release it).
Please point me in the right direction to do that.
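This is roughly what I have in mind for the release step (a sketch; estimatedHandVelocity is a placeholder for however that velocity gets computed from the last few hand positions):
import RealityKit

// On release, switch the cube to a dynamic physics body and hand it the estimated velocity.
func release(cube: ModelEntity, estimatedHandVelocity: SIMD3<Float>) {
    var body = cube.components[PhysicsBodyComponent.self] ?? PhysicsBodyComponent()
    body.mode = .dynamic
    cube.components.set(body)
    cube.components.set(PhysicsMotionComponent(linearVelocity: estimatedHandVelocity,
                                               angularVelocity: .zero))
}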
Cheers and thanks
Mathis
We are experiencing a crash when attempting to capture a high resolution frame. The crash only happens on the A12 and A12X devices (iPhone11,8 or iPad8,7). The crash did occur on older versions of iOS and continues to happen with more recent versions, iOS 17.6 and iPadOS 17.5.1.
Any ideas on how to get a high-resolution image & frame from an AR Session using these devices?
To reproduce the crash:
Use an A12 or A12X device. Other devices have not produced the same result.
Set up an AR session that can capture high-resolution static images:
Create a class that subclasses UIViewController, conforms to ARSessionDelegate, and is connected to an interface with an ARView.
@IBOutlet var arView: ARView!
During view setup configure the AR session:
let configuration = ARBodyTrackingConfiguration()
if let hiResFormat = ARBodyTrackingConfiguration.recommendedVideoFormatForHighResolutionFrameCapturing {
    configuration.videoFormat = hiResFormat
}
self.arView.session.run(configuration)
Trigger code to capture the high resolution image (we call this from a button @IBAction):
Task {
    let hiResFrame = try? await self.arView.session.captureHighResolutionFrame()
    print("crash above. this print never occurs")
}
The crash occurs with both the sync & async versions of the captureHighResolutionFrame call.
The code above crashes in Apple's AltruisticBodyPoseKit:
0 libsystem_kernel.dylib 0xc2ec __pthread_kill + 8
1 libsystem_pthread.dylib 0x7c0c pthread_kill + 268
2 libsystem_c.dylib 0x75ba0 abort + 180
3 libsystem_c.dylib 0x74eac err + 282
4 AltruisticBodyPoseKit 0x4b2b0 cva::MatrixData<int, 0ul, 0ul, false>::allocate(unsigned long) (.cold.1) + 42
5 AltruisticBodyPoseKit 0x20dac std::__1::vector<std::__1::pair<cva::Matrix<double, 3u, 1u, false>, cva::Matrix<double, 2u, 1u, false> >, std::__1::allocator<std::__1::pair<cva::Matrix<double, 3u, 1u, false>, cva::Matrix<double, 2u, 1u, false> > > >::vector(unsigned long) + 1186
6 AltruisticBodyPoseKit 0x2058c btr::(anonymous namespace)::EstimatePoseFromCorrespondences(btr::CameraPoseInfo&, btr::Correspondences2d3d const&, bool) + 564
7 AltruisticBodyPoseKit 0x2018c btr::BodyRegistration::RegisterBody(float vector[2] const*, unsigned long, float vector[3] const*, unsigned long, simd_float4x4 const*, unsigned long, simd_float3x3 const*, simd_float4x4 const*) + 1228
8 AltruisticBodyPoseKit 0x4354c -[ABPKCameraRegistration estimateCameraPoseFromMatchingwithImageIntrinsics:imageResolution:joints2d:jointsLifted3D:jointsLifted3DCount:] + 1160
9 ARKitCore 0x131d30 -[AR3DSkeletonRegistrationTechnique _estimateCameraPoseFromMatchingImageData:to3DData:worldTrackingPose:pCameraFromBody:depthData:pScaleOut:] + 396
10 ARKitCore 0x131818 -[AR3DSkeletonRegistrationTechnique requestResultDataAtTimestamp:context:] + 388
11 ARKitCore 0x91fd8 -[ARParentTechnique technique:didOutputResultData:timestamp:context:onTechniques:] + 1400
12 ARKitCore 0x91a28 -[ARParentTechnique technique:didOutputResultData:timestamp:context:] + 112
13 ARKitCore 0x8150c -[ARExposureLightEstimationTechnique requestResultDataAtTimestamp:context:] + 352
14 ARKitCore 0x91fd8 -[ARParentTechnique technique:didOutputResultData:timestamp:context:onTechniques:] + 1400
15 ARKitCore 0x91a28 -[ARParentTechnique technique:didOutputResultData:timestamp:context:] + 112
16 ARKitCore 0xcd9c4 -[ARWorldAlignmentTechnique requestResultDataAtTimestamp:context:] + 1044
17 ARKitCore 0x91fd8 -[ARParentTechnique technique:didOutputResultData:timestamp:context:onTechniques:] + 1400
18 ARKitCore 0x91a28 -[ARParentTechnique technique:didOutputResultData:timestamp:context:] + 112
19 ARKitCore 0x92584 -[ARParentTechnique _submitResultsForTimestamp:context:] + 396
20 ARKitCore 0x90124 __71-[ARParentTechnique requestResultDataAtTimestamp:context:onTechniques:]_block_invoke_3 + 72
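Since the trace points into AltruisticBodyPoseKit, one thing we plan to test is whether a plain world-tracking configuration captures high-resolution frames on these devices without crashing (sketch below); that of course only helps when body tracking isn't strictly required:
import ARKit
import RealityKit

// Same high-resolution capture setup, but without the body-tracking technique
// that appears in the crash trace.
func runWorldTrackingForHiResCapture(on arView: ARView) {
    let configuration = ARWorldTrackingConfiguration()
    if let hiResFormat = ARWorldTrackingConfiguration.recommendedVideoFormatForHighResolutionFrameCapturing {
        configuration.videoFormat = hiResFormat
    }
    arView.session.run(configuration)
}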
Hello there,
I'm currently working on a hand-tracking system. I've already placed spheres on some joint points of the left and right hands. Now I want to access the translation/position values of these entities in the update(context:) function. My question is: is it possible to access them via .handAnchors(), and which handSkeleton.joint(name) references the same entity? (E.g., is AnchorEntity(.hand(.right, location: .indexFingerTip)) the same as handSkeleton.joint(.indexFingerTip)?) The goal is to access the translation of the joints where a sphere has been placed, per hand, and to update the data every frame through the update(context:) function.
I would very much appreciate any help!
See code example down below:
ImmersiveView.swift
import SwiftUI
import RealityKit
import ARKit

struct ImmersiveView: View {
    public var body: some View {
        RealityView { content in
            /* HEAD */
            let headEntity = AnchorEntity(.head)
            content.add(headEntity)

            /* LEFT HAND */
            let leftHandWristEntity = AnchorEntity(.hand(.left, location: .wrist))
            let leftHandIndexFingerEntity = AnchorEntity(.hand(.left, location: .indexFingerTip))
            let leftHandWristSphere = ModelEntity(mesh: .generateSphere(radius: 0.02), materials: [SimpleMaterial(color: .red, isMetallic: false)])
            let leftHandIndexFingerSphere = ModelEntity(mesh: .generateSphere(radius: 0.01), materials: [SimpleMaterial(color: .orange, isMetallic: false)])

            leftHandWristEntity.addChild(leftHandWristSphere)
            content.add(leftHandWristEntity)
            leftHandIndexFingerEntity.addChild(leftHandIndexFingerSphere)
            content.add(leftHandIndexFingerEntity)
        }
    }
}
TrackingSystem.swift
import SwiftUI
import simd
import ARKit
import RealityKit
import QuartzCore

public class TrackingSystem: System {
    static let query = EntityQuery(where: .has(AnchoringComponent.self))

    private let arKitSession = ARKitSession()
    private let worldTrackingProvider = WorldTrackingProvider()
    private let handTrackingProvider = HandTrackingProvider()

    public required init(scene: RealityKit.Scene) {
        setUpSession()
    }

    private func setUpSession() {
        Task {
            do {
                try await arKitSession.run([worldTrackingProvider, handTrackingProvider])
            } catch {
                print("Error: \(error)")
            }
        }
    }

    public func update(context: SceneUpdateContext) {
        guard worldTrackingProvider.state == .running && handTrackingProvider.state == .running else { return }
        let currentTime = CACurrentMediaTime()   // timestamp for the device/hand anchor queries
        let _ = context.entities(matching: Self.query, updatingSystemWhen: .rendering)

        if let avp = worldTrackingProvider.queryDeviceAnchor(atTimestamp: currentTime) {
            let hands = handTrackingProvider.handAnchors(at: currentTime)
            ...
        }
    }
}
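To make the question more concrete, this is what I would do with the anchors returned by handAnchors(at:) (a sketch that would live in TrackingSystem.swift above; my assumption is that this yields the same pose the corresponding AnchorEntity follows, which is what I would like confirmed):
// World-space index fingertip position derived from a HandAnchor.
func indexFingerTipWorldPosition(of hand: HandAnchor?) -> SIMD3<Float>? {
    guard let hand, hand.isTracked,
          let joint = hand.handSkeleton?.joint(.indexFingerTip),
          joint.isTracked else { return nil }
    let worldFromJoint = hand.originFromAnchorTransform * joint.anchorFromJointTransform
    return SIMD3(worldFromJoint.columns.3.x, worldFromJoint.columns.3.y, worldFromJoint.columns.3.z)
}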
Is there any way to reset the scan memory Vision Pro stores on-device, so that every new scan in my application starts from scratch rather than the surroundings being instantly recognized? In the Apple Vision Pro Privacy overview (https://www.apple.com/privacy/docs/Apple_Vision_Pro_Privacy_Overview.pdf), it is stated:
"visionOS builds a three-dimensional model to map your surroundings on-device. Apple Vision Pro uses a combination of camera and LiDAR data to map the area around you and save that model on-device. The model enables visionOS to alert you about real-life obstacles, as well as appropriately reflect the lighting and shadows of your physical space. visionOS uses audio ray tracing to analyze your room’s acoustic properties on-device to adapt and match sound to your space. The underlying scene mesh is stored on-device and encrypted with your passcode if one is set"
How can I access and erase the, and I quote, “underlying scene mesh stored on-device”?
Hello,
Has anyone had success with implementing object tracking in Unity or adding native tracking capability to the VisionOS project built from Unity?
I am working on an application for Vision Pro mainly in Unity using Polyspatial. The application requires me to track objects and make decisions based on tracked object's location. I was able to create an object tracking application on Native Swift, but could not successfully combine this with my Unity project yet. Each separate project (Main Unity app using Polyspatial and the native app on Swift) can successfully build and be deployed onto VisionPro.
I know that Polyspatial and ARFoundation do not support ARKit's object tracking feature on Vision Pro as of today; they only support image tracking inside Unity. For that reason I have been exploring different ways of creating a bridge for two-way interaction between the native tracking functionality and the rest of the functionality in Unity.
Below are the methods I have tried so far without success:
Package the tracking functionality as a Swift plugin and access it in Unity, then build for Vision Pro: I can create packages and access them for simple exposed variables and methods, but not for outputs and methods from ARKit, which throw dependency errors while trying to make the Swift package.
Build the project from Unity for Vision Pro and expose a boolean to start/stop tracking that can be read by the native code, then carry the tracking classes into the built project. In this approach I keep getting an error that says _TrackingStateChanged cannot be found, which is the symbol that exposes the bool toggled by the Unity button press:
using System.Runtime.InteropServices;

public class UnityBridge
{
    [DllImport("__Internal")]
    private static extern void TrackingStateChanged(bool isTracking);

    public static void NotifyTrackingState()
    {
        // Call the Swift method
        TrackingStateChanged(TrackingStartManager.IsTrackingActive());
    }
}
This seems to be translated to C++ code in the IL2CPP output from Unity, and even though I made sure that all necessary packages were added to the target, I keep receiving this error from the UnityFramework plugin:
Undefined symbol: _TrackingStateChanged
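My current understanding of the missing piece on the Swift side is something like the following (an untested sketch; my assumption is that the symbol Unity's DllImport("__Internal") looks for must be exported with a C name and linked into the final app target):
import Foundation

// Export a C-named entry point that matches the DllImport declaration above.
@_cdecl("TrackingStateChanged")
public func trackingStateChanged(_ isTracking: Bool) {
    // Forward to whatever owns the ARKit object-tracking session.
    print("Tracking toggled from Unity:", isTracking)
}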
I have considered extending the current Image Tracking approach in ARFoundation to include object tracking, but that seems to be too complicated for my use case and time frame for now.
The final resort will be to forego Unity implementation and do everything in native code. However, I really want to be able to use Unity's conveniences and I have very limited experience with Swift development.
Hello
We are exploring the iOS 17 RoomPlan updates that allow for a custom ARSession to be passed into the RoomCaptureSession via the new initializer.
let roomCaptureSession = RoomCaptureSession(arSession: myARSession)
Currently we use our ARSession to extract sceneDepth from the ARFrames via the delegate callback. This works prior to activation of the RoomCaptureSession via session.run(configuration).
However, when we do call run on the RoomCaptureSession, sceneDepth is no longer present on the incoming ARFrames.
Are these mutually exclusive? Should we expect ARFrame depth data to be present when a RoomCaptureSession is running with the shared ARSession?
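For reference, this is roughly how we read depth today (a sketch; process(depthMap:) stands in for our own processing code):
import ARKit

final class DepthReader: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        if let depth = frame.sceneDepth {
            process(depthMap: depth.depthMap)                 // present before RoomCaptureSession.run(_:)
        } else {
            print("sceneDepth missing at", frame.timestamp)   // what we see after run(_:)
        }
    }
    private func process(depthMap: CVPixelBuffer) { /* our own processing */ }
}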
Devices running iOS 18 using RealityKit do not seem to receive lighting supplied via ARKit Environment Texturing (https://developer.apple.com/documentation/arkit/arworldtrackingconfiguration/2977509-environmenttexturing).
Instead just a default IBL is used by RealityKit.
This happens with RealityView as well as ARView.
It also happens when I explicitly opt-in to environment texturing:
let worldTrackingConfig = ARWorldTrackingConfiguration()
worldTrackingConfig.environmentTexturing = .automatic
arView.session.run(worldTrackingConfig)
Even the Xcode AR Template has this issue.
I'm attaching a screenshot of the sample app running on iOS 18 where it's broken and from iOS 17 where it works as expected.
I hope this can get resolved quickly since I see it as a major regression.
Feedback ID: FB15091335
UPDATE:
It works on my older iPhone XS (iOS 18 22A5282m)
Broken on iPad Pro (11-inch) (3rd generation) (iPadOS 18.0 (22A5350a))
Maybe it's related to LiDAR?
Thank you!
iOS 17 (works):
iOS 18 (broken):
I would like to implement the following but I am not sure if this is a supported use case based on the current documentation:
Run one ARKitSession with a WorldTrackingProvider in Swift for mixed immersion Metal rendering (to get the device anchor for the layer renderer drawable & view matrix)
Run another ARKitSession with a WorldTrackingProvider and a CameraFrameProvider in a different library (that is part of the same app) using the ARKit C API and using the transforms from the anchors in that session to render objects in the Swift application part.
In general, is this a supported use case or is it necessary to have one shared ARKitSession?
Assuming this is supported, will the (device) anchors from both WorldTrackingProviders reference the same world coordinate system?
Are there any performance downsides to having multiple ARKitSessions?
Thanks
Hello, I'm using the ARSessionDelegate function:
func session(_ session: ARSession, didUpdate frame: ARFrame)
to extract an HD Image
let hdframe = try? await session.captureHighResolutionFrame().capturedImage
which I later use to detect text in the image with Vision (VN). I'm using the HD picture because the text I'm looking for can be very tiny.
let requestHandler = VNImageRequestHandler(cgImage: image) //, orientation: .up, options: [:])
let textRequest = VNRecognizeTextRequest()
let vnRequests = [textRequest]
try requestHandler.perform(vnRequests)
My issue is that each time a captured HD image is extracted from the AR scene, a shutter sound is played. I'm aware that shutter sounds are important for privacy, but I'm doing this at a very high frequency, which means that my app is currently unusable when not muted.
My two questions are:
Is there any way to disable the sound in this case?
Is there a better way to constantly scan the AR video stream for text than this approach?
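The alternative I'm considering (a sketch; it trades resolution for silence, so I'm not sure the tiny text survives) is to run the recognition on the regular frame.capturedImage instead of captureHighResolutionFrame():
import ARKit
import Vision

// Text recognition on the regular camera buffer; no shutter sound is involved.
// The image orientation may need to be passed explicitly depending on device orientation.
func recognizeText(in frame: ARFrame) throws -> [String] {
    let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage)
    let request = VNRecognizeTextRequest()
    request.recognitionLevel = .accurate
    try handler.perform([request])
    return request.results?.compactMap { $0.topCandidates(1).first?.string } ?? []
}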
In lots of houses there are different levels that are still part of the same floor. What I mean is things like a few steps at the entrance that would basically count as the same story.
RoomPlan already does a nice job recognizing them during the scan, but after the StructureBuilder or the optimization step the result is not really satisfying.
Has anyone managed to handle those cases? Or do you have to scan in a specific way to capture such small differences within a level?