We are working on a world-scale AR app that leverages the device location and heading to place objects in the streets, so that they are correctly and stably anchored to certain locations.
Since geo-tracking imagery is only available in certain cities and areas, we are trying to figure out how to fall back when geo-tracking becomes unavailable as the device moves away, while still retaining good AR camera accuracy. We might need to come up with an algorithm that uses the device GPS to line up the ARCamera with our objects.
Question: Does geo-tracking always provide greater than or equal to the accuracy of world tracking, for a GPS outdoor AR experience?
If so, we can simply use the ARGeoTrackingConfiguration for the entire time, and rely on the ARView keeping itself aligned. Otherwise, we need to switch between it and ARWorldTrackingConfiguration when geo-tracking is not available and/or its accuracy is low, then roll our own algorithm to keep the camera aligned.
Thanks.
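For reference, here is a minimal sketch of the availability check that could drive the switching decision. This is an assumption about the approach, not a confirmed answer; the switching policy and any GPS-based realignment are left to the app.

import ARKit

// Hedged sketch: check geo-tracking availability for the current location and pick a
// configuration accordingly. Run on launch and again when location changes significantly.
func runBestConfiguration(on session: ARSession) {
    ARGeoTrackingConfiguration.checkAvailability { isAvailable, error in
        DispatchQueue.main.async {
            if isAvailable {
                session.run(ARGeoTrackingConfiguration())
            } else {
                // Fall back to world tracking; aligning content via GPS would be handled separately.
                session.run(ARWorldTrackingConfiguration())
            }
        }
    }
}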
I am working with MeshAnchors, and I am having trouble getting to the classification of the triangles/faces.
This post references the MeshAnchor.Geometry, and that struct does have a property named "classifications", but it is of type GeometrySource. I cannot find any classification information in GeometrySource. Am I missing something there?
I think I am looking for something of type MeshAnchor.MeshClassification, but I cannot find any structs with this as a property.
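For reference, a hedged sketch of reading a per-face classification out of that GeometrySource by indexing its Metal buffer. The assumptions here are that the classifications source stores one UInt8 per face and that MeshAnchor.MeshClassification is backed by an integer raw value; verify both against the current headers.

import ARKit

extension MeshAnchor.Geometry {
    // Hedged sketch: if `classifications` is non-optional in your SDK, drop the guard binding.
    func classification(ofFaceWithIndex index: Int) -> MeshAnchor.MeshClassification? {
        guard let classifications = self.classifications else { return nil }
        // One UInt8 per face, addressed via the source's offset and stride (an assumption).
        let address = classifications.buffer.contents()
            .advanced(by: classifications.offset + classifications.stride * index)
        let rawValue = Int(address.assumingMemoryBound(to: UInt8.self).pointee)
        return MeshAnchor.MeshClassification(rawValue: rawValue)
    }
}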
Devices running iOS 18 using RealityKit do not seem to receive lighting supplied via ARKit Environment Texturing (https://developer.apple.com/documentation/arkit/arworldtrackingconfiguration/2977509-environmenttexturing).
Instead, RealityKit just uses a default IBL.
This happens with RealityView as well as ARView.
It also happens when I explicitly opt-in to environment texturing:
let worldTrackingConfig = ARWorldTrackingConfiguration()
worldTrackingConfig.environmentTexturing = .automatic
arView.session.run(worldTrackingConfig)
Even the Xcode AR Template has this issue.
I'm attaching screenshots of the sample app running on iOS 18, where it's broken, and on iOS 17, where it works as expected.
I hope this can get resolved quickly since I see it as a major regression.
Feedback ID: FB15091335
UPDATE:
It works on my older iPhone XS (iOS 18 22A5282m)
Broken on iPad Pro (11-inch) (3rd generation) (iPadOS 18.0 (22A5350a))
Maybe it's related to LiDAR?
Thank you!
iOS 17 (works):
iOS 18 (broken):
I was watching the Developer videos, and there was mention that RealityView handles persistent world data differently and also automatically for us.
I am having an issue finding the material I need to get up to speed on that.
In ARKit, I was able to place a model with the world data and recall that .map data. It even stored a reference image for the scene to help match the world data.
I'm looking for the information on how to implement and work with those same features with RealityView, as it seems to be better/automatically integrated?
I need help being pointed in the right direction. Sample code would be amazing.
I'm trying to implement a prototype that renders virtual objects in a mixed immersive space on the camera frames captured by CameraFrameProvider.
Here is what I have done:
Get the camera's intrinsics from frame.primarySample.parameters.intrinsics
Get camera's extrinsics from frame.primarySample.parameters.extrinsics
Get the device anchor by worldTrackingProvider.queryDeviceAnchor(atTimestamp: CACurrentMediaTime())
Setup a RealityKit.RealityRenderer to render virtual objects on the captured camera frames
let realityRenderer = try RealityKit.RealityRenderer()
realityRenderer.cameraSettings.colorBackground = .outputTexture()
let cameraEntity = PerspectiveCamera()
// see https://developer.apple.com/forums/thread/770235
let cameraTransform = deviceAnchor.originFromAnchorTransform * extrinsics.inverse
cameraEntity.setTransformMatrix(cameraTransform, relativeTo: nil)
cameraEntity.camera.near = 0.01
cameraEntity.camera.far = 100
cameraEntity.camera.fieldOfViewOrientation = .horizontal
// manually calculated based on camera intrinsics
cameraEntity.camera.fieldOfViewInDegrees = 105
realityRenderer.entities.append(cameraEntity)
realityRenderer.activeCamera = cameraEntity
Virtual objects, which should be seen in the camera frames, are clipped out by the camera transform.
If I use deviceAnchor.originFromAnchorTransform as the camera transform, virtual objects can be rendered on camera frames at wrong positions (I think it is because the camera extrinsics isn't used to adjust the camera to the correct position).
My question is how to use the camera extrinsic matrix for this purpose?
Do the camera extrinsics point in a similar orientation to the device anchor, with some minor rotation and position change? Here is an extrinsics matrix from a camera frame. It seems that the directions of the Y-axis and Z-axis are flipped by the extrinsics, so the camera points in the wrong direction.
simd_float4x4([[0.9914258, 0.012555369, -0.13006608, 0.0], // X-axis
[-0.0009778949, -0.9946325, -0.10346654, 0.0], // Y-axis
[-0.13066702, 0.10270659, -0.98609203, 0.0], // Z-axis
[0.024519, -0.019568002, -0.058280986, 1.0]]) // translation
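As an aside on the hard-coded 105° above, here is a minimal sketch of deriving the horizontal field of view from the intrinsic matrix instead. It assumes the usual pinhole layout K = [[fx 0 cx], [0 fy cy], [0 0 1]] (column-major in simd) and does not answer the extrinsics question itself.

import Foundation
import simd

// Hedged sketch: imageWidth is the pixel width of the captured camera frame.
func horizontalFOVDegrees(intrinsics: simd_float3x3, imageWidth: Float) -> Float {
    let fx = intrinsics.columns.0.x                  // focal length in pixels along x
    let fovRadians = 2 * atan2(imageWidth / 2, fx)   // full horizontal field of view
    return fovRadians * 180 / .pi
}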
I'm working on creating a panorama view in AVP. When I get to this line of code, Xcode says "Type 'Entity' does not conform to protocol 'View'":
private var realityView: RealityView!
as well as this line, with the same error message:
private func setupPanoramaScene(for content: RealityView.Content)
What should I put as an argument for RealityView? It doesn't work without arguments either.
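For what it's worth, a minimal sketch of the usual pattern, as an assumption about the intent rather than a confirmed fix: RealityView is a SwiftUI view declared inside body with a make closure, rather than stored as a property of type RealityView!.

import SwiftUI
import RealityKit

// Hedged sketch: the panorama setup itself is omitted; the point is only where RealityView lives.
struct PanoramaView: View {
    var body: some View {
        RealityView { content in
            // Build the panorama scene here, e.g. a sphere entity textured on the inside.
            let panoramaRoot = Entity()
            content.add(panoramaRoot)
        }
    }
}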
We’re using the enterprise API for spatial barcode/QR code scanning in the Vision Pro app, but we often get invalid values for the barcode anchor from the API, leading to jittery barcode positions in the UI. The code we’re using is attached below.
import SwiftUI
import RealityKit
import ARKit
import Combine
struct ImmersiveView: View {
@State private var arkitSession = ARKitSession()
@State private var root = Entity()
@State private var fadeCompleteSubscriptions: Set<AnyCancellable> = []
var body: some View {
RealityView { content in
content.add(root)
}
.task {
// Check if barcode detection is supported; otherwise handle this case.
guard BarcodeDetectionProvider.isSupported else { return }
// Specify the symbologies you want to detect.
let barcodeDetection = BarcodeDetectionProvider(symbologies: [.code128, .qr, .upce, .ean13, .ean8])
do {
try await arkitSession.requestAuthorization(for: [.worldSensing])
try await arkitSession.run([barcodeDetection])
print("Barcode scanning started")
for await update in barcodeDetection.anchorUpdates where update.event == .added {
let anchor = update.anchor
// Play an animation to indicate the system detected a barcode.
playAnimation(for: anchor)
// Use the anchor's decoded contents and symbology to take action.
print(
"""
Payload: \(anchor.payloadString ?? "")
Symbology: \(anchor.symbology)
""")
}
} catch {
// Handle the error.
print(error)
}
}
}
// Define this function in ImmersiveView.
func playAnimation(for anchor: BarcodeAnchor) {
guard let scene = root.scene else { return }
// Create a plane sized to match the barcode.
let extent = anchor.extent
let entity = ModelEntity(mesh: .generatePlane(width: extent.x, depth: extent.z), materials: [UnlitMaterial(color: .green)])
entity.components.set(OpacityComponent(opacity: 0))
// Position the plane over the barcode.
entity.transform = Transform(matrix: anchor.originFromAnchorTransform)
root.addChild(entity)
// Fade the plane in and out.
do {
let duration = 0.5
let fadeIn = try AnimationResource.generate(with: FromToByAnimation<Float>(
from: 0,
to: 1.0,
duration: duration,
isAdditive: true,
bindTarget: .opacity)
)
let fadeOut = try AnimationResource.generate(with: FromToByAnimation<Float>(
from: 1.0,
to: 0,
duration: duration,
isAdditive: true,
bindTarget: .opacity))
let fadeAnimation = try AnimationResource.sequence(with: [fadeIn, fadeOut])
_ = scene.subscribe(to: AnimationEvents.PlaybackCompleted.self, on: entity, { _ in
// Remove the plane after the animation completes.
entity.removeFromParent()
}).store(in: &fadeCompleteSubscriptions)
entity.playAnimation(fadeAnimation)
} catch {
print("Error")
}
}
}
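On the jitter itself, here is a hedged smoothing sketch, an assumption rather than an Apple-recommended fix: low-pass filter the anchor position before applying it to the entity, so occasional bad transforms have less visible effect.

import simd

// Hedged sketch: exponential smoothing of successive anchor positions. `alpha` closer to 0
// smooths more aggressively; positions would come from the anchor's originFromAnchorTransform.
func smoothedPosition(previous: SIMD3<Float>?, new: SIMD3<Float>, alpha: Float = 0.2) -> SIMD3<Float> {
    guard let previous else { return new }
    return previous + alpha * (new - previous)
}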
Hi there,
I'm trying to merge the mesh anchors into a single mesh, but couldn't find any resources on this. Here is the code where I make the mesh from each mesh anchor and assign it to a model component with a shader graph material.
func run(_ sceneRec: SceneReconstructionProvider) async {
for await update in sceneRec.anchorUpdates {
switch update.event {
case .added, .updated:
// Get or create entity for this anchor
let anchorEntity = anchors[update.anchor.id] ?? {
let entity = ModelEntity()
root?.addChild(entity)
anchors[update.anchor.id] = entity
return entity
}()
// Remove any existing children
for child in anchorEntity.children {
child.removeFromParent()
}
// Generate the mesh from the anchor
guard let mesh = try? await MeshResource(from: update.anchor) else { continue } // continue (not return) so one failed anchor doesn't abort the whole update loop
guard let shape = try? await ShapeResource.generateStaticMesh(from: update.anchor) else { continue }
print("Mesh added, vertices: \(update.anchor.geometry.vertices.count), bounds: \(mesh.bounds)")
// Get the material to use
var material: RealityKit.Material
if isMaterialLoaded, let loadedMaterial = self.shaderMaterial {
material = loadedMaterial
} else {
// Use a temporary material until the shader loads
var tempMaterial = UnlitMaterial()
tempMaterial.color = .init(tint: .purple.withAlphaComponent(0.5))
material = tempMaterial
}
await MainActor.run {
anchorEntity.components.set(ModelComponent(mesh: mesh, materials: [material]))
anchorEntity.setTransformMatrix(update.anchor.originFromAnchorTransform, relativeTo: nil)
// Add collision component with static flag - required for spatial interactions
anchorEntity.components.set(CollisionComponent(
shapes: [shape],
isStatic: true,
filter: .default
))
// Make entity interactive - enables spatial taps, drags, etc.
anchorEntity.components.set(InputTargetComponent())
let shadowComponent = GroundingShadowComponent(
castsShadow: true,
receivesShadow: true
)
anchorEntity.components.set(shadowComponent)
}
I then use a spatial tap gesture to set the position parameter in the shader graph material that creates a nice gradient from the tap position on the mesh to the rest of the mesh.
SpatialTapGesture()
.targetedToAnyEntity()
.onEnded { value in
let tappedEntity = value.entity
// Check if the tapped entity is a child of tracking.meshAnchors
if isChildOfMeshAnchors(entity: tappedEntity) {
// Get local position (in the entity's coordinate space)
let localPosition = value.location3D
// Convert to world position (scene coordinate space)
let worldPosition = value.convert(localPosition, from: .local, to: .scene)
print("Tapped mesh anchor at local position: \(localPosition)")
print("Tapped mesh anchor at world position: \(worldPosition)")
// Update the material parameter with the tap position
updateMaterialTapPosition(entity: tappedEntity, position: worldPosition)
} else {
print("Tapped entity is not a mesh anchor")
}
}
}
My issue is that because there are several mesh anchors, the gradient often gets cut off by the edge of the mesh generated from one mesh anchor, as opposed to a nice continuous gradient across the entire scene-reconstructed mesh. I couldn't find any documentation on how to merge meshes from mesh anchors; any tips would be helpful! Thank you!
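One hedged workaround sketch for the cut-off gradient, short of merging the meshes themselves: push the same world-space tap position to every mesh-anchor entity's material, so the gradient continues across anchor boundaries. The anchors dictionary and the "TapPosition" parameter name are assumptions borrowed from the snippet above, not confirmed identifiers.

import Foundation
import RealityKit

// Hedged sketch: update the shader graph parameter on all anchor entities, not just the tapped one.
func updateAllMaterialTapPositions(anchors: [UUID: ModelEntity], position: SIMD3<Float>) {
    for entity in anchors.values {
        guard var model = entity.components[ModelComponent.self],
              var material = model.materials.first as? ShaderGraphMaterial else { continue }
        // "TapPosition" is a hypothetical parameter name; use whatever the shader graph exposes.
        try? material.setParameter(name: "TapPosition", value: .simd3Float(position))
        model.materials = [material]
        entity.components.set(model)
    }
}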
The goal is to achieve precise joint tracking for clinical assessment. The doctor is wearing the AVP and observing the patient's movement.
Do you have any recommended best practices for integrating real-time joint tracking and displaying them on the patient within visionOS?
We attempted to use VNHumanBodyPose3DObservation, which theoretically should work, but we are unable to display the detected joints in an Immersive Space for real-time validation. This makes it difficult for the doctor to ensure accurate tracking, and if possible a photo or video of the range-of-motion assessment would be needed for the patient record.
Are there alternative methods to achieve precise real-time joint tracking without requiring main camera access (com.apple.developer.arkit.main-camera-access.allow)?
Hi!
I attempted to run a sample project for detecting human pose in photos, which can be found here:
https://developer.apple.com/documentation/vision/detecting-human-body-poses-in-3d-with-vision
The project works perfectly when run on my Macbook Pro M1, but it fails on Apple Vision Pro. After selecting the photo an endless loading screen is presented and the following output is produced in the console:
Failed to initialize 2D Detection Algorithm.
Failed to initialize 2D Pose Estimation Algorithm.
Failed to initialize algorithm modules
Network path is nil: (null)
Failed to initialize 2D Detection Algorithm.
Failed to initialize 2D Pose Estimation Algorithm.
Failed to initialize algorithm modules
Unable to perform the request: Error Domain=com.apple.Vision Code=9 "Async status object reported as failed but without an error" UserInfo={NSLocalizedDescription=Async status object reported as failed but without an error}.
de-activating session 70138 after timeout
It seems that VNDetectHumanBodyPose3DRequest is failing on Vision Pro for some reason. Are there any additional requirements for running Vision framework on VisionOS, that I might be missing?
I'm using ARKitSession and PlaneDetectionProvider to detect planes. I have a basic process to create an entity for each detected plane. Each one gets a random color for its material.
Each plane is sized based on the bounds of the anchor provided by ARKit.
let mesh = MeshResource.generatePlane(
width: anchor.geometry.extent.width,
depth: anchor.geometry.extent.height
)
Then I'm using this to position each entity.
entity.transform = Transform(matrix: anchor.originFromAnchorTransform)
This seems to be the right method, but many (not all) planes are not where they should be. The sizes look OK, but the X and Y positions are off.
Take this large green plane on the wall. It should span the entire wall, but it is offset along the X position so that it is pushed to the left from where the center of the anchor is.
When I visualize surfaces using the Xcode debugging tools, that tool reports the planes where I'd expect them to be.
Can you see what I'm getting wrong here? Full code below
struct Example068: View {
@State var session = ARKitSession()
@State private var planeAnchors: [UUID: Entity] = [:]
@State private var planeColors: [UUID: Color] = [:]
var body: some View {
RealityView { content in
} update: { content in
for (_, entity) in planeAnchors {
if !content.entities.contains(entity) {
content.add(entity)
}
}
}
.task {
try! await setupAndRunPlaneDetection()
}
}
func setupAndRunPlaneDetection() async throws {
let planeData = PlaneDetectionProvider(alignments: [.horizontal, .vertical, .slanted])
if PlaneDetectionProvider.isSupported {
do {
try await session.run([planeData])
for await update in planeData.anchorUpdates {
switch update.event {
case .added, .updated:
let anchor = update.anchor
if planeColors[anchor.id] == nil {
planeColors[anchor.id] = generatePastelColor()
}
let planeEntity = createPlaneEntity(for: anchor, color: planeColors[anchor.id]!)
planeAnchors[anchor.id] = planeEntity
case .removed:
let anchor = update.anchor
planeAnchors.removeValue(forKey: anchor.id)
planeColors.removeValue(forKey: anchor.id)
}
}
} catch {
print("ARKit session error \(error)")
}
}
}
private func generatePastelColor() -> Color {
let hue = Double.random(in: 0...1)
let saturation = Double.random(in: 0.2...0.4)
let brightness = Double.random(in: 0.8...1.0)
return Color(hue: hue, saturation: saturation, brightness: brightness)
}
private func createPlaneEntity(for anchor: PlaneAnchor, color: Color) -> Entity {
let mesh = MeshResource.generatePlane(
width: anchor.geometry.extent.width,
depth: anchor.geometry.extent.height
)
var material = PhysicallyBasedMaterial()
material.baseColor.tint = UIColor(color)
let entity = ModelEntity(mesh: mesh, materials: [material])
entity.transform = Transform(matrix: anchor.originFromAnchorTransform)
return entity
}
}
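A hedged guess at the cause, not a confirmed answer: a PlaneAnchor's extent is expressed in its own extent coordinate frame, which can be offset and rotated relative to the anchor origin, so the entity transform may also need anchor.geometry.extent.anchorFromExtentTransform composed in. A sketch of the adjusted helper (only the transform line changes):

import ARKit
import RealityKit
import SwiftUI
import UIKit

// Hedged sketch: center the plane on the measured extent, not on the raw anchor origin.
private func createPlaneEntity(for anchor: PlaneAnchor, color: Color) -> Entity {
    let mesh = MeshResource.generatePlane(
        width: anchor.geometry.extent.width,
        depth: anchor.geometry.extent.height
    )
    var material = PhysicallyBasedMaterial()
    material.baseColor.tint = UIColor(color)
    let entity = ModelEntity(mesh: mesh, materials: [material])
    entity.transform = Transform(matrix: anchor.originFromAnchorTransform
        * anchor.geometry.extent.anchorFromExtentTransform)
    return entity
}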
In several visionOS apps, we readjust our scenes to the user's eye level (their head). But we have encountered issues whereby the WorldTrackingProvider returns bad/incorrect positions for the first x number of frames.
See the code below, which you can copy-paste into any Immersive Space. Relaunch the space and observe that the numberOfBadWorldInfos value is inconsistent.
a. what is the most reliable way to get the devices's position?
b. is this indeed a bug?
c. are we using worldInfo improperly?
d. as a workaround, in our apps we set to 10 the number of frames to let pass before using worldInfo, should we set our threshold differently?
import ARKit
import Combine
import OSLog
import SwiftUI
import RealityKit
import RealityKitContent
let SUBSYSTEM = Bundle.main.bundleIdentifier!
struct ImmersiveView: View {
let logger = Logger(subsystem: SUBSYSTEM, category: "ImmersiveView")
let session = ARKitSession()
let worldInfo = WorldTrackingProvider()
@State var sceneUpdateSubscription: EventSubscription? = nil
@State var deviceTransform: simd_float4x4? = nil
@State var numberOfBadWorldInfos = 0
@State var isBadWorldInfoLoged = false
var body: some View {
RealityView { content in
try? await session.run([worldInfo])
sceneUpdateSubscription = content.subscribe(to: SceneEvents.Update.self) { event in
guard let pose = worldInfo.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
return
}
// `worldInfo` does not return correct values for the first few frames (exact number of frames is unknown)
// - known SO: https://stackoverflow.com/questions/78396187/how-to-determine-the-first-reliable-position-of-the-apple-vision-pro-device
deviceTransform = pose.originFromAnchorTransform
if deviceTransform!.columns.3.y < 1.6 {
numberOfBadWorldInfos += 1
logger.warning("\(#function) \(#line) deviceTransform.columns.3.y \(deviceTransform!.columns.3.y), numberOfBadWorldInfos \(numberOfBadWorldInfos)")
} else {
if !isBadWorldInfoLoged {
logger.info("\(#function) \(#line) deviceTransform.columns.3.y \(deviceTransform!.columns.3.y), numberOfBadWorldInfos \(numberOfBadWorldInfos)")
}
isBadWorldInfoLoged = true // stop logging.
}
}
}
}
}
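A hedged workaround sketch, an assumption rather than a documented guarantee: gate on the provider's state and on a non-nil query result instead of counting frames, and optionally require a few consecutive successes before trusting the value.

import ARKit
import QuartzCore
import simd

// Hedged sketch: returns nil until WorldTrackingProvider reports .running and a device
// anchor is actually available.
func reliableDeviceTransform(from provider: WorldTrackingProvider) -> simd_float4x4? {
    guard provider.state == .running,
          let anchor = provider.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
        return nil
    }
    return anchor.originFromAnchorTransform
}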
I am trying to create an object in immersive space that is partially transparent (~50% opacity). I have implemented this in a few different ways, including creating a model entity and setting its opacity component to 0.5, and creating a custom material with blending set to a transparent opacity of 0.5. Both work partially: they behave as intended in many cases, but seemingly at random they act like occlusion material and block any other immersive content behind them, showing the real world instead.
Some notes: I am using RealityKit to render the semi-transparent object and an opaque object that is behind the semi-transparent object. I am using VisionOS 2.1, and am updating the location of the semi-transparent object often. Both objects are ModelEntities.
I would appreciate any guidance on how to implement this. Please let me know if there are any other questions.
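For reference, a minimal sketch of the two approaches described above, assuming entity is a ModelEntity already in the scene (this only restates the setup, it does not explain the occlusion behavior):

import RealityKit

// Hedged sketch of the two opacity approaches mentioned in the post.
func applyHalfOpacity(to entity: ModelEntity) {
    // Approach 1: opacity component on the entity (and its descendants).
    entity.components.set(OpacityComponent(opacity: 0.5))

    // Approach 2: a material with transparent blending.
    var material = PhysicallyBasedMaterial()
    material.blending = .transparent(opacity: .init(floatLiteral: 0.5))
    entity.model?.materials = [material]
}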
My ARViewContainer code is not working. I don't know how to debug the issue, and I can't see where my results are going. I need help resolving this issue; please help me debug. See the code below:
Hi all,
I'm running into an issue with an app that previously worked fine on device using visionOS 2.0. After updating to visionOS 26, the same code runs fine in the simulator but crashes on the device with the following error:
-[MTLDebugComputeCommandEncoder _validateThreadsPerThreadgroup:]:1330:
failed assertion `(threadsPerThreadgroup.width(32) * threadsPerThreadgroup.height(32) * threadsPerThreadgroup.depth(1))(1024) must be <= 832. (kernel threadgroup size limit)`
Is there any documented way to check or increase the allowed threadsPerThreadgroup size on Apple Vision Pro? Or any recommended workaround for this regression?
Thanks in advance!
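A hedged workaround sketch, assuming you control the compute dispatch that trips the assertion: query the pipeline's limits at runtime instead of hard-coding a 32×32 threadgroup.

import Metal

// Hedged sketch: clamp the threadgroup to what the pipeline actually supports on this device.
func clampedThreadgroupSize(for pipelineState: MTLComputePipelineState) -> MTLSize {
    let width = pipelineState.threadExecutionWidth
    let height = max(1, pipelineState.maxTotalThreadsPerThreadgroup / width)
    return MTLSize(width: width, height: height, depth: 1)
}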
Hello
RemoteDeviceIdentifier returns nil and therefore crashes the HoverEffect sample project.
I have visionOS 26 beta 2 on both devices.
What is the correct way to run this code sample?
Hi, I called it "perspective problem", but I'm not quite sure what it is. I have a tag that I track with builtin camera. I calculate its pose, then use extrinsics and device anchor to calculate where to place entity with model.
When I place an entity that overlaps with a physical object and start to look at it from different angles, the virtual object begins to move. Initially I thought that something was wrong with my calculations, or that some image distortion closer to the camera edges was affecting tag detection. To check, I calculated the position only once and displayed the entity there; the physical tracked object is not moving. Now, when I move my head so the object is more to the left or right in my field of view, the virtual object becomes misaligned to the left or right. It feels like a parallax effect, but the distances from me to the entity and to the physical object are exactly the same.
Is that expected, because of some passthrough correction magic? And if so, can I somehow correct it back, so the entity always overlaps with object? I'm currently on v26 beta 5.
I also don't quite understand the camera extrinsics, because it seems that I need to flip it around X by 180 degrees to make it work in deviceAnchor * extrinsics.inverse * tag (shouldn't it be in same coordinates as all other RealityKit things?).
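On the 180° flip, here is a hedged sketch of what that matrix usually is: a conversion between a computer-vision style camera frame (+Y down, +Z forward) and RealityKit's camera frame (+Y up, -Z forward). Whether this is the intended interpretation of the extrinsics here is an assumption; deviceAnchorTransform and extrinsics stand for the values from the post.

import simd

// Hedged sketch: 180° rotation about X, i.e. diag(1, -1, -1, 1).
let flipYZ = simd_float4x4(diagonal: SIMD4<Float>(1, -1, -1, 1))

// Assumed composition (mirrors the post): world-from-camera with the axis convention flipped.
func worldFromCamera(deviceAnchorTransform: simd_float4x4, extrinsics: simd_float4x4) -> simd_float4x4 {
    deviceAnchorTransform * extrinsics.inverse * flipYZ
}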
Is there any way to render a RealityView to an Image/UIImage like we used to be able to do using SCNView.snapshot() ?
ImageRenderer doesn't work because it renders a SwiftUI view hierarchy, and I need the currently presented RealityView with camera background and 3D scene content the way the user sees it
I tried UIHostingController and UIGraphicsImageRenderer like
extension View {
func snapshot() -> UIImage {
let controller = UIHostingController(rootView: self)
let view = controller.view
let targetSize = controller.view.intrinsicContentSize
view?.bounds = CGRect(origin: .zero, size: targetSize)
view?.backgroundColor = .clear
let renderer = UIGraphicsImageRenderer(size: targetSize)
return renderer.image { _ in
view?.drawHierarchy(in: view!.bounds, afterScreenUpdates: true)
}
}
}
but that leads to the app freezing and sending an infinite loop of
[CAMetalLayer nextDrawable] returning nil because allocation failed.
Same thing happens when I try
return renderer.image { ctx in
view.layer.render(in: ctx.cgContext)
}
Now that SceneKit is deprecated, I didn't want to start a new app using deprecated APIs.
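A hedged aside, not a RealityView answer: if dropping from RealityView down to ARView is acceptable, ARView does expose a snapshot API on iOS that captures the rendered frame including the camera background.

import RealityKit
import UIKit

// Hedged sketch: asynchronous capture of the current ARView frame.
func captureSnapshot(of arView: ARView, completion: @escaping (UIImage?) -> Void) {
    arView.snapshot(saveToHDR: false) { image in
        completion(image)
    }
}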
My development team admin requested the Enterprise API for camera access on the Vision Pro. We got that granted, got a license for usage, and got instructions for integrating it with next steps.
We did the following:
Even when I try to download and run the sample project for "Accessing the Main Camera", and follow all the exact instructions mentioned here: https://developer.apple.com/documentation/visionos/accessing-the-main-camera
I am just unable to receive camera frames.
I added the capabilities, created a new provisioning profile with this access, added the entitlements to info.plist and entitlements, replaced the dummy license file with the one we were sent, and also have a matching bundle identifier and development certificate, but it is still not showing camera access for some reason.
"Main Camera Access" shows up in our Signing & Capabilities tab, and we also added the NSMainCameraDescription in the Info.plist and allow access while opening the app. None of this works. Not on my app, and not on the sample app that I just downloaded and tried to run on the Vision Pro after replacing the dummy license file.
I have a VideoMaterial inside a RealityView and want to attach this to a DockingRegion inside an immersive environment.
It appears that adding the VideoMaterial entity as a child of the docking region somewhat works, but there are no lighting effects (specular, diffuse) from the playing video.
So essentially, how can you add a VideoMaterial to a DockingRegion and achieve the same reflections/behavior as using AVPlayerViewController?
The latter is not an option as I need custom controls.