Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Posts under Spatial Computing topic

Post · Replies · Boosts · Views · Activity

Is there a way to scale a RealityKit ShapeResource?
I can generate a ShapeResource from a RealityKit entity's extents. Can I apply some scaling to the generated shape? Is there a way to do that?

```swift
// model is a ModelResource and bounds is a BoundingBox
var shape = ShapeResource.generateConvex(from: model.mesh)
shape = shape.offsetBy(translation: bounds.center)
// How can I scale the shape to fit within the bounds?
```

The following API only provides rotation and translation support, and I cannot find scale support:

```swift
offsetBy(rotation: simd_quatf = simd_quatf(ix: 0, iy: 0, iz: 0, r: 1), translation: SIMD3<Float> = SIMD3<Float>())
```

I can put the ShapeResource on an entity and scale the entity, but I would like to know if it is possible to scale the ShapeResource itself without attaching it to an entity.
3 replies · 0 boosts · 574 views · Feb ’25
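A minimal sketch of the entity-level workaround the poster already mentions (an assumption that it fits the use case): collision shapes follow the scale of the entity that owns them, so a dedicated, scaled child entity can stand in for scaling the ShapeResource itself while leaving the visible model untouched.

```swift
import RealityKit

// Sketch: attach the collision shape to a dedicated child entity and scale that
// child; the ShapeResource itself stays unscaled (it only exposes offsetBy).
func addScaledConvexCollision(to parent: ModelEntity, scale: SIMD3<Float>) {
    guard let mesh = parent.model?.mesh else { return }
    let shape = ShapeResource.generateConvex(from: mesh)

    let collisionHolder = Entity()
    collisionHolder.components.set(CollisionComponent(shapes: [shape]))
    collisionHolder.scale = scale          // the collision shape follows this scale
    parent.addChild(collisionHolder)
}
```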
How to get multiple animations into USDZ
Most models are only available as glb or FBX, so I usually re-export them to USDZ using Blender. When I import them into Reality Composer Pro, the mesh, textures, etc. look great, but in the Animation Library subsection all I can see is one "default subtree animation". In Blender I can see all available animations and play them individually. The default subtree animation just plays the default idle animation. In fact, when I open the nonlinear animation view in Blender and select a different animation as the default, the exported USDZ shows the newly selected animation as the default subtree animation. I can see that in the Apple sample apps, models can have multiple animations in their Animation Library. I'm using the latest Blender 4.5; shouldn't the USDZ exporter be handling this properly?
3 replies · 1 boost · 596 views · Oct ’25
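As a quick way to check what actually survived the export, a minimal sketch (assuming the USDZ is bundled under a hypothetical resource name "Model") that counts and plays the animations RealityKit sees:

```swift
import RealityKit

// Sketch: inspect how many animation resources the exported USDZ exposes.
// "Model" is a hypothetical name for the bundled USDZ file.
func inspectAnimations() async throws {
    let entity = try await Entity(named: "Model")
    print("Animations found: \(entity.availableAnimations.count)")

    // Play the first animation on a loop, if one exists.
    if let animation = entity.availableAnimations.first {
        entity.playAnimation(animation.repeat())
    }
}
```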
Merge MeshAnchor from Scene Reconstruction for Vision Pro
Hi there, I'm trying to merge the mesh anchors into a single mesh, but couldn't find any resources on this. Here is the code where I make the mesh from each mesh anchor and assign it to a model component with a shader graph material:

```swift
func run(_ sceneRec: SceneReconstructionProvider) async {
    for await update in sceneRec.anchorUpdates {
        switch update.event {
        case .added, .updated:
            // Get or create entity for this anchor
            let anchorEntity = anchors[update.anchor.id] ?? {
                let entity = ModelEntity()
                root?.addChild(entity)
                anchors[update.anchor.id] = entity
                return entity
            }()

            // Remove any existing children
            for child in anchorEntity.children {
                child.removeFromParent()
            }

            // Generate the mesh from the anchor
            guard let mesh = try? await MeshResource(from: update.anchor) else { return }
            guard let shape = try? await ShapeResource.generateStaticMesh(from: update.anchor) else { continue }
            print("Mesh added, vertices: \(update.anchor.geometry.vertices.count), bounds: \(mesh.bounds)")

            // Get the material to use
            var material: RealityKit.Material
            if isMaterialLoaded, let loadedMaterial = self.shaderMaterial {
                material = loadedMaterial
            } else {
                // Use a temporary material until the shader loads
                var tempMaterial = UnlitMaterial()
                tempMaterial.color = .init(tint: .purple.withAlphaComponent(0.5))
                material = tempMaterial
            }

            await MainActor.run {
                anchorEntity.components.set(ModelComponent(mesh: mesh, materials: [material]))
                anchorEntity.setTransformMatrix(update.anchor.originFromAnchorTransform, relativeTo: nil)

                // Add collision component with static flag - required for spatial interactions
                anchorEntity.components.set(CollisionComponent(
                    shapes: [shape],
                    isStatic: true,
                    filter: .default
                ))

                // Make entity interactive - enables spatial taps, drags, etc.
                anchorEntity.components.set(InputTargetComponent())

                let shadowComponent = GroundingShadowComponent(
                    castsShadow: true,
                    receivesShadow: true
                )
                anchorEntity.components.set(shadowComponent)
            }
```

I then use a spatial tap gesture to set the position parameter in the shader graph material, which creates a nice gradient from the tap position on the mesh to the rest of the mesh:

```swift
SpatialTapGesture()
    .targetedToAnyEntity()
    .onEnded { value in
        let tappedEntity = value.entity
        // Check if the tapped entity is a child of tracking.meshAnchors
        if isChildOfMeshAnchors(entity: tappedEntity) {
            // Get local position (in the entity's coordinate space)
            let localPosition = value.location3D
            // Convert to world position (scene coordinate space)
            let worldPosition = value.convert(localPosition, from: .local, to: .scene)
            print("Tapped mesh anchor at local position: \(localPosition)")
            print("Tapped mesh anchor at world position: \(worldPosition)")
            // Update the material parameter with the tap position
            updateMaterialTapPosition(entity: tappedEntity, position: worldPosition)
        } else {
            print("Tapped entity is not a mesh anchor")
        }
    }
```

My issue is that because there are several mesh anchors, the gradient often gets cut off by the edge of the mesh generated from an individual mesh anchor, as opposed to a nice continuous gradient across the entire scene-reconstructed mesh. I couldn't find any documentation on how to merge the meshes from mesh anchors; any tips would be helpful! Thank you!
3 replies · 0 boosts · 357 views · Mar ’25
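A workaround sketch for the gradient cutoff (my assumption, not something from the thread): instead of merging the anchor meshes, push the same world-space tap position to every anchor entity's shader-graph material so the gradient continues across anchor boundaries. The "tapPosition" parameter name and the dictionary type are hypothetical stand-ins for the poster's own.

```swift
import RealityKit

// Sketch: update a hypothetical "tapPosition" parameter on every anchor entity's
// ShaderGraphMaterial so the gradient spans all scene-reconstruction meshes.
func updateTapPositionOnAllAnchors(anchors: [UUID: ModelEntity],
                                   worldPosition: SIMD3<Float>) {
    for entity in anchors.values {
        guard var model = entity.components[ModelComponent.self],
              var material = model.materials.first as? ShaderGraphMaterial else { continue }
        try? material.setParameter(name: "tapPosition",
                                   value: .simd3Float(worldPosition))
        model.materials = [material]
        entity.components.set(model)
    }
}
```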
RealityKit Entity ComponentSet does not conform to Sequence?
Hello, I'm trying to view the components of an Entity I'm creating in RealityKit by reading from a USDZ file. I have the following code snippet in my app:

```swift
if let appleEntity = try? Entity.loadModel(named: "apple_tile") {
    let c = appleEntity.components
    for comp in c { // <- compiler error here
        print(comp)
    }
}
```

The compiler error I'm receiving says "For-in loop requires 'Entity.ComponentSet' to conform to 'Sequence'". However, I thought this was the case, according to the documentation for Entity.ComponentSet? Curious if anyone else has had this problem. I'm running Xcode 15.4, and my Swift version is:

```
xcrun swift -version
swift-driver version: 1.90.11.1 Apple Swift version 5.10 (swiftlang-5.10.0.13 clang-1500.3.9.4)
Target: x86_64-apple-macosx14.0
```
3 replies · 0 boosts · 411 views · Mar ’25
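On SDKs where Entity.ComponentSet has no Sequence conformance, one workaround (a sketch under that assumption) is to query the specific component types of interest instead of iterating the set:

```swift
import RealityKit

// Sketch: check for the component types you care about individually,
// which works even when Entity.ComponentSet cannot be iterated.
func inspectComponents(of entity: Entity) {
    let typesOfInterest: [any Component.Type] = [
        Transform.self, ModelComponent.self, CollisionComponent.self
    ]
    for type in typesOfInterest where entity.components.has(type) {
        print("Entity has \(type)")
    }
}
```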
Human Body joint tracking in VisionOS
The goal is to achieve precise joint tracking for clinical assessment. The doctor wears the AVP and observes the patient's movement. Do you have any recommended best practices for integrating real-time joint tracking and displaying the joints on the patient within visionOS? We attempted to use VNHumanBodyPose3DObservation, which in theory should work, but we are unable to display the detected joints in an Immersive Space for real-time validation. This makes it difficult for the doctor to ensure accurate tracking, and if possible a photo or video of the range-of-motion assessment would be needed for the patient record. Are there alternative methods to achieve precise real-time joint tracking without requiring main camera access (com.apple.developer.arkit.main-camera-access.allow)?
3 replies · 0 boosts · 300 views · Mar ’25
Rendering scene in RealityView to an Image
Is there any way to render a RealityView to an Image/UIImage, like we used to be able to do with SCNView.snapshot()? ImageRenderer doesn't work because it renders a SwiftUI view hierarchy, and I need the currently presented RealityView with the camera background and 3D scene content the way the user sees it. I tried UIHostingController and UIGraphicsImageRenderer like this:

```swift
extension View {
    func snapshot() -> UIImage {
        let controller = UIHostingController(rootView: self)
        let view = controller.view
        let targetSize = controller.view.intrinsicContentSize
        view?.bounds = CGRect(origin: .zero, size: targetSize)
        view?.backgroundColor = .clear
        let renderer = UIGraphicsImageRenderer(size: targetSize)
        return renderer.image { _ in
            view?.drawHierarchy(in: view!.bounds, afterScreenUpdates: true)
        }
    }
}
```

but that leads to the app freezing and an infinite loop of [CAMetalLayer nextDrawable] returning nil because allocation failed. The same thing happens when I try:

```swift
return renderer.image { ctx in
    view.layer.render(in: ctx.cgContext)
}
```

Now that SceneKit is deprecated, I didn't want to start a new app using deprecated APIs.
3 replies · 0 boosts · 1.1k views · Sep ’25
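Not an answer for RealityView itself, but if dropping down to RealityKit's UIKit-level ARView on iOS is acceptable, it has a built-in snapshot call that captures the rendered frame, camera background included; a minimal sketch:

```swift
import RealityKit
import UIKit

// Sketch (iOS ARView, not RealityView): ARView renders its own snapshot,
// so the Metal content and camera feed are included in the image.
func capture(from arView: ARView, completion: @escaping (UIImage?) -> Void) {
    arView.snapshot(saveToHDR: false) { image in
        completion(image)
    }
}
```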
Perspective problem
Hi, I called it a "perspective problem," but I'm not quite sure what it is. I have a tag that I track with the built-in camera. I calculate its pose, then use the extrinsics and device anchor to calculate where to place an entity with my model. When I place an entity that overlaps a physical object and start to look at it from different angles, the virtual object begins to move. Initially I thought something was wrong with the calculations, or that some image distortion closer to the camera edges was affecting tag detection. To check, I calculated the position only once and displayed the entity there; the physical tracked object is not moving. Now, when I move my head so the object is more to the left or right in my field of view, the virtual object becomes misaligned to the left or right. It feels like a parallax effect, but the distances from me to the entity and to the physical object are exactly the same. Is that expected because of some passthrough-correction magic? And if so, can I somehow correct it back so the entity always overlaps the object? I'm currently on v26 beta 5. I also don't quite understand the camera extrinsics, because it seems that I need to flip it around X by 180 degrees to make deviceAnchor * extrinsics.inverse * tag work (shouldn't it be in the same coordinates as all other RealityKit things?).
3 replies · 0 boosts · 237 views · Aug ’25
When placing a TextField within a RealityViewAttachment, the virtual keyboard does not appear in front of the user as expected.
Hello, Thank you for your time. I have a question regarding visionOS app development. When placing a SwiftUI TextField inside RealityView.attachments, we found that focusing on the field does not bring up the virtual keyboard in front of the user. Instead, the keyboard appears around the user’s lower abdomen area. However, when placing the same TextField in a regular SwiftUI layer outside of RealityView, the keyboard appears in the correct position as expected. This suggests that the issue is specific to RealityView.attachments. We are currently exploring ways to have the virtual keyboard appear directly in front of the user when using TextField inside RealityViewAttachments. If there is any method to explicitly control the keyboard position or any known workarounds—including alternative UI approaches—we would greatly appreciate your guidance. Best regards, Sadao Tokuyama
3 replies · 1 boost · 625 views · Jul ’25
Debug Vision Pro application directly on physical device instead of the simulator
I have a Mac mini M4 with 16 GB of memory running Xcode 16.1. When I test my Vision Pro app in the Simulator, it is very slow and the system shows that memory is under high pressure. How do I run/test/debug the application on a Vision Pro directly? I tried to add my Vision Pro to my developer account, but it didn't work because I cannot find the UDID; when I connect the USB cable to the battery, it only shows a Battery device ID.
3 replies · 0 boosts · 484 views · Jan ’25
RealityView and Persistent World Data?
I was watching the developer videos, and there was mention that RealityView handles persistent world data differently, and also automatically for us. I am having trouble finding the material I need to get up to speed on that. In ARKit, I was able to place a model with the world data and recall that .map data; it even stored a reference image for the scene to help match the world data. I'm looking for information on how to implement and work with those same features with RealityView, as it seems to be better/automatically integrated. I need help being pointed in the right direction. Sample code would be amazing.
3 replies · 0 boosts · 571 views · Feb ’25
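Assuming the target is visionOS (the post doesn't say), the usual replacement for manual ARWorldMap save/load is ARKit's WorldTrackingProvider, whose WorldAnchors the system persists across launches; a minimal sketch:

```swift
import ARKit

// Sketch: WorldAnchors added to a WorldTrackingProvider are persisted by the
// system and re-delivered through anchorUpdates on later launches.
let session = ARKitSession()
let worldTracking = WorldTrackingProvider()

func placeAndPersist(at transform: simd_float4x4) async throws {
    try await session.run([worldTracking])

    let anchor = WorldAnchor(originFromAnchorTransform: transform)
    try await worldTracking.addAnchor(anchor)

    for await update in worldTracking.anchorUpdates {
        // Re-attach RealityKit content at update.anchor.originFromAnchorTransform here.
        print("World anchor \(update.anchor.id): \(update.event)")
    }
}
```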
MainActor attribute on RealityKit APIs is causing problems
Hello, a lot of the RealityKit APIs (e.g. LowLevelMesh, LowLevelTexture) are marked with MainActor, so they need to be accessed on the main thread. This creates issues when we need to perform expensive GPU-related operations, since now we need to perform those on the main thread. This results in bottlenecks and hangs in our application. We would like to use a multi-threaded approach to solve these problems, which is difficult to do here. We are constantly streaming data, whether the app is just appearing or the user is interacting with it, so we need to be able to perform these operations on a separate thread. Any advice on how to achieve this with RealityKit? Thank you.
3 replies · 8 boosts · 204 views · Mar ’25
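A minimal sketch of one common pattern (an assumption, not a RealityKit-specific API): prepare the frame data off the main actor and confine only the MainActor-isolated RealityKit mutation to the main thread. makeVertexData() is a hypothetical stand-in for the expensive per-frame work.

```swift
import RealityKit

// Hypothetical placeholder for expensive CPU-side frame preparation.
func makeVertexData() -> [SIMD3<Float>] {
    (0..<1_000).map { _ in
        SIMD3<Float>(.random(in: -1...1), .random(in: -1...1), .random(in: -1...1))
    }
}

@MainActor
func streamFrame(into entity: ModelEntity) async {
    // The expensive preparation runs off the main actor...
    let vertices = await Task.detached(priority: .userInitiated) {
        makeVertexData()
    }.value

    // ...and only the RealityKit mutation itself happens back on the MainActor,
    // e.g. writing `vertices` into a LowLevelMesh or rebuilding a MeshResource.
    print("Updating \(entity.name) with \(vertices.count) vertices")
}
```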
Reality View argument type does not conform to protocol view
I'm working on creating a panorama view in AVP. When I get to this line of code, Xcode says "Type 'Entity' does not conform to protocol 'View'":

```swift
private var realityView: RealityView!
```

as well as on this line, with the same error message:

```swift
private func setupPanoramaScene(for content: RealityView.Content)
```

What should I put as an argument for RealityView? It doesn't work without arguments either.
3 replies · 0 boosts · 485 views · Jan ’25
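For reference, a minimal sketch of how RealityView is normally declared (an illustration, not the poster's intended scene): it is built inline in a SwiftUI body with a make closure rather than stored as a property.

```swift
import SwiftUI
import RealityKit

// Sketch: RealityView is constructed with `make` (and optionally `update`)
// closures inside a SwiftUI body.
struct PanoramaView: View {
    var body: some View {
        RealityView { content in
            let root = Entity()
            content.add(root)
            // Build the panorama sphere and add it to `root` here.
        } update: { content in
            // React to SwiftUI state changes here if needed.
        }
    }
}
```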
Human pose detection failing on Vision Pro
Hi! I attempted to run a sample project for detecting human body poses in photos, which can be found here: https://developer.apple.com/documentation/vision/detecting-human-body-poses-in-3d-with-vision The project works perfectly when run on my MacBook Pro M1, but it fails on Apple Vision Pro. After selecting the photo, an endless loading screen is presented and the following output is produced in the console:

```
Failed to initialize 2D Detection Algorithm.
Failed to initialize 2D Pose Estimation Algorithm.
Failed to initialize algorithm modules
Network path is nil: (null)
Failed to initialize 2D Detection Algorithm.
Failed to initialize 2D Pose Estimation Algorithm.
Failed to initialize algorithm modules
Unable to perform the request: Error Domain=com.apple.Vision Code=9 "Async status object reported as failed but without an error" UserInfo={NSLocalizedDescription=Async status object reported as failed but without an error}.
de-activating session 70138 after timeout
```

It seems that VNDetectHumanBodyPose3DRequest is failing on Vision Pro for some reason. Are there any additional requirements for running the Vision framework on visionOS that I might be missing?
3 replies · 1 boost · 174 views · Mar ’25
Loading USDZ asset into Model3D causes visionOS 2.0 beta 5 to crash
We've recently discovered that our app crashes on startup on the latest visionOS 2.0 beta 5 (22N5297g) build. In fact, the entire field of view would dim down and visionOS would then restart, showing the Apple logo. Interestingly, no app crash is reported by Xcode during debug. After investigation, we have isolated the issue to a specific USDZ asset in our app. Loading it in a sample, blank project also causes visionOS to reliably crash, or become extremely unresponsive with rendering artifacts everywhere. This looks like a potentially serious issue. Even if the asset is problematic, loading it should not crash the entire OS. We have filed feedback FB14756285, along with a demo project. Hopefully someone can take a look. Thanks!
3 replies · 1 boost · 565 views · Jul ’25
Unable to Create a Fully Immersive Experience That Hides Other Windows in visionOS App
Description: I'm developing a travel/panorama viewing app for visionOS that allows users to view 360° panoramic images in an immersive space. When users enter panorama viewing mode, I want to provide a fully immersive experience in which the main interface window and the Earth 3D globe window are hidden. I've implemented the app following Apple's documentation on Creating Fully Immersive Experiences, but when users enter the immersive space, both the main window and the Earth 3D window remain visible, diminishing the immersive experience.

Implementation Details: My app has three main components: a main content window showing panorama thumbnails, a 3D globe window (volumetric) showing locations, and an immersive space for viewing 360° panoramas. I'm using .immersionStyle(selection: $panoImageView, in: .full) to create a fully immersive experience, but the other windows remain visible.

Relevant Code:

```swift
@main
struct Travel_ImmersiveApp: App {
    @StateObject private var appModel = AppModel()
    @State private var panoImageView: ImmersionStyle = .full

    var body: some Scene {
        WindowGroup {
            ContentView()
                .environmentObject(appModel)
        }
        .windowStyle(.automatic)
        .defaultSize(width: 1280, height: 825)

        WindowGroup(id: "Earth") {
            Globe3DView()
                .environmentObject(appModel)
                .onAppear {
                    appModel.isGlobeWindowOpen = true
                    appModel.globeWindowOpen = true
                }
                .onDisappear {
                    if !appModel.shouldCloseApp {
                        appModel.handleGlobeWindowClose()
                    }
                }
        }
        .windowStyle(.volumetric)
        .defaultSize(width: 0.8, height: 0.8, depth: 0.8, in: .meters)
        .windowResizability(.contentSize)

        ImmersiveSpace(id: "ImmersiveView") {
            ImmersiveView()
                .environmentObject(appModel)
        }
        .immersionStyle(selection: $panoImageView, in: .full)
    }
}
```

Opening the Immersive Space:

```swift
func getPanoImageAndOpenImmersiveSpace() async {
    appModel.clearMemoryCache()
    do {
        let canView = appModel.canViewImage(image)
        if canView {
            let downloadedImage = try await appModel.getPanoramaImage(for: image) { progress in
                Task { @MainActor in
                    cardState = .loading(progress: progress)
                }
            }
            await MainActor.run {
                appModel.updateCurrentImage(image, panoramaImage: downloadedImage)
            }
            if !appModel.immersiveSpaceOpened {
                try await openImmersiveSpace(id: "ImmersiveView")
                await MainActor.run {
                    appModel.immersiveSpaceOpened = true
                    cardState = .normal
                }
            } else {
                await MainActor.run {
                    appModel.updateImmersiveView = true
                    cardState = .normal
                }
            }
        } else {
            await MainActor.run {
                appModel.errorMessage = "You do not have permission to view this image."
                cardState = .normal
            }
        }
    } catch {
        // Error handling
    }
}
```

Immersive View Implementation:

```swift
struct ImmersiveView: View {
    @EnvironmentObject var appModel: AppModel

    var body: some View {
        RealityView { content in
            let rootEntity = Entity()
            content.add(rootEntity)
            Task {
                if let selectedImage = appModel.selectedImage,
                   appModel.canViewImage(selectedImage) {
                    await loadPanorama(for: rootEntity)
                }
            }
        } update: { content in
            if appModel.updateImmersiveView,
               let selectedImage = appModel.selectedImage,
               appModel.canViewImage(selectedImage),
               let rootEntity = content.entities.first {
                Task {
                    await loadPanorama(for: rootEntity)
                    appModel.updateImmersiveView = false
                }
            }
        }
        .onAppear {
            print("ImmersiveView appeared")
        }
        .onDisappear {
            appModel.resetImmersiveState()
        }
    }

    // loadPanorama implementation...
}
```

What I've Tried: I set immersionStyle to .full as recommended in the documentation, confirmed that the immersive space is properly opened and displaying panoramas, and verified that the state management for the immersive space is working correctly.

Questions: How can I ensure that when the user enters the immersive panorama viewing experience, all other windows (the main interface and the Earth 3D globe) are automatically hidden? Is there a specific API or approach I'm missing to properly implement a fully immersive experience that hides all other windows? Do I need to manually dismiss the windows when opening the immersive space, and if so, what's the best approach for doing this? Any guidance or sample code would be greatly appreciated. Thank you!
3 replies · 0 boosts · 140 views · Apr ’25
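On the poster's third question, a sketch of the manual route (assuming the other windows should simply close once the space opens): dismiss them with SwiftUI's dismissWindow action right after openImmersiveSpace succeeds. The "Main" window id is hypothetical; the main WindowGroup in the post has no id and would need one for this to work.

```swift
import SwiftUI

// Sketch: open the immersive space, then explicitly dismiss the other windows.
struct EnterPanoramaButton: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace
    @Environment(\.dismissWindow) private var dismissWindow

    var body: some View {
        Button("View Panorama") {
            Task {
                if case .opened = await openImmersiveSpace(id: "ImmersiveView") {
                    dismissWindow(id: "Earth")   // the volumetric globe window
                    dismissWindow(id: "Main")    // hypothetical id for the main WindowGroup
                }
            }
        }
    }
}
```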
VisionOS 26 - threadsPerThreadgroup limit causing crash on device (but not in simulator)
Hi all, I'm running into an issue with an app that previously worked fine on device using visionOS 2.0. After updating to visionOS 26, the same code runs fine in the simulator but crashes on the device with the following error:

```
-[MTLDebugComputeCommandEncoder _validateThreadsPerThreadgroup:]:1330: failed assertion `(threadsPerThreadgroup.width(32) * threadsPerThreadgroup.height(32) * threadsPerThreadgroup.depth(1))(1024) must be <= 832. (kernel threadgroup size limit)`
```

Is there any documented way to check or increase the allowed threadsPerThreadgroup size on Apple Vision Pro? Or any recommended workaround for this regression? Thanks in advance!
3 replies · 0 boosts · 163 views · Jun ’25
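A sketch of one defensive pattern (an assumption, not a documented fix for the regression): derive the threadgroup size from the pipeline state's reported limits instead of hard-coding 32×32, so the dispatch respects whatever maxTotalThreadsPerThreadgroup the device reports (832 in the assertion above).

```swift
import Metal

// Sketch: size threadgroups from the pipeline's limits rather than a fixed 32x32.
func dispatch(encoder: MTLComputeCommandEncoder,
              pipeline: MTLComputePipelineState,
              gridSize: MTLSize) {
    let width = pipeline.threadExecutionWidth
    let height = pipeline.maxTotalThreadsPerThreadgroup / width
    let threadsPerThreadgroup = MTLSize(width: width, height: height, depth: 1)

    encoder.setComputePipelineState(pipeline)
    // dispatchThreads handles non-uniform threadgroups on Apple GPUs.
    encoder.dispatchThreads(gridSize, threadsPerThreadgroup: threadsPerThreadgroup)
}
```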
Exporting .reality files from Reality Composer Pro
I've been using the macOS Reality Composer app (bundled with Xcode) to export interactive .reality files that can be hosted on the web and linked to, triggering Quick Look to open the interactive AR experience. That works really well. I've just downloaded the Xcode 15 beta, which ships with the new Reality Composer Pro, and I can't see a way to export .reality files anymore. It seems that this is only for building content that ships in native iOS (etc.) apps, rather than content that can be viewed in Quick Look. Am I missing something, or is it no longer possible to export .reality files? Thanks.
3 replies · 2 boosts · 2.0k views · Jul ’25