Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Posts under Spatial Computing topic
visionOS 2.0 Main Camera Access Enterprise Entitlement Not Recognized in Xcode
I am working on a project that requires access to the main camera on the Vision Pro. My main account holder applied for the necessary enterprise entitlement; we were approved and received the Enterprise.license file by email. I added the Enterprise.license file to my project and manually added the com.apple.developer.arkit.main-camera-access.allow entitlement to the entitlements file, setting it to true, since it was not available in the list when I tried to use the + Capability button in the Signing & Capabilities tab. I am getting this error: Provisioning profile "iOS Team Provisioning Profile: " doesn't include the com.apple.developer.arkit.main-camera-access.allow entitlement. I have checked the provisioning profile settings online; there is no manual option for adding the main camera access entitlement, and the approval from the license does not seem to be applied.
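For context, this is the API that entitlement gates. A minimal sketch of the Enterprise camera-access flow, assuming visionOS 2's CameraFrameProvider; if the provisioning profile lacks the entitlement, running the session is where it fails:

```swift
import ARKit

// Minimal sketch: requires the Enterprise main-camera entitlement and license.
func startCameraFeed() async throws {
    let session = ARKitSession()
    let provider = CameraFrameProvider()
    try await session.run([provider]) // fails without the entitlement

    // Pick a supported format for the main camera.
    guard let format = CameraVideoFormat
        .supportedVideoFormats(for: .main, cameraPositions: [.left])
        .first else { return }

    guard let updates = provider.cameraFrameUpdates(for: format) else { return }
    for await frame in updates {
        _ = frame.sample(for: .left)?.pixelBuffer // one CVPixelBuffer per frame
    }
}
```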
6 replies · 0 boosts · 1.6k views · Sep ’25
Force quit apps on Vision Pro simulator
How do you force quit apps on the Vision Pro simulator? I've read online that you can double-press the Digital Crown on a real Vision Pro device, but there is no such button on the simulator. So far I've tried pressing the Home button twice (⇧⌘H) and pressing the Siri button twice (⌥⇧⌘H); neither works. I'm on Xcode 15.0 beta 5 (15A5209g) and visionOS 1.0 beta 2 (21N5207e).
6 replies · 2 boosts · 3.1k views · Feb ’25
Manipulation stops working when changing rooms
This post documents an issue I reported in feedback FB19610114, in the hope that someone knows of a workaround. Here is a copy of the feedback.

Short version: Manipulation (SwiftUI or RealityKit) fails to translate entities after changing rooms. By changing rooms, I mean a person wearing an Apple Vision Pro leaving one room and entering another. Once this issue occurs, it impacts all apps that use these features. A device restart is the only fix I have found.

Feedback FB19610114: This is an odd one. I'm using the new ManipulationComponent in visionOS 26. Most of the time this works well. Sometimes it stops working, and when it does, the only way to get it working again is to reboot the headset. When this happens, I can continue to rotate and scale items, but translation no longer works; it is as if the item is stuck to a fixed point in the parent scene (window, volume, etc.). When this bug occurs, it affects every app across the entire operating system that uses manipulation, including the RealityKit component AND the SwiftUI version. It is not limited to one app and is not limited to apps that I am working on; once this error occurs, it affects literally any application across the operating system that uses this API, including apps from Apple.

I won't speculate on the cause of this, but I do know of one way where I can always get it to happen. Here is how to reproduce it:

1. Make an Xcode project with a single entity that uses the ManipulationComponent (a minimal sketch follows below). There is no need to customize the configuration of this component; the default implementation will work.
2. Build and run this app on device. You can keep running from device, or quit and launch the app like normal on device.
3. Open the app and manipulate the entity. It should work as expected.
4. Physically walk into another room. It is vital that you leave the room you are in and enter a different room entirely.
5. Use the Digital Crown to recenter your view and bring your window or volume to you.
6. Test the manipulation on the entity again. It should still work as expected at this point.
7. Physically move yourself and your headset back into the original room where you started.
8. Use the Digital Crown to recenter your view and bring your window or volume to you.
9. Test the manipulation on the entity again. You should now see the issue.

When I follow the steps above, manipulation translation stops working at this point 100% of the time, and it impacts any application using this API. The only way to fix it is to restart my headset.

A few points to keep in mind:

- It does not matter whether an app is actively being run from Xcode.
- When this occurs, it impacts every single app, not just one.
- When this occurs, rotation and scaling continue to work, but the entity/view cannot be translated.
- This impacts BOTH the SwiftUI version and the RealityKit version.
- When this occurs, the only way to "fix" it is to reboot the device.
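For step 1 above, a minimal sketch of the setup I mean, assuming visionOS 26's ManipulationComponent (its configureEntity(_:) convenience should also add the collision and input-target components the gesture needs):

```swift
import SwiftUI
import RealityKit

// Minimal repro: one entity with the default ManipulationComponent.
struct ManipulationRepro: View {
    var body: some View {
        RealityView { content in
            let box = ModelEntity(
                mesh: .generateBox(size: 0.2),
                materials: [SimpleMaterial(color: .blue, isMetallic: false)]
            )
            ManipulationComponent.configureEntity(box) // default configuration
            content.add(box)
        }
    }
}
```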
6 replies · 3 boosts · 909 views · 1w
Scene's origin relative to portal's window?
I am experimenting with RealityKit to set up a portal. Everything works, but I was wondering: where is the scene's origin with respect to the front of the portal window? From experiments, the origin's X and Y appear to be at the center of the portal window, while the origin's Z appears to be about a meter behind the portal window. Is this (at least roughly) correct? Is it documented anywhere? PS: I began with the standard visionOS app template and edited the Reality Composer Pro file to create the scene.
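One way to measure this empirically, sketched below under the assumption of a plain PortalComponent setup: drop a small marker at the portal world's origin and see where it lands relative to the portal plane.

```swift
import RealityKit

// Probe sketch: a marker at (0, 0, 0) inside the portal world makes the
// origin's offset from the portal plane visible at a glance.
func makePortalWorld() -> (world: Entity, portal: ModelEntity) {
    let world = Entity()
    world.components.set(WorldComponent())

    let marker = ModelEntity(
        mesh: .generateSphere(radius: 0.02),
        materials: [SimpleMaterial(color: .red, isMetallic: false)]
    )
    marker.position = .zero // sits exactly at the scene origin
    world.addChild(marker)

    let portal = ModelEntity(
        mesh: .generatePlane(width: 1, height: 1),
        materials: [PortalMaterial()]
    )
    portal.components.set(PortalComponent(target: world))
    return (world, portal)
}
```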
5 replies · 0 boosts · 655 views · Mar ’25
Vision Pro build failed
```
error: [xrsimulator] Component Compatibility: EnvironmentLightingConfiguration not available for 'xros 1.0', please update 'platforms' array in Package.swift
error: [xrsimulator] Exception thrown during compile: compileFailedBecause(reason: "compatibility faults")
error: Tool exited with code 1
```
5 replies · 0 boosts · 677 views · Jul ’25
Photogrammetry Session - new model?
Hi Apple Team, We noticed the following exciting changelog entry in the latest macOS 26 beta: "A new algorithm significantly improves PhotogrammetrySession reconstruction quality of low-texture objects not captured with the ObjectCaptureSession front end. It will be downloaded and cached once in the background when the PhotogrammetrySession is used at runtime. If network isn't available at that time, the old low quality model will be used until the new one can be downloaded. There is no code change needed to get this improved model. (145220451)" However, after trying this on the latest beta and running some tests, we do not see any differences on objects with low texture, such as single-coloured surfaces. Is there anything we are missing? The machine is definitely connected to the internet, but we have no way of knowing from the logs whether the new model is being used. Thanks!
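For reference, a minimal sketch of the harness we use to compare runs across OS versions (paths and the detail level are arbitrary); per the changelog, no code change should be needed:

```swift
import RealityKit

// Run the same low-texture image folder through PhotogrammetrySession on
// macOS 15 and on the macOS 26 beta, then diff the resulting models.
func reconstruct(images: URL, output: URL) async throws {
    let session = try PhotogrammetrySession(
        input: images,
        configuration: PhotogrammetrySession.Configuration()
    )
    try session.process(requests: [.modelFile(url: output, detail: .medium)])
    for try await result in session.outputs {
        switch result {
        case .processingComplete:
            print("Reconstruction finished: \(output.path)")
        case .requestError(let request, let error):
            print("Request \(request) failed: \(error)")
        default:
            break // progress and other events ignored in this sketch
        }
    }
}
```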
5 replies · 1 boost · 625 views · Jul ’25
ARSkeleton3D modelTransform always return nil
I use ARKit for motion tracking. I get the skeleton joint coordinates and use them for animation. I didn't make any changes to the code, but after updating the iOS version from 18 to 26, modelTransform now always returns nil. https://developer.apple.com/documentation/arkit/arskeleton3d/modeltransform(for:) For example: bodyAnchor.skeleton.modelTransform(for: .init(rawValue: "head_joint")) where bodyAnchor is an ARBodyAnchor. I see the default skeleton on the screen, but now I can't get the coordinates out of it. I'm using an example from Apple's WWDC presentation: https://developer.apple.com/documentation/arkit/capturing-body-motion-in-3d Are there any changes in the API, or is this just a bug?
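A hedged sanity check while waiting for an answer: prefer the predefined joint name over a raw string, and dump the skeleton definition's joint list to confirm "head_joint" still exists on iOS 26.

```swift
import ARKit

// Returns nil if the joint is untracked or the name no longer matches.
func headTransform(for bodyAnchor: ARBodyAnchor) -> simd_float4x4? {
    let skeleton = bodyAnchor.skeleton
    print(skeleton.definition.jointNames) // inspect the actual joint names
    return skeleton.modelTransform(for: .head) // predefined name for "head_joint"
}
```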
5 replies · 0 boosts · 781 views · Nov ’25
Developer Strap Gen 2 - Only USB2 Speeds
I am testing out the Gen 2 developer strap on my Vision Pro (M2), and I have only been able to get USB 2 speeds when connecting it to my M3 Max MacBook Pro. I used the official Apple Thunderbolt 4 cable, which does reach Thunderbolt speeds with my T7 Touch drive. Has anyone figured out a solution for this issue? The Gen 2 developer strap does advertise 20 Gb/s speeds.
5 replies · 3 boosts · 1.3k views · Nov ’25
RC Pro Timeline Notification Not Received in Xcode
I'm having a heck of a time getting this to work. I'm trying to add an event notification at the end of a timeline animation to trigger something in code, but I'm not receiving the notification from RC Pro. I've watched that Compose Interactive 3D Content video quite a few times now and have tried many different ways. RC Pro has the correct ID names on the notifications. I'm not a programmer at all, just a lowly 3D artist. Here is my code:

```swift
import SwiftUI
import RealityKit
import RealityKitContent

extension Notification.Name {
    static let button1Pressed = Notification.Name("button1pressed")
    static let button2Pressed = Notification.Name("button2pressed")
    static let button3Pressed = Notification.Name("button3pressed")
}

struct MainButtons: View {
    @State private var transitionToNextSceneForButton1 = false
    @State private var transitionToNextSceneForButton2 = false
    @State private var transitionToNextSceneForButton3 = false
    @Environment(AppModel.self) var appModel
    @Environment(\.dismissWindow) var dismissWindow

    // Notification publishers for each button
    private let button1PressedReceived = NotificationCenter.default.publisher(for: .button1Pressed)
    private let button2PressedReceived = NotificationCenter.default.publisher(for: .button2Pressed)
    private let button3PressedReceived = NotificationCenter.default.publisher(for: .button3Pressed)

    var body: some View {
        ZStack {
            RealityView { content in
                // Load your RC Pro scene that contains the 3D buttons.
                if let immersiveContentEntity = try? await Entity(named: "MainButtons", in: realityKitContentBundle) {
                    content.add(immersiveContentEntity)
                }
            }
            // Optionally attach a gesture if you want to debug a generic tap:
            .gesture(
                TapGesture().targetedToAnyEntity().onEnded { value in
                    print("3D Object tapped")
                    _ = value.entity.applyTapForBehaviors()
                    // Do not post a test notification here—rely on RC Pro timeline events.
                }
            )
        }
        .onAppear {
            dismissWindow(id: "main")
            // Remove any test notification posting code.
        }
        // Listen for distinct button notifications.
        .onReceive(button1PressedReceived) { output in
            print("Button 1 pressed notification received")
            transitionToNextSceneForButton1 = true
        }
        .onReceive(button2PressedReceived.receive(on: DispatchQueue.main)) { _ in
            print("Button 2 pressed notification received")
            transitionToNextSceneForButton2 = true
        }
        .onReceive(button3PressedReceived.receive(on: DispatchQueue.main)) { _ in
            print("Button 3 pressed notification received")
            transitionToNextSceneForButton3 = true
        }
        // Present next scenes for each button as needed. For example, for button 1:
        .fullScreenCover(isPresented: $transitionToNextSceneForButton1) {
            FacilityTour()
                .environment(appModel)
        }
        // You can add additional fullScreenCover modifiers for button 2 and 3 transitions.
    }
}
```
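One possible culprit, hedged: per Apple's "Compose interactive 3D content in Reality Composer Pro" sample, timeline Notification actions are not posted under your custom names; they arrive as a single NotificationCenter notification named "RealityKit.NotificationTrigger", with the identifier you typed in RC Pro carried in userInfo. A sketch of listening that way (the name and userInfo key are assumed from that sample; verify against your RCP version):

```swift
import SwiftUI

// Branch on the RC Pro identifier carried in the shared notification.
struct TimelineNotifications: ViewModifier {
    private let publisher = NotificationCenter.default.publisher(
        for: Notification.Name("RealityKit.NotificationTrigger")
    )

    func body(content: Content) -> some View {
        content.onReceive(publisher) { notification in
            guard let id = notification.userInfo?["RealityKit.NotificationTrigger.Identifier"] as? String else {
                return
            }
            switch id {
            case "button1pressed": print("Button 1 timeline fired")
            case "button2pressed": print("Button 2 timeline fired")
            case "button3pressed": print("Button 3 timeline fired")
            default: break
            }
        }
    }
}
```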
5 replies · 0 boosts · 448 views · Sep ’25
Request File Access from Unity for Apple Vision Pro
Hi, I am trying to load files from the Apple Vision Pro's storage into a Unity app (using the Apple visionOS XR Plugin, not the PolySpatial package). So far, I've tried UnitySimpleFileBrowser and UnityStandaloneFileBrowser (neither is made for the Vision Pro, and they don't work there), and then implemented my own naive file browser that at least allows me to view the directories visible from the App Sandbox. This is of course very limited: gray folders can't be accessed, and the only 3 available ones don't contain anything where a user would put files through the "Files" app. I know that an app can request access to these "Files & Folders". So my question is: is there a way to request this access for a Unity-built app at the moment? If yes, what do I need to do? I've looked into the generated Xcode project's "Capabilities", but did not find anything related to file access. Any help is appreciated!
5 replies · 0 boosts · 374 views · Oct ’25
Hidden window/volume system overlays in Full Space
When I show a window while a sky sphere is shown, the handles to drag/close/resize the window are hidden. The colliders still work, so the handles are there, but only their visuals are hidden. I already know from another project that this also happens to volumes. They only appear once you get closer to the window or if the sky sphere is removed. Is this a known issue, or is there a fix? .persistentSystemOverlays(.visible) does not fix it. Xcode 16.3.0 Beta, visionOS 2.4.
5 replies · 0 boosts · 357 views · Mar ’25
Digital Crown press when both immersive space and additional windows are presented
I have been experimenting with the Hello World sample app from https://developer.apple.com/documentation/visionos/world and I came across behavior that appears inconsistent with the user-facing documentation describing the device controls at https://support.apple.com/en-gb/guide/apple-vision-pro/tan1e2a29e00/visionos I tried pressing the simulator's "Home" button while the "Objects in Orbit" immersive space was presented alongside the main application window. According to the user documentation, pressing the Digital Crown should take the user directly to the Home View. In my test, a single press only dismissed the immersive space; I needed another press to "exit" the app and go to the Home View. Is this behavior expected? I am assuming that the "Home" button in the simulator behaves as if the user pressed the Digital Crown on the device; I don't have access to the actual hardware.
5 replies · 0 boosts · 412 views · Apr ’25
WWDC 25 RemoteImmersiveSpace - Support for Passthrough Mode? RealityKit?
This is related to the WWDC presentation "What's new in Metal rendering for immersive apps", specifically the macOS spatial streaming to visionOS feature (for reference: the page in the docs). The presentation demonstrates it using a full immersive space and Metal rendering via Compositor Services. I'd like clarity on a few things:

- Is the remote device wireless, or must the visionOS device be connected via a wired connection?
- Is there a limit to the number of remote devices, and if not, could macOS render different things per remote device simultaneously?
- Can I also use mixed mode with passthrough enabled, instead of just a fully immersive mode?
- Can I use RealityKit instead of Metal? If so, may I have an example, or would someone point me to one?

A rough sketch of my current understanding of the macOS side follows below.
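The sketch, for discussion only: the scene and configuration names here are from memory of the session and may not match the final API, so treat all of it as an assumption rather than working code.

```swift
import SwiftUI
import CompositorServices

// Assumed shape: a macOS app exposing a RemoteImmersiveSpace that a
// Vision Pro connects to, rendered with Metal via Compositor Services.
struct StreamConfiguration: CompositorLayerConfiguration {
    func makeConfiguration(
        capabilities: LayerRenderer.Capabilities,
        configuration: inout LayerRenderer.Configuration
    ) {
        // Accept defaults; adjust pixel formats or layout here if needed.
    }
}

@main
struct SpatialStreamingApp: App {
    var body: some Scene {
        RemoteImmersiveSpace(id: "Stream") {
            CompositorLayer(configuration: StreamConfiguration()) { layerRenderer in
                // Drive a Metal render loop from layerRenderer here.
            }
        }
    }
}
```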
5 replies · 0 boosts · 598 views · Sep ’25
RCP Scene issues at runtime (visionOS 26 / Xcode 26 Beta 4)
I have a scene that was assembled in RCP, but I'm losing the correct hierarchy and transforms when running the scene on the headset or in the simulator. This is in RCP: [screenshot] This is at runtime with the debugger: [screenshot] As you can see, the "MAIN_WAGON" entity is gone, and parts of the hierarchy are now children of "TRAIN_ROOT" instead. Another issue is that not only does part of the hierarchy disappear, the transform also reverts to default values instead of what is set in RCP. This is in RCP: [screenshot] This is in the simulator/headset: [screenshot] I'm filing a feedback ticket too and will post the number here. Has anyone had a similar issue and found a fix or workaround?
5 replies · 0 boosts · 284 views · Aug ’25
Portal crossing causes inconsistent lighting and visual artifacts between virtual and real spaces (visionOS 2.0)
Hello, I'm working with the new PortalComponent introduced in visionOS 2.0, and I've encountered some issues when transitioning entities between virtual and real-world spaces using crossingMode. Specifically:

- Lighting inconsistency: When CG content (ModelEntities with PhysicallyBasedMaterial) crosses the portal from virtual space into the real environment, the way light reflects on the objects changes noticeably. This causes a jarring visual effect, as the same material appears differently depending on the space it's in.
- Unnatural transition visuals: During the transition, the CG models often appear to "emerge from the wall," especially when crossing from virtual to real. This ruins the immersive illusion and feels visually unnatural.
- IBL adjustment attempts: I've tried adding an ImageBasedLightComponent to the world entity, and while it slightly improves the lighting consistency, the issue still remains to a noticeable degree.

My goal is to create a seamless visual experience when CG entities cross between spaces, without sudden lighting shifts or immersion-breaking geometry reveals. Has anyone else experienced similar issues? Is there a recommended setup or workaround to better control lighting and visual fidelity when using crossingMode with portals in visionOS 2.0? Any guidance would be greatly appreciated. Thank you!
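For concreteness, a minimal sketch of the setup in question, assuming the visionOS 2 portal-crossing API (plane direction and sizes here are arbitrary):

```swift
import RealityKit

// Portal whose contents can physically cross into the real environment.
func makeCrossingPortal(world: Entity, crossingContent: Entity) -> ModelEntity {
    let portal = ModelEntity(
        mesh: .generatePlane(width: 1, height: 1),
        materials: [PortalMaterial()]
    )
    portal.components.set(
        PortalComponent(
            target: world,
            clippingMode: .plane(.positiveZ),
            crossingMode: .plane(.positiveZ)
        )
    )
    // Entities that should cross the portal plane need this component.
    crossingContent.components.set(PortalCrossingComponent())
    return portal
}
```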
5 replies · 0 boosts · 281 views · Jul ’25
A question about adding a grounding shadow on Vision Pro
I want to add a grounding shadow to my Entity in a RealityView on Vision Pro. However, it seems that the shadow can only appear on another Entity. So I am using plane detection in ARKit and adding a transparent plane to render the shadow:

```swift
let planeEntity = ModelEntity(
    mesh: .generatePlane(
        width: anchor.geometry.extent.width,
        height: anchor.geometry.extent.height
    ),
    materials: [material]
)
planeEntity.components.set(OpacityComponent(opacity: 0.0))
```

But sometimes there is a border around my Entity on the plane. I do not know why this happens, and I want to remove the border.
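In case it's useful, a minimal sketch of an alternative that may avoid the receiver plane entirely, assuming RealityKit's GroundingShadowComponent behaves as documented on visionOS:

```swift
import RealityKit

// Casts a grounding shadow onto detected real-world surfaces,
// with no explicit shadow-receiver plane required.
func addGroundingShadow(to entity: Entity) {
    entity.components.set(GroundingShadowComponent(castsShadow: true))
}
```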
5 replies · 0 boosts · 427 views · Mar ’25
Alternatives to SceneView
Hey there, since SceneView has been marked as deprecated for SwiftUI, I'm wondering which alternatives should be considered for the following situation: I have a SwiftUI app (for iOS and iPadOS) where users can view 3D models (USDZ) in a scene, with rotate, scale, and move gestures. The models are downloaded from a web backend and loaded via local URL paths.

What I tested:

- ARView in .nonAR mode and RealityView; however, I didn't get the expected behavior (user can rotate and scale the 3D models in a virtual space). ARView in .nonAR mode still shows the object like in normal AR mode, just without the camera stream.
- I tried adding gestures to the RealityView on iOS: loading USDZ 3D models worked, but the gestures didn't.
- Model3D is only available for visionOS (it would be amazing to have it for iOS).
- I also checked QuickLook Preview; however, it works pretty strangely via file picker etc., which is not the way users should load the 3D models in my app.

Maybe I missed something; I couldn't find anything that helps me. I'm pretty much stuck adopting the latest and greatest frameworks/APIs in my app and taking the next steps toward porting it to visionOS. Long story short 😃: does someone have an idea what the alternative to SceneView is for USDZ 3D models? See the sketch below for how far I got with RealityView gestures. I appreciate your support!! Thanks in advance!
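Not an authoritative answer, but one detail that is easy to miss with RealityView gestures on iOS, sketched below under that assumption: SwiftUI's entity-targeted gestures only hit entities that carry both an InputTargetComponent and a CollisionComponent. The model URL here is a hypothetical local path.

```swift
import SwiftUI
import RealityKit

// Sketch: RealityView on iOS 18+ showing a downloaded USDZ that can be
// rotated (drag) and scaled (pinch).
struct ModelViewer: View {
    let modelURL: URL

    var body: some View {
        RealityView { content in
            if let model = try? await Entity(contentsOf: modelURL) {
                // Without these two components, targeted gestures never fire.
                model.components.set(InputTargetComponent())
                model.generateCollisionShapes(recursive: true)
                content.add(model)
            }
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    // Crude turntable: horizontal drag rotates around Y.
                    let angle = Float(value.translation.width) * 0.005
                    value.entity.orientation = simd_quatf(angle: angle, axis: [0, 1, 0])
                }
        )
        .gesture(
            MagnifyGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    value.entity.scale = .one * Float(value.magnification)
                }
        )
    }
}
```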
4 replies · 0 boosts · 216 views · Jul ’25