Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Posts under Spatial Computing topic

Post · Replies · Boosts · Views · Activity

Spatial Gallery App functionality
Similar to the visionOS Spatial Gallery app, I'm developing a visionOS app that will show spatial photos and videos. Is it possible to re-create the horizontal (or vertical) scrolling functionality that shows spatial photo and spatial video previews? Does the Spatial Gallery app use private APIs to create this functionality? I've been looking at the Quick Look documentation and have been able to use PreviewApplication to show a single preview, but I don't see anything for a collection of files like the scrolling view the Spatial Gallery app presents. Any insights or direction on how this may be done are greatly appreciated.
Replies: 2 · Boosts: 0 · Views: 109 · Jun ’25
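For the Spatial Gallery question above, one way to approximate that browsing experience with public API is a horizontal ScrollView of previews that hands the whole collection to Quick Look. This is a minimal sketch, assuming the PreviewApplication.open(urls:selectedURL:) variant that accepts a collection of spatial media URLs is available on the deployment target; SpatialMediaStrip, mediaURLs, and the text placeholders standing in for thumbnails are hypothetical, not how the Spatial Gallery app is actually built.

import SwiftUI
import QuickLook

struct SpatialMediaStrip: View {
    // URLs of spatial photos/videos the app already has locally (assumption).
    let mediaURLs: [URL]

    var body: some View {
        ScrollView(.horizontal) {
            HStack(spacing: 24) {
                ForEach(mediaURLs, id: \.self) { url in
                    Button {
                        // Hand the whole collection to Quick Look and select the tapped item,
                        // so the user can swipe between the previews.
                        _ = PreviewApplication.open(urls: mediaURLs, selectedURL: url)
                    } label: {
                        // Placeholder label; a real app would render a generated thumbnail here.
                        Text(url.lastPathComponent)
                            .frame(width: 200, height: 150)
                            .glassBackgroundEffect()
                    }
                    .buttonStyle(.plain)
                }
            }
            .padding()
        }
    }
}

Passing the tapped URL as selectedURL keeps the rest of the collection reachable by swiping inside the Quick Look session, which is the closest public-API analogue to the Spatial Gallery strip I'm aware of.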
Looking for a way to implement the video display effect in Apple's 'Spatial Gallery'
Hi guys, I noticed that Apple created a really engaging visual effect for browsing spatial videos in the Spatial Gallery app. The video appears embedded in a glass panel with glowing edges and even shows a parallax effect as you move around. When I try to display a stereo video using RealityView, however, the video entity always floats above the panel. How does visionOS implement this effect? Is there an approach or example code I could use to achieve it in my own app? Thanks!
Replies: 3 · Boosts: 1 · Views: 176 · Jun ’25
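I'm not aware of a public API that reproduces the exact glass-panel, glow, and parallax treatment in the post above; that presentation appears to be custom to the system app. For the stereoscopic playback part only, here is a minimal sketch under the assumption that VideoPlayerComponent's desiredViewingMode can be set to .stereo for a spatial (MV-HEVC) asset; SpatialVideoView and videoURL are placeholder names.

import AVFoundation
import RealityKit
import SwiftUI

struct SpatialVideoView: View {
    // URL of an MV-HEVC spatial video file (assumption).
    let videoURL: URL

    var body: some View {
        RealityView { content in
            let player = AVPlayer(url: videoURL)
            let videoEntity = Entity()

            // VideoPlayerComponent renders the video onto an automatically generated mesh.
            var videoComponent = VideoPlayerComponent(avPlayer: player)
            // Request stereoscopic presentation when the asset supports it (assumption).
            videoComponent.desiredViewingMode = .stereo
            videoEntity.components.set(videoComponent)

            content.add(videoEntity)
            player.play()
        }
    }
}

The frame, glow, and head-tracked parallax would still have to be built around this entity by hand (for example with custom materials), since nothing here recreates the Spatial Gallery chrome itself.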
I need to loop my videoMaterial.
I need to loop my videoMaterial and I don't know how to make it happen in my code. I have included an image of my videoMaterial code. Any help making this happen will be greatly appreciated. Thank you, Christopher
Replies: 1 · Boosts: 0 · Views: 88 · Jun ’25
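The code in the post above isn't visible here, but one common way to loop a RealityKit VideoMaterial is to back it with an AVQueuePlayer driven by AVPlayerLooper. A minimal sketch, assuming the material is applied to a ModelEntity elsewhere; LoopingVideoSource is just an illustrative helper name.

import AVFoundation
import RealityKit

/// Builds a looping VideoMaterial. Keep strong references to the player
/// and the looper, otherwise looping stops when they are deallocated.
final class LoopingVideoSource {
    let player: AVQueuePlayer
    let looper: AVPlayerLooper
    let material: VideoMaterial

    init(url: URL) {
        let item = AVPlayerItem(url: url)
        let queuePlayer = AVQueuePlayer()
        // AVPlayerLooper keeps re-enqueueing the item so playback repeats seamlessly.
        self.looper = AVPlayerLooper(player: queuePlayer, templateItem: item)
        self.player = queuePlayer
        self.material = VideoMaterial(avPlayer: queuePlayer)
        queuePlayer.play()
    }
}

An alternative is to observe AVPlayerItemDidPlayToEndTime on a plain AVPlayer and seek back to .zero, but AVPlayerLooper generally gives a cleaner loop point.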
Entity jittering when using hand anchor
Hello! I'm a developer of Ping Pong for Apple Vision Pro (Ping Pong Club). I'm experiencing an issue with collisions between the ball and the paddle. Sometimes the ball unexpectedly flies away as if it had been hit too hard. I believe this is happening because the hand anchors are jittering (the ping pong paddle has a hand anchor component), and at the moment of collision this produces unwanted micro-motion. I've built a demo app to demonstrate this: https://drive.google.com/file/d/1gBh-DsXuVZdvDrrD7gJcbJ_hg7tvVKUV/view?usp=share_link Video: https://drive.google.com/file/d/1poPbsJdBz87edqrWIfBt1eSVzfm5CTek/view?usp=sharing Is there a way to add some threshold to prevent the jittering?
Replies: 0 · Boosts: 0 · Views: 67 · Jun ’25
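There is no built-in jitter threshold I can point to for the post above, but a generic low-pass filter over the hand-anchor pose usually damps this kind of micro-motion. A sketch of exponential smoothing, under the assumption that the app applies the hand transform to the paddle entity itself each frame; TransformSmoother and the 0.3 factor are illustrative, not an Apple API.

import simd

/// Exponential smoothing of a hand-anchored pose to damp per-frame jitter.
/// A `smoothing` value near 1 follows the hand tightly; near 0 filters more heavily.
struct TransformSmoother {
    private var smoothedPosition: SIMD3<Float>?
    private var smoothedRotation: simd_quatf?

    mutating func smooth(position: SIMD3<Float>,
                         rotation: simd_quatf,
                         smoothing: Float = 0.3) -> (SIMD3<Float>, simd_quatf) {
        guard let prevPos = smoothedPosition, let prevRot = smoothedRotation else {
            // First sample: nothing to blend with yet.
            smoothedPosition = position
            smoothedRotation = rotation
            return (position, rotation)
        }
        // Interpolate toward the new sample instead of snapping to it.
        let newPos = mix(prevPos, position, t: smoothing)
        let newRot = simd_slerp(prevRot, rotation, smoothing)
        smoothedPosition = newPos
        smoothedRotation = newRot
        return (newPos, newRot)
    }
}

Feeding the smoothed position/rotation to the paddle before the physics step trades a little latency for stability; increasing the paddle's mass or damping in its physics body is another lever worth trying.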
Metal (Compositor Services) or RealityKit on visionOS
I am developing a visionOS app. I am very interested in Metal and Compositor Services, but I have not explored them in depth. I know that Metal offers a greater degree of control. I am wondering whether using Compositor Services gives me fewer AR capabilities than RealityKit (such as scene reconstruction and understanding, hover effects, etc.).
Replies: 4 · Boosts: 0 · Views: 153 · Jun ’25
Creating a voxel mesh and rendering it using Metal within a RealityKit ImmersiveView
Hi everyone, I'm creating an educational app that allows doing computational design in an immersive environment on the Vision Pro. The app is free and can be found here: https://apps.apple.com/us/app/arcade-topology/id6742103633 The problem I have is that the voxel mesh I currently create uses ModelEntity instances, and I recently read that this is terrible for scalability. I'm already starting to see issues when I try to use thousands of voxels. I also read somewhere that I should take advantage of the GPU and use Metal to that end. I was wondering if someone could point me to a tutorial or article that discusses this. In essence, I need to create a 3D voxel mesh, and those voxels have to update their opacity within an iterative loop. Thanks! —Alejandro
Replies: 3 · Boosts: 0 · Views: 99 · Jun ’25
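Before reaching for Metal for the voxel question above, one common scalability step is to merge all voxels into a single MeshResource built from a MeshDescriptor, so the whole grid renders as one ModelEntity instead of thousands. A sketch under the assumption that the voxels are axis-aligned cubes of equal size; makeVoxelMesh is a hypothetical helper, and normals and per-voxel opacity are deliberately left out.

import RealityKit
import simd

/// Builds one merged MeshResource containing a cube for every voxel center.
/// `centers` are assumed voxel centers; `size` is the voxel edge length.
func makeVoxelMesh(centers: [SIMD3<Float>], size: Float) throws -> MeshResource {
    // Unit-cube corners, scaled and translated per voxel.
    let corners: [SIMD3<Float>] = [
        [-0.5, -0.5, -0.5], [0.5, -0.5, -0.5], [0.5, 0.5, -0.5], [-0.5, 0.5, -0.5],
        [-0.5, -0.5,  0.5], [0.5, -0.5,  0.5], [0.5, 0.5,  0.5], [-0.5, 0.5,  0.5]
    ]
    // Two counter-clockwise triangles per face, indexing the 8 corners above.
    let cubeIndices: [UInt32] = [
        0, 2, 1, 0, 3, 2,   // back
        4, 5, 6, 4, 6, 7,   // front
        0, 1, 5, 0, 5, 4,   // bottom
        3, 7, 6, 3, 6, 2,   // top
        0, 4, 7, 0, 7, 3,   // left
        1, 2, 6, 1, 6, 5    // right
    ]

    var positions: [SIMD3<Float>] = []
    var indices: [UInt32] = []
    positions.reserveCapacity(centers.count * 8)
    indices.reserveCapacity(centers.count * cubeIndices.count)

    for (i, center) in centers.enumerated() {
        let base = UInt32(i * 8)
        positions.append(contentsOf: corners.map { center + $0 * size })
        indices.append(contentsOf: cubeIndices.map { base + $0 })
    }

    var descriptor = MeshDescriptor(name: "voxels")
    descriptor.positions = MeshBuffer(positions)
    descriptor.primitives = .triangles(indices)
    // Normals are omitted here for brevity.
    return try MeshResource.generate(from: [descriptor])
}

Per-voxel opacity does not map cleanly onto a single merged material, so a practical compromise is to bucket voxels into a handful of opacity groups (one merged mesh per group), or to move to a custom Metal/LowLevelMesh pipeline as the post suggests when the updates become per-frame.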
Issue: Closing Bounded Volume Never Re-Opens
Greetings. I am having this issue with a Unity PolySpatial visionOS app. We have our main Bounded Volume for our app. We have other native UI windows that appear when we interact with objects in our Bounded Volume. If a user closes our main Bounded Volume, sometimes it quits the app and sometimes it doesn't. If we go back to the Home Screen and reopen the app, our main Bounded Volume doesn't always appear, and only the native UI windows we left open are visible. But we can sometimes still hear sounds that are playing in our Bounded Volume. What solutions are there to make sure our Bounded Volume always appears when the app is open?
Replies: 1 · Boosts: 0 · Views: 83 · Jun ’25
SpatialEventGesture Not Working to Show Hidden Menu in Immersive Panorama View - visionOS
Problem Description
I'm developing a Vision Pro app that displays 360° panoramic photos in a full immersive space. I have a floating menu that auto-hides after 5 seconds, and I want users to be able to show the menu again using spatial gestures (particularly pinch gestures) when it's hidden. However, the SpatialEventGesture implementation is not working as expected. The menu doesn't appear when users perform pinch gestures or other spatial interactions in the immersive space.

Current Implementation
Here's the relevant gesture detection code in my ImmersiveView:

import SwiftUI
import RealityKit

struct ImmersiveView: View {
    @EnvironmentObject var appModel: AppModel
    @Environment(\.openWindow) private var openWindow

    var body: some View {
        RealityView { content in
            // RealityView content setup with panoramic sphere...
            let rootEntity = Entity()
            content.add(rootEntity)
            // Load panoramic content here...
        }
        // Using SpatialEventGesture to handle multiple spatial gestures
        .gesture(
            SpatialEventGesture()
                .onEnded { eventCollection in
                    // Check menu visibility state
                    if !appModel.isPanoramaMenuVisible {
                        // Iterate through event collection to handle various gestures
                        for event in eventCollection {
                            switch event.kind {
                            case .touch:
                                print("Detected spatial touch gesture, showing menu")
                                showMenuWithGesture()
                                return
                            case .indirectPinch:
                                print("Detected spatial pinch gesture, showing menu")
                                showMenuWithGesture()
                                return
                            case .pointer:
                                print("Detected spatial pointer gesture, showing menu")
                                showMenuWithGesture()
                                return
                            @unknown default:
                                print("Detected unknown spatial gesture: \(event.kind)")
                                showMenuWithGesture()
                                return
                            }
                        }
                    }
                }
        )
        // Keep long press gesture as backup
        .simultaneousGesture(
            LongPressGesture(minimumDuration: 1.5)
                .onEnded { _ in
                    if !appModel.isPanoramaMenuVisible {
                        print("Detected long press gesture, showing menu")
                        showMenuWithGesture()
                    }
                }
        )
    }

    private func showMenuWithGesture() {
        if !appModel.isPanoramaMenuVisible {
            appModel.showPanoramaMenu()
            if !appModel.windowExists(id: "PanoramaMenu") {
                openWindow(id: "PanoramaMenu", value: "menu")
            }
        }
    }
}

What I've Tried
• Multiple SpatialTapGesture approaches: Originally tried using multiple .gesture() modifiers with SpatialTapGesture(count: 1) and SpatialTapGesture(count: 2), but realized they override each other.
• SpatialEventGesture implementation: Switched to SpatialEventGesture to handle multiple event types (.touch, .indirectPinch, .pointer), but pinch gestures still don't trigger the menu.
• Added debugging: Console logs show that the gesture callbacks are never called when performing pinch gestures in the immersive space.
• Backup LongPressGesture: Added a simultaneous long press gesture as backup, which also doesn't work consistently.

Expected Behavior
When the panorama menu is hidden (after the 5-second auto-hide), users should be able to:
• Perform a pinch gesture (indirect pinch) to show the menu
• Tap in space to show the menu
• Use other spatial gestures to show the menu

Questions
• Is SpatialEventGesture the correct approach for detecting gestures in a full immersive RealityView?
• Are there any special considerations for gesture detection when the RealityView contains a large panoramic sphere that might be intercepting gestures?
• Should I be using a different gesture approach for visionOS immersive spaces?
• Is there a way to ensure gestures work even when the RealityView content (panoramic sphere) might be blocking them?

Environment
• Xcode 16.1
• visionOS 2.5
• Testing on Vision Pro device
• App uses SwiftUI + RealityKit

Any guidance on the proper way to implement spatial gesture detection in visionOS immersive spaces would be greatly appreciated!

Additional Context
The app manages multiple windows, and the gesture detection should work specifically when in the immersive panorama mode with the menu hidden. Thank you for any help or suggestions!
Replies: 1 · Boosts: 0 · Views: 143 · Jun ’25
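One detail that can explain the silent callbacks in the post above: in a RealityView, spatial gestures hit-test against entities, and an entity only participates if it carries both an InputTargetComponent and a CollisionComponent. A hedged sketch of that setup, assuming the panorama is (or is parented to) rootEntity; the 10 m collision radius is an arbitrary assumption, and whether hit-testing registers from inside the sphere is worth verifying on device.

import SwiftUI
import RealityKit

struct ImmersivePanoramaGestureView: View {
    var body: some View {
        RealityView { content in
            let rootEntity = Entity()

            // Give the panorama entity a collision shape and mark it as an input target
            // so spatial gestures (tap / indirect pinch) can hit-test against it.
            rootEntity.components.set(InputTargetComponent())
            rootEntity.components.set(
                CollisionComponent(shapes: [.generateSphere(radius: 10)],
                                   isStatic: true)
            )
            content.add(rootEntity)
        }
        // Target the gesture at entities; fires when the pinch hit-tests the sphere.
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { _ in
                    print("Tapped panorama, show menu here")
                }
        )
    }
}

If entity targeting works, the SpatialEventGesture and LongPressGesture fallbacks in the post may become unnecessary.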
RealityView camera feed not shown
I have two RealityViews: ParentView and ChildView. When I tap the button in ParentView, ChildView is presented as a full screen cover, but the camera feed in ChildView is not shown, only a black screen. If I show ChildView directly, the camera feed works. Can you please help me with this issue? Thanks.

import RealityKit
import SwiftUI

struct ParentView: View {
    @State private var showIt = false

    var body: some View {
        ZStack {
            RealityView { content in
                content.camera = .virtual
                let box = ModelEntity(
                    mesh: MeshResource.generateSphere(radius: 0.2),
                    materials: [createSimpleMaterial(color: .red)]
                )
                content.add(box)
            }
            Button("Click here") {
                showIt = true
            }
        }
        .fullScreenCover(isPresented: $showIt) {
            ChildView()
                .overlay(
                    Button("Close") {
                        showIt = false
                    }.padding(20),
                    alignment: .bottomLeading
                )
        }
        .ignoresSafeArea(.all)
    }
}

import ARKit
import RealityKit
import SwiftUI

struct ChildView: View {
    var body: some View {
        RealityView { content in
            content.camera = .spatialTracking
        }
    }
}
Replies: 0 · Boosts: 0 · Views: 127 · Jun ’25
Launching a Unity fully immersive game from SwiftUI
I am trying to launch a fully immersive game from Unity on a SwiftUI view. The game uses Metal rendering with Compositor Services. I added the Unity Xcode project to the workspace and added the necessary bridge code. When I click the button to call ufw?.showUnityWindow(), it does not start, and I get the following in the console: AR session failed to start after 5 seconds. Is the app configured to use an immersive space?
Replies: 2 · Boosts: 0 · Views: 88 · Jun ’25
Any recommended content-aware compression strategy for .ktx textures in Reality Composer Pro?
In my Reality Composer Pro workflow for Vision Pro development, I'm using xcrun realitytool image to pre-compress textures into .ktx format, typically using ASTC block compression. These textures are used for cubemaps and environment assets.
I've noticed that regardless of the image content—whether it's a highly detailed photo or a completely black image—once compressed with the same ASTC block size (e.g., ASTC_8x8), the resulting .ktx file size is nearly identical. There appears to be no content-aware logic that adapts the compression ratio to the actual texture complexity.
In contrast, Unreal Engine behaves differently: even when all cubemap faces are imported at the same resolution as DDS textures, the engine performs content-aware compression during packaging:
• Low-complexity images are compressed more aggressively
• The final packaged file size varies based on content complexity
Since Reality Composer Pro requires textures to be pre-compressed as .ktx, there's no opportunity for runtime optimization or per-image compression adjustment.
Just wondering: is there any recommended way to implement content-aware compression for .ktx textures in Reality Composer Pro? Or any best practices to optimize .ktx sizes based on image complexity? Thanks!
Replies: 0 · Boosts: 0 · Views: 79 · Jun ’25
What is the first reliable position of the Apple Vision Pro device?
In several visionOS apps, we readjust our scenes to the user's eye level (their head). But we have encountered issues whereby the WorldTrackingProvider returns bad/incorrect positions for the first x number of frames. See the code below, which you can copy-paste into any Immersive Space. Relaunch the space and observe that the numberOfBadWorldInfos value is inconsistent.
a. What is the most reliable way to get the device's position?
b. Is this indeed a bug?
c. Are we using worldInfo improperly?
d. As a workaround, in our apps we let 10 frames pass before using worldInfo. Should we set our threshold differently?

import ARKit
import Combine
import OSLog
import SwiftUI
import RealityKit
import RealityKitContent

let SUBSYSTEM = Bundle.main.bundleIdentifier!

struct ImmersiveView: View {
    let logger = Logger(subsystem: SUBSYSTEM, category: "ImmersiveView")
    let session = ARKitSession()
    let worldInfo = WorldTrackingProvider()

    @State var sceneUpdateSubscription: EventSubscription? = nil
    @State var deviceTransform: simd_float4x4? = nil
    @State var numberOfBadWorldInfos = 0
    @State var isBadWorldInfoLoged = false

    var body: some View {
        RealityView { content in
            try? await session.run([worldInfo])

            sceneUpdateSubscription = content.subscribe(to: SceneEvents.Update.self) { event in
                guard let pose = worldInfo.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
                    return
                }

                // `worldInfo` does not return correct values for the first few frames (exact number of frames is unknown)
                // - known SO: https://stackoverflow.com/questions/78396187/how-to-determine-the-first-reliable-position-of-the-apple-vision-pro-device
                deviceTransform = pose.originFromAnchorTransform

                if deviceTransform!.columns.3.y < 1.6 {
                    numberOfBadWorldInfos += 1
                    logger.warning("\(#function) \(#line) deviceTransform.columns.3.y \(deviceTransform!.columns.3.y), numberOfBadWorldInfos \(numberOfBadWorldInfos)")
                } else {
                    if !isBadWorldInfoLoged {
                        logger.info("\(#function) \(#line) deviceTransform.columns.3.y \(deviceTransform!.columns.3.y), numberOfBadWorldInfos \(numberOfBadWorldInfos)")
                    }
                    isBadWorldInfoLoged = true // stop logging.
                }
            }
        }
    }
}
Replies: 3 · Boosts: 0 · Views: 84 · May ’25
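Not an authoritative answer to (a)-(d) above, but one alternative to a fixed frame count is to gate on the provider's own reported state before trusting a pose. A small sketch; the helper name is hypothetical, and this does not guarantee that the very first poses after the provider starts running are already at eye level.

import ARKit
import QuartzCore
import simd

/// Returns a device pose only once world tracking reports it is running and a
/// device anchor is actually available; callers simply skip the frame otherwise.
func reliableDeviceTransform(from provider: WorldTrackingProvider) -> simd_float4x4? {
    // Ignore frames delivered before the provider has fully started.
    guard provider.state == .running else { return nil }
    guard let anchor = provider.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
        return nil
    }
    return anchor.originFromAnchorTransform
}

Combining this with a small sanity check like the post's 1.6 m height heuristic (or waiting for a few consecutive stable samples) is probably still prudent until Apple documents when the first pose is trustworthy.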
Custom Room Names Saved After Merger?
I am allowing users to go through and capture different rooms and add a custom label to each room. Is there a way to store data about this in the captured room so that it persists into the final merge? As it is now, my users mark all their rooms with custom labels, but after merging there is no way to tell which room is which, so they have to go through and manually add the labels back. For larger floor plans this is not ideal.
Replies: 0 · Boosts: 0 · Views: 60 · May ’25
white gap between objects in RealityView
I want to display a huge image in a RealityView in 3D space on Vision Pro. Of course, instead of one giant file I'm using many large images. To achieve this, I'm generating multiple planes exactly beside each other and putting one image on each. Although the planes are exactly adjacent, there is still a white gap between them (image below). Does anybody know how to fix this issue?
Replies: 0 · Boosts: 0 · Views: 106 · May ’25
PhotogrammetrySession Polygon Count Limit – How Is It Determined by Hardware?
Hi Apple Team, I'm working on a human portrait scanning application using PhotogrammetrySession, and I've been very impressed by the results. Thank you for building such a powerful and accessible photogrammetry solution into macOS!
I do, however, have a question regarding mesh detail limitations on different Mac hardware configurations. When using PhotogrammetrySession.Request.Detail.custom and trying to set maximumPolygonCount = 1000000, I see the following log message:
Clamped max poly count: 1000000 to device limit. 250000 is used.
This is on an M1 Max with 32 GB RAM. I'm aware that PhotogrammetrySession.limits can report values like maximumInputImageDimension and maximumNumberOfInputImages, but I haven't found documentation on how the maximumPolygonCount is determined, and what hardware specs influence it. Is it tied more to:
• GPU performance (e.g. neural/graphics cores)?
• CPU architecture?
• Memory size or bandwidth?
• Or is it fixed per SoC generation?
I'd love to understand what kind of hardware upgrades (e.g. moving to M4 Pro or increasing RAM) could allow me to increase mesh complexity and generate more detailed models. Any insights would be greatly appreciated—and if this is covered in upcoming WWDC sessions or documentation, I'd be happy to tune in. Thanks in advance! KitCheng
Replies: 0 · Boosts: 0 · Views: 79 · May ’25
Request: More Fine-Grained Control in Object Capture (PhotogrammetrySession)
Hi Apple Team and Developers,
First of all, I'd like to express my appreciation for the incredible results achieved using PhotogrammetrySession. I've been developing a portrait scanning app using Object Capture, and in many tests—especially with human models—I've found the reconstructed body surfaces are remarkably smooth and clean, often outperforming tools like Metashape and RealityCapture in terms of aesthetic results.
However, I've encountered some challenges when working with complex areas like long hair overlapping the face. For instance, with female models where strands of hair partially occlude the face, the resulting mesh tends to merge the hair and facial geometry. This leads to distorted or "melted" facial features, likely due to ambiguity in the geometry estimation phase.
Feature Suggestion:
Would it be possible to allow developers to supply two versions of the input images:
• One version (original) for texture generation
• A pre-processed version (e.g., contrast-enhanced or CLAHE filtered) to guide mesh reconstruction only
This would give us the flexibility to enhance edge features or shadow detail without affecting the final texture appearance. In other photogrammetry pipelines, applying image enhancement selectively before dense reconstruction improves geometry quality in low-contrast areas.
Question:
Is there any plan to support this kind of two-path workflow in future versions of PhotogrammetrySession? Or perhaps expose more intermediate stages or tunable parameters to developers? Also, any hints on what we can expect from WWDC 2025 regarding improvements to Object Capture or related vision/3D technologies?
Thanks again for this powerful API. Looking forward to hearing insights from the team and other developers.
Warm regards, KitCheng
Replies: 0 · Boosts: 0 · Views: 49 · May ’25