Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Posts under Spatial Computing topic

Post · Replies · Boosts · Views · Activity

Apple Vision Pro - hand tracking with gloves
I am considering adding finger-pad haptics (the data flow for haptic feedback is directed from the AVP to the fingers, not vice versa): simple piezos wired to a wrist-worn connection that holds the driver and battery. But I'm concerned this will impact hand tracking. Is there any guidance regarding gloves and/or the size of peripherals attached to the fingers? Or, if anyone knows of another inexpensive, low-profile option on the market, please let me know. Thanks
0 replies · 0 boosts · 226 views · Feb ’25
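For the glove question above, Apple doesn't publish tolerances for finger-mounted hardware, but you can measure the impact empirically by logging the tracking state ARKit reports with and without the peripherals on. A minimal visionOS sketch (standard ARKit-for-visionOS APIs; the class name and logging are just illustrative):

import ARKit

/// Logs how often each hand is reported as tracked, as a rough way to compare
/// bare hands against gloved hands with fingertip piezos attached.
final class HandTrackingProbe {
    private let session = ARKitSession()
    private let handTracking = HandTrackingProvider()

    func start() async throws {
        // Requires an open immersive space and hand-tracking authorization.
        try await session.run([handTracking])

        for await update in handTracking.anchorUpdates {
            let anchor = update.anchor
            // isTracked drops to false when the system loses the hand, which is
            // the signal a glove or finger-mounted device would degrade.
            print("\(anchor.chirality) hand tracked: \(anchor.isTracked)")
        }
    }
}
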
Camera settings at intrinsic calibration time
Hi everyone, I am wondering which settings the camera(s) were using at the time their intrinsics were calibrated. One aspect that is easy to find is the reference resolution of the images used for the intrinsic calibration, which can be retrieved via intrinsicMatrixReferenceDimensions; this makes sure the principal point is referenced to the resolution in use when the calibration was performed. However, I recently noticed that there are focusing modes that can physically displace the lens, for example:
• AutoFocusRangeRestriction: none, near, far
• setFocusModeLocked: locks the lens position at the specified value and sets the focus mode to a locked state.
My concern is the impact these lens displacements can have on the intrinsic matrix parameters: after the lens moves, those parameters may no longer describe the camera. In simple words, what focus mode/range were the cameras set to when they were calibrated for intrinsics?
0 replies · 0 boosts · 510 views · Jan ’25
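There doesn't appear to be a documented answer to the calibration-time focus state, but with AVFoundation you can at least lock the lens yourself and have the intrinsic matrix delivered per frame, so the matrix you consume matches the lens position you actually captured with. A hedged sketch (the function name and lens position value are illustrative):

import AVFoundation

/// Locks the lens and enables per-frame intrinsic matrix delivery, so the
/// intrinsics attached to each sample buffer reflect the actual capture state.
func configureCamera(device: AVCaptureDevice, output: AVCaptureVideoDataOutput) throws {
    try device.lockForConfiguration()
    if device.isLockingFocusWithCustomLensPositionSupported {
        // Example value only; pick the lens position you calibrated against.
        device.setFocusModeLocked(lensPosition: 0.5, completionHandler: nil)
    }
    device.unlockForConfiguration()

    // Ask AVFoundation to attach the intrinsic matrix to every sample buffer.
    if let connection = output.connection(with: .video),
       connection.isCameraIntrinsicMatrixDeliverySupported {
        connection.isCameraIntrinsicMatrixDeliveryEnabled = true
    }
}

// In captureOutput(_:didOutput:from:), read the per-frame matrix:
// let data = CMGetAttachment(sampleBuffer,
//                            key: kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix,
//                            attachmentModeOut: nil) as? Data
// let intrinsics = data?.withUnsafeBytes { $0.load(as: matrix_float3x3.self) }
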
I need to loop my videoMaterial.
I need to loop my videoMaterial and I don't know how to make it happen in my code. I have included an image of my videoMaterial code. Any help making this happen would be greatly appreciated. Thank you, Christopher
1 reply · 0 boosts · 117 views · Jun ’25
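For the looping question above, the usual approach is to drive the VideoMaterial with an AVQueuePlayer plus an AVPlayerLooper (VideoMaterial accepts any AVPlayer). A minimal sketch; the file name and property names are placeholders:

import AVFoundation
import RealityKit

/// Builds a VideoMaterial that loops its clip indefinitely.
/// Keep a strong reference to the returned looper, or looping stops.
func makeLoopingVideoMaterial(named name: String) -> (VideoMaterial, AVPlayerLooper)? {
    guard let url = Bundle.main.url(forResource: name, withExtension: "mp4") else {
        return nil
    }
    let item = AVPlayerItem(url: url)
    let player = AVQueuePlayer()
    let looper = AVPlayerLooper(player: player, templateItem: item) // re-enqueues the item forever
    let material = VideoMaterial(avPlayer: player)
    player.play()
    return (material, looper)
}

// Usage (illustrative):
// if let (material, looper) = makeLoopingVideoMaterial(named: "myClip") {
//     self.looper = looper                      // retain it somewhere
//     modelEntity.model?.materials = [material] // apply to your entity
// }
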
For a third year, no screenshot capability for immersive visionOS apps... here's a workaround?
Since only the user can take a screenshot (using the Apple Vision Pro's top buttons), the only workaround available to an immersive app that needs a screenshot to document the user's creative interior design choices is to:
1. ask the user to take a screenshot,
2. wait until the user taps a button indicating the screenshot has been taken,
3. have the app ask the user to select the screenshot when the app opens the PhotoPicker,
4. receive the screenshot in the app when the user presses Done.
One wonders why there is no Apple API for doing this in a simple, privacy-protective way, such as:
1. When called, the Apple API captures the screenshot into Apple-secured memory.
2. The API displays the screenshot to the user with appropriate privacy warnings and asks whether the user wants to (a) share this screenshot with the app, (b) cancel, or (c) retake the screenshot.
3. If the user approves, the app receives the screenshot.
3 replies · 0 boosts · 83 views · Jun ’25
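For the hand-off step described above, SwiftUI's PhotosPicker can filter the library to screenshots and return the one the user just took. A minimal sketch of that last step (view and property names are illustrative):

import SwiftUI
import PhotosUI

/// Lets the user hand a manually captured screenshot back to the app.
struct ScreenshotPickerView: View {
    @State private var selection: PhotosPickerItem?
    @State private var screenshot: UIImage?

    var body: some View {
        PhotosPicker("Select the screenshot you just took",
                     selection: $selection,
                     matching: .screenshots)   // filter the library to screenshots
            .onChange(of: selection) { _, item in
                Task {
                    // Load the picked asset as image data.
                    if let data = try? await item?.loadTransferable(type: Data.self) {
                        screenshot = UIImage(data: data)
                    }
                }
            }
    }
}
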
Question about running two ARViews together
First, I scan the first room using the RoomPlan API. Because I need to scan a second room, I stop the capture with “captureSession.stop(pauseARSession: false)”, so I believe the ARSession keeps working at that point. Second, before scanning the other room, I want to run another ARView (in order to detect some objects in the first room that RoomPlan does not detect). But at this point the second ARView (RoomPlan itself contains an ARView, I think) always shows a black screen and can't work normally. This is the problem I want to resolve; please help me get the second ARView working.
0 replies · 0 boosts · 109 views · Mar ’25
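For the black-screen question above, one thing worth trying (a sketch, not a confirmed fix, assuming the iOS 17 RoomPlan custom-ARSession initializer): let a single ARView own the ARSession, hand that session to RoomCaptureSession, and after stop(pauseARSession: false) keep using the same view and session for your own object detection, rather than creating a second ARView that spins up a competing session.

import ARKit
import RealityKit
import RoomPlan

/// Shares one ARSession between RoomPlan and a RealityKit ARView, so the
/// "second" view isn't starting its own session (a common cause of a black feed).
final class SharedSessionCoordinator {
    // One ARView owns the session for the whole flow.
    let arView = ARView(frame: .zero)
    lazy var captureSession = RoomCaptureSession(arSession: arView.session)

    func scanRoom() {
        let config = RoomCaptureSession.Configuration()
        captureSession.run(configuration: config)
    }

    func finishScanAndDetectObjects() {
        // Keep the underlying ARSession alive so the same camera feed continues in arView.
        captureSession.stop(pauseARSession: false)
        // Run custom object detection against arView.session frames here.
    }
}
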
Vision Pro build failed
error: [xrsimulator] Component Compatibility: EnvironmentLightingConfiguration not available for 'xros 1.0', please update 'platforms' array in Package.swift
error: [xrsimulator] Exception thrown during compile: compileFailedBecause(reason: "compatibility faults")
error: Tool exited with code 1
5 replies · 0 boosts · 635 views · Jul ’25
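The error above points at the 'platforms' declaration of the Reality Composer Pro Swift package: EnvironmentLightingConfiguration is not available on 'xros 1.0' (it requires visionOS 2, as far as I know), so a package still declaring visionOS 1.0 fails to compile the scene. A hedged sketch of the relevant Package.swift lines (the package name and tools version are examples; keep your own):

// swift-tools-version: 5.9
import PackageDescription

let package = Package(
    name: "RealityKitContent",   // example name; use your package's actual name
    platforms: [
        .visionOS("2.0")         // raise the minimum so visionOS 2 components compile
    ],
    targets: [
        .target(name: "RealityKitContent")
    ]
)
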
I am developing an immersive video app for visionOS but have an issue with the app and video player windows
In a visionOS app, we have two types of windows:
• Main app window: the default window that launches when the app starts. It displays the video listings and other primary content.
• Immersive space window: opens only when a user starts streaming or playing a video.
Issue: When entering the immersive space, the main app window remains visible in front of it unless manually closed. To avoid this, I currently close the main window when transitioning to the immersive space and reopen it when exiting. However, this causes the app to restart instead of resuming from its previous state.
Desired behavior: I want the main app window to retain its state and seamlessly resume from where it was before entering immersive mode, rather than restarting.
Attempts and challenges: I tried managing opacity, visibility, and state preservation, but none worked as expected. I couldn't find a way to push the main window to the background while bringing the immersive space to the foreground.
I'm looking for a solution that keeps the main window's state intact while transitioning between immersive and normal modes.
1 reply · 0 boosts · 118 views · Mar ’25
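For the window-state question above, one pattern worth trying (a sketch, under the assumption that the "restart" comes from the window's view tree being torn down) is to keep all UI state in a shared observable model owned by the App; the main window can then be dismissed while immersed and reopened afterwards without losing where the user was. Names like AppModel and the window IDs are illustrative:

import SwiftUI

@Observable
final class AppModel {
    // Whatever "where the user was" means for your app; names are illustrative.
    var selectedVideoID: String?
    var scrollOffset: Double = 0
}

@main
struct VideoApp: App {
    // Owned by the App, so it outlives any individual window.
    @State private var model = AppModel()

    var body: some Scene {
        WindowGroup(id: "main") {
            MainListView()
                .environment(model)
        }

        ImmersiveSpace(id: "player") {
            PlayerView()
                .environment(model)
        }
    }
}

struct MainListView: View {
    @Environment(AppModel.self) private var model
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace
    @Environment(\.dismissWindow) private var dismissWindow

    var body: some View {
        Button("Play") {
            Task {
                // Hide the main window while immersed; the shared model keeps its
                // state, so reopening the window later restores the same position.
                _ = await openImmersiveSpace(id: "player")
                dismissWindow(id: "main")
            }
        }
    }
}

struct PlayerView: View {
    @Environment(AppModel.self) private var model
    @Environment(\.openWindow) private var openWindow
    @Environment(\.dismissImmersiveSpace) private var dismissImmersiveSpace

    var body: some View {
        Button("Exit") {
            Task {
                openWindow(id: "main")   // comes back reading the same model state
                await dismissImmersiveSpace()
            }
        }
    }
}
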
PhotogrammetrySession Polygon Count Limit – How Is It Determined by Hardware?
Hi Apple Team, I’m working on a human portrait scanning application using PhotogrammetrySession, and I’ve been very impressed by the results. Thank you for building such a powerful and accessible photogrammetry solution into macOS! I do, however, have a question regarding mesh detail limitations on different Mac hardware configurations. When using PhotogrammetrySession.Request.Detail.custom and trying to set maximumPolygonCount = 1000000, I see the following log message:
Clamped max poly count: 1000000 to device limit. 250000 is used.
This is on an M1 Max with 32 GB RAM. I’m aware that PhotogrammetrySession.limits can report values like maximumInputImageDimension and maximumNumberOfInputImages, but I haven’t found documentation on how the maximumPolygonCount is determined, and what hardware specs influence it. Is it tied more to:
• GPU performance (e.g. neural/graphics cores)?
• CPU architecture?
• Memory size or bandwidth?
• Or is it fixed per SoC generation?
I’d love to understand what kind of hardware upgrades (e.g. moving to M4 Pro or increasing RAM) could allow me to increase mesh complexity and generate more detailed models. Any insights would be greatly appreciated, and if this is covered in upcoming WWDC sessions or documentation, I’d be happy to tune in. Thanks in advance! KitCheng
0 replies · 0 boosts · 114 views · May ’25
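I can't speak to how the per-device clamp is computed, but for anyone reproducing the log above, this is roughly how the custom-detail request is expressed; the customDetailSpecification property path below is from my recollection of the macOS 15 API and should be double-checked against the current headers:

import RealityKit

// Hedged sketch: requests a custom maximum polygon count and lets the session
// clamp it to the device limit (which produces the "Clamped max poly count" log).
func makeSession(input: URL, output: URL) throws -> PhotogrammetrySession {
    var configuration = PhotogrammetrySession.Configuration()
    // Property name as I recall it from the macOS 15 custom-detail API.
    configuration.customDetailSpecification.maximumPolygonCount = 1_000_000

    let session = try PhotogrammetrySession(input: input, configuration: configuration)
    try session.process(requests: [
        .modelFile(url: output, detail: .custom)
    ])
    return session
}
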
[VisionPro] Placing virtual entities around my arm
Hi everyone, I'm developing an MR Vision Pro app where I’d like to anchor virtual objects (such as UI elements) around the user's arm. However, I’ve noticed that Vision Pro seems to mask out the area where the user’s real arm is, hiding virtual content in that region so that you see your real arm. Is there a way to render virtual elements on the user's arm, so that it looks like the object is placed directly on the arm despite the real-world passthrough? I was hoping there might be a way to adjust the depth or behavior of this masked-out region. Any insights or workarounds would be greatly appreciated! Thanks :)
1 reply · 0 boosts · 94 views · Mar ’25
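For the arm question above, the cutout being described is the system's upper-limb passthrough masking, which can be turned off per immersive space with the upperLimbVisibility modifier; content still needs its own anchoring (hand anchors, for example) to actually follow the arm. A minimal sketch:

import SwiftUI
import RealityKit

@main
struct ArmOverlayApp: App {
    var body: some Scene {
        // (A real app would also declare a WindowGroup or set the preferred
        // default scene session role; this is only the immersive part.)
        ImmersiveSpace(id: "arm") {
            RealityView { content in
                // Anchor illustrative content to the left wrist.
                let wrist = AnchorEntity(.hand(.left, location: .wrist))
                let marker = ModelEntity(mesh: .generateSphere(radius: 0.02),
                                         materials: [SimpleMaterial(color: .cyan, isMetallic: false)])
                wrist.addChild(marker)
                content.add(wrist)
            }
        }
        // Stop the system from cutting the user's arms out of rendered content.
        .upperLimbVisibility(.hidden)
    }
}
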
Any recommended content-aware compression strategy for .ktx textures in Reality Composer Pro?
In my Reality Composer Pro workflow for Vision Pro development, I’m using xcrun realitytool image to pre-compress textures into .ktx format, typically using ASTC block compression. These textures are used for cubemaps and environment assets. I’ve noticed that regardless of the image content (whether it’s a highly detailed photo or a completely black image), once compressed with the same ASTC block size (e.g., ASTC_8x8), the resulting .ktx file size is nearly identical. There appears to be no content-aware logic that adapts the compression ratio to the actual texture complexity. In contrast, Unreal Engine behaves differently: even when all cubemap faces are imported at the same resolution as DDS textures, the engine performs content-aware compression during packaging:
• Low-complexity images are compressed more aggressively
• The final packaged file size varies based on content complexity
Since Reality Composer Pro requires textures to be pre-compressed as .ktx, there’s no opportunity for runtime optimization or per-image compression adjustment. Just wondering: is there any recommended way to implement content-aware compression for .ktx textures in Reality Composer Pro? Or any best practices to optimize .ktx sizes based on image complexity? Thanks!
0 replies · 0 boosts · 122 views · May ’25
How to add visual thickness to a glass background view
Hi guys, In visionOS, when using a ZStack decorated with .glassBackgroundEffect(), you can see the 3D glass background from the front, but when viewed from the side, the view appears to have no thickness. However, I noticed that in an app built by Apple, when viewing a glass background view from the side, it appears to have thickness. I tried adding .frame(depth:) to a glass background view, but it appears as two separate layers spaced by the depth value. My question is: Is there a view modifier that adds visual thickness to a glass background view, as shown in the picture? Or, if not, how should I write a custom view modifier to achieve this effect? Thanks!
0 replies · 0 boosts · 100 views · May ’25
How to make .blur(radius:) visually affect RealityView content?
According to the official documentation, the .blur(radius:) modifier should apply a Gaussian blur to a RealityView. However, when applied directly to a RealityView, nothing inside it (neither 2D attachments nor 3D entities) appears to be blurred. Here’s the test code:

struct ContentView: View {
    var body: some View {
        VStack(spacing: 20) {
            Text("Above the RealityView")
                .font(.title)

            RealityView { content, attachments in
                if let text = attachments.entity(for: "2dView") {
                    text.position.y = 0.1
                    content.add(text)
                }
                let box = ModelEntity(
                    mesh: .generateBox(size: 0.1),
                    materials: [SimpleMaterial(color: .red, isMetallic: true)]
                )
                content.add(box)
            } attachments: {
                Attachment(id: "2dView") {
                    Text("Above the Box")
                        .font(.title)
                }
            }
            .frame(width: 300, height: 300)
            .border(.blue)
            .blur(radius: 99) // Has no visual effect

            Text("Below the RealityView")
                .font(.subheadline)
        }
        .padding()
    }
}

My question: How can I make .blur(radius:) visually affect the content rendered in a RealityView? Can you provide a working example where .blur() visually affects any part of a RealityView? Thanks!
0 replies · 0 boosts · 101 views · May ’25
Launching a Unity fully immersive game from SwiftUI
I am trying to launch a fully immersive game from Unity in a SwiftUI view. The game uses Metal rendering with Compositor Services. I added the Unity Xcode project to the workspace and added the necessary bridge code. When I click the button that calls ufw?.showUnityWindow(), the game does not start and I get the following in the console:
AR session failed to start after 5 seconds. Is the app configured to use an immersive space?
2 replies · 0 boosts · 103 views · Jun ’25
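Regarding the "Is the app configured to use an immersive space?" error above: a Metal/Compositor Services renderer can only start inside an ImmersiveSpace scene backed by a CompositorLayer, and (as far as I know) the app also needs its preferred default scene session role set to the immersive-space application role in Info.plist. A hedged sketch of the SwiftUI side; the UnityBridge call is a placeholder for your existing bridge code:

import SwiftUI
import CompositorServices

@main
struct UnityHostApp: App {
    var body: some Scene {
        // A regular window for the SwiftUI front end.
        WindowGroup {
            LauncherView()
        }

        // The immersive space Compositor Services needs before rendering can start.
        ImmersiveSpace(id: "unityImmersive") {
            CompositorLayer { layerRenderer in
                // Hand the LayerRenderer to the Unity/Metal bridge here (name is illustrative):
                // UnityBridge.shared.startRendering(with: layerRenderer)
            }
        }
        .immersionStyle(selection: .constant(.full), in: .full)
    }
}

struct LauncherView: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace

    var body: some View {
        Button("Enter game") {
            Task {
                // Open the immersive space first, then trigger the Unity side.
                _ = await openImmersiveSpace(id: "unityImmersive")
            }
        }
    }
}
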
VisionOS Main Camera Enterprise API: Development license into distribution for Business Store
Hello, We've been working for months now on an app for the Vision Pro (it's been great, btw!). We already have an app in the App Store for iOS, and have been migrating our platform from the Microsoft HoloLens 2 to the AVP: https://apps.microsoft.com/detail/9NPPP031VHD1 We require Main Camera access and have already obtained the Enterprise.license for development purposes. Unfortunately, we cannot publish our business app (which uses an Enterprise API) under the same name/bundle ID as our iOS app because it would conflict with our current distribution method. We arrived at the conclusion that we need a new Enterprise.license under a different bundle ID to create a new app for the Business Store. Has anyone been in the same boat as us and tried to publish to the Business Store while already having an app in the public App Store under the same name? We applied to get another license for distribution under another name (with "Pro" at the end), but it's been stuck in limbo for over a month now (probably because the new bundle ID doesn't have any track record). Anyhow, thanks for any help; we're open to suggestions as to how to proceed!
0 replies · 0 boosts · 449 views · Feb ’25
Is it Possible to Place a 3D Model at the Exact Position of a QR Code in AR with ARKit?
I'm working on an iOS app using ARKit and RealityKit where I scan QR codes and want to place 3D models at the exact position of the QR code in the real world. Is it possible to accurately place a 3D model at the exact position of a QR code in AR using ARKit and RealityKit? Specifically, I want the model to appear at the precise location where the QR code is detected, rather than just somewhere in the AR space. If this is possible, could you point me in the right direction or recommend the best approach to achieve this? Thank you for your help!
0 replies · 0 boosts · 105 views · May ’25
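For the QR-code question above: ARKit doesn't detect QR codes natively, but a common approach (sketched here with illustrative names) is to run a Vision barcode request on the camera frame, convert the code's bounding-box center to a view point, and raycast into the scene to anchor the model there:

import ARKit
import RealityKit
import Vision

/// Detects a QR code in the given frame and anchors a model at the raycast hit
/// behind the code's center. Precision depends on the raycast hitting the
/// surface the code is printed on.
func placeModel(_ model: Entity, using frame: ARFrame, in arView: ARView) {
    let request = VNDetectBarcodesRequest()
    request.symbologies = [.qr]

    let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage, orientation: .right)
    try? handler.perform([request])

    guard let qr = request.results?.first else { return }

    // Vision coordinates are normalized with the origin at the bottom-left.
    // Note: for exactness, map through frame.displayTransform(for:viewportSize:)
    // instead of this simple scale, which assumes a full-screen portrait view.
    let normalizedCenter = CGPoint(x: qr.boundingBox.midX, y: 1 - qr.boundingBox.midY)
    let viewPoint = CGPoint(x: normalizedCenter.x * arView.bounds.width,
                            y: normalizedCenter.y * arView.bounds.height)

    // Raycast against estimated planes and anchor the model at the hit.
    if let result = arView.raycast(from: viewPoint,
                                   allowing: .estimatedPlane,
                                   alignment: .any).first {
        let anchor = AnchorEntity(world: result.worldTransform)
        anchor.addChild(model)
        arView.scene.addAnchor(anchor)
    }
}
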