Discuss Spatial Computing on Apple Platforms.

Posts under General subtopic

Post · Replies · Boosts · Views · Activity

ManipulationComponent in both parent and child entities
Hello! In my project, I have attached a ManipulationComponent to Entity A, and as expected, I'm able to interact with it using the built-in gestures. I have another entity, B, which is a child of A, that I would like to interact with as well, so I attempted to add a ManipulationComponent to B. However, no gestures seem to be registered on B; I can still interact with A, but B cannot be interacted with despite both entities having ManipulationComponents. I'm wondering whether I'm doing something wrong, whether this is an issue with ManipulationComponent, or whether it's a limitation of the API. Below is the code used to add the ManipulationComponent to an entity; it was run on both A and B:

```swift
let mc = ManipulationComponent()
model.components.set(mc)

var boxShape = ShapeResource.generateBox(width: 0.25, height: 0.05, depth: 0.25)
boxShape = boxShape.offsetBy(translation: simd_float3(0, -0.05, -0.25))
ManipulationComponent.configureEntity(model, collisionShapes: [boxShape])

if var mc = model.components[ManipulationComponent.self] {
    mc.releaseBehavior = .stay
    mc.dynamics.inertia = .low
    model.components.set(mc)
}
```

I am using visionOS 26.0; let me know if there's any additional information needed.
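A hedged guess at a workaround rather than a confirmed answer: since the system gestures hit-test against collision shapes and input targets, it may help to give B its own explicitly configured components so that A's larger collision shape doesn't swallow the hit test. A minimal sketch, assuming `entityB` is the child entity and the shape dimensions are placeholders:

```swift
import RealityKit

// Hypothetical workaround sketch: give the child its own collision shape and
// input target before adding ManipulationComponent, so hit testing can reach
// it instead of resolving to the parent's shape. `entityB` is the child.
let childShape = ShapeResource.generateBox(width: 0.1, height: 0.1, depth: 0.1)
entityB.components.set(CollisionComponent(shapes: [childShape]))
entityB.components.set(InputTargetComponent())
entityB.components.set(ManipulationComponent())
```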
1 reply · 0 boosts · 370 views · Oct ’25
Hand Tracking Latency When UITextView Becomes Active in Vision Pro Immersive Space
I'm placing a sphere at the fingertip and updating its position as the hand moves. Finger joint tracking functions correctly, but I've observed noticeable latency in hand tracking updates whenever a UITextView becomes active. This lag happens intermittently during app usage, lasting about 5–10 seconds, after which the latency disappears and the sphere starts following the finger joints immediately. When I open the immersive space for the first time, the profiler shows a large performance spike, up to 328%. After that, it stabilizes and runs smoothly. Note: I don't observe any lag when CPU usage spikes to 300% (upon immersive view load), yet the lag still occurs even when CPU usage remains below 100%.

I'm using the following code for hand tracking:

```swift
private func processHandTrackingUpdates() async {
    for await update in handTracking.anchorUpdates {
        let handAnchor = update.anchor
        if handAnchor.isTracked {
            switch handAnchor.chirality {
            case .left:
                leftHandAnchor = handAnchor
                updateHandJoints(for: handAnchor, with: leftHandJointEntities)
            case .right:
                rightHandAnchor = handAnchor
                updateHandJoints(for: handAnchor, with: rightHandJointEntities)
            }
        } else {
            switch handAnchor.chirality {
            case .left:
                leftHandAnchor = nil
                hideAllJoints(in: leftHandJointEntities)
            case .right:
                rightHandAnchor = nil
                hideAllJoints(in: rightHandJointEntities)
            }
        }
        await MainActor.run {
            handTrackingData.processNewHandAnchors(
                leftHand: self.leftHandAnchor,
                rightHand: self.rightHandAnchor
            )
        }
    }
}
```

And here's the function I'm using to update the joint positions:

```swift
private func updateHandJoints(
    for handAnchor: HandAnchor,
    with jointEntities: [HandSkeleton.JointName: Entity]
) {
    guard handAnchor.isTracked else {
        hideAllJoints(in: jointEntities)
        return
    }

    // Check that the little-finger tip and intermediate joints are both tracked.
    if let tipJoint = handAnchor.handSkeleton?.joint(.littleFingerTip),
       let intermediateBaseJoint = handAnchor.handSkeleton?.joint(.littleFingerIntermediateTip),
       tipJoint.isTracked, intermediateBaseJoint.isTracked,
       let pinkySphere = jointEntities[.littleFingerTip] {

        // Convert joint transforms to world space.
        let tipTransform = handAnchor.originFromAnchorTransform * tipJoint.anchorFromJointTransform
        let intermediateBaseTransform = handAnchor.originFromAnchorTransform * intermediateBaseJoint.anchorFromJointTransform

        // Extract positions from the transforms.
        let tipPosition = SIMD3<Float>(tipTransform.columns.3.x, tipTransform.columns.3.y, tipTransform.columns.3.z)
        let intermediateBasePosition = SIMD3<Float>(intermediateBaseTransform.columns.3.x, intermediateBaseTransform.columns.3.y, intermediateBaseTransform.columns.3.z)

        // Calculate the midpoint.
        let midpointPosition = (tipPosition + intermediateBasePosition) / 2.0

        // Position the sphere at the midpoint and make it visible.
        pinkySphere.isEnabled = true
        pinkySphere.transform.translation = midpointPosition
    } else {
        // If either joint is not tracked, hide the sphere.
        jointEntities[.littleFingerTip]?.isEnabled = false
    }

    // Update the positions of all other hand joint spheres.
    for (jointName, entity) in jointEntities {
        if jointName == .littleFingerTip {
            // Already handled the pinky above.
            continue
        }
        guard let joint = handAnchor.handSkeleton?.joint(jointName), joint.isTracked else {
            entity.isEnabled = false
            continue
        }
        entity.isEnabled = true
        let jointTransform = handAnchor.originFromAnchorTransform * joint.anchorFromJointTransform
        entity.transform.translation = SIMD3<Float>(jointTransform.columns.3.x, jointTransform.columns.3.y, jointTransform.columns.3.z)
    }
}
```

I've attached both a profiler trace and video recordings from Vision Pro that clearly demonstrate the issue.

Profiler: https://drive.google.com/file/d/1fDWyGj_fgxud2ngkGH_IVmuH_kO-z0XZ
Vision Pro recordings:
https://drive.google.com/file/d/17qo3U9ivwYBsbaSm26fjaOokkJApbkz-
https://drive.google.com/file/d/1LxTxgudMvWDhOqKVuhc3QaHfY_1x8iA0

Has anyone else experienced this behavior? My thought is that there might be some background calculation happening at the OS level causing this latency. Any guidance would be greatly appreciated. Thanks!
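A hedged thought in case it helps others debug this: visionOS 2 added a prediction API on HandTrackingProvider, so instead of driving the spheres solely from anchorUpdates you could query predicted anchors each render tick; a stalled update stream then degrades into prediction rather than a visibly frozen sphere. A sketch, assuming the provider is already running and that the 0.02 s lookahead is a guess rather than a tuned value:

```swift
// Hedged sketch: query predicted hand anchors per frame instead of waiting
// on anchorUpdates. The lookahead interval is an assumption, not tuned.
let predicted = handTracking.handAnchors(at: CACurrentMediaTime() + 0.02)
if let left = predicted.leftHand {
    updateHandJoints(for: left, with: leftHandJointEntities)
}
if let right = predicted.rightHand {
    updateHandJoints(for: right, with: rightHandJointEntities)
}
```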
0 replies · 0 boosts · 391 views · Sep ’25
Low-latency live streaming using APMP Wide FoV
Is it possible to achieve sub-second end-to-end latency when displaying live streaming video using APMP (Apple Projected Media Profile) with Wide FoV? APMP supports HLS playback, but my understanding is that standard HLS introduces several seconds of latency. I would like to know whether APMP (especially Wide FoV) supports Low-Latency HLS, or if there are inherent limitations that make sub-second latency impractical. If APMP is not suitable for this use case, are there any recommended alternatives within AVFoundation or related frameworks for rendering wide-FoV live video with very low latency? Thank you for any insights.
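Not an answer from the docs, but a hedged sketch of the knobs standard AVFoundation exposes for low-latency playback; whether the APMP/Wide-FoV pipeline honors them is exactly the open question. The stream URL is a placeholder:

```swift
import AVFoundation

// A minimal sketch, assuming an LL-HLS playlist (placeholder URL):
// these hints trade stall resilience for lower end-to-end latency.
let url = URL(string: "https://example.com/live/stream.m3u8")!  // hypothetical stream
let item = AVPlayerItem(url: url)
item.preferredForwardBufferDuration = 1.0  // keep the forward buffer small

let player = AVPlayer(playerItem: item)
player.automaticallyWaitsToMinimizeStalling = false
player.play()
```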
1 reply · 0 boosts · 332 views · 1w
Low resolution when streaming the Vision Pro view to a Mac
The resolution of the live stream drops significantly after transmission: image detail is lost and sharpness is insufficient, so key information about 3D furniture products, such as texture and dimensions, cannot be presented accurately, which affects users' judgment of the products. Expected: optimize the resolution-compression strategy during stream transmission to reduce quality loss in transit and improve the clarity of the live stream received on the Mac, matching the high-fidelity requirements of 3D product display.
0 replies · 0 boosts · 169 views · Dec ’25
ImmersiveSpace orphaned when WindowGroup closes
Environment: visionOS 26.1, Xcode 26.1.1

Problem: When a WindowGroup opens an ImmersiveSpace and the user closes the window via the X button, the async Task in .onDisappear gets cancelled before dismissImmersiveSpace() completes, leaving the ImmersiveSpace active with no way to exit.

Steps:
1. WindowGroup opens ImmersiveSpace in .onAppear
2. User clicks X to close the window
3. .onDisappear fires, but the async cleanup is cancelled
4. ImmersiveSpace remains active; the user is trapped

Expected: ImmersiveSpace is dismissed when the window closes.
Actual: ImmersiveSpace remains active.

Code:

```swift
.onAppear {
    Task {
        await openImmersiveSpace(id: "VideoCallMainCamera")
    }
}
.onDisappear {
    Task {
        await dismissImmersiveSpace() // Gets cancelled
    }
}
```

What I've tried:
- Task in .onDisappear ❌
- scenePhase monitoring ❌
- High-priority Task ❌
- .restorationBehavior(.disabled) + .defaultLaunchBehavior(.suppressed) ✅ (prevents restoration but doesn't fix the immediate cleanup)

Question: What's the recommended pattern for ensuring ImmersiveSpace cleanup when its WindowGroup closes? Is there a way to block window closure until async cleanup completes, or should ImmersiveSpaces automatically dismiss with their parent window?
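One more hedged workaround to try (untested here, and it sidesteps rather than answers the structured-concurrency question): run the cleanup in a detached task so it isn't a child of the view's task tree and shouldn't be cancelled with it:

```swift
// Hypothetical workaround sketch: a detached task is not tied to the
// view's lifetime, so closing the window should not cancel this cleanup.
.onDisappear {
    Task.detached { @MainActor in
        await dismissImmersiveSpace()
    }
}
```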
1 reply · 0 boosts · 208 views · Dec ’25
How to transition from spatial3D to spatial3DImmersive?
Hi, when viewing a spatial photo scene in the Apple Vision Pro Photos app, you can tap the immersive icon in the top-right corner to transition from the window presenting the image as spatial3D to an immersive photo scene using spatial3DImmersive, where the window borders disappear. Could someone explain how to achieve that? I tried to do it, but once I transition from spatial3D to spatial3DImmersive I can still see a rectangle around the spatial image. Thanks.
1 reply · 0 boosts · 863 views · Nov ’25
Persistent Entity Position
I want to let users place 2D/3D "artworks" on detected walls and have them reappear in exactly the same real-world spot after quitting and relaunching the app (like widgets do, but for my own entities).

Environment: Xcode 26, visionOS 2.0, RealityKit + ARKitSession/WorldTrackingProvider. Entities are parented to a holder that's aligned to a wall via plane/mesh raycasts.

What I've tried (see the sketch after the questions below):
- Create a WorldAnchor at placement; save its UUID + full 4×4 transform
- On the next launch, re-create the WorldAnchor (or set the saved transform) and attach the entity
- Gate the restore on relocalization/mesh updates and disable all raycast/search after restore

Issue: After relaunch, placement still resolves relative to the current device pose, not the same wall position.

Questions:
- Is there a public API in visionOS 2.0 to persist app-managed world anchors across sessions (room-fixed), e.g., AnchorStore or equivalent?
- If not, what's the recommended pattern to reliably restore wall-anchored content?
- Are the persistence features mentioned for widgets/windows available to third-party RealityKit entities?
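For reference, a minimal sketch of the pattern I'd expect to work, assuming world anchors added through WorldTrackingProvider are persisted by the system across launches; `saveAnchorID`/`savedAnchorID` are hypothetical persistence helpers:

```swift
import ARKit
import RealityKit

// Placement: register a world anchor, then persist only its ID.
let anchor = WorldAnchor(originFromAnchorTransform: placementTransform)
try await worldTracking.addAnchor(anchor)
saveAnchorID(anchor.id)  // hypothetical helper that writes the UUID to disk

// Next launch: don't re-create the anchor from a saved transform; wait for
// the system to relocalize it and deliver it through anchorUpdates.
for await update in worldTracking.anchorUpdates {
    if update.anchor.id == savedAnchorID(), update.anchor.isTracked {
        holder.transform = Transform(matrix: update.anchor.originFromAnchorTransform)
    }
}
```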
1 reply · 0 boosts · 239 views · Oct ’25
mipmapsMode trade-off?
I am building a 360° photo viewer on visionOS 26 that lets the user choose a 2:1 JPEG and then renders it with a sphere mesh entity, loading the texture via TextureResource(contentsOf: url, options: options). I noticed two situations here in terms of the mipmaps options.

When setting mipmapsMode: .none:
- The graphic quality within the "gaze area" looks sharp and clear
- The two poles (top and bottom) are perfectly rendered
- Massive shimmer around the "gaze area"

When setting mipmapsMode: .allocateAndGenerateAll:
- The graphic looks slightly blurrier than with .none within the "gaze area"
- The two poles are very blurry, and the texture is hard to recognize
- Much less shimmer around the "gaze area"

My question: is there a way to get the sharp quality of .none without the massive shimmer? Thank you!

Screenshots: mipmapsMode: .none; mipmapsMode: .allocateAndGenerateAll
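A hedged thought rather than a known answer: mipmapsMode also has an .allocateAll case, which allocates the mip chain without auto-generating it, so in principle you could fill the lower mips with a sharper custom downsample than the default filter. A sketch of just the loading side, assuming a color texture:

```swift
import RealityKit

// A minimal sketch of the trade-off being described, assuming a color texture.
// .allocateAll (a third option) allocates the mip chain without generating it,
// which in principle allows supplying custom, sharper mip levels later.
var options = TextureResource.CreateOptions(semantic: .color)
options.mipmapsMode = .allocateAndGenerateAll  // or .none / .allocateAll
let texture = try await TextureResource(contentsOf: url, options: options)
```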
0 replies · 0 boosts · 230 views · Oct ’25
Share Extensions embedded in visionOS apps
I'm trying to add a feature to my app that allows a user to import items from other apps, like Safari, via the share sheet. I've done this many times on iOS/iPadOS easily with a Share Extension. From what I can tell, Xcode tells me share extensions are not available on visionOS, though my experience on device tells me differently (Reminders, Notes, and more seem to implement them somehow). I was finally able to get it working on device only, but I can now no longer test in the simulator, and I have not found a way to distribute the app.

When attempting to run in the simulator, I get this issue:

Please try again later. Appex bundle at /Users/jason/Library/Developer/CoreSimulator/Devices/09A70160-4F4F-4F5E-B679-F6F7D876D7EF/data/Library/Caches/com.apple.mobile.installd.staging/temp.6OAEZp/extracted/LaunchBar.app/PlugIns/LaunchBarShareExtension.appex with id co.swiftfox.LaunchBar.ShareExtension specifies a value (com.apple.share-services) for the NSExtensionPointIdentifier key in the NSExtension dictionary in its Info.plist that does not correspond to a known extension point.

When trying to archive an upload to TestFlight, I get this similar error:

Invalid Info.plist value. The value for the key 'DTPlatformName' in bundle LaunchBar.app/PlugIns/LaunchBarShareExtension.appex is invalid. (ID: 207610c7-b7e1-48be-959b-22a43cd32d16)

The app is visionOS-only, which I'm thinking might be the problem. The share extension is "Designed for iPhone" and requires me to include iPhone as a run destination. In the worst case I can build an iPhone UI for the app, but I'd rather not, as it is very specific to visionOS. Has anyone successfully shipped a share extension in a visionOS-only app? I have an iPad app with a share extension that shows up fine on visionOS, but the issue seems specific to visionOS-only apps.
1 reply · 0 boosts · 109 views · Oct ’25
WorldTrackingProvider stops working on device
After re-launching the immersive space in my app 5–10 times, the WorldTrackingProvider stops working. Only restarting the app will allow it to start working again. This happens only on device, not in the simulator. I get these errors when it happens:

The device_anchor can only be queried when the world tracking provider is running.
ARPredictorRemoteService <0x107cbb5e0>: Service configured with error: Error Domain=com.apple.arkit.error Code=501 "(null)"
Remote Service was invalidated: <ARPredictorRemoteService: 0x107cbb5e0>, will stop all data_providers.
ARRemoteService: remote object proxy failed with error: Error Domain=NSCocoaErrorDomain Code=4099 "The connection to service with pid 81 named com.apple.arkit.service.session was invalidated from this process." UserInfo={NSDebugDescription=The connection to service with pid 81 named com.apple.arkit.service.session was invalidated from this process.}
ARRemoteService: weak self released before invalidation

```swift
@Observable
class VisionPro {
    let session = ARKitSession()
    let worldTracking = WorldTrackingProvider()

    func transformMatrix() async -> simd_float4x4 {
        guard let deviceAnchor = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
            return .init()
        }
        return deviceAnchor.originFromAnchorTransform
    }

    func runArkitSession() async {
        Task {
            try? await session.run([worldTracking])
        }
    }
}
```

which I call from my RealityView:

```swift
.task {
    await visionPro.runArkitSession()
}
```
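Not a confirmed fix, but a hedged diagnostic sketch: ARKitSession publishes provider state changes through its events sequence, so you can at least detect when the provider leaves .running after a re-launch and recreate the session and provider rather than reusing stopped instances:

```swift
// Hedged diagnostic sketch: observe provider state so a stopped
// WorldTrackingProvider is detected and recreated instead of reused.
Task {
    for await event in session.events {
        if case .dataProviderStateChanged(_, let newState, let error) = event,
           newState == .stopped {
            print("World tracking stopped: \(String(describing: error))")
            // Recreate the ARKitSession and WorldTrackingProvider here;
            // a provider instance that has stopped cannot be run again.
        }
    }
}
```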
3 replies · 0 boosts · 441 views · Feb ’25
visionOS widget dimensions?
Is there any size guidance for the new WidgetKit integration on visionOS? The Widget HIG provides dimensions for all the widget size classes on iOS, iPadOS, and watchOS, but has not been updated for visionOS: https://developer.apple.com/design/human-interface-guidelines/widgets My potential widget use case is image-based, so I'm looking to better understand the optimal size, resolution, etc. I would need, particularly for the new visionOS-specific extra-large widget size.
0 replies · 0 boosts · 576 views · Jul ’25
How to achieve a concave, "pushed in" glass view look?
I have been trying to implement a look where a component appears "pushed in," but I could not find any resources regarding this effect. The closest I got was a combination of a RoundedRectangle and .glassBackgroundEffect(), but this makes the view look pushed out instead of pushed in. I was wondering whether this is achievable at the SwiftUI level, or even at the UIKit level.
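There doesn't appear to be a built-in recessed variant of glassBackgroundEffect(), so a hedged approximation is to fake the depth cue with an inner shadow on a material fill; a sketch, not a platform-blessed recipe, with placeholder sizing:

```swift
import SwiftUI

// A hedged sketch: simulate a recessed ("pushed in") surface with an inner
// shadow, since glassBackgroundEffect() only renders raised glass.
struct ConcaveWell: View {
    var body: some View {
        RoundedRectangle(cornerRadius: 16)
            .fill(.thinMaterial.shadow(.inner(radius: 6, y: 3)))  // inner shadow sells the recess
            .frame(width: 240, height: 80)                        // placeholder size
    }
}
```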
1 reply · 0 boosts · 123 views · Apr ’25
Body segmentation/occlusion on the Apple Vision Pro
Hello, I am currently working on a Unity project for the Apple Vision Pro. I would like people passing in front of virtual objects to occlude the virtual objects behind them, similar to this: https://developer.apple.com/documentation/arkit/occluding-virtual-content-with-people Unfortunately, I could not find any documentation about this. Is it possible to implement body segmentation or occlusion on the Apple Vision Pro? If it's not currently supported, are there plans to add it? Any ideas on how to achieve this with existing tools? Thanks! Mehdi
1 reply · 0 boosts · 414 views · Feb ’25
DragGesture that pivots with the user in visionOS
Apple published a set of examples for using system gestures to interact with RealityKit entities. I've been using DragGesture a lot in my apps and noticed an issue when using it in an immersive space. When dragging an entity, if I turn my body to face another direction, the dragged entity does not stay relative to my hand. This can lead to situations where the entity is pulled very close to me, pushed far away, or even ends up behind me.

In the examples linked above, there are two versions of how they use drag:

- handleFixedDrag: This is similar to what I'm doing now. It uses the value from value.gestureValue.translation3D as the basis for the drag.
- handlePivotDrag: This version aims to solve the problem I described above by using value.inputDevicePose3D as the basis of the gesture.

I've tried the example from handlePivotDrag, but it has one limitation. Using this version, I can move the entity around me as if it were on the inside of an arc or sphere. However, I can no longer move the entity farther or closer; it stays at a similar (though not exact) distance from me while I drag.

Is there a way to combine these concepts? Ideally, I would like a gesture that behaves the way visionOS windows do: when we drag windows, we can move them around relative to ourselves, pull them closer, and push them farther, all while avoiding the issues described above.

Example from handleFixedDrag:

```swift
mutating private func handleFixedDrag(value: EntityTargetValue<DragGesture.Value>) {
    let state = EntityGestureState.shared
    guard let entity = state.targetedEntity else { fatalError("Gesture contained no entity") }

    if !state.isDragging {
        state.isDragging = true
        state.dragStartPosition = entity.scenePosition
    }

    let translation3D = value.convert(value.gestureValue.translation3D, from: .local, to: .scene)
    let offset = SIMD3<Float>(x: Float(translation3D.x),
                              y: Float(translation3D.y),
                              z: Float(translation3D.z))

    entity.scenePosition = state.dragStartPosition + offset
    if let initialOrientation = state.initialOrientation {
        state.targetedEntity?.setOrientation(initialOrientation, relativeTo: nil)
    }
}
```

Example from handlePivotDrag:

```swift
mutating private func handlePivotDrag(value: EntityTargetValue<DragGesture.Value>) {
    let state = EntityGestureState.shared
    guard let entity = state.targetedEntity else { fatalError("Gesture contained no entity") }

    // The transform that the pivot will be moved to.
    var targetPivotTransform = Transform()

    // Set the target pivot transform depending on the input source.
    if let inputDevicePose = value.inputDevicePose3D {
        // If there is an input device pose, use it for positioning and rotating the pivot.
        targetPivotTransform.scale = .one
        targetPivotTransform.translation = value.convert(inputDevicePose.position, from: .local, to: .scene)
        targetPivotTransform.rotation = value.convert(AffineTransform3D(rotation: inputDevicePose.rotation),
                                                      from: .local, to: .scene).rotation
    } else {
        // If there is not an input device pose, use the location of the drag for positioning the pivot.
        targetPivotTransform.translation = value.convert(value.location3D, from: .local, to: .scene)
    }

    if !state.isDragging {
        // If this drag just started, create the pivot entity.
        let pivotEntity = Entity()
        guard let parent = entity.parent else { fatalError("Non-root entity is missing a parent.") }

        // Add the pivot entity into the scene.
        parent.addChild(pivotEntity)

        // Move the pivot entity to the target transform.
        pivotEntity.move(to: targetPivotTransform, relativeTo: nil)

        // Add the targeted entity as a child of the pivot without changing
        // the targeted entity's world transform.
        pivotEntity.addChild(entity, preservingWorldTransform: true)

        // Store the pivot entity.
        state.pivotEntity = pivotEntity

        // Indicate that a drag has started.
        state.isDragging = true
    } else {
        // If this drag is ongoing, move the pivot entity to the target transform.
        // The animation duration smooths the noise in the target transform across frames.
        state.pivotEntity?.move(to: targetPivotTransform, relativeTo: nil, duration: 0.2)
    }

    if preserveOrientationOnPivotDrag, let initialOrientation = state.initialOrientation {
        state.targetedEntity?.setOrientation(initialOrientation, relativeTo: nil)
    }
}
```
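For what it's worth, a hedged sketch of one way to combine the two approaches (my own idea, not from Apple's sample): keep handlePivotDrag's pivot for direction, then let radial hand motion change the entity's distance from the pivot. This fragment would extend handlePivotDrag; dragStartHandPosition and dragStartOffset are hypothetical values captured on EntityGestureState when the drag begins:

```swift
// Hypothetical extension of handlePivotDrag: radial motion adjusts distance.
if let inputDevicePose = value.inputDevicePose3D {
    let handPosition = value.convert(inputDevicePose.position, from: .local, to: .scene)

    // Hand motion along the pivot's forward axis since the drag began.
    let forward = targetPivotTransform.rotation.act(SIMD3<Float>(0, 0, -1))
    let delta = simd_dot(handPosition - state.dragStartHandPosition, forward)

    // Push or pull the entity along that axis, relative to the pivot.
    entity.setPosition(SIMD3<Float>(0, 0, -(state.dragStartOffset + delta)),
                       relativeTo: state.pivotEntity)
}
```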
1 reply · 0 boosts · 523 views · Feb ’25
Feb ’25
Safari-like toolbar in visionOS
I like the toolbar visionOS's Safari uses for back and forward navigation, share, and so on. It floats above the window. My attempt to do this with ornaments isn't as satisfying, as they partially cover the window. My attempts with toolbar haven't produced visible results. Is this Safari-style toolbar exposed in the visionOS APIs? If so, could someone point me to documentation or sample code? Thanks!
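Not certain this is how Safari does it, but a hedged sketch: a bottom scene ornament with contentAlignment set so the content sits fully outside the window edge, which avoids the partial overlap you're seeing. The buttons and actions here are placeholders:

```swift
// A hedged sketch: anchor an ornament to the bottom of the scene and align
// the ornament's top edge to that anchor, so it floats below the window
// instead of partially covering it.
.ornament(attachmentAnchor: .scene(.bottom), contentAlignment: .top) {
    HStack(spacing: 16) {
        Button("Back", systemImage: "chevron.backward") { /* go back */ }
        Button("Forward", systemImage: "chevron.forward") { /* go forward */ }
        Button("Share", systemImage: "square.and.arrow.up") { /* share */ }
    }
    .labelStyle(.iconOnly)
    .padding()
    .glassBackgroundEffect()
}
```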
1 reply · 0 boosts · 225 views · Oct ’25