When assigning a ManipulationComponent to an Entity, SceneEvents.WillRemoveEntity is called for that Entity.
Expected behavior: the Entity is not removed from the Scene (even temporarily), and no SceneEvents are triggered as a result of assigning a ManipulationComponent.
FB20872220
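A minimal sketch of how I'm observing this, assuming a RealityView on visionOS (the view and entity names are illustrative):

```swift
import SwiftUI
import RealityKit

struct ManipulationRepro: View {
    var body: some View {
        RealityView { content in
            let entity = ModelEntity(mesh: .generateBox(size: 0.1))
            content.add(entity)

            // Subscribe before adding the component. (Store the returned
            // EventSubscription in real code.)
            _ = content.subscribe(to: SceneEvents.WillRemoveEntity.self) { event in
                print("WillRemoveEntity fired for \(event.entity.name)")
            }

            // Assigning the ManipulationComponent triggers the event above,
            // even though the entity never actually leaves the scene.
            entity.components.set(ManipulationComponent())
        }
    }
}
```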
Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.
Hi,
A few days ago, Apple held another "Meet with Apple" event about immersive video: Create immersive media experiences for visionOS | Meet with Apple, days one and two, and it is not appearing in the Developer app on my Mac. The same happened with
IETF HLS Interest Day | Meet with Apple.
Any tips?
Kind regards,
Bruno
I want to let users place 2D/3D “artworks” on detected walls and have them reappear in exactly the same real‑world spot after quitting and relaunching the app (like widgets do, but for my own entities).
Environment:
Xcode 26, visionOS 2.0, RealityKit + ARKitSession/WorldTrackingProvider
Entities are parented to a holder that’s aligned to a wall via plane/mesh raycasts
What I’ve tried:
Create a WorldAnchor at placement, save UUID + full 4×4 transform
On next launch, re-create the WorldAnchor (or set the saved transform) and attach the entity
Gate restore on relocalization/mesh updates and disable all raycast/search after restore
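A trimmed sketch of that placement/restore flow (savePlacement and entityForAnchor are placeholder helpers for my app-side bookkeeping; the ARKit calls are WorldTrackingProvider.addAnchor and anchorUpdates):

```swift
import ARKit
import RealityKit

let session = ARKitSession()
let worldTracking = WorldTrackingProvider()

func start() async throws {
    try await session.run([worldTracking])
}

// Placement: create a WorldAnchor at the wall hit point and remember which entity it belongs to.
func place(_ entity: Entity, at originFromAnchor: simd_float4x4) async throws {
    let anchor = WorldAnchor(originFromAnchorTransform: originFromAnchor)
    try await worldTracking.addAnchor(anchor)
    savePlacement(anchorID: anchor.id, entityName: entity.name)   // hypothetical app-side store
}

// Restore: wait for anchors to come back on a later launch and re-attach the entities.
func restore(into root: Entity) async {
    for await update in worldTracking.anchorUpdates where update.anchor.isTracked {
        guard let entity = entityForAnchor(update.anchor.id, in: root) else { continue }   // hypothetical lookup
        entity.setTransformMatrix(update.anchor.originFromAnchorTransform, relativeTo: nil)
    }
}
```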
Issue:
After relaunch, placement still resolves relative to the current device pose, not to the same wall position.
Questions:
Is there a public API in visionOS 2.0 to persist app‑managed world anchors across sessions (room‑fixed), e.g., AnchorStore or equivalent?
If not, what’s the recommended pattern to reliably restore wall‑anchored content?
Are persistence features mentioned for widgets/windows available to third‑party RealityKit entities?
My app gets video from a UVC device, and I wish to display it in an Immersive Space. But when I open the Immersive Space, the UVC capture just stops. AI told me it's due to a conflict in the camera pipeline, but I don't really understand: I don't need any on-device camera, so why does it conflict with my UVC capture?
For the M2 Apple Vision Pro, there's "a general guideline, we recommend no more than 500 thousand triangles for an immersive scene, with 250 thousand for applications in the shared space." --https://developer.apple.com/videos/play/wwdc2024/10186/?time=147
Is there a revised recommendation for the M5 Apple Vision Pro?
Hi, I'm currently implementing 180° / 360° support for immersive video in my app.
I was able to implement 360° easily by just applying a VideoMaterial to a flipped sphere.
However, I'm a bit stuck at 180°. I want to implement it by setting the VideoMaterial on a hemisphere mesh, but since RealityKit doesn't yet provide a default generator such as MeshResource.generateHemisphere, I want to make the VideoMaterial visible on the front half of the sphere and transparent on the back half. I thought this would make my sphere look like a hemisphere.
But I can't find a way to implement this. I would appreciate any advice / idea / information that might help.
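One workaround sketch (not an official API): build the half-sphere yourself with MeshDescriptor and apply the VideoMaterial to it directly, instead of hiding half of a full sphere. The UV mapping below assumes a 180° frame filling the whole texture; winding and UVs may need flipping for your footage.

```swift
import Foundation
import RealityKit

// Build a 180° hemisphere facing -Z, with equirectangular-style UVs.
func generateHemisphere(radius: Float, slices: Int = 64, stacks: Int = 32) throws -> MeshResource {
    var positions: [SIMD3<Float>] = []
    var uvs: [SIMD2<Float>] = []
    var indices: [UInt32] = []

    for stack in 0...stacks {
        let v = Float(stack) / Float(stacks)
        let phi = v * .pi                      // 0...π, top to bottom
        for slice in 0...slices {
            let u = Float(slice) / Float(slices)
            let theta = u * .pi - .pi / 2      // -π/2...π/2, only the front 180°
            positions.append([radius * sinf(phi) * sinf(theta),
                              radius * cosf(phi),
                              -radius * sinf(phi) * cosf(theta)])
            uvs.append([u, 1 - v])
        }
    }

    let columns = UInt32(slices + 1)
    for stack in 0..<UInt32(stacks) {
        for slice in 0..<UInt32(slices) {
            let a = stack * columns + slice
            let b = a + columns
            // Winding (and UVs) may need flipping so the inside of the shell is visible.
            indices += [a, b, a + 1, a + 1, b, b + 1]
        }
    }

    var descriptor = MeshDescriptor(name: "hemisphere")
    descriptor.positions = MeshBuffers.Positions(positions)
    descriptor.textureCoordinates = MeshBuffers.TextureCoordinates(uvs)
    descriptor.primitives = .triangles(indices)
    return try MeshResource.generate(from: [descriptor])
}
```

A ModelEntity built from this mesh with a VideoMaterial would then only render the front 180°, with nothing drawn behind the viewer.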
I am testing the Gen 2 developer strap on my M2 Vision Pro, and I have only been able to get USB 2 speeds when connecting it to my M3 Max MacBook Pro. I used the official Apple Thunderbolt 4 cable, which does get Thunderbolt speeds with my T7 Touch drive. Has anyone figured out a solution for this issue?
The Gen 2 developer strap does advertise 20 Gb/s speeds.
In Reality Composer Pro, why is the Sky Sphere so much larger than the Sky Dome?
By my estimate, the Sky Sphere has a radius of 100 m, while the Sky Dome has a radius of only 12 m.
Hi, I know it's possible to play equirectangular VR180 video, either SBS or MV-HEVC. For fisheye video, the only way I know of is to convert it into an AIVU for playback.
Is there any way to play fisheye video directly with AVPlayer? Thanks a lot!
I've been experimenting with the Muse pen and understand that it can be accessed by my app through a SpatialTrackingSession, but is there any current or planned support for devices like this as general UI input, the way game controllers are supported? For example, using the button as a tap analogue for SwiftUI views.
Since updating to iOS 26.0 (and confirmed on 26.1), ARBodyTrackingConfiguration no longer detects a valid ARBodyAnchor on devices with LiDAR (e.g., iPhone 15 Pro, iPhone 17 Pro Max).
This issue reproduces in custom projects and Apple’s official sample “Capturing Body Motion in 3D”.
The AR session runs normally, but the delegate call:
func session(_ session: ARSession, didUpdate anchors: [ARAnchor])
never yields an ARBodyAnchor with valid joint transforms.
All joints return nil when calling:
body.skeleton.modelTransform(for: jointName)
resulting in 0 valid joints per frame.
Environment
• Device: iPhone 17 Pro Max (LiDAR)
• iOS: 26.0 / 26.1
• Xcode: 16.0 (stable)
• Framework: ARKit + RealityKit
• Configuration used:
config.worldAlignment = .gravityAndHeading
config.isAutoFocusEnabled = true
config.environmentTexturing = .none
session.run(config)
Also tested: with and without frameSemantics = .bodyDetection
Expected Behavior
ARBodyAnchor should be detected and body.skeleton should contain ~89 valid joints with continuous updates.
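A minimal sketch of how I'm counting valid joints per frame (assuming this object is set as the ARSession delegate, as in the sample project):

```swift
import ARKit

final class BodyTrackingDelegate: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let body as ARBodyAnchor in anchors {
            let jointNames = ARSkeletonDefinition.defaultBody3D.jointNames
            let valid = jointNames.filter {
                body.skeleton.modelTransform(for: ARSkeleton.JointName(rawValue: $0)) != nil
            }
            // Expected ~89 valid joints; on iOS 26.0 / 26.1 this prints 0.
            print("valid joints: \(valid.count) / \(jointNames.count)")
        }
    }
}
```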
I am building a 360° photo viewer on visionOS 26. It allows the user to choose a 2:1 JPEG, which I then render on a sphere mesh entity, loading the texture with TextureResource(contentsOf: url, options: options).
I noticed two different behaviors depending on the mipmaps option.
When setting "mipmapsMode: .none":
The graphic quality within the "gaze area" looks sharp and clear
The two poles (top and bottom) are perfectly rendered
Massive shimmer around the "gaze area"
When setting "mipmapsMode: .allocateAndGenerateAll":
The graphic looks slightly blurrier than in ".none" within the "gaze area"
The two poles are very blurry and hard to recognize the texture
Much less shimmer around the "gaze area"
My question would be: Is there a way to have the perfect graphic quality in ".none" without the massive shimmer?
Thank you!
Screenshots:
mipmapsMode: .none
mipmapsMode: .allocateAndGenerateAll
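For reference, a sketch of the loading path I'm comparing (assuming an UnlitMaterial on the sphere; only the mipmapsMode differs between the two cases):

```swift
import Foundation
import RealityKit

func makeSphereMaterial(from url: URL, mipmaps: TextureResource.MipmapsMode) async throws -> UnlitMaterial {
    var options = TextureResource.CreateOptions(semantic: .color)
    options.mipmapsMode = mipmaps                         // .none vs .allocateAndGenerateAll
    let texture = try await TextureResource(contentsOf: url, options: options)
    var material = UnlitMaterial()
    material.color = .init(texture: .init(texture))
    return material
}
```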
I have an open Feedback conversation with Apple on this topic, but I am curious whether others have run into this, or want to try out my sample code in their setup.
There are two APIs for reading controller buttons, axes, and D-pads: GCPhysicalInputProfile and GCControllerLiveInput. There are inconsistencies in behaviour between the two. Apple recommends we use GCControllerLiveInput; however, some capabilities of these controllers are only accessible through GCPhysicalInputProfile, as I'll discuss below.
PSVR2 R2/L2 buttons (the triggers) have analogue force input values. These can only be accessed through GCPhysicalInputProfile.
PSVR2 thumbstick direction values are read through “axes” on GCPhysicalInputProfile, but only through “dpads” on GCControllerLiveInput.
On both GCPhysicalInputProfile and GCControllerLiveInput, all pressed events of all buttons fire properly using the generic aliases (Trigger, Grip, Menu, Right Thumbstick, Left Thumbstick, Right Button A & B (Circle & Cross), Left Button A & B (Triangle & Square)). Apple reserves the system button as the equivalent of a home button for the OS.
On GCPhysicalInputProfile, touch events fire only when the button is also pressed, not for touches alone.
On GCControllerLiveInput, touch events only work for the following buttons: Left Thumbstick, Right Thumbstick, Right Button A (Circle), and Right Button B (Cross). But the Right Button B touch event isn't labelled correctly; it fires as the Right Button A event.
I observed this inside ALVR which uses a polling based approach to event processing:
https://github.com/alvr-org/alvr-visionos/blob/17b5968f9d894944b53e97134b39dfce0993302a/ALVRClient/WorldTracker.swift#L301
To see this in a very simple app, I used Apple's example TrackingAccessories application:
https://developer.apple.com/documentation/ARKit/tracking-accessories-in-volumetric-windows
I've attached the code that replaces the AccessoryTrackingModel class. I added code that prints out what is touched/pressed; see the trackAllConnectedSpatialControllers method:
https://github.com/svrc/TrackingAccessories
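A trimmed version of what the added logging does, on the GCPhysicalInputProfile side (generic aliases only; PSVR2-specific element names are not assumed here):

```swift
import GameController

func observe(controller: GCController) {
    let profile = controller.physicalInputProfile

    // Analogue trigger force is available on this API.
    profile.buttons[GCInputRightTrigger]?.valueChangedHandler = { _, value, pressed in
        print("right trigger: value=\(value) pressed=\(pressed)")
    }

    // Touch state: on this API it only changes together with a press.
    for (name, button) in profile.buttons {
        button.touchedChangedHandler = { _, _, _, touched in
            print("\(name): touched=\(touched)")
        }
    }
}
```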
Hi,
we've developed an app for Vision Pro that utilises the GroupActivities SDK to provide shared experiences for our users.
Remote participation works great, but we can't get nearby sharing to work.
The behaviour we're observing:
User 1 opens the share sheet from the Volume; the second Vision Pro is visible.
User 1 starts nearby sharing.
Session initialisation runs for approx. 30 seconds, then fails.
Sometimes the nearby participant doesn't show up at all after the initialisation has failed once.
As stated in the Configure your visionOS app for sharing with people nearby article, we didn't make any changes to our implementation to support nearby sharing.
Any help would be greatly appreciated.
Kind regards,
David
Hi, I have a hand model in FBX that I'm exporting to USD in Blender. I get a skinned mesh, and while I can track the whole hand, how do I track each joint, assign it, and animate the skinned mesh itself? All my attempts suggest this is not possible in RealityKit as of now. True?
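In case it helps frame the question, this is the kind of mapping I've been attempting (jointNameMap is hypothetical and depends entirely on the rig exported from Blender; coordinate conventions between the ARKit hand skeleton and the Blender rig are a separate problem):

```swift
import ARKit
import RealityKit

// Drive the skinned mesh's joints from an ARKit HandAnchor.
func apply(_ hand: HandAnchor, to model: ModelEntity,
           jointNameMap: [String: HandSkeleton.JointName]) {
    guard let skeleton = hand.handSkeleton else { return }
    for (index, name) in model.jointNames.enumerated() {
        guard let arJoint = jointNameMap[name] else { continue }
        // jointTransforms are local (relative to the parent joint),
        // so use the parent-relative transform from the hand skeleton.
        let local = skeleton.joint(arJoint).parentFromJointTransform
        model.jointTransforms[index] = Transform(matrix: local)
    }
}
```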
Hi All,
We're a studio building an app, and as part of a scene we have a 3D asset with a smoke particle emitter and a curved mesh that plays video. I notice that the scene works fine when either the video or the particle effect runs alone, but the frame rate drops drastically when both are turned on.
How do I solve this? It's an important storytelling feature.
Hello, I've pre-ordered the Logitech Muse with hopes of developing with it, but have yet to find any documentation relating to the capabilities it will have/any APIs that will be available to take advantage of the Muse. Is anyone aware of what might become available?
Thank you in advance.
When scanning multiple rooms (10+) in a single structure using ARWorldMap for coordinate-space consistency, RoomCaptureSession throws CaptureError.exceedSceneSizeLimit. The instructions here (https://developer.apple.com/documentation/roomplan/scanning-the-rooms-of-a-single-structure) describe exactly what I am doing to keep the underlying ARSession alive (calling captureSession.stop(pauseARSession: false)) and save the results before a user moves to the next room. Scanning 11 or so rooms causes the user to hit the exceedSceneSizeLimit error. The ARWorldMap is about 58 MB and is always around this size when hitting this issue. No anchors are present, and all the data seems to be tracking data.
On iPad devices (where I do not see this issue) the ARWorldMap grows in size at a significantly slower rate.
I save the ARWorldMap after each room is scanned and confirmed by the user. If I use the ARWorldMap to initialize the ARSession (as described in the docs), the session immediately errors with exceedSceneSizeLimit once captureSession.run() is executed. Occasionally it will allow the user to scan again, but it either breaks mid-scan or hits the same error.
This has been working fine for the past 2 years, and users have been able to scan dozens of rooms without issue. It seems to have become a problem only recently.
I would expect the ARWorldMap to be allowed to grow much larger. At this point I can cover more of my house in a single scan than I can across multiple capture sessions.
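For context, a trimmed sketch of the per-room flow described above (saveWorldMap is a placeholder for my persistence code):

```swift
import ARKit
import RoomPlan

final class MultiRoomScanner {
    let captureSession = RoomCaptureSession()

    // Finish one room but keep the underlying ARSession alive so the next
    // room shares the same coordinate space, then snapshot the world map.
    func finishRoom() {
        captureSession.stop(pauseARSession: false)
        captureSession.arSession.getCurrentWorldMap { worldMap, _ in
            guard let worldMap else { return }
            saveWorldMap(worldMap)   // hypothetical persistence helper
        }
    }

    // Start the next room; as described above, this is where
    // CaptureError.exceedSceneSizeLimit appears around the 11th room.
    func startNextRoom() {
        captureSession.run(configuration: RoomCaptureSession.Configuration())
    }
}
```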
Few observations:
This happens on my iPhone 15 Pro Max and my iPhone 17 Pro, but not on my iPad M4 (maybe memory related?). It's possible that scanning many more rooms would trigger it on the iPad too.
I have tried things such as resetting the ARConfiguration on the underlying ARSession, but this doesn't help.
I have tried creating a new ARWorldMap and moving its origin to match the older map to clear out tracking data. This almost works, but causes a mess of issues as soon as the user moves, due to the unshared coordinate space.
I believe there are three active issues regarding this: FB14454922, FB15035788, FB20642944
Could we get an update on this issue? It is a production issue and severely limits the user experience in my production application.