The visionOS 26 Enterprise API adds a new feature, Shared Coordinate Space: participants exchange their coordinate data via SharedCoordinateSpaceProvider over their own network, and once a shared coordinate space is established with nearby participants, the event connectedParticipantIdentifiers(participants: [UUID]) is received.
However, at that point Event.participantIdentifier is still an invalid default value (00000000-0000-0000-FFFF-FFFFFFFF). When or how can I get a valid event.participantIdentifier, or is there some other way to get the local participantIdentifier?
If this is a bug, please fix it in a later beta release. Thank you.
Hello again!
So, I am trying my best to figure out how to get a simple, fully immersive scene that is navigable via a PS5 controller to work on Vision Pro. I am even using ChatGPT (both externally AND in Xcode when it makes fix recommendations); however, I seem to be going around in circles, constantly sending error messages back to ChatGPT, then trying the new "confirmed to work" fixed code, only to get more errors. I have also explicitly let the AI know that I'm using all the current betas, only to learn there has been lots of stuff deprecated (of course!).
This is my initial query to ChatGPT:
"How can I create a fully immersive VR experience for Apple Vision Pro, using a PS5 game controller to navigate the user via first person view through a 3d scene?"
And then down the rabbit hole the past few days. Seriously, I thought AI was supposed to be helpful! LOL
Per the Xcode logs, I'm seeing thumbstick movement, but I simply cannot get the camera to move. Is this even possible?! Is it a camera defined in Reality Composer Pro? Or does it need to be defined outside? I'm trying to follow along with the AI, but it seems to go back and forth between different solutions. Here is the latest it's spitting out:
__
Thanks for this super clear log — and you’re absolutely right: the gamepad input is working, CameraRoot is moving as expected… but your actual camera view is not changing.
This strongly suggests the issue is that RealityKit is not using your custom camera (PerspectiveCameraComponent) for rendering the view.
⸻
🔍 What’s happening
Even though CameraRoot is moving (with correct logs), RealityKit will default to its own internal camera unless:
1. You explicitly assign your camera entity to be the one used, or
2. Your camera is the only one and RealityKit picks it up automatically.
But in visionOS 2.0 / Xcode 26, there seems to be a bug or behavior change where RealityKit no longer auto-uses your custom PerspectiveCameraComponent in RealityView unless it’s:
• properly set with content.camera, or
• part of a .usda/Reality Composer camera hierarchy that’s already active.
Unfortunately:
🟥 RealityView.camera = .entity(cameraRoot)
❌ Is no longer allowed (we confirmed this breaks with a type error)
⸻
✅ The Reliable Workaround: Attach your camera to the Immersive scene itself
Since we’re loading a .usda scene (Reality Composer Pro), here’s the most reliable path forward:
✔️ Option A: Embed your camera into the Reality Composer Pro scene
1. Open your Reality Composer Pro scene
2. Add a Camera entity directly in your Immersive.usda scene
3. Name it something like "PlayerCamera"
4. Position it at [0, 1.6, 0]
5. Leave it selected as the default camera
Then in your Swift code:
• Find the camera entity with .findEntity(named:)
• Move that entity instead of creating a new one.
⸻
✔️ Option B: Force RealityKit to use your custom camera (via transform parenting)
Since direct camera assignment is unavailable, we can use a trick:
Move the whole RealityView scene via a parent entity with a camera.
But let’s go with Option A, since you’ve already got a Reality Composer Pro scene.
__
Can any developer over there make a suggestion on how to move forward? I JUST want to create a sandbox template so I can start exploring 3D scenes created in Maya and sent over to the headset. I'll deal with animation next, but come on, why is this so difficult to get working? I'm not a programmer, but I have been trying to wrap my head around Xcode and SwiftUI. This needs to be much simpler. Or you need to provide us creatives with better sample templates and non-programmer-speak instructions on how to set this up properly. Ideally, you HIRE us 3D professionals to work side by side with the programmers to help make these tools usable - especially Reality Composer Pro. Seriously, I am making a concerted effort to use the native tools, even though I would love to be porting Unreal Engine scenes over.
If anyone can help point me in the right direction, coming from a 3D Creator/Animator/Modeler perspective, I, and my fellow peers in the XR/AR/VR community would greatly appreciate it. Thank you.
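One note from a RealityKit perspective that may explain the dead end above: in a visionOS immersive space, the system always renders from the wearer's viewpoint, so a PerspectiveCameraComponent is not used for rendering the way it is in a non-AR ARView on iOS or macOS. The usual workaround is to move the scene content rather than a camera. Below is only a minimal sketch of that idea using the GameController framework; the view name, the worldRoot entity, the "Immersive" scene name from the default visionOS template, and the movement speed are all assumptions, not code from this thread.

import GameController
import RealityKit
import RealityKitContent
import SwiftUI

struct ControllerImmersiveView: View {
    // Parent for everything loaded from Reality Composer Pro (assumed name).
    @State private var worldRoot = Entity()

    var body: some View {
        RealityView { content in
            // Load the Reality Composer Pro scene and parent it under worldRoot.
            if let scene = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                worldRoot.addChild(scene)
            }
            content.add(worldRoot)
        }
        .task {
            // Poll the first connected extended gamepad roughly 90 times per second.
            while !Task.isCancelled {
                if let pad = GCController.controllers().first?.extendedGamepad {
                    let x = pad.leftThumbstick.xAxis.value
                    let y = pad.leftThumbstick.yAxis.value
                    // Translate the world opposite to the stick so the wearer
                    // appears to walk through the scene (tune speed and signs to taste).
                    let speed: Float = 0.02
                    worldRoot.position += SIMD3<Float>(-x, 0, y) * speed
                }
                try? await Task.sleep(nanoseconds: 11_000_000)
            }
        }
    }
}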
Hello! I’m excited to see that Look to Scroll has been included in visionOS 26 Beta. I’m aiming to achieve a feature where the user’s gaze at a specific edge automatically scrolls to that position. However, I’ve experimented with ScrollView and haven’t been able to trigger this functionality. Could you advise if additional API modifiers are necessary? Thank you!
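For comparison, here is a minimal sketch of how I understand the opt-in works in visionOS 26: look-to-scroll is enabled per scroll view with the scrollInputBehavior(_:for:) modifier and the .look input kind. Treat the modifier name and the .look case as assumptions to verify against the current SDK; the list content is just filler.

import SwiftUI

struct LookToScrollList: View {
    var body: some View {
        ScrollView {
            LazyVStack(alignment: .leading) {
                ForEach(0..<100, id: \.self) { index in
                    Text("Row \(index)")
                        .padding()
                }
            }
        }
        // Opt this scroll view in to gaze-driven scrolling (visionOS 26).
        .scrollInputBehavior(.enabled, for: .look)
    }
}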
I am trying to get the new PresentationComponent working in visionOS 26 as seen in this WWDC video:
https://developer.apple.com/videos/play/wwdc2025/274/?time=962 (18:29 minutes into the video)
Here is some other example code, but it doesn't work either: https://stepinto.vision/devlogs/project-graveyard-devlog-002/
My simple Text view (that I am adding via a PresentationComponent) does not appear in my RealityView even though the entity is found. Here is a simple example built from an Xcode immersive view default project:
struct ImmersiveView: View {
    @Environment(AppModel.self) var appModel

    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(immersiveContentEntity)

                if let materializedImmersiveContentEntity = try? await Entity(named: "Test", in: realityKitContentBundle) {
                    content.add(materializedImmersiveContentEntity)

                    var presentation = PresentationComponent(
                        configuration: .popover(arrowEdge: .bottom),
                        content: Text("Hello, World!")
                            .foregroundColor(.red)
                    )
                    presentation.isPresented = true
                    materializedImmersiveContentEntity.components.set(presentation)
                }
            }
        }
    }
}
Here is the Apple reference: https://developer.apple.com/documentation/realitykit/presentationcomponent
I'm developing a visionOS app with bouncing-ball physics and struggling to achieve natural bouncing behavior using RealityKit's physics system. Despite following Apple's recommended parameters, the ball loses significant energy on each bounce and doesn't behave like a real basketball, tennis ball, or football would.
With identical physics parameters (restitution = 1.0), RealityKit shows significant energy loss. I've had to implement a custom physics system to compensate, but I want to use native RealityKit physics. It's impossible to make it work by applying custom impulses.
Ball Physics Setup (Following Apple Forum Recommendations)
// From PhysicsManager.swift
private func createBallEntityRealityKit() -> Entity {
    let ballRadius: Float = 0.05
    let ballEntity = Entity()
    ballEntity.name = "bouncingBall"

    // Mesh and material
    let mesh = MeshResource.generateSphere(radius: ballRadius)
    var material = PhysicallyBasedMaterial()
    material.baseColor = .init(tint: .cyan)
    material.roughness = .float(0.3)
    material.metallic = .float(0.8)
    ballEntity.components.set(ModelComponent(mesh: mesh, materials: [material]))

    // Physics setup from Apple Developer Forums
    let physics = PhysicsBodyComponent(
        massProperties: .init(mass: 0.624), // Seems too heavy for 5cm ball
        material: PhysicsMaterialResource.generate(
            staticFriction: 0.8,
            dynamicFriction: 0.6,
            restitution: 1.0 // Perfect elasticity, yet still loses energy
        ),
        mode: .dynamic
    )
    ballEntity.components.set(physics)
    ballEntity.components.set(PhysicsMotionComponent())

    // Collision setup
    let collisionShape = ShapeResource.generateSphere(radius: ballRadius)
    ballEntity.components.set(CollisionComponent(shapes: [collisionShape]))

    return ballEntity
}
Ground Plane Physics
// From GroundPlaneView.swift
let groundPhysics = PhysicsBodyComponent(
    massProperties: .init(mass: 1000),
    material: PhysicsMaterialResource.generate(
        staticFriction: 0.7,
        dynamicFriction: 0.6,
        restitution: 1.0 // Perfect bounce
    ),
    mode: .static
)
entity.components.set(groundPhysics)
Wall Physics
// From WalledBoxManager.swift
let wallPhysics = PhysicsBodyComponent(
    massProperties: .init(mass: 1000),
    material: PhysicsMaterialResource.generate(
        staticFriction: 0.7,
        dynamicFriction: 0.6,
        restitution: 0.85 // Slightly less than ground
    ),
    mode: .static
)
wall.components.set(wallPhysics)
Collision Detection
// From GroundPlaneView.swift
content.subscribe(to: CollisionEvents.Began.self) { event in
    guard physicsMode == .realityKit else { return }

    let currentTime = Date().timeIntervalSince1970
    guard currentTime - lastCollisionTime > 0.1 else { return }

    if event.entityA.name == "bouncingBall" || event.entityB.name == "bouncingBall" {
        let normal = event.collision.normal

        // Distinguish between wall and ground collisions
        if abs(normal.y) < 0.3 { // Wall bounce
            print("Wall collision detected")
        } else if normal.y > 0.7 { // Ground bounce
            print("Ground collision detected")
        }

        lastCollisionTime = currentTime
    }
}
Issues Observed
Energy Loss: Despite restitution = 1.0 (perfect elasticity), the ball loses ~20-30% energy per bounce
Wall Sliding: Ball tends to slide down walls instead of bouncing naturally
No Damping Control: Comments mention damping values, but they don't seem to affect the physics (see the sketch after this list)
Mass: Changing the mass also doesn't make much of a difference
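One detail that may be worth checking against the setup above: linearDamping and angularDamping are stored properties on PhysicsBodyComponent itself rather than parameters of PhysicsMaterialResource, so they have to be set on the component before it is assigned. A minimal sketch is below; the zero values are placeholders, and I can't promise this removes the energy loss described above.

// Sketch: damping is configured on PhysicsBodyComponent, not on the physics material.
var physics = PhysicsBodyComponent(
    massProperties: .init(mass: 0.624),
    material: PhysicsMaterialResource.generate(
        staticFriction: 0.8,
        dynamicFriction: 0.6,
        restitution: 1.0
    ),
    mode: .dynamic
)
physics.linearDamping = 0.0   // placeholder: 0 = no velocity decay between bounces
physics.angularDamping = 0.0  // placeholder: 0 = no spin decay
ballEntity.components.set(physics)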
Custom Physics System (Workaround)
I've implemented a custom physics system that manually calculates velocities and applies more realistic restitution values:
// From BouncingBallComponent.swift
struct BouncingBallComponent: Component {
    var velocity: SIMD3<Float> = .zero
    var angularVelocity: SIMD3<Float> = .zero
    var bounceState: BounceState = .idle
    var lastBounceTime: TimeInterval = 0
    var bounceCount: Int = 0
    var peakHeight: Float = 0
    var totalFallDistance: Float = 0

    enum BounceState {
        case idle
        case falling
        case justBounced
        case bouncing
        case settled
    }
}
Is this energy loss expected behavior in RealityKit, even with perfect restitution (1.0)?
Are there additional physics parameters (damping, solver iterations, etc.) that could improve bounce behavior?
Would switching to Unity be necessary for more realistic ball physics, or am I missing something in RealityKit?
Even in the last video here: https://stepinto.vision/example-code/collisions-physics-physics-material/ the bounce of the ball is very unnatural; it stops after 3-4 bounces. I apply custom impulses, but if I have walls around the ball, it's almost impossible to make it look natural. I also saw this post https://developer.apple.com/forums/thread/759422 and the ball is still not bouncing naturally.
One of the most common ways to provide a window size in visionOS is to use the defaultSize scene modifier.
WindowGroup(id: "someID") {
    SomeView()
}
.defaultSize(CGSize(width: 600, height: 600))
Starting in visionOS 26, using this has a side effect: visionOS 26 restores windows that have been locked in place or snapped to surfaces. If a user has manually adjusted the size of a locked/snapped window, the user's size is only restored in some cases.
Manual resize respected
Leaving a room and returning later
Taking the headset off and putting it back on later
Manual resize NOT respected
Device restart. In this case, the window is reopened where it was locked, but the size is set back to the values passed to defaultSize. The manual resizing adjustments the user has made are lost. This is counter to how all other windows and widgets work.
I reported this last month (FB18429638), but haven't heard back if this is a bug or intended behavior.
Questions
What is the best way to provide a default window size that will only be used when opening new windows, and not used during scene restoration?
Should we try to keep track of window sizes after users adjust them and save that somewhere? (See the sketch after these questions.)
If this is intended behavior, can someone please update the docs accordingly?
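On the question of tracking the size ourselves, one possible direction is sketched below: read the root view's size with a GeometryReader, persist it, and feed the last saved value into defaultSize the next time the scene is built. The storage keys and fallback values are assumptions, and this is only a sketch, not a confirmed workaround for the restoration behavior described above.

import SwiftUI

@main
struct MyApp: App {
    // Last size the user resized the window to (assumed storage keys).
    @AppStorage("savedWindowWidth") private var savedWidth: Double = 600
    @AppStorage("savedWindowHeight") private var savedHeight: Double = 600

    var body: some Scene {
        WindowGroup(id: "someID") {
            SomeView()
                .background {
                    // Track the root view's size (a proxy for the window content size) and persist it.
                    GeometryReader { proxy in
                        Color.clear
                            .onChange(of: proxy.size, initial: true) { _, newSize in
                                savedWidth = newSize.width
                                savedHeight = newSize.height
                            }
                    }
                }
        }
        .defaultSize(CGSize(width: savedWidth, height: savedHeight))
    }
}

// Stand-in for the poster's actual view.
struct SomeView: View {
    var body: some View { Text("Hello") }
}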
I've been using the macOS Xcode Reality Composer to export interactive .reality files that can be hosted on the web and linked to, triggering QuickLook to open the interactive AR experience.
That works really well.
I've just downloaded the Xcode 15 beta, which ships with the new Reality Composer Pro, and I can't see a way to export to .reality files anymore. It seems that this is only for building apps that ship as native iOS etc. apps, rather than experiences that can be viewed in QuickLook.
Am I missing something, or is it no longer possible to export .reality files?
Thanks.
Hello,
I'm currently trying to make a collaborative app. But it only works with RealityView; when I tried to use CompositorLayer as shown below, the Personas disappeared.
ImmersiveSpace(id: "ImmersiveSpace-Metal") {
    CompositorLayer(configuration: MetalLayerConfiguration()) { layerRenderer in
        SpatialRenderer_InitAndRun(layerRenderer)
    }
}
Is there any potential solution to see Personas in a Metal view?
Thanks in advance!
I have discovered that RemoteImmersiveSpace is limited to content conforming to the CompositorContent protocol, precluding direct use of RealityView. Consequently, I am interested in understanding the appropriate method for integrating CompositorContent within RemoteImmersiveSpace. Thanks.
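For what it's worth, my reading of the WWDC25 Metal-rendering material is that CompositorLayer itself conforms to CompositorContent, so a Mac app can place it directly inside RemoteImmersiveSpace, much like the ImmersiveSpace example earlier in this thread. The sketch below reuses the MetalLayerConfiguration and SpatialRenderer_InitAndRun names from that post and should be treated as an unverified assumption about the macOS 26 API rather than confirmed code.

import SwiftUI
import CompositorServices

@main
struct RemoteRenderingApp: App {
    var body: some Scene {
        // Assumption: RemoteImmersiveSpace (macOS 26) takes CompositorContent,
        // and CompositorLayer conforms to it, so it can be used here directly.
        RemoteImmersiveSpace(id: "ImmersiveSpace-Metal") {
            CompositorLayer(configuration: MetalLayerConfiguration()) { layerRenderer in
                SpatialRenderer_InitAndRun(layerRenderer)
            }
        }
    }
}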
Spatial widgets are a new feature of visionOS 26. I notice the system Photos app can add a spatial image to its widget. I wonder whether third-party apps can use a spatial image, or any 3D content, in their widgets? I tried to use RealityView in a widget and it crashed at runtime.
So is the spatial image in a widget only supported by the system Photos app, and not available to developers right now?
Is there any size guidance for the new WidgetKit integration on visionOS? The Widget HIG provides dimensions for all the widget size classes on iOS, iPadOS and watchOS, but has not been updated for visionOS.
https://developer.apple.com/design/human-interface-guidelines/widgets
My potential widget use case is image based, so I'm looking to better understand the optimal size, resolution etc I would need, particularly for the new visionOS specific extra large widget size.
I tried to show a spatial photo in my application with SwiftUI's Image, but it just shows a flat version of it, even on Vision Pro.
So how can I show a spatial photo to users? Are there any options for this?
I downloaded Xcode 16 and updated my macOS to 15, but I keep getting this error when trying to build the game in the simulator or on a device:
[xrsimulator] Exception thrown: The operation couldn’t be completed. (realitytool.RKAssetsCompiler.RKAssetsCompilerError error 3.)
Hi Apple Team,
We noticed the following exciting changelog in the latest macOS 26 beta:
A new algorithm significantly improves PhotogrammetrySession reconstruction quality of low-texture objects not captured with the ObjectCaptureSession front end. It will be downloaded and cached once in the background when the PhotogrammetrySession is used at runtime. If network isn’t available at that time, the old low quality model will be used until the new one can be downloaded. There is no code change needed to get this improved model. (145220451)
However, after trying this on the latest beta and running some tests, we do not see any differences on objects with low texture, such as single-coloured surfaces. Is there anything we are missing? The machine is definitely connected to the internet, but we have no way of knowing from the logs whether the new model is being used.
Thanks
We've recently discovered that our app crashes on startup on the latest visionOS 2.0 beta 5 (22N5297g) build. In fact, the entire field of view would dim down and visionOS would then restart, showing the Apple logo. Interestingly, no app crash is reported by Xcode during debug.
After investigation, we have isolated the issue to a specific USDZ asset in our app. Loading it in a sample, blank project also causes visionOS to reliably crash, or become extremely unresponsive with rendering artifacts everywhere.
This looks like a potentially serious issue. Even if the asset is problematic, loading it should not crash the entire OS. We have filed feedback FB14756285, along with a demo project. Hopefully someone can take a look. Thanks!
I noticed in the latest macOS beta 3 that there was this update:
A new algorithm significantly improves PhotogrammetrySession reconstruction quality of low-texture objects not captured with the ObjectCaptureSession front end. It will be downloaded and cached once in the background when the PhotogrammetrySession is used at runtime. If network isn’t available at that time, the old low quality model will be used until the new one can be downloaded. There is no code change needed to get this improved model. (145220451)
I am not noticing any difference compared to before with the reconstructions I tested, so I am assuming it's reverting to the old model, but in the logs there is no way to see whether it succeeds or fails to download that new model.
Do you have any more information on what was improved here, with some examples and what we should be looking for? Also, how can I confirm that the download of that new model has not failed?
Hey there,
since SceneView has been marked as "deprecated" for SwiftUI, I'm wondering which alternatives should be considered for the following situation:
I have a SwiftUI app (for iOS and iPadOS) where users can view (with rotate, scale, and move gestures) 3D models (USDZ) in a scene. The models are downloaded from a web backend and loaded via local URL paths.
What I tested:
I've tried ARView in .nonAR mode and RealityView; however, I didn't get the expected behavior, which is that the user can rotate and scale the 3D models in a virtual space.
ARView in nonAR mode still shows the object as in normal AR mode, just without the camera stream.
I tried to add gestures to the RealityView on iOS - loading USDZ 3D models worked, but the gestures didn't (see the sketch after this list).
Model3D is only available for visionOS (it would be amazing to have it for iOS).
I also checked QuickLook preview; however, it works pretty strangely via a file picker etc., which is not the way the user should load the 3D models in my app.
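Regarding the gesture item in the list above: with iOS 18's RealityView, the SwiftUI gestures only target entities that carry both an InputTargetComponent and collision shapes, which is the part most easily missed. Below is a minimal sketch under that assumption; the model.usdz resource name, rotation speed, and scaling behavior are placeholders rather than code from this post.

import SwiftUI
import RealityKit

struct ModelViewer: View {
    @State private var modelAnchor = Entity()

    var body: some View {
        RealityView { content in
            // Load a USDZ from a local file URL (placeholder resource name).
            if let url = Bundle.main.url(forResource: "model", withExtension: "usdz"),
               let model = try? ModelEntity.loadModel(contentsOf: url) {
                // Gestures only hit entities with an input target and collision shapes.
                model.components.set(InputTargetComponent())
                model.generateCollisionShapes(recursive: true)
                modelAnchor.addChild(model)
            }
            content.add(modelAnchor)
        }
        // Drag to rotate the model around the y-axis.
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    let angle = Float(value.translation.width) * 0.005
                    value.entity.orientation = simd_quatf(angle: angle, axis: [0, 1, 0])
                }
        )
        // Pinch to scale.
        .gesture(
            MagnifyGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    value.entity.scale = SIMD3<Float>(repeating: Float(value.magnification))
                }
        )
    }
}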
Maybe I missed something; I couldn't find anything that can help me. I'm pretty much stuck adopting the latest and greatest frameworks/APIs in my app and taking the next steps to port my app to visionOS.
Long story short 😃:
Does someone have an idea what is the alternative to SceneView for USDZ 3D models?
I appreciate your support!!
Thanks in advance!
Hello, I am trying to develop an app that broadcasts what the user sees via Apple Vision Pro. I am a graduate student at a university.
I have two questions:
If I want to use passthrough in screen capture (in visionOS), do I have to join the Apple Developer Enterprise Program to get the Enterprise API?
And can I buy the Apple Developer Enterprise Program (Enterprise API) with my university account?
Have any of you been able to do this?
Thank you
Hello, since updating to beta 3 the sculpting sample app doesn't work; it crashes on launch.
It seems to be something in AnchorEntity or AccessoryAnchoringSource:
Referenced from: <00B81486-1A74-30A0-B75B-4B39E3AF57DF> /private/var/containers/Bundle/Application/3D2EBF59-19F0-4BF4-8567-6962AA36A2C6/delete.app/delete.debug.dylib
Expected in: <BAA9B221-78A1-3B99-AA2F-B8DFCD179FC7> /System/Library/Frameworks/RealityFoundation.framework/RealityFoundation
Hi!
I'm currently experimenting on Apple Vision Pro with hand and head anchors. Is there a way to get an anchor linked to the Apple Magic Keyboard (as the detection is already done to display inputs above the keyboard)?
Thanks in advance,
Have a good day!