Reply to How To Move Camera with PS5 Game Controller
I've found this informative article: https://developer.apple.com/documentation/RealityKit/bringing-your-scenekit-projects-to-realitykit which also clarifies the limitation with cameras in visionOS: https://developer.apple.com/documentation/RealityKit/bringing-your-scenekit-projects-to-realitykit#Configure-camera-management I will consult this moving forward.
Topic: Spatial Computing SubTopic: General Tags:
Jul ’25
Reply to How To Move Camera with PS5 Game Controller
https://youtu.be/6T_Q5EibU8U?si=sdsFsqyvliWSK3C5

Up way too early Saturday morning, wanting to dive back in and see if I can further refine this test from last night. I almost lost the working ImmersiveView.swift file because ChatGPT started making new suggestions and borked everything, but fortunately I was able to revert to the known working version that I had uploaded for it to add a code snippet.

I wanted to see if I could get an HDRI set up and change some materials in RCP. I rewatched Optimize your 3D assets for spatial computing from WWDC24, which helped reiterate the limitations and the suggestion to use baked lighting. I just wanted to get the HDRI to affect my PBR materials, and learned about assigning an Image Based Light (IBL) Receiver component to all objects. I then added a single directional light source with shadows. It's actually starting to look like something now.

Next, I'd like to explore getting some of those objects animated. Both the robot and drummer toys are animated on Apple's Quick Look demo page, so I'll try that next. I'd like to set something up where, as I approach an object, its animation plays, then stops as I move away. I still have the issue with the camera rotation not updating with my initial body/head orientation, but I don't want to break ImmersiveView.swift at the moment.
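The approach-and-animate idea boils down to simple enter/exit distance logic. Here's a minimal sketch (hypothetical names, no RealityKit, so the thresholds are easy to test); in the actual app you'd compute the distance from the head/device transform to the toy each frame and call playAnimation() / stopAllAnimations() on the entity whenever the state flips:

```swift
// Sketch of the proximity trigger: start the animation when the viewer comes
// within `radius` meters, stop once they back out past `exitRadius`.
// The gap between the two radii is hysteresis, so hovering right at the
// boundary doesn't rapidly start/stop the animation.
struct ProximityTrigger {
    let radius: Float        // enter distance (e.g. 1.0 m)
    let exitRadius: Float    // exit distance (e.g. 1.5 m)
    private(set) var isPlaying = false

    // Feed in the current viewer-to-object distance; returns true
    // only on the frame where the play/stop state actually changes.
    mutating func update(distance: Float) -> Bool {
        if !isPlaying && distance < radius {
            isPlaying = true
            return true    // start the animation now
        }
        if isPlaying && distance > exitRadius {
            isPlaying = false
            return true    // stop the animation now
        }
        return false
    }
}
```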
Topic: Spatial Computing SubTopic: General Tags:
Jul ’25
Reply to How To Move Camera with PS5 Game Controller
Hey - any of you Apple developers monitoring this thread - go check out this video. I "sort of" got it to work. And, yes, the frustration is real. I wish the tools were more artist-friendly, so that instead of pausing creativity to chase technical hurdles, I could just create and test in a fun and intuitive manner. Sigh. https://youtu.be/_NLNL_fTDqg
Topic: Spatial Computing SubTopic: General Tags:
Jul ’25
Reply to How To Move Camera with PS5 Game Controller
Let me know if:
• The red sphere is visible
• Movement now feels trackable
• You want right-stick support for turning (rotation)
• Or we need to tweak scale/speed/clipping/etc.
You're close — this should lock it in.

ME: Yes, I see the red sphere, and it moves forward, backward, left, and right. Still no actual camera movement, though.

AI: Excellent — that means everything except the viewpoint shift is working. The red sphere is tied to the camera, and it's moving… so your entity is moving — but your headset camera (what the user sees) is staying fixed. Here's why:

❗ RealityView doesn't use your custom camera for rendering
RealityView always renders from the user's head position (i.e., the Vision Pro headset) unless explicitly told to use another camera — and currently, Apple doesn't expose a public API to set the "view camera" in a RealityView.

ME: (To myself - are you KIDDING me? We can't move a camera in a 3D scene, but the solution is to move the 3D scene around the camera? ARE YOU KIDDING ME?!)

AI: ✅ Confirmed Facts:
• You are moving the CameraRoot entity as expected (logs & sphere prove that).
• The PerspectiveCamera is added, but RealityView ignores it at runtime.
• The headset camera is still rendering from the user's real-world head transform.
⸻
🧠 So how do Vision Pro apps move the camera?
You have two main options, depending on your goals:
⸻
Option A: Move the entire world instead of the camera
Instead of moving a camera entity, move the root of your entire 3D scene in the opposite direction to simulate camera movement.

// Instead of moving the camera, move your world root
let delta = SIMD3<Float>(-move.x, 0, move.z) * 0.05
immersiveScene.position += delta

This will feel like the camera is moving, even though it's actually the scene shifting around the headset.
✅ Pro: Works today
❌ Con: Tricky if you're interacting with world-anchored content
⸻
Option B: Use RealityView's cameraTransform to apply translation (limited)
RealityView gives you a cameraTransform in the update closure, but it's read-only — it represents the user's current position in space. There's no documented way to override it or force rendering from a specific PerspectiveCamera entity.
⸻
✅ Best Practical Solution (Right Now)
Use Option A and move the immersive scene itself. Update your .update block:

} update: { content in
    guard let immersiveScene = content.entities.first(where: { $0.name == "Immersive" }) else {
        print("❌ Immersive scene not found")
        return
    }
    let move = gamepadInput.move
    if move != .zero {
        let delta = SIMD3<Float>(-move.x, 0, move.z) * 0.05
        immersiveScene.position += delta
        print("📦 Scene moved to simulate camera: \(immersiveScene.position)")
    } else {
        print("🟡 No movement this frame.")
    }
}

🔚 Summary
Problem: You're moving a camera entity, but RealityView ignores it.
Solution: Move the scene, not the camera.
Result: It will feel like you're walking/flying through the space.

Let me know if you want to combine movement with rotation (right stick) next — or if you'd rather try loading your scene into a custom anchor and moving that instead.

ME: So, I've wasted three days going in circles for something that is not even possible? Why, for the love of all that is 3D, can we not move a camera in 3D space? I CAN DO THIS IN UNREAL ENGINE.
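For what it's worth, the world-shift workaround the AI landed on reduces to one line of vector math: negate the stick's camera-space delta and apply it to the scene root. A tiny standalone sketch (hypothetical names, plain Swift, no RealityKit):

```swift
// Moving the world instead of the camera: if pushing the stick forward would
// move a camera entity by (x, 0, -z), moving the scene root by the negation
// (-x, 0, z) produces the same apparent motion for the fixed headset camera.
struct WorldShift {
    var speed: Float = 0.05  // meters of scene shift per frame at full deflection

    // stickX/stickZ are the left-stick axes; returns the scene-root delta.
    func sceneDelta(stickX: Float, stickZ: Float) -> (x: Float, y: Float, z: Float) {
        (-stickX * speed, 0, stickZ * speed)
    }
}
```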
Topic: Spatial Computing SubTopic: General Tags:
Jul ’25
Reply to How To Move Camera with PS5 Game Controller
(continued from above, as I hit the 7000 character limit)

I kindly (HA) remind the AI that the code continues to crap out at the content.camera = camera line, with the error "Cannot assign value of type 'PerspectiveCamera' to type 'RealityViewCamera'", and ask it to confirm, with 100 percent certainty, that this is even doable. The response:

Thank you for your patience — you're absolutely right to demand clarity here. Let's clear up the confusion once and for all.
⸻
✅ The Short Answer: Yes, it is 100% doable to move the camera in an immersive RealityView using gamepad input — and you're already doing it correctly with your most recent working code (which uses a PerspectiveCamera parented under a named entity like CameraRoot and moves that entity based on gamepad stick input).
⸻
❌ The Misleading Part:

content.camera = camera // ❌ ERROR: Cannot assign value of type 'PerspectiveCamera' to type 'RealityViewCamera'

This line is not supported with PerspectiveCamera. It only works with a specific wrapper type used for dynamic camera control (RealityViewCamera) — which you are not creating manually, nor should you in this case.
⸻
🧠 Behind the Scenes — RealityKit + RealityView Camera Behavior:
• In RealityView, if you add a camera (e.g., PerspectiveCamera) into the scene graph, RealityKit will automatically use the first enabled camera found in the hierarchy as the rendering camera.
• This is true even if you don't assign content.camera = ....
• ✅ That's why your current approach with CameraRoot + PerspectiveCamera works — RealityKit finds and uses your camera.

🧪 Confirming It Is Working
Based on your last logs:

📍 CameraRoot moved to: SIMD3(0.06829529, 1.6, -0.32122958)

You are definitely moving the camera root, and RealityKit is using the camera attached to that entity. If you don't see the camera's movement reflected visually in Vision Pro, it's likely:

🚨 One of these issues:
1. The Immersive.usda scene has geometry or environment objects hiding your camera view, or is so large that 0.3 meters of movement isn't visually noticeable.
2. The camera's orientation doesn't change, so even though it's translating, the view doesn't shift in a way that's obvious.
3. The RealityKit scene is rendering correctly, but you're expecting more dramatic motion (e.g., flying through the space, not subtle walk-scale translations).
4. You're still locked to the headset's own view, which is common if no camera override is active (see workaround below).

✅ To Confirm It's Really Moving:
Try placing a visible object (like a glowing sphere) in front of your camera in code:

let marker = ModelEntity(mesh: .generateSphere(radius: 0.05))
marker.position = [0, 0, -0.5] // 0.5m in front of the camera
marker.model?.materials = [SimpleMaterial(color: .red, isMetallic: false)]
camera.addChild(marker)

If you see this sphere and it moves as you move the left stick, then you've fully confirmed everything works.
⸻
🧭 TL;DR – Final Verdict
• ✅ Yes, gamepad-based camera movement in a RealityView is 100% possible.
• ❌ You should not use content.camera = camera with a PerspectiveCamera. That's not compatible.
• ✅ Your CameraRoot + PerspectiveCamera + input-driven transform method is the correct way.
• If you don't see motion, we can help debug by placing visual markers, or by checking the scale/positioning of your imported .usda.

Would you like a clean ImmersiveView.swift that includes the glowing marker for confirmation?

Wow, I actually see a red sphere and it moves left, right, forward and backwards with the left thumb stick (yay), but no actual camera movement.
Topic: Spatial Computing SubTopic: General Tags:
Jul ’25
Reply to How To Move Camera with PS5 Game Controller
Some updates (and you'll see how frustrating it is to rely on the AI, which is supposed to be the solution for all mankind moving forward, right?)

It suggests adding a camera directly into my scene in Reality Composer Pro, but this is not achievable at all. It then suggests creating it programmatically, but this also fails miserably. For your reading entertainment, here is the conversation after I asked it to verify my .usda scene, and whether it has a default camera node declared (which it doesn't, because there is no way to add one):

Thank you — I've inspected the Immersive.usda file, and here's the key finding:

🚫 There is no camera (PerspectiveCamera or otherwise) defined in this USDA file. There are several Sphere, Cube, and Xform nodes, as well as a referenced Robot.usdz and VR_room_tmp.usdz, but no camera entity exists.
⸻
✅ Solution Options

🅰️ Option A: Add Camera Programmatically (Recommended)
You can dynamically add a camera in Swift code using RealityView and move it using your gamepad. Here's the working approach you should use (I'll give you a clean, corrected ImmersiveView.swift in a moment).
⸻
🅱️ Option B: Add Camera in Reality Composer Pro (If you want it embedded in the USD)
In Reality Composer Pro 2.0, here's how to embed a camera:
1. Select the scene's root node (likely named Root or VR_room_tmp).
2. Click the "+" button at the bottom left.
3. Choose Transform — this creates an empty transform node.
4. Rename it to something like CameraRoot.
5. With that node selected, click "+" again and choose "Add Component" → Camera.
6. This will attach a Perspective Camera to that transform.
7. Use the gizmo to place it at [0, 1.6, 0] or wherever you want your start point.
8. Save the project and export the .usda.
That will give your .usda a camera, which RealityKit can then use or override.
⸻
✅ Here's a Correct ImmersiveView.swift Implementation
Assuming you want to control the camera position with a gamepad and programmatically add a camera, here's the fixed ImmersiveView.swift:

import SwiftUI
import RealityKit
import RealityKitContent

struct ImmersiveView: View {
    @StateObject private var gamepadInput = GamepadInputHandler()

    var body: some View {
        RealityView { content in
            // Load immersive scene
            if let immersiveScene = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(immersiveScene)
                print("✅ Immersive scene loaded")
            }

            // Create and add camera root
            let cameraRoot = Entity()
            cameraRoot.name = "CameraRoot"
            cameraRoot.position = [0, 1.6, 0]

            let camera = PerspectiveCamera()
            cameraRoot.addChild(camera)
            content.add(cameraRoot)
            content.camera = camera
            print("🎥 Custom camera entity added at position \(cameraRoot.position)")
        } update: { content in
            guard let cameraRoot = content.entities.first(where: { $0.name == "CameraRoot" }) else {
                print("❌ CameraRoot not available yet")
                return
            }
            let move = gamepadInput.move
            if move != .zero {
                let delta = SIMD3<Float>(move.x, 0, -move.z) * 0.05
                cameraRoot.position += delta
                print("📍 CameraRoot moved to: \(cameraRoot.position)")
            } else {
                print("🟡 No movement this frame.")
            }
        }
    }
}

✅ This version:
• Properly attaches a camera
• Updates the cameraRoot position based on the gamepad input
• Avoids trying to access RealityViewContent.cameraTransform (which does not exist)
• Assigns the custom PerspectiveCamera to the RealityView.camera
⸻
Would you like help verifying your updated .usda if you embed the camera in Reality Composer Pro? You can re-export and upload it again — I'll check for the camera node.

I THEN REPLIED THUSLY: There was an error with your code, but the Xcode AI (which is already you, ChatGPT, right?) suggested this fix. And it says here what the code does right, BUT IT STILL DOES NOT WORK.
Now, the fun part (we start going around in circles):

📦 A Note About RealityView.camera
You may have seen earlier versions of this approach attempt to assign:

content.camera = camera

However, in the current Xcode 26.0 beta and visionOS 2.0 SDK, this can fail unless done very carefully (RealityView.camera expects a RealityViewCamera, not a raw Entity). So you're actually better off letting RealityKit pick up your camera via the scene graph, which your code now does correctly.
Topic: Spatial Computing SubTopic: General Tags:
Jul ’25
Reply to Collision Shape not work in Reality Composer
Trying to figure this out - I imported the dice model into Reality Composer Pro, and none of the three collision-shape options works with this particular model. The next obvious question is: how do we assign a custom mesh (ideally, the original mesh, or a low-res equivalent) as the collision shape for a mesh with dynamics?
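One avenue worth trying in code (a sketch, not a confirmed answer): RealityKit can build a convex-hull collision shape directly from a MeshResource via ShapeResource.generateConvex(from:), and a convex hull is usually a good approximation for a die. The function and entity names below are hypothetical; only the RealityKit calls are real API. Note that, unlike a static-mesh shape, a convex shape can be paired with a dynamic physics body:

```swift
import RealityKit

// Hypothetical helper: give a loaded dice ModelEntity a convex-hull collision
// shape generated from its own render mesh (or a low-res proxy), plus a
// dynamic physics body so it can tumble.
func addDynamicCollision(to dice: ModelEntity) async throws {
    guard let mesh = dice.model?.mesh else { return }

    // Convex hull computed from the mesh itself, instead of RCP's box/capsule/sphere fits.
    let shape = try await ShapeResource.generateConvex(from: mesh)

    dice.components.set(CollisionComponent(shapes: [shape]))
    dice.components.set(PhysicsBodyComponent(massProperties: .default,
                                             material: .default,
                                             mode: .dynamic))
}
```

For a truly concave model you'd need to decompose it into several convex pieces and pass them all in the shapes array, since dynamic bodies can't use arbitrary triangle meshes.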
Topic: Graphics & Games SubTopic: RealityKit Tags:
Dec ’24
Reply to How do we author a "reality file" like the ones on Apple's Gallery?
I've been wanting to learn the same thing. I was hoping that we'd have access to the original Reality Composer file so we could study and learn from it. Unfortunately, there's no way to open a .reality file other than Quick Look to try and reverse engineer it. SIGH It would be super useful and highly educational if the original author of those three animation samples would publish a detailed tutorial on how they were created. 🤞
Topic: Spatial Computing SubTopic: ARKit Tags:
Nov ’24
Reply to How to create Lunar Rover USDZ Animated Sample File
Four months later and no actual response to this simple query? This should be an easy tutorial for someone to crank out, no? Specifically, I'm looking to learn how and where to author the button, line, and callouts, then trigger the animations. Is this done in SceneKit, ARKit, Reality Composer Pro, or a combination? I like how this is all embedded in a portable USDZ file, which looks great in visionOS. Surely there is someone who knows how, or who actually worked on these demo files, who could provide the proper guidance? Thx!
Nov ’24
Reply to Export USDZ With Unlit Shader From Maya
Thx for the feedback! I should mention that I actually talked with one of my contacts at Substance 3D specifically about this. I am able to use multiple UDIMs but keep them as a single texture set. This way, I can paint over the seams of 14 hand-defined UV chunks, each with a 4K baseColor map with baked lighting. This enables me to re-project higher-resolution 8K images from the drone and handheld Nikon footage, making the ground texture that much more detailed. I added this to my previous RCP navigation -> industry-standard feedback. :) Fingers crossed!
Aug ’24
Reply to Export USDZ With Unlit Shader From Maya
Thank you for the quick reply! Regarding the UDIM workflow, do you have any suggestions for an alternative approach? I was told that the developers mentioned it would be no problem with multiple 4K maps, which these large environments need to ensure crisp textures (at least up close and within a 50-100 foot radius). I will look into the first two options presented immediately. Option 3 is good to be aware of, but I'd need to work with a programmer familiar enough to execute it. Side note while I have your attention: is there any way in RCP to combine camera Orbit, Dolly, and Pan into a single command accessible via Option+Right/Middle/Left mouse click, exactly like Maya? This would be a huge improvement for quick navigation. :)
Aug ’24