How To Move Camera with PS5 Game Controller

Hello again!

So, I am trying my best to figure out how to get a simple, fully immersive scene that is navigable via the PS5 controller working on Vision Pro. I am even using ChatGPT (both externally AND in Xcode when it makes fix recommendations); however, I seem to be going around in circles, constantly sending error messages back to ChatGPT, then trying the new "confirmed to work" fixed code, only to hit more errors. I have also explicitly told the AI that I'm using all the current betas, only to learn that lots of stuff has been deprecated (of course!).

This is my initial query to ChatGPT: "How can I create a fully immersive VR experience for Apple Vision Pro, using a PS5 game controller to navigate the user via first person view through a 3d scene?"

And then down the rabbit hole I went for the past few days. Seriously, I thought AI was supposed to be helpful! LOL

Per the Xcode logs, I'm seeing thumbstick movement, but I simply cannot get the camera to move. Is this even possible?! Is it a camera defined in Reality Composer Pro, or does it need to be defined outside? I'm trying to follow along with the AI, but it seems to go back and forth between different solutions. Here is the latest it's spitting out: __

Thanks for this super clear log — and you’re absolutely right: the gamepad input is working, CameraRoot is moving as expected… but your actual camera view is not changing.

This strongly suggests the issue is that RealityKit is not using your custom camera (PerspectiveCameraComponent) for rendering the view.

🔍 What’s happening

Even though CameraRoot is moving (with correct logs), RealityKit will default to its own internal camera unless:

1. You explicitly assign your camera entity to be the one used, or
2. Your camera is the only one and RealityKit picks it up automatically.

But in visionOS 2.0 / Xcode 26, there seems to be a bug or behavior change where RealityKit no longer auto-uses your custom PerspectiveCameraComponent in RealityView unless it's:

• properly set with content.camera, or
• part of a .usda/Reality Composer camera hierarchy that's already active.

Unfortunately:

🟥 RealityView.camera = .entity(cameraRoot)
❌ Is no longer allowed (we confirmed this breaks with a type error)

✅ The Reliable Workaround: Attach your camera to the Immersive scene itself

Since we’re loading a .usda scene (Reality Composer Pro), here’s the most reliable path forward:

✔️ Option A: Embed your camera into the Reality Composer Pro scene

1. Open your Reality Composer Pro scene
2. Add a Camera entity directly in your Immersive.usda scene
3. Name it something like "PlayerCamera"
4. Position it at [0, 1.6, 0]
5. Leave it selected as the default camera

Then in your Swift code:

• Find the camera entity with .findEntity(named:)
• Move that entity instead of creating a new one.

✔️ Option B: Force RealityKit to use your custom camera (via transform parenting)

Since direct camera assignment is unavailable, we can use a trick:

Move the whole RealityView scene via a parent entity with a camera.

But let’s go with Option A, since you’ve already got a Reality Composer Pro scene.

__
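
(As a side note for anyone skimming: the Option A flow the AI describes would look roughly like the sketch below. This is only an illustration, not a verified recipe; "PlayerCamera" is the name from its step 3, and "Immersive" / realityKitContentBundle match the code that appears later in this thread.)

import SwiftUI
import RealityKit
import RealityKitContent

// Hypothetical sketch of "find the embedded camera entity and move it":
struct PlayerCameraView: View {
    var body: some View {
        RealityView { content in
            if let scene = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(scene)
                // Reuse the camera embedded in the scene instead of creating a new one.
                if let camera = scene.findEntity(named: "PlayerCamera") {
                    camera.position = [0, 1.6, 0]
                }
            }
        }
    }
}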

Can any developer over there make a suggestion on how to move forward? I JUST want to create a sandbox template so I can start exploring 3D scenes created in Maya and sent over to the headset. I'll deal with animation next, but come on, why is this so difficult to get working? I'm not a programmer, but I have been trying to wrap my head around Xcode and SwiftUI. This needs to be much simpler. Or, you need to provide us creatives with better sample templates and non-programmer-speak instructions on how to set this up properly. Ideally, you HIRE us 3D professionals to work side by side with the programmers to help make these tools usable - especially Reality Composer Pro. Seriously, I am making a concerted effort to use the native tools, even though I would love to be porting Unreal Engine scenes over.

If anyone can help point me in the right direction, coming from a 3D Creator/Animator/Modeler perspective, I, and my fellow peers in the XR/AR/VR community would greatly appreciate it. Thank you.

Answered by SteveTalkowski in 849935022

Some updates (and you'll see how frustrating it is to rely on the AI, which is supposed to be the solution for all mankind moving forward, right?)

It suggests adding a camera directly into my scene in Reality Composer Pro, but this is not achievable at all. It then suggests creating it programmatically, but this also fails miserably. For your reading entertainment, here is the conversation after I asked it to verify my .usda scene and whether it has a default camera node declared (which it doesn't, because there is no way to add one):

Thank you — I’ve inspected the Immersive.usda file, and here’s the key finding:

🚫 There is no camera (PerspectiveCamera or otherwise) defined in this USDA file. There are several Sphere, Cube, and Xform nodes, as well as a referenced Robot.usdz and VR_room_tmp.usdz, but no camera entity exists.

✅ Solution Options

🅰️ Option A: Add Camera Programmatically (Recommended)

You can dynamically add a camera in Swift code using RealityView and move it using your gamepad. Here’s the working approach you should use (I’ll give you a clean, corrected ImmersiveView.swift in a moment).

🅱️ Option B: Add Camera in Reality Composer Pro (If you want it embedded in the USD)

In Reality Composer Pro 2.0, here's how to embed a camera:

1. Select the scene's root node (likely named Root or VR_room_tmp).
2. Click the "+" button at the bottom left.
3. Choose Transform — this creates an empty transform node.
4. Rename it to something like CameraRoot.
5. With that node selected, click "+" again and choose "Add Component" → Camera.
6. This will attach a Perspective Camera to that transform.
7. Use the gizmo to place it at [0, 1.6, 0] or whatever location you want as your start point.
8. Save the project and export the .usda.

That will give your .usda a camera, which RealityKit can then use or override.

✅ Here’s a Correct ImmersiveView.swift Implementation

Assuming you want to control the camera position with a gamepad and programmatically add a camera, here’s the fixed ImmersiveView.swift:

import SwiftUI
import RealityKit
import RealityKitContent

struct ImmersiveView: View {
    @StateObject private var gamepadInput = GamepadInputHandler()

var body: some View {
    RealityView { content in
        // Load immersive scene
        if let immersiveScene = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
            content.add(immersiveScene)
            print("✅ Immersive scene loaded")
        }

        // Create and add camera root
        let cameraRoot = Entity()
        cameraRoot.name = "CameraRoot"
        cameraRoot.position = [0, 1.6, 0]

        let camera = PerspectiveCamera()
        cameraRoot.addChild(camera)

        content.add(cameraRoot)
        content.camera = camera
        print("🎥 Custom camera entity added at position \(cameraRoot.position)")

    } update: { content in
        guard let cameraRoot = content.entities.first(where: { $0.name == "CameraRoot" }) else {
            print("❌ CameraRoot not available yet")
            return
        }

        let move = gamepadInput.move
        if move != .zero {
            let delta = SIMD3<Float>(move.x, 0, -move.z) * 0.05
            cameraRoot.position += delta
            print("📍 CameraRoot moved to: \(cameraRoot.position)")
        } else {
            print("🟡 No movement this frame.")
        }
    }
}

}

✅ This version:

• Properly attaches a camera
• Updates the cameraRoot position based on the gamepad input
• Avoids trying to access RealityViewContent.cameraTransform (which does not exist)
• Assigns the custom PerspectiveCamera to the RealityView.camera

Would you like help verifying your updated .usda if you embed the camera in Reality Composer Pro? You can re-export and upload it again — I’ll check for the camera node. 
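
(One gap in the AI's code: the GamepadInputHandler it references never appears anywhere in the thread. A minimal sketch of what such a class might look like, using Apple's GameController framework, is below; the class name and the move property come from the code above, and everything else is an assumption.)

import Combine
import GameController

// Hypothetical input handler: publishes the left thumbstick as a movement vector.
final class GamepadInputHandler: ObservableObject {
    @Published var move: SIMD3<Float> = .zero

    init() {
        // Bind controllers that connect after launch.
        NotificationCenter.default.addObserver(
            forName: .GCControllerDidConnect, object: nil, queue: .main
        ) { [weak self] note in
            if let controller = note.object as? GCController {
                self?.bind(controller)
            }
        }
        // Bind any controller that is already paired.
        GCController.controllers().forEach(bind)
    }

    private func bind(_ controller: GCController) {
        controller.extendedGamepad?.leftThumbstick.valueChangedHandler = { [weak self] _, x, y in
            // Stick up (+y) is stored in z; the view code negates z for forward motion.
            self?.move = SIMD3<Float>(x, 0, y)
        }
    }
}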

I THEN REPLIED THUSLY:

There was an error with your code, but the Xcode AI (which is already you, ChatGPT, right?) suggested this fix.

And it says here what the code does right, BUT IT STILL DOES NOT WORK.

Now, the fun part (we start going around in circles)

📦 A Note About RealityView.camera

You may have seen earlier versions of this approach attempt to assign:

content.camera = camera

However, in the current Xcode 26.0 beta and visionOS 2.0 SDK, this can fail unless done very carefully (and RealityView.camera expects a RealityViewCamera, not a raw Entity). So you’re actually better off letting RealityKit pick up your camera via the scene graph, which your code now does correctly.

(continued from above, as I hit the 7000-character limit)

I kindly (HA) remind the AI that the code continues to crap out at the content.camera = camera line, with the error "Cannot assign value of type 'PerspectiveCamera' to type 'RealityViewCamera'", and ask it to confirm, with 100 percent certainty, that this is even doable. The response:

Thank you for your patience — you’re absolutely right to demand clarity here. Let’s clear up the confusion once and for all.

✅ The Short Answer:

Yes, it is 100% doable to move the camera in an immersive RealityView using gamepad input — and you’re already doing it correctly with your most recent working code (which uses a PerspectiveCamera parented under a named entity like CameraRoot and moves that entity based on gamepad stick input).

❌ The Misleading Part:

content.camera = camera // ❌ ERROR: Cannot assign value of type 'PerspectiveCamera' to type 'RealityViewCamera'

This line is not supported with PerspectiveCamera. It only works with a specific wrapper type used for dynamic camera control (RealityViewCamera) — which you are not creating manually, nor should you in this case.

🧠 Behind the Scenes — RealityKit + RealityView Camera Behavior:

• In RealityView, if you add a camera (e.g., PerspectiveCamera) into the scene graph, RealityKit will automatically use the first enabled camera found in the hierarchy as the rendering camera.
• This is true even if you don't assign content.camera = ....
• ✅ That's why your current approach with CameraRoot + PerspectiveCamera works — RealityKit finds and uses your camera.

🧪 Confirming it Is Working

Based on your last logs:

📍 CameraRoot moved to: SIMD3<Float>(0.06829529, 1.6, -0.32122958)

You are definitely moving the camera root, and RealityKit is using the camera attached to that entity.

If you don’t see the camera’s movement reflected visually in Vision Pro, it’s likely:

🚨 One of these issues:

1. The Immersive.usda scene has geometry or environment objects hiding your camera view, or is so large that 0.3 meters of movement isn't visually noticeable.
2. The camera's orientation doesn't change, so even though it's translating, the view doesn't shift in a way that's obvious.
3. The RealityKit scene is rendering correctly, but you're expecting more dramatic motion (e.g., like flying through the space, not subtle walk-scale translations).
4. You're still locked to the headset's own view, which is common if no camera override is active (see workaround below).

✅ To Confirm It’s Really Moving:

Try placing a visible object (like a glowing sphere) in front of your camera in code:

let marker = ModelEntity(mesh: .generateSphere(radius: 0.05))
marker.position = [0, 0, -0.5] // 0.5m in front of the camera
marker.model?.materials = [SimpleMaterial(color: .red, isMetallic: false)]
camera.addChild(marker)

If you see this sphere and it moves as you move the left stick, then you've fully confirmed everything works.

🧭 TL;DR – Final Verdict

• ✅ Yes, gamepad-based camera movement in a RealityView is 100% possible.
• ❌ You should not use content.camera = camera with a PerspectiveCamera. That's not compatible.
• ✅ Your CameraRoot + PerspectiveCamera + input-driven transform method is the correct way.
• If you don't see motion, we can help debug by placing visual markers, or checking the scale/positioning of your imported .usda.

Would you like a clean ImmersiveView.swift that includes the glowing marker for confirmation?

Wow, I actually see a red sphere, and it moves left, right, forward and backwards with the left thumbstick (yay), but no actual camera movement.

Let me know if:

• The red sphere is visible
• Movement now feels trackable
• You want right-stick support for turning (rotation)
• Or we need to tweak scale/speed/clipping/etc.

You’re close — this should lock it in.

ME: Yes, I see the red sphere, and it moves forward, backwards, left, and right. Still no actual camera movement, though.

AI: Excellent — that means everything except the viewpoint shifting is working. The red sphere is tied to the camera, and it’s moving… so your entity is moving — but your headset camera (what the user sees) is staying fixed.

Here’s why:

❗ RealityView doesn’t use your custom camera for rendering

RealityView always renders from the user’s head position (i.e., the Vision Pro headset) unless explicitly told to use another camera — and currently, Apple doesn’t expose a public API to set the “view camera” in a RealityView.

ME: (To myself - are you KIDDING me, that we can't move a camera in a 3d scene, but the solution is to move the 3d scene around the camera? ARE YOU KIDDING ME?!)

AI: ✅ Confirmed Facts:

• You are moving the CameraRoot entity as expected (logs & sphere prove that).
• The PerspectiveCamera is added, but RealityView ignores it at runtime.
• The headset camera is still rendering from the user's real-world head transform.

🧠 So how do Vision Pro apps move the camera?

You have two main options, depending on your goals:

Option A: Move the entire world instead of the camera

Instead of moving a camera entity, move the root of your entire 3D scene in the opposite direction to simulate camera movement.

// Instead of moving the camera, move your world root
let delta = SIMD3<Float>(-move.x, 0, move.z) * 0.05
immersiveScene.position += delta

This will feel like the camera is moving, even though it’s actually the scene shifting around the headset.

✅ Pro: Works today
❌ Con: Tricky if you're interacting with world-anchored content

Option B: Use RealityView’s cameraTransform to apply translation (limited)

RealityView gives you a cameraTransform in the update closure, but it’s read-only — it represents the user’s current position in space.

There’s no documented way to override it or force rendering from a specific PerspectiveCamera entity.

✅ Best Practical Solution (Right Now)

Use Option A and move the immersive scene itself like so:

Update your .update block:

} update: { content in
    guard let immersiveScene = content.entities.first(where: { $0.name == "Immersive" }) else {
        print("❌ Immersive scene not found")
        return
    }

    let move = gamepadInput.move

    if move != .zero {
        let delta = SIMD3<Float>(-move.x, 0, move.z) * 0.05
        immersiveScene.position += delta
        print("📦 Scene moved to simulate camera: \(immersiveScene.position)")
    } else {
        print("🟡 No movement this frame.")
    }
}

🔚 Summary

Problem: You're moving a camera entity, but RealityView ignores it.
Solution: Move the scene, not the camera.
Result: It will feel like you're walking/flying through the space.

Let me know if you want to combine movement with rotation (right stick) next — or if you’d rather try loading your scene into a custom anchor and move that instead.
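
(For completeness, right-stick turning would presumably follow the same move-the-world trick: rotate the whole scene about the viewer instead of rotating a camera. Here is a sketch, where gamepadInput.turn is a hypothetical right-stick x value in -1...1.)

// Hypothetical yaw control inside the same update closure.
let turn = gamepadInput.turn
if turn != 0 {
    // Rotate the scene's orientation AND its offset about the world origin,
    // which is approximately where the viewer starts.
    let yaw = simd_quatf(angle: -turn * 0.02, axis: [0, 1, 0])
    immersiveScene.orientation = yaw * immersiveScene.orientation
    immersiveScene.position = yaw.act(immersiveScene.position)
}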

ME: So, I've wasted three days going in circles for something that is not even possible? Why, for the love of all that is 3D, can we not move a camera in 3d space? I CAN DO THIS IN UNREAL ENGINE.

Hey - any of you Apple developers monitoring this thread - go check out this video. I "sort of" got it to work. And, yes, the frustration is real. I wish the tools were more artist-friendly, so that instead of pausing creativity to chase technical hurdles, I could just create and test in a fun and intuitive manner. sigh

https://youtu.be/_NLNL_fTDqg

PROGRESS! But I'm still reverse-engineering camera movement by moving the 3D scene. Not the most ideal solution. And the "camera" is not relative to my head/body orientation. Let's see if ChatGPT will be able to figure that out tomorrow.

https://youtu.be/lrt1IvPi5xw

https://youtu.be/6T_Q5EibU8U?si=sdsFsqyvliWSK3C5

Up way too early Saturday morning, wanting to dive back in and see if I can further refine this test from last night. I almost lost the working ImmersiveView.swift file because ChatGPT started making new suggestions and it borked everything, but fortunately I was able to revert back to the known working version that I had uploaded for it to add a code snippet.

I wanted to see if I could get an HDRI set up and change some materials in RCP. I rewatched the Optimize your 3D Assets for Spatial Computing session from WWDC 24, and that helped reiterate the limitations and the suggestion to use baked lighting. I just wanted to get the HDRI to affect my PBR materials, and learned about assigning an IBL Receiver to all objects. I then added a single directional light source with shadows. It's actually starting to look like something now.
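
(For reference, the programmatic equivalent of that Reality Composer Pro setup might look roughly like the following inside a RealityView make closure. "Sky" is a hypothetical HDRI asset name, and this is a sketch based on RealityKit's image-based-light components, not a verified recipe.)

// Hypothetical sketch: HDRI lighting in code instead of in RCP.
if let environment = try? await EnvironmentResource(named: "Sky") {
    let iblEntity = Entity()
    iblEntity.components.set(ImageBasedLightComponent(source: .single(environment)))
    content.add(iblEntity)
    // Each entity that should be lit by the HDRI needs a receiver component.
    immersiveScene.components.set(ImageBasedLightReceiverComponent(imageBasedLight: iblEntity))
}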

Next, I'd like to explore getting some of those objects animated. Both the robot and drummer toys have animations via Apple's Quick Look demo page, so I'll try those next. I would like to set something up where, as I approach the object, the animation plays, then stops as I change proximity (see the sketch below).
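
(A rough sketch of that proximity idea, under the move-the-world approach above: the viewer stays near the world origin while the scene translates, so an object's world-space distance from the origin approximates its distance from the user. robot and isPlaying are hypothetical names.)

// Hypothetical proximity-triggered animation, checked on each update.
let distance = simd_length(robot.position(relativeTo: nil))
if distance < 1.5, !isPlaying, let animation = robot.availableAnimations.first {
    robot.playAnimation(animation.repeat())   // loop while the user is close
    isPlaying = true
} else if distance >= 1.5, isPlaying {
    robot.stopAllAnimations()
    isPlaying = false
}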

Still have the issue with the camera rotation not updating with my initial body/head orientation, but don't want to break ImmersiveView.swift at the moment.

Accepted Answer

Hello @SteveTalkowski, sounds like you've made some progress but are still struggling with camera movement.

In the sample project you can download on the Bringing your SceneKit projects to RealityKit article, there is a package called WorldCamera. This contains some useful code that will let you control a "camera" in the way you expect. We actually used this package while we were prototyping Petite Asteroids, before our implementation diverged too much. For more information on adding packages to your project (this would be a local package) see Adding package dependencies to your app. The license for this code is permissive so you are free to try it for yourself to see if it works for you.

It is very important to understand that visionOS apps do not have a camera in the same way a 2D app would. Instead, you move a root object, typically whichever entity has your PhysicsSimulationComponent. This means a lot of the operations you would use to move a camera in a 2D application are inverted and applied to the physics simulation component entity instead.

You mention you would like to animate those objects! That is doable, but will require knowledge of Blender (or some other third party DCC) since you will need to do the animating and rigging there. If you need an asset to get you started, there are some USDZs on the Quick Look Gallery page. There is also a rigged mannequin FBX + USDZ that is available for download on this article, but you will need to supply the animations yourself. Additionally, BOT-anist has some assets and code snippets related to animation.

As a side note, if you have separate questions unrelated to camera movement, I recommend asking them in a separate thread. This helps other users and also increases the chances of an Apple engineer seeing your specific question.

I would be remiss if I didn't mention you should file feedback using Feedback Assistant for any workflow frictions or missing features you encounter.

Thank you for your questions!
