Reality Composer Pro

RSS for tag

Leverage the all new Reality Composer Pro, designed to make it easy to preview and prepare 3D content for your visionOS apps

Posts under Reality Composer Pro tag

187 Posts
Sort by:

Post

Replies

Boosts

Views

Activity

Unable to Drag 3D Model(Entity) in visionOS when UITextView Is in the Background
I'm running into an issue in Xcode when working on a visionOS app. Whenever I try to drag a 3D model entity in my scene, the drag gesture doesn't work if there's a UITextView (or SwiftUI TextEditor) in background of the 3D entity. It seems like the UITextView is intercepting the gesture or preventing the drag interaction from reaching the 3D content. Interestingly, when the 3D entity is placed infront of the ScrollView, the drag works as expected. Has anyone else experienced this behavior? Is this a known limitation or a bug in the current tooling? Any workarounds or fixes would be appreciated. Thanks!
0
0
62
1d
How To Move Camera with PS5 Game Controller
Hello again! So, I am trying my best to figure out how to get a simple, fully Immersive scene that is navigable via the PS5 controller to work on Vision Pro. I am even using ChatGPT (both externally AND in Xcode when it makes fix recommendations) however, I seem to be moving around in circles, having to constantly send error messages back to ChatGPT, then try the new "confirmed to work" fixed code, only to have more errors. I also have explicitly let the AI know that I'm using all the current betas, only to learn there has been lots of stuff deprecated (of course!). This is my initial query to ChatGPT: "How can I create a fully immersive VR experience for Apple Vision Pro, using a PS5 game controller to navigate the user via first person view through a 3d scene?" And then down the rabbit hole the past few days. Seriously, I thought AI was supposed to be helpful! LOL Per Xcode logs, I'm seeing thumbstick movement, but simply cannot get the camera to move. Is this even possible?! Is it a camera defined in Reality Composer Pro? Or, does it need to be defined outside. I'm trying to follow along with the AI, but it seems to go back and forth with different solutions. Here is the latest it's spitting out: __ Thanks for this super clear log — and you’re absolutely right: the gamepad input is working, CameraRoot is moving as expected… but your actual camera view is not changing. This strongly suggests the issue is that RealityKit is not using your custom camera (PerspectiveCameraComponent) for rendering the view. ⸻ 🔍 What’s happening Even though CameraRoot is moving (with correct logs), RealityKit will default to its own internal camera unless: 1. You explicitly assign your camera entity to be the one used, or 2. Your camera is the only one and RealityKit picks it up automatically. But in visionOS 2.0 / Xcode 26, there seems to be a bug or behavior change where RealityKit no longer auto-uses your custom PerspectiveCameraComponent in RealityView unless it’s: • properly set with content.camera, or • part of a .usda/Reality Composer camera hierarchy that’s already active. Unfortunately: 🟥 RealityView.camera = .entity(cameraRoot) ❌ Is no longer allowed (we confirmed this breaks with a type error) ⸻ ✅ The Reliable Workaround: Attach your camera to the Immersive scene itself Since we’re loading a .usda scene (Reality Composer Pro), here’s the most reliable path forward: ✔️ Option A: Embed your camera into the Reality Composer Pro scene 1. Open your Reality Composer Pro scene 2. Add a Camera entity directly in your Immersive.usda scene 3. Name it something like "PlayerCamera" 4. Position it at [0, 1.6, 0] 5. Leave it selected as the default camera Then in your Swift code: • Find the camera entity with .findEntity(named:) • Move that entity instead of creating a new one. ⸻ ✔️ Option B: Force RealityKit to use your custom camera (via transform parenting) Since direct camera assignment is unavailable, we can use a trick: Move the whole RealityView scene via a parent entity with a camera. But let’s go with Option A, since you’ve already got a Reality Composer Pro scene. __ Can any developer over there can make a suggestion on how to move forward? I JUST want to create a sandbox template so I can start exploring 3d scenes created in Maya and sent over to headset. I'll deal with animation next, but come on, why is this so difficult to get working? I'm not a programmer, but have been trying to wrap my head around Xcode and SwiftUI. This needs to be much simpler. Or, you need to provide us creatives with better sample templates and non programmer speak on how to set this up properly. Ideally, you HIRE us 3d professionals to work side by side with the programmers to help make these tools useable - especially Reality Composer Pro. Seriously, I am making a concerted effort to use the native tools, even though I would love to be porting Unreal Engine scenes over. If anyone can help point me in the right direction, coming from a 3D Creator/Animator/Modeler perspective, I, and my fellow peers in the XR/AR/VR community would greatly appreciate it. Thank you.
8
0
580
23h
Portal crossing causes inconsistent lighting and visual artifacts between virtual and real spaces (visionOS 2.0)
Hello, I'm working with the new PortalComponent introduced in visionOS 2.0, and I've encountered some issues when transitioning entities between virtual and real-world spaces using crossingMode. Specifically: Lighting inconsistency: When CG content (ModelEntities with PhysicallyBasedMaterial) crosses the portal from virtual space into the real environment, the way light reflects on the objects changes noticeably. This causes a jarring visual effect, as the same material appears differently depending on the space it's in. Unnatural transition visuals: During the transition, the CG models often appear to "emerge from the wall," especially when crossing from virtual to real. This ruins the immersive illusion and feels visually unnatural. IBL adjustment attempts: I’ve tried adding an ImageBasedLightComponent to the world entity, and while it slightly improves the lighting consistency, the issue still remains to a noticeable degree. My goal is to create a seamless visual experience when CG entities cross between spaces, without sudden lighting shifts or immersion-breaking geometry reveals. Has anyone else experienced similar issues? Is there a recommended setup or workaround to better control lighting and visual fidelity when using crossingMode with portals in visionOS 2.0? Any guidance would be greatly appreciated. Thank you!
5
0
159
26m
How to Configure angularLimitInYZ for PhysicsSphericalJoint in RealityKit (Pendulum/Swing Behavior)
Hello RealityKit developers, I'm currently working on physics simulations in my visionOS app and am trying to adapt the concepts from the official sample Simulating physics joints in your RealityKit app. In the sample, a sphere is connected to the ceiling using a PhysicsRevoluteJoint to create a hinge-like simulation. I've successfully modified this setup to use a PhysicsSphericalJoint instead. The basic replacement works as expected: pin1 (attached to the sphere) rotates freely around pin0 (attached to the ceiling), much like a ball-and-socket joint should, removing all translational degrees of freedom. My challenge lies with the PhysicsSphericalJoint's angularLimitInYZ property. The documentation mentions that this property allows limiting the rotation around the Y and Z axes, defining an "elliptical cone shape around the x-axis of pin0." However, I'm struggling to understand how to specify these values to achieve a desired rotational limit. If I have a sphere that is currently capable of rotating 360 degrees around pin0 (like a free-spinning ball on a string), how would I use angularLimitInYZ to restrict its rotation to a certain height or angular range, preventing it from completing a full circle? Specifically, I'm trying to achieve a "swing" like behavior where the sphere oscillates back and forth but cannot rotate completely overhead or underfoot. What values or approach should I use for the angularLimitInYZ tuple to define such a restricted pendulum-like motion? Any insights, code examples, or explanations on how to properly configure angularLimitInYZ for this kind of behavior would be incredibly helpful! The following code is modified from the sample. extension MainView { func addPinsTo(ballEntity: Entity, attachmentEntity: Entity) throws { let hingeOrientation = simd_quatf(from: [1, 0, 0], to: [0, 0, 1]) let attachmentPin = attachmentEntity.pins.set( named: "attachment_hinge", position: .zero, orientation: hingeOrientation ) let relativeJointLocation = attachmentEntity.position( relativeTo: ballEntity ) let ballPin = ballEntity.pins.set( named: "ball_hinge", position: relativeJointLocation, orientation: hingeOrientation ) // Create a PhysicsSphericalJoint between the two pins. let revoluteJoint = PhysicsSphericalJoint(pin0: attachmentPin, pin1: ballPin) try revoluteJoint.addToSimulation() } } The following image is a screenshot of the operation when changing to PhysicsSphericalJoint. Thank you in advance for your assistance.
1
0
183
6d
Subject: Handling Z-Up Blender USDZ Models in RealityKit (visionOS) for Transform Updates
Hello everyone, I'm working on a visionOS application using RealityKit and am encountering a common coordinate system challenge when integrating 3D models created in Blender. My goal is to display and dynamically update the Transform (position, rotation, scale) of models created in Blender within RealityKit. The issue arises because Blender's default coordinate system is Z-up, and while exporting to USD/USDZ, I don't have a reliable "Y-up" export option that correctly reorients the model and its transform data for RealityKit's Y-up convention. This means I'm essentially exporting models with their "up" direction along the Z-axis. When I load these Z-up exported models into RealityKit, they are often oriented incorrectly. To then programmatically update their Transform (e.g., move them, rotate them based on game logic, or apply physics), I need to ensure that the Transform values I set align with RealityKit's Y-up system, even though the original model data was authored in a Z-up context. My questions are: What is the recommended transformation process (e.g., using simd_quatf or simd_float4x4) to convert a Transform that was conceptually defined in a Z-up coordinate system to RealityKit's Y-up coordinate system? Specifically, when I have a Transform (or its translation, rotation, scale components) from a Z-up context, how should I apply this to a RealityKit Entity so it appears and behaves correctly in a Y-up world? Are there any existing convenience APIs or helper functions within RealityKit, simd, or other Apple frameworks that simplify this Z-up to Y-up Transform conversion process? Or is a manual application of a transformation quaternion (e.g., simd_quatf(angle: -.pi / 2, axis: [1, 0, 0])) the standard approach? Any guidance, code examples, or best practices from those who have faced similar challenges would be greatly appreciated! Thank you.
1
1
236
6d
RealityKit and Reality Composer Pro aren't forward or backward compatible with each other
TL;DR: RealityKit and Reality Composer Pro aren't forward or backward compatible with each other, and the resulting error message is terse and unhelpful. (FB14828873) So far, I've been sticking with Xcode 16.4 for development and only using Xcode 26.0 beta experimentally. Yesterday, I used xcode-select to switch to Xcode 26.0 beta 3 to test it, but I forgot to switch back. Consequently, this morning I unintentionally used the future Reality Composer Pro (the version included with Xcode 26) to make a small change to a USD file. Now I realize that if I'm unlucky, it's possible Reality Composer Pro may have silently introduced a small change into the USD file that may make RealityKit fail to read the file on iOS 18 and visionOS 2, which in the past has resulted in hours of debugging to track down the source of the failure, often a single line in the USD file that RealityKit can't communicate to me other than with the error "the operation couldn't be completed". As an analogy, this situation is as if, during regular development (not involving Reality Composer Pro), Xcode didn't warn you about specific API version conflicts, but instead failed with a generic error message, without highlighting the line in your Swift file that was the source of the error.
1
0
352
1w
RealityKit not behaving as expected
This week, I developed a small multiplatform RealityKit project. I also created a demo scene in Reality Composer Pro. Afterward, I imported the local package into the project. Running the project on macOS works perfectly. However, when I tried to run it on my iPhone, I encountered a permission error indicating that it couldn’t read the package. This seems unusual to me because I assumed that the dependency is bundled into the binary file. In an attempt to resolve the issue, I pushed the RCP package to GitHub, hoping it would work. Fortunately, everything compiles successfully now, but the loading time is significantly long, and the animations don’t play on tap gestures. Could someone please help me identify the root cause of this problem?
1
0
291
2w
Why Large-Scale Model Scenes Cause Real Device Crashes
Is there any limitation in Vision Pro when loading scenes with large-scale models? ​Test Case: Asset: Composite USDA file containing ​10 individual models​ (total triangles count: ~4.2M) Simulator: Loads and renders correctly Real Device: Loads asset successfully but ​ failure during rendering phase: Environment abruptly dims System spontaneously reboots How can we resolve this issue? Below are excerpted logs preceding the crash: <<<< FigAudioSession(AV) >>>> audioSessionAVAudioSession_CopyMXSessionProperty signalled err=-19224 (kFigAudioSessionError_UnsupportedOperation) (getMXSessionProperty unsupported) at FigAudioSession_AVAudioSession.m:606 Attempted to add ornament: <MRUIPlatterOrnament: 0x10a658f00; _isInternal: YES; _displaceWindowChrome: NO; _canCaptureUI: NO; _isBeingRemoved: NO; contentAnchorPoint3D: "{0.5, 0.5, 0}"; position: <MRUIPlatterOrnamentRelativePosition: 0x105b68e70; anchorPoint: {0.5, 0.5, 1}>; rotation: "{{0, 0, 0}, 0}"; opacity: 1.000000; canFollowUser: YES; effectiveOffset: "{0, 0, 0}"; presentingViewController: 0x0; billboardingBehavior: 0x0; scalingBehavior: 0x0; relativeToParent: NO; nonHeritableDepthDisplacement: 0.000000; order: 0.000000; _window._determinedSize: {0, 0}; _window: (null)> to nil or non-supporting UIScene: <UIWindowScene: 0x10a8a0000; role: UISceneSessionRoleImmersiveSpaceApplication; persistentIdentifier: test.test:SFBSystemService-BA3A21A3-D1AB-42E2-8AF0-AE0AB83BE528; activationState: UISceneActivationStateUnattached>. No action taken. Failed to set dependencies on asset 2823930584475958382 because NetworkAssetManager does not have an asset entity for that id. apply fence tx failed (client=0x98490e18) [0x10000003 (ipc/send) invalid destination port] Failed to commit transaction (client=0xa86516e2) [0x10000003 (ipc/send) invalid destination port]
1
0
116
2w
ShaderGraphMaterial.getParameter(handle:) always return nil, is this expected behavior?
I've loaded a ShaderGraphMaterial from a RealityKit content bundle and I'm attempting to access the initial values of its parameters using getParameter(handle:), but this method appears to always return nil: let shaderGraphMaterial = try await ShaderGraphMaterial(named: "MyMaterial", from: "MyFile") let namedParameterValue = shaderGraphMaterial.getParameter(name: "myParameter") // This prints the value of the `myParameter` parameter, as expected. print("namedParameterValue = \(namedParameterValue)") let handle = ShaderGraphMaterial.parameterHandle(name: "myParameter") let handleParameterValue = shaderGraphMaterial.getParameter(handle: handle) // Expected behavior: prints the value of the `myParameter` parameter, as above. // Observed behavior: prints `nil`. print("handleParameterValue = \(handleParameterValue)") Is this expected behavior? Based on the documentation at https://developer.apple.com/documentation/realitykit/shadergraphmaterial/getparameter(handle:) I'd expect getParameter(handle:) to return the value of the parameter, just as getParameter(name:) does. I've tested this on iOS 18.5 and iOS 26.0 beta 2. Assuming this getParameter(handle:) works as designed, is the following ShaderGraphMaterial extension an appropriate workaround, or can you recommend a better approach? Thank you. public extension ShaderGraphMaterial { /// Reassigns the values of all named material parameters using the handle-based API. /// /// This works around an issue where, at least as of RealityKit 26.0 beta 2 and /// earlier, `getParameter(handle:)` will always return `nil` when used to read the /// initial value of a shader graph material parameter read using /// `ShaderGraphMaterial(named:from:in:)`, whereas `getParameter(name:)` will work /// as expected. private mutating func copyNamedParametersToHandles() { for parameterName in self.parameterNames { if let value = self.getParameter(name: parameterName) { let handle = ShaderGraphMaterial.parameterHandle(name: parameterName) do { try self.setParameter(handle: handle, value: value) } catch { assertionFailure("Cannot set parameter value") } } } } }
1
0
282
3w
Bug Report - Incorrect trackingAreaIdentifier in visionOS 26 Hover Effect Sample Code
Description: In the official visionOS 26 Hover Effect sample code project , I encountered an issue where the event.trackingAreaIdentifier returned by onSpatialEvent does not reset as expected. Steps to Reproduce: Select an object with trackingAreaID = 6 in the sample app. Look at a blank space (outside any tracking area) and perform a pinch gesture . Expected Behavior: The event.trackingAreaIdentifier should return 0 when interacting with a non-tracking area. Actual Behavior: The event.trackingAreaIdentifier still returns 6, even after restarting the app or killing the process. This persists regardless of where the pinch gesture is performed
3
0
216
2w
How to play blend shape animations or morph animations exported from blender in Vision Pro apps using Reality Kit
So I am exporting a .usdc file from blender that already has some morph animations. The animations play well in blender but when I export I cannot seem to play them in RealityKit or RCP. Entity.availableAnimations is an empty array. Not of the child objects in the entity hierarchy has an animation library component with it. Maybe I am exporting it wrong but I tried multiple combinations but doesn't seem to work. Here are my export settings in blender The original file I purchased is an FBX file that has the animation but when I try to directly get it in RealityConverter it doesn't seem to play animations.
2
0
72
3w
How to load a USDZ inside of a Reality Composer Pro package as ModelEntity in RealityKit ?
I need a MeshResource from ModelEntity to generate a box collider, but ModelEntity fails to load USDZ files from the Reality Composer Pro (RCP) bundle. This code works for loading an Entity: // Successfully loads as generic Entity previewEntity = try await Entity(named: fileName, in: realityKitContentBundle) But this fails when trying to load as ModelEntity: // Fails to load as ModelEntity modelEntity = try await ModelEntity(named: fileName, in: realityKitContentBundle) I found this thread mentioning: "You'll likely go from USDZ to Entity which contains a MeshResource when you load/init the USDZ file." But it doesn't explain ​how to actually extract the MeshResource. Could anyone advise: How to properly load USDZ files as ModelEntity from RCP bundles? How to extract MeshResource from a successfully loaded Entity? Alternative approaches to generate box colliders if direct extraction isn't possible? Sample code for extraction or workarounds would be greatly appreciated!
1
0
107
3w
Apple Vision Pro Developer Strap: Video Out and Recording Time Limit
Any way to extend the video recording time in Reality Composer Pro from 3:00 to any longer value, such as editing preferences in Terminal or other workaround? Is there any way to use the strap and a USB-C cable as a live video stream input source that would mirror to Quicktime or some other video capture tool? I am assuming there is no online documentation or user manual for the strap, but please correct me if I'm wrong. Thank you.
1
0
68
3w
Unwanted file changes in Reality Composer Pro project when using Git
Hello, I'm working on a visionOS project that uses Reality Composer Pro, and we are managing our project files with Git. We've noticed that simply opening and closing the Reality Composer Pro application consistently generates changes in the following files, even when no explicit modifications have been made by the developer: {ProjectName}/Packages/RealityKitContent/Package.realitycomposerpro/PluginData/*******/ShaderGraphEditorPluginID/ShaderGraphEditorPluginID {ProjectName}/Packages/RealityKitContent/Package.realitycomposerpro/WorkspaceData/SceneMetadataList.json Could you please clarify the purpose of these files? Why do they appear as modified when no direct changes are made from our end? More importantly, is it safe to add these files to our .gitignore to prevent them from being tracked by Git? We are concerned that ignoring these files might lead to unexpected issues or inconsistencies when other team members pull the latest changes, especially if these files contain critical project metadata or state that needs to be synchronized. Any insights or recommended best practices for managing Reality Composer Pro projects with Git would be greatly appreciated. Thank you for your time and assistance.
1
0
85
Jun ’25
How to trigger and timeline-control particle emitters in Reality Composer Pro (without Xcode or code)?
Hi, I’m currently using Reality Composer Pro on macOS, and I’m trying to work with particle emitters using only the built-in tools — no Xcode, no custom Swift code. Here’s what I’m trying to achieve: 1. Is it possible to trigger a particle emitter with a tap gesture, directly inside Reality Composer Pro? 2. Can I add a particle emitter to a timeline, so that it emits at a specific time during an animation or behavior sequence? 3. Most importantly, is there a way to precisely control emission behavior — such as when particles start or stop — using visual and intuitive controls, without scripting? My goal is to set up particle effects that are interactive and timeline-controlled, but managed entirely through Reality Composer Pro’s GUI, not through code. Any suggestions, tips, or examples would be very helpful. Thanks in advance!
1
0
54
Jun ’25
Reality Composer to Reality Composer Pro?
I sketched a idea for a project in Reality Composer on my iPad, thinking when I had a chance to sit down I would work it up in Xcode. However, when I got back to my computer, I discovered I cannot open a file created in Reality Composer (or the exported Reality file) in Reality Composer Pro. Am I missing something obvious here, because this seems like a huge oversight. If anyone, can let me know how to open a file created in Reality Composer in Reality Composer Pro, I would greatly appreciate it. Partly, because there seems to be objects available in Reality Composer that are not in Reality Composer Pro. Thanks Stan
2
1
86
Jun ’25
Feature Request: Support .reality File Export in Reality Composer Pro for Mac
I am an AR developer working on Apple Silicon Macs. Currently, Reality Composer Pro does not allow exporting .reality files, and Reality Composer (classic) is not available for Apple Silicon. This creates a gap in the workflow for ARKit/RealityKit developers who need interactive .reality files for use in Xcode projects. Having the ability to export .reality files directly from Reality Composer Pro on Mac would greatly streamline development and enable a fully native workflow on modern Macs. Alternatively, bringing Reality Composer (classic) to Apple Silicon would also resolve this issue. I have submitted this as a feature request via Feedback Assistant (FB17900386). I encourage others with similar needs to reply or submit feedback as well. Thank you!
4
1
122
2w
Reality Composer Pro "Create new scene in the project" doesn't work more than once
In Reality Composer Pro 2.0 (448.120.2), if I use the "Create new scene in the project" button more than once, this feature doesn't seem to work. The second time I use "Create new scene in the project", Reality Composer Pro displays a new empty USDA file in the project browser with the wrong icon (a yellow document icon instead of a 3D box icon). Also, it doesn't create the new scene's USDA file on disk or display the scene in entity hierarchy browser or create a new tab. Consequently, whenever I want to add more than one new scene to my project, I have to repeatedly quit and restart Reality Composer Pro. In my use of RCP just now, I had to quit and restart RCP six times to create seven new scenes. Is this a known issue? Repro steps: In a Reality Composer Pro project, create a new folder using the "add folder" icon button in the project browser. Inside the new folder, click the "Create new scene in the project" icon button. Click the "Create new scene in the project" icon button a second time. Expected behavior: A new USDA file is created on disk. The new USDA file's root entity appears in the entity hierarchy browser and a corresponding new tab is created. Observed behavior: The USDA file for this scene is not created on disk and it does not appear in the entity hierarchy browser in a new tab. In the project browser view, a yellow document icon appears and it does not appear to correspond to an actual USDA file. Thank you for any insight you can provide about this issue.
1
0
81
Jun ’25
white gap between objects in RealityView
I want to display a huge image in RealityView in 3d space on Vision Pro. of course instead of one giant file I'm using a lot of big images. to achieve this, I'm generating multiple planes exactly beside each others and put each image on them. although the planes are exactly beside each others but there is still a white gap between them.(image below) **Does anybody know how to fix this issue? **
0
0
98
May ’25
How to use CharacterControllerComponent.
I am trying to implement a ChacterControllerComponent using the following URL. https://developer.apple.com/documentation/realitykit/charactercontrollercomponent I have written sample code, but PhysicsSimulationEvents.WillSimulate is not executed and nothing happens. import SwiftUI import RealityKit import RealityKitContent struct ImmersiveView: View { let gravity: SIMD3<Float> = [0, -50, 0] let jumpSpeed: Float = 10 enum PlayerInput { case none, jump } @State private var testCharacter: Entity = Entity() @State private var myPlayerInput = PlayerInput.none var body: some View { RealityView { content in // Add the initial RealityKit content if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) { content.add(immersiveContentEntity) testCharacter = immersiveContentEntity.findEntity(named: "Capsule")! testCharacter.components.set(CharacterControllerComponent()) let _ = content.subscribe(to: PhysicsSimulationEvents.WillSimulate.self, on: testCharacter) { event in print("subscribe run") let deltaTime: Float = Float(event.deltaTime) var velocity: SIMD3<Float> = .zero var isOnGround: Bool = false // RealityKit automatically adds `CharacterControllerStateComponent` after moving the character for the first time. if let ccState = testCharacter.components[CharacterControllerStateComponent.self] { velocity = ccState.velocity isOnGround = ccState.isOnGround } if !isOnGround { // Gravity is a force, so you need to accumulate it for each frame. velocity += gravity * deltaTime } else if myPlayerInput == .jump { // Set the character's velocity directly to launch it in the air when the player jumps. velocity.y = jumpSpeed } testCharacter.moveCharacter(by: velocity * deltaTime, deltaTime: deltaTime, relativeTo: nil) { event in print("playerEntity collided with \(event.hitEntity.name)") } } } } } } The scene is loaded from RCP. It is simple, just a capsule on a pedestal. Do I need a separate code to run testCharacter from this state?
0
0
85
May ’25