Help me, please! I can't remove small corrupted pixels from my screen, and more and more of them keep appearing. Please help!
2025 MacBook Pro; this is the actual display, with no screen protector.
Reality Composer Pro
Prototype and produce content for AR experiences using Reality Composer Pro.
Could anyone share ideas or a node setup for implementing a Gaussian blur in a Shader Graph material, with a blur-size parameter? Thanks!
In my Reality Composer Pro workflow for Vision Pro development, I’m using xcrun realitytool image to pre-compress textures into .ktx format, typically using ASTC block compression. These textures are used for cubemaps and environment assets.
I’ve noticed that regardless of the image content—whether it’s a highly detailed photo or a completely black image—once compressed with the same ASTC block size (e.g., ASTC_8x8), the resulting .ktx file size is nearly identical. There appears to be no content-aware logic that adapts the compression ratio to the actual texture complexity.
In contrast, Unreal Engine behaves differently: even when all cubemap faces are imported at the same resolution as DDS textures, the engine performs content-aware compression during packaging:
Low-complexity images are compressed more aggressively
The final packaged file size varies based on content complexity
Since Reality Composer Pro requires textures to be pre-compressed as .ktx, there’s no opportunity for runtime optimization or per-image compression adjustment.
Just wondering: is there any recommended way to implement content-aware compression for .ktx textures in Reality Composer Pro?
Or any best practices to optimize .ktx sizes based on image complexity?
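To make "content-aware" concrete, here is a rough sketch of the kind of pre-processing step I have in mind (my own workaround idea, not an Apple-provided feature; the thresholds and block-size strings are placeholders for whatever your pre-compression script accepts): estimate each image's complexity from how well its decoded pixels compress with zlib, then pick an ASTC block size from that.

import Foundation
import ImageIO
import CoreGraphics
import Compression

// Rough, unofficial sketch: use zlib compressibility of the decoded pixels as a
// cheap complexity proxy and map it onto an ASTC block size. Thresholds are arbitrary.
func suggestedASTCBlockSize(forImageAt url: URL) -> String {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil),
          let image = CGImageSourceCreateImageAtIndex(source, 0, nil),
          let raw = image.dataProvider?.data as Data?,
          !raw.isEmpty else { return "ASTC_8x8" }

    let destination = UnsafeMutablePointer<UInt8>.allocate(capacity: raw.count)
    defer { destination.deallocate() }
    let compressedSize = raw.withUnsafeBytes { buffer in
        compression_encode_buffer(destination, raw.count,
                                  buffer.baseAddress!.assumingMemoryBound(to: UInt8.self), raw.count,
                                  nil, COMPRESSION_ZLIB)
    }
    guard compressedSize > 0 else { return "ASTC_8x8" }

    let ratio = Double(compressedSize) / Double(raw.count) // near 0 for flat images, near 1 for noisy photos
    switch ratio {
    case ..<0.1: return "ASTC_12x12" // low complexity: compress aggressively
    case ..<0.5: return "ASTC_8x8"
    default:     return "ASTC_6x6"   // high complexity: preserve more detail
    }
}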
Thanks!
I still haven't found resources explaining how to replace the user's hands in a fully immersive Space. I reached the goal by creating an ARKit session that tracks the hand joints and connects a USDZ hand mesh to the tracked joints, but I feel that's not the best solution. I'd really like to use RealityKit's own capabilities to track and replace hands (with skinned USDZ models) in an immersive environment, but the only resources I've found are from November 2023.
Can someone help me?
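For what it's worth, a minimal sketch of the RealityKit-side direction I mean (the asset name is a placeholder, and I have not verified that this alone is enough to deform a skinned USDZ hand):

import RealityKit

// Sketch: anchor an entity to a hand location without running a custom ARKit
// HandTrackingProvider session. "LeftHand" is a placeholder USDZ asset name.
let leftPalmAnchor = AnchorEntity(.hand(.left, location: .palm))
if let handModel = try? Entity.load(named: "LeftHand") {
    leftPalmAnchor.addChild(handModel)
}
// Add leftPalmAnchor to the RealityView content; repeat for other locations/joints as needed.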
Topic: Spatial Computing | SubTopic: Reality Composer Pro
I am trying to loop my videoMaterial. I have researched AVQueuePlayer and AVPlayerLooper and tried to implement them in my code; please see attached.
No errors show up, but the videoMaterial no longer plays.
Please also see the attached working code that plays the videoMaterial.
I'm stumped; can anyone help me solve this?
Thank you.
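For reference, a minimal sketch of how this is commonly wired up ("clip.mp4" and the plane size are placeholders). The usual pitfall is that the AVPlayerLooper must be kept alive, for example as a stored property; if it is only a local variable it gets deallocated and the videoMaterial stops playing.

import AVFoundation
import RealityKit

// Sketch: loop a VideoMaterial with AVPlayerLooper. Keep `looper` as a stored
// property; if it goes out of scope, looping stops.
let url = Bundle.main.url(forResource: "clip", withExtension: "mp4")!
let item = AVPlayerItem(url: url)
let queuePlayer = AVQueuePlayer()
let looper = AVPlayerLooper(player: queuePlayer, templateItem: item)

let videoMaterial = VideoMaterial(avPlayer: queuePlayer)   // AVQueuePlayer is an AVPlayer subclass
let screen = ModelEntity(mesh: .generatePlane(width: 1.6, height: 0.9),
                         materials: [videoMaterial])
queuePlayer.play()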
Topic: Spatial Computing | SubTopic: Reality Composer Pro
Is there any way to convert a TextureResource to an Image?
After writing the code, when I debug on Vision Pro, the program blocks when running from Xcode to the device, and it takes a long time before execution output appears in the Xcode console.
I have a question about Apple’s preinstalled visionOS app “Encounter Dinosaurs.”
In this app, the dinosaurs are displayed over the real-world background, but the PhysicallyBasedMaterial (PBM) in RealityKit doesn’t appear to respond to the actual brightness of the environment.
Even when I change the lighting in the room, the dinosaurs’ brightness and shading remain almost the same.
If this behavior is intentional — for example, if the app disables real-world lighting influence or uses a fixed lighting setup — could someone explain how and why it’s implemented that way?
In Reality Composer Pro, why is the Sky Sphere so much larger than the Sky Dome?
By my estimate, the Sky Sphere has a radius of 100 m, while the Sky Dome has a radius of only 12 m.
I’m currently developing a visionOS app that includes an RCP scene with a large USDZ file (around 2GB).
Each time I make adjustments to the CG model in Blender, I export it as USDZ again, place it in the RCP scene, and then build the app using Xcode.
However, because the USDZ file is quite large, the build process takes a long time, significantly slowing down my development speed.
In particular, I'd like to know if there are any effective ways to:
Improve overall build performance
Reduce the time between updating the USDZ file and completing the build
Any advice or best practices for optimizing this workflow would be greatly appreciated.
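One workaround I can think of (a sketch based on assumptions, not an Apple-documented optimization; "BigModel.usdz" is a placeholder name) is to keep the large USDZ out of the RealityKitContent package during development and load it at runtime instead, so editing the model doesn't force the Reality file to be reprocessed on every build:

import RealityKit

// Development-only sketch: load the big USDZ from a bundled resource (or from
// Application Support, a folder reference, etc.) instead of the RCP package.
func loadLargeModel() async -> Entity? {
    guard let url = Bundle.main.url(forResource: "BigModel", withExtension: "usdz") else {
        return nil
    }
    return try? await Entity(contentsOf: url)
}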
Best regards,
Sadao
Hello!
Back from last week's amazing visit to Cupertino for the Game Dev session and diving back into Vision Pro experimentation.
I've exported a simple geometry-nodes animation test from Blender for use in RCP, with intended output to Vision Pro. I've attached a few screenshots showing the node setup and how it animates over time.
I select the Cube mesh and export it as .usdc with animation. In Finder, via Quick Look, I can actually see it working! If I export as .usdz, however, I'm not seeing any animation in the Finder preview.
Next, I import the .usdc file into RCP and add an Animation Library component to the cube mesh, but no animation is selectable, even though I see the animation playing back in the preview.
Next, I import the .usdc into Maya (via a proper USD Stage pipeline; I'm learning to be USD-compliant for authoring!) to verify that the animation works, and it does.
What step(s) am I missing to get this working in Reality Composer Pro? My goal is to experiment with animating these geometry-node instances, along with color animation if possible, for full-scale, immersive presentation on Vision Pro.
Of particular note, I am not a programmer, so I am trying my best to brute-force this the only way I currently know: keyframe animation and importing through Reality Composer Pro. I realize that, ideally, I should be learning to leverage the code side so I can start programmatically controlling my 3D entities (with animation), but I need more hand-holding and real-world examples to get there. Thx!
Topic: Spatial Computing | SubTopic: Reality Composer Pro
How can I create 180-degree Apple Immersive Video using a game engine?
Topic: Spatial Computing | SubTopic: Reality Composer Pro
I'm developing a custom gesture-based visionOS project that uses hand tracking with collision detection spheres on fingers to register user interactions through collision components. I'm experiencing a critical occlusion issue where collision detection spheres are intermittently occluded by the background/depth buffer, causing fingers to pass through the 3D model entities without registering interactions.
Detailed Description:
I have added 3D entities in an immersive scene with collision spheres attached to fingers for detecting user interactions.
Each sphere has:
CollisionComponent with sphere shape
Proper collision masks and groups configured
Real-time position updates from hand joint transforms
Each entity has:
InputTarget components to register collisions
The Issue:
When users move their fingers to the entity to interact, some collision spheres (particularly on the pinkie and ring fingers) become occluded and pass directly through the 3D model without triggering collision events.
Meanwhile, other fingers (like the index finger) continue to work correctly.
This appears to be a depth perception/z-buffer issue between the model entity and the hand-tracking collision spheres.
Questions:
Is there a recommended approach for maintaining consistent depth ordering between hand-tracking entities and 3D models in immersive spaces to prevent occlusion issues?
Should I be using AnchorEntities to anchor the entity to a plane or world position to establish a more stable depth reference?
Are there specific RenderingComponent or material settings that could help ensure collision entities maintain their depth priority and don't get occluded?
Could this be related to z-fighting when collision spheres and entity geometry occupy similar depth ranges? If so, what's the recommended depth bias approach?
Is there a better architectural approach for implementing interactions with custom hand gesture tracking that avoids these depth perception issues?
What Would Help:
Implementation guidance for ensuring reliable collision detection between hand-tracked entities through custom gestures and 3D models.
Best practices for depth management in immersive spaces with custom hand gesture tracking.
Sample code demonstrating stable hand-to-object interaction patterns.
Information about whether this is a known limitation, or whether there are specific APIs I should be leveraging.
This issue is significantly impacting the reliability of our app experience, as users cannot consistently interact with all model components. Any guidance from Apple engineers or developers who have solved similar depth/occlusion challenges would be greatly appreciated.
Additional Context:
This is for a productivity-focused application where accuracy and reliability are critical.
Thank you for any assistance!
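Not a fix for the occlusion itself, but here is a minimal sketch (entity names and sizes are illustrative) of the group/mask setup I would use to rule the collision filter out as a cause, with the fingertip collider in trigger mode:

import RealityKit

// Sketch: one fingertip trigger sphere and one target model with mirror-image
// collision group/mask settings, so filter configuration can be eliminated as
// the reason events are missed.
let fingerGroup = CollisionGroup(rawValue: 1 << 0)
let modelGroup  = CollisionGroup(rawValue: 1 << 1)

let fingertip = Entity()
fingertip.components.set(CollisionComponent(
    shapes: [.generateSphere(radius: 0.005)],
    mode: .trigger,                               // report overlaps without a physics response
    filter: CollisionFilter(group: fingerGroup, mask: modelGroup)
))

let model = ModelEntity(mesh: .generateBox(size: 0.2))
model.components.set(CollisionComponent(
    shapes: [.generateBox(size: [0.2, 0.2, 0.2])],
    filter: CollisionFilter(group: modelGroup, mask: fingerGroup)
))
model.components.set(InputTargetComponent())

// Inside a RealityView, subscribe to the begin events:
// content.subscribe(to: CollisionEvents.Began.self, on: model) { event in
//     print("Touched by \(event.entityB.name)")
// }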
Topic: Spatial Computing | SubTopic: Reality Composer Pro | Tags: ARKit, Reality Composer, AR / VR, visionOS
I exported some USD assets from IsaacSim, but they are not showing up correctly on my Apple Vision Pro.
Even though the mesh appears to be the correct color in Finder, and the Diffuse Color value looks correct, the object renders as plain gray. It should be green!
I have an arguably massive project and am not sure whether the issue is with the assets or with my approach in the code.
The error says: Tool terminated due to error "SIGNAL 6: Abort trap: 6".
Basically, I have around 15-20 assets (.usda files built out of .usdz files). In the code I load one scene containing all the .usda files and then have functions to enable and disable a particular asset when needed.
This worked as intended when I was using dummy assets (fewer polygons, fewer textures).
But when I placed the actual assets, the error appeared and persists. Is loading all the scenes at once a bad approach?
Previously I used an approach that loads scenes only when needed, which introduced some lag before the assets rendered. My current approach (with the dummies) works smoothly, showing and hiding assets in real time with no lag.
Kindly suggest any workarounds.
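In case it helps narrow things down, here is a minimal sketch of the lazy variant to compare against (the class name and asset names are illustrative): load each scene with the async Entity(named:in:) initializer and keep entities disabled rather than loading one giant scene up front. If the crash persists even with staged loading, the problem is more likely total memory/texture footprint than the loading approach.

import RealityKit
import RealityKitContent

// Sketch: load each RCP scene on demand, keep it disabled until needed, and
// toggle isEnabled so show/hide stays instant at runtime.
@MainActor
final class AssetStore {
    private(set) var assets: [String: Entity] = [:]

    func preload(names: [String], into root: Entity) async {
        for name in names {
            guard let entity = try? await Entity(named: name, in: realityKitContentBundle) else { continue }
            entity.isEnabled = false          // hidden until requested
            root.addChild(entity)
            assets[name] = entity
        }
    }

    func show(_ name: String) { assets[name]?.isEnabled = true }
    func hide(_ name: String) { assets[name]?.isEnabled = false }
}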
Topic: Spatial Computing | SubTopic: Reality Composer Pro | Tags: Reality Composer, AR / VR, RealityKit, Reality Composer Pro
I am creating an augmented-reality iOS app (not visionOS) using scenes created in Reality Composer Pro.
I'd like my code to send a notification to a RCP scene that plays a timeline. The RCP interface has the option to set up a behaviour for this purpose:
This Forum thread https://developer.apple.com/forums/thread/756978 suggests the code I need for sending a notification is:
NotificationCenter.default.post(
    name: NSNotification.Name("RealityKit.NotificationTrigger"),
    object: nil,
    userInfo: [
        "RealityKit.NotificationTrigger.Scene": scene,
        "RealityKit.NotificationTrigger.Identifier": "HideCharacter"
    ]
)
but the 'scene' var needs to point to the relevant RCP scene, which is loaded within a UIViewRepresentable ARView (because even in iOS 26 it seems RealityKit/RealityView isn't quite ready for AR use), and I can't work out how to access it correctly. The examples in the link above are for RealityKit on visionOS only.
Code for loading the scene is as follows. How can I get the notification code above to be situated in a separate SwiftUI View and send the notification to the RCP scene?
import SwiftUI
import RealityKit
import RealityKitContent
import ARKit

// Wrapper struct restored for context (the name "ARViewContainer" is illustrative)
struct ARViewContainer: UIViewRepresentable {
    typealias UIViewType = ARView

    func makeUIView(context: Context) -> ARView {
        // Create an ARView
        let arView = ARView(frame: .zero)

        // Configure it
        let arConfiguration = ARWorldTrackingConfiguration()
        arConfiguration.planeDetection = [.horizontal]
        arView.session.run(arConfiguration)

        // Load in Reality Composer Pro scene
        let scene = try! Entity.load(named: "myScene", in: realityKitContentBundle)

        // Create a horizontal plane anchor
        let anchor = AnchorEntity(.plane(.horizontal, classification: .any, minimumBounds: SIMD2<Float>(0.2, 0.2)))

        // Append the scene to the anchor
        anchor.children.append(scene)

        // Append the anchor to the ARView
        arView.scene.anchors.append(anchor)

        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {
    }
}
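One way this is sometimes wired up (a sketch under my own assumptions; the class and view names are illustrative, and the notification keys are the ones quoted above): keep a reference to the loaded RCP entity in a shared observable object, then post the notification using that entity's scene from any SwiftUI view.

import SwiftUI
import RealityKit

// Sketch: a shared holder so a separate SwiftUI view can reach the loaded RCP entity.
// The class and property names are illustrative, not an Apple API.
final class RCPSceneHolder: ObservableObject {
    @Published var rcpRoot: Entity?
}

// In makeUIView, after loading the scene: holder.rcpRoot = scene

struct HideCharacterButton: View {
    @EnvironmentObject var holder: RCPSceneHolder

    var body: some View {
        Button("Hide character") {
            // The RCP behavior expects the RealityKit scene that contains the entity.
            guard let root = holder.rcpRoot, let scene = root.scene else { return }
            NotificationCenter.default.post(
                name: NSNotification.Name("RealityKit.NotificationTrigger"),
                object: nil,
                userInfo: [
                    "RealityKit.NotificationTrigger.Scene": scene,
                    "RealityKit.NotificationTrigger.Identifier": "HideCharacter"
                ]
            )
        }
    }
}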
Reality Composer is no longer available in the Xcode 15 release. Is this intended, or will it be available in later releases? Do I need to revert to Xcode 15 beta 8 to get Reality Composer?
I made an animation in Blender using geometry nodes and exported it to a USDC file (then used Reality Converter to convert it to USDZ). I can see the animation when previewing in Finder, but it does not play after importing into RCP. Any idea how I can play the animation? Or can the animation be accessed through Xcode?
Thanks!
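If the animation does make it into the file, one thing worth checking from Xcode (a sketch; the entity name is a placeholder) is whether RealityKit reports it at all, since anything listed in availableAnimations can be played from code:

import RealityKit

// Sketch: inspect and play whatever animations RealityKit finds in the imported asset.
// "GeoNodesAnim" is a placeholder for the entity's name in the scene.
func playImportedAnimation(on root: Entity) {
    guard let entity = root.findEntity(named: "GeoNodesAnim") else { return }
    print("Found \(entity.availableAnimations.count) animation(s)")
    if let clip = entity.availableAnimations.first {
        entity.playAnimation(clip.repeat())
    }
}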
Is it possible to have a skydome that influences lighting in the scene but is otherwise invisible? In a raytracer, that would be visible to secondary rays but invisible to primary rays.
Cheers, thanks.
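For RealityKit-based scenes, one direction worth trying (a sketch; the resource name is a placeholder, and this replaces the visible dome with image-based lighting rather than hiding a dome mesh) is to drive lighting from an environment resource while keeping no visible sky geometry:

import RealityKit

// Sketch: light the scene from an HDR environment without any visible sky geometry.
// "SkyHDR" is a placeholder for an EnvironmentResource in your asset bundle.
func applyInvisibleSkyLighting(to root: Entity) async {
    guard let environment = try? await EnvironmentResource(named: "SkyHDR") else { return }
    let lightEntity = Entity()
    lightEntity.components.set(ImageBasedLightComponent(source: .single(environment)))
    root.addChild(lightEntity)
    // Entities that should be lit by it opt in with a receiver component.
    root.components.set(ImageBasedLightReceiverComponent(imageBasedLight: lightEntity))
}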
Topic: Spatial Computing | SubTopic: Reality Composer Pro