After adding TextComponents to my Entities on visionOS, I have observed that visualBounds ignores the TextComponents.
The documentation states that a TextComponent should render a rounded rectangle mesh. These meshes are visible on the device, but they don't appear in the debugger ("Capture Entity Hierarchy") and are ignored by visualBounds.
Am I missing something?
static func makeDirection(_ direction: Direction) -> Entity {
    let text = Entity()
    text.name = direction.rawValue
    text.setScale(SIMD3(repeating: 5), relativeTo: nil)
    text.transform.rotation = direction.rotation
    text.components.set(direction.textComponent)
    return text
}
My workaround is to add a disabled ModelEntity and take its bounds 😬
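For reference, a minimal sketch of that workaround, assuming the label can be approximated with MeshResource.generateText and measured on a disabled ModelEntity (the font size and extrusion depth here are illustrative, not values from my project):
import RealityKit
import UIKit

// Hypothetical helper: approximate the text with a generated mesh on a
// disabled ModelEntity, measure it, then remove it again.
func estimatedTextBounds(for string: String, in parent: Entity) -> BoundingBox {
    let mesh = MeshResource.generateText(
        string,
        extrusionDepth: 0.001,
        font: .systemFont(ofSize: 0.1)
    )
    let proxy = ModelEntity(mesh: mesh, materials: [UnlitMaterial(color: .white)])
    proxy.isEnabled = false          // never rendered, only measured
    parent.addChild(proxy)
    let bounds = proxy.visualBounds(relativeTo: parent)
    proxy.removeFromParent()
    return bounds
}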
We're trying to switch from main camera access with ARKit to screen capture with passthrough, but we're running into some issues that are proving complicated to debug.
We have set up a broadcast extension and added some logging to the sample handler, but we get nothing in the console, nor does the recording start. We set up the picker as well, and we can see our extension in Control Center as one of the choices, but tapping start results in it stopping in less than a second.
The only messages we see in Console.app are rather contradictory:
[INFO] -[RPRecordingManager getSystemBroadcastExtensionInfo:]_block_invoke:1333 Extension has passthrough license
and immediately after it:
[INFO] -[RPRecordingManager getSystemBroadcastExtensionInfo:]_block_invoke:1336 Extension does not have passthrough license
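For context, the extension's handler is essentially the stock template with logging added; a trimmed sketch of what we have (the subsystem string is ours) looks like this:
import ReplayKit
import os

// Broadcast upload extension entry point with logging, so we can tell
// whether the system ever actually starts the broadcast.
class SampleHandler: RPBroadcastSampleHandler {
    private let log = Logger(subsystem: "com.example.broadcast", category: "SampleHandler")

    override func broadcastStarted(withSetupInfo setupInfo: [String: NSObject]?) {
        log.info("broadcastStarted, setupInfo: \(String(describing: setupInfo))")
    }

    override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer,
                                      with sampleBufferType: RPSampleBufferType) {
        log.debug("received sample buffer of type \(sampleBufferType.rawValue)")
    }

    override func broadcastFinished() {
        log.info("broadcastFinished")
    }
}
None of these log lines ever appear, which is why we suspect the broadcast is being rejected before the extension runs.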
https://developer.apple.com/documentation/realitykit/videomaterial
The documentation says: "Video materials support transparency if the source video’s file format also supports transparency."
I have a transparent video (Hand.mov, HEVC with alpha). It displays with a transparent background correctly in the visionOS Simulator, but on a physical device the video has a black background. I'm sure the video format is fine, because I can extract the texture from the video and display it on an UnlitMaterial.
How can I display the transparent video correctly with RealityKit's VideoMaterial?
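For reference, the material is set up roughly like this (a minimal sketch; the plane size is arbitrary and Hand.mov is assumed to be bundled with the app):
import AVFoundation
import RealityKit

// Build a plane that plays Hand.mov through a VideoMaterial.
func makeVideoPlane() -> ModelEntity {
    // Assumes Hand.mov (HEVC with alpha) ships in the app bundle.
    let url = Bundle.main.url(forResource: "Hand", withExtension: "mov")!
    let player = AVPlayer(url: url)
    let material = VideoMaterial(avPlayer: player)

    let plane = ModelEntity(
        mesh: .generatePlane(width: 0.4, height: 0.3),
        materials: [material]
    )
    player.play()
    return plane
}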
Is there any interest in this forum from those developing for the spatial web and Safari? I can't seem to find any relevant posts here.
I'm developing a visionOS app with bouncing-ball physics and struggling to achieve natural bouncing behavior using RealityKit's physics system. Despite following Apple's recommended parameters, the ball loses significant energy on each bounce and doesn't behave like a real basketball, tennis ball, or football would.
With identical physics parameters (restitution = 1.0), RealityKit shows significant energy loss. I've had to implement a custom physics system to compensate, but I want to use native RealityKit physics. I haven't been able to make it work by applying custom impulses either.
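For context, the custom impulses mentioned above are applied roughly like this (a sketch; the strength value is just an example, and it assumes applyLinearImpulse is available on the ball's entity because it has a dynamic physics body):
import RealityKit

// Rough sketch: nudge the ball upward at contact to compensate for the
// energy RealityKit appears to lose on each bounce. The magnitude is an
// illustrative value, not a tuned constant.
func applyBounceImpulse(to ball: ModelEntity, strength: Float = 0.5) {
    // The impulse is in newton-seconds, relative to the given reference entity.
    ball.applyLinearImpulse([0, strength, 0], relativeTo: nil)
}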
Ball Physics Setup (Following Apple Forum Recommendations)
// From PhysicsManager.swift
private func createBallEntityRealityKit() -> Entity {
    let ballRadius: Float = 0.05
    let ballEntity = Entity()
    ballEntity.name = "bouncingBall"

    // Mesh and material
    let mesh = MeshResource.generateSphere(radius: ballRadius)
    var material = PhysicallyBasedMaterial()
    material.baseColor = .init(tint: .cyan)
    material.roughness = .float(0.3)
    material.metallic = .float(0.8)
    ballEntity.components.set(ModelComponent(mesh: mesh, materials: [material]))

    // Physics setup from Apple Developer Forums
    let physics = PhysicsBodyComponent(
        massProperties: .init(mass: 0.624), // Seems too heavy for 5cm ball
        material: PhysicsMaterialResource.generate(
            staticFriction: 0.8,
            dynamicFriction: 0.6,
            restitution: 1.0 // Perfect elasticity, yet still loses energy
        ),
        mode: .dynamic
    )
    ballEntity.components.set(physics)
    ballEntity.components.set(PhysicsMotionComponent())

    // Collision setup
    let collisionShape = ShapeResource.generateSphere(radius: ballRadius)
    ballEntity.components.set(CollisionComponent(shapes: [collisionShape]))
    return ballEntity
}
Ground Plane Physics
// From GroundPlaneView.swift
let groundPhysics = PhysicsBodyComponent(
    massProperties: .init(mass: 1000),
    material: PhysicsMaterialResource.generate(
        staticFriction: 0.7,
        dynamicFriction: 0.6,
        restitution: 1.0 // Perfect bounce
    ),
    mode: .static
)
entity.components.set(groundPhysics)
Wall Physics
// From WalledBoxManager.swift
let wallPhysics = PhysicsBodyComponent(
    massProperties: .init(mass: 1000),
    material: PhysicsMaterialResource.generate(
        staticFriction: 0.7,
        dynamicFriction: 0.6,
        restitution: 0.85 // Slightly less than ground
    ),
    mode: .static
)
wall.components.set(wallPhysics)
Collision Detection
// From GroundPlaneView.swift
content.subscribe(to: CollisionEvents.Began.self) { event in
    guard physicsMode == .realityKit else { return }
    let currentTime = Date().timeIntervalSince1970
    guard currentTime - lastCollisionTime > 0.1 else { return }

    if event.entityA.name == "bouncingBall" || event.entityB.name == "bouncingBall" {
        let normal = event.collision.normal
        // Distinguish between wall and ground collisions
        if abs(normal.y) < 0.3 { // Wall bounce
            print("Wall collision detected")
        } else if normal.y > 0.7 { // Ground bounce
            print("Ground collision detected")
        }
        lastCollisionTime = currentTime
    }
}
Issues Observed
Energy Loss: Despite restitution = 1.0 (perfect elasticity), the ball loses ~20-30% of its energy per bounce
Wall Sliding: The ball tends to slide down walls instead of bouncing naturally
No Damping Control: Comments mention damping values, but they don't seem to affect the physics
Changing the mass also doesn't do much.
Custom Physics System (Workaround)
I've implemented a custom physics system that manually calculates velocities and applies more realistic restitution values:
// From BouncingBallComponent.swift
struct BouncingBallComponent: Component {
    var velocity: SIMD3<Float> = .zero
    var angularVelocity: SIMD3<Float> = .zero
    var bounceState: BounceState = .idle
    var lastBounceTime: TimeInterval = 0
    var bounceCount: Int = 0
    var peakHeight: Float = 0
    var totalFallDistance: Float = 0

    enum BounceState {
        case idle
        case falling
        case justBounced
        case bouncing
        case settled
    }
}
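The companion system is roughly the following (a trimmed sketch of my workaround, with gravity, floor height, and restitution values picked by eye; it assumes BouncingBallComponent.registerComponent() and BouncingBallSystem.registerSystem() are called at startup):
import RealityKit

// Minimal manual integration step: apply gravity, move the entity, and
// reflect the vertical velocity with a chosen restitution at the floor.
struct BouncingBallSystem: System {
    static let query = EntityQuery(where: .has(BouncingBallComponent.self))

    init(scene: RealityKit.Scene) {}

    func update(context: SceneUpdateContext) {
        let dt = Float(context.deltaTime)
        let gravity: SIMD3<Float> = [0, -9.81, 0]
        let restitution: Float = 0.85   // tuned by hand, not a physical constant
        let floorY: Float = 0

        for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
            guard var ball = entity.components[BouncingBallComponent.self] else { continue }

            ball.velocity += gravity * dt
            entity.position += ball.velocity * dt

            if entity.position.y <= floorY && ball.velocity.y < 0 {
                entity.position.y = floorY
                ball.velocity.y = -ball.velocity.y * restitution
                ball.bounceCount += 1
            }
            entity.components[BouncingBallComponent.self] = ball
        }
    }
}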
Is this energy loss expected behavior in RealityKit, even with perfect restitution (1.0)?
Are there additional physics parameters (damping, solver iterations, etc.) that could improve bounce behavior?
Would switching to Unity be necessary for more realistic ball physics, or am I missing something in RealityKit?
Even in the last video here: https://stepinto.vision/example-code/collisions-physics-physics-material/ the ball's bounce is very unnatural; it stops after 3-4 bounces. I apply custom impulses, but with walls around the ball it's almost impossible to make it look natural. I also saw this post https://developer.apple.com/forums/thread/759422 and the ball is still not bouncing naturally.
I'm getting the following error message when compiling Apple's provided sample, the Spaceship game for Apple Vision Pro. I've already tried deleting derived data, resetting the package cache, and restarting Xcode, but I'm still getting the following error: [xrsimulator] Exception thrown during compile: Cannot get rkassets content for path /Users/myoungkang/Downloads/CreatingASpaceshipGame/Packages/Studio/Sources/Studio/Studio.rkassets because 'The file “Studio.rkassets” couldn’t be opened because you don’t have permission to view it.'
error: Tool exited with code 1
For the M2 Apple Vision Pro, there's "a general guideline, we recommend no more than 500 thousand triangles for an immersive scene, with 250 thousand for applications in the shared space." --https://developer.apple.com/videos/play/wwdc2024/10186/?time=147
Is there a revised recommendation for the M5 Apple Vision Pro?
Using the example code posted here:
https://developer.apple.com/documentation/visionOS/tracking-images-in-3d-space
I can register multiple ReferenceImages with an ImageTrackingProvider, but only one updates at a time; to get real-time updates, I can only have one ImageAnchor in my field of view at a time.
Is it possible to track multiple ImageAnchors at the same time in the same field of view? That is, can several ImageAnchors be tracked, with entities updated to each anchor's transform, in the same frame/moment on Apple Vision Pro?
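For reference, my update loop follows the sample code and looks roughly like this (a sketch; the entities dictionary is my own, keyed by anchor ID and populated elsewhere):
import ARKit
import RealityKit

// Consume image anchor updates and keep one entity per tracked image.
@MainActor
func runImageTracking(provider: ImageTrackingProvider,
                      entities: [UUID: Entity]) async {
    for await update in provider.anchorUpdates {
        let anchor = update.anchor
        // Each update delivers a single anchor; the question is whether
        // several anchors report isTracked == true within the same frame.
        guard anchor.isTracked, let entity = entities[anchor.id] else { continue }
        entity.transform = Transform(matrix: anchor.originFromAnchorTransform)
    }
}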
Hi guys,
I noticed that Apple created a really engaging visual effect for browsing spatial videos in the app. The video appears embedded in a glass panel with glowing edges, and it even shows a parallax effect as you move around. When I tried to display the stereo video using RealityView, however, the video entity always floats above the panel.
May I ask how visionOS implements this effect? Is there any approach to achieve it, or example code I can use in my own app?
Thanks!
I'm trying to run a PhotogrammetrySession based on photos taken in an AVCaptureSession and stored as .heic files.
When I load the files I'm always seeing the error "Sample 0 missing LiDAR point cloud!" showing up for each individual sample.
Debugging shows that sample.depthDataMap is populated, and the .heic contains depth data that can be extracted with e.g. heif-convert on my Mac.
Comparing the .heic I created to one from an ObjectCaptureSession (which doesn't show the LiDAR warning), I noticed the only difference is in the HEIC information here:
So my questions are:
Is this the information missing from my manual capture that causes the warning?
Can I somehow add this information in an AVCaptureSession? (A sketch of how I enable depth delivery is included below.)
Does this information allow better photogrammetry results?
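For completeness, this is roughly how I enable depth in the capture pipeline (a sketch of my setup; device selection and codec settings are simplified):
import AVFoundation

// Configure a photo output so captured HEIC files embed depth data.
func configureDepthCapture(session: AVCaptureSession,
                           photoOutput: AVCapturePhotoOutput) {
    session.beginConfiguration()
    // Depth requires a depth-capable camera, e.g. the LiDAR depth camera.
    if let device = AVCaptureDevice.default(.builtInLiDARDepthCamera,
                                            for: .video, position: .back),
       let input = try? AVCaptureDeviceInput(device: device),
       session.canAddInput(input) {
        session.addInput(input)
    }
    if session.canAddOutput(photoOutput) {
        session.addOutput(photoOutput)
    }
    photoOutput.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported
    session.commitConfiguration()
}

// Per-photo settings: request depth and keep it embedded in the HEIC.
func makePhotoSettings(for photoOutput: AVCapturePhotoOutput) -> AVCapturePhotoSettings {
    let settings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.hevc])
    settings.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliveryEnabled
    settings.embedsDepthDataInPhoto = true
    return settings
}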
Game Controller Input Limitations in visionOS Volumetric Windows
Hello Apple Developer Community,
I'm developing a game for visionOS and have encountered significant limitations with game controller input when using volumetric windows (WindowGroup with .volumetric style). I'd appreciate clarification on whether this is expected behavior and any guidance on best practices.
🧩 Issue Summary
When using a DualSense controller with a volumetric window in visionOS, only a subset of controller inputs are available to the app. The remaining inputs appear to be reserved by the system for UI navigation.
✅ Working Inputs (Volumetric Window)
D-Pad (all directions)
L3 (left thumbstick button click)
R3 (right thumbstick button click)
Menu button
Options button
❌ Not Working Inputs (Volumetric Window)
Left thumbstick analog movement (used for UI scrolling instead)
Right thumbstick analog movement (used for UI scrolling instead)
Face buttons (Cross, Circle, Square, Triangle / A, B, X, Y)
Shoulder buttons (L1, R1)
Triggers (L2, R2)
Key observation: When moving the left thumbstick in a volumetric window, the window's UI scrolls vertically instead of sending input to my app's GameController handlers. Similarly, face buttons seem to be reserved for system UI interactions.
⚙️ Implementation Details
I'm using the standard GameController framework; a trimmed sketch of the registration code follows the list below:
Connect to controller via GCController.controllers()
Access extendedGamepad profile
Set up valueChangedHandler and pressedChangedHandler for all inputs
Handlers confirmed registered via logging
Working inputs (D-Pad, L3, R3) trigger immediately and consistently
Non-working inputs (thumbsticks, face buttons) never trigger
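Here is the sketch (the logging is only there to confirm which handlers fire):
import GameController

// Register handlers for the controller inputs under test and log when they fire.
func attachHandlers() {
    guard let gamepad = GCController.controllers().first?.extendedGamepad else { return }

    gamepad.leftThumbstick.valueChangedHandler = { _, x, y in
        print("left stick: \(x), \(y)")       // never fires in a volumetric window
    }
    gamepad.buttonA.pressedChangedHandler = { _, _, pressed in
        print("cross/A pressed: \(pressed)")  // never fires in a volumetric window
    }
    gamepad.dpad.valueChangedHandler = { _, x, y in
        print("d-pad: \(x), \(y)")            // fires as expected
    }
    gamepad.leftThumbstickButton?.pressedChangedHandler = { _, _, pressed in
        print("L3 pressed: \(pressed)")       // fires as expected
    }
}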
🧠 Critical Finding: ImmersiveSpace Works Perfectly
When testing the exact same code in an ImmersiveSpace (.mixed immersion style), all controller inputs work perfectly:
✅ Both thumbsticks provide full analog input
✅ All face buttons trigger their handlers
✅ All shoulder buttons and triggers work correctly
✅ 100% success rate with no intermittent issues
This suggests the issue isn't with my code, but rather how visionOS handles controller input differently between Volumetric Windows and ImmersiveSpace.
🧪 Test Environment
I created a minimal test project (Controller-Playground) to isolate the issue:
A simple ControllerTester class that registers all GameController handlers
A visual UI showing real-time input state
No game logic, RealityKit physics, or other complexity
Results
In volumetric window: Only D-Pad, L3, R3, Menu, Options work
In ImmersiveSpace: All inputs work perfectly
This confirms the limitation exists at the visionOS platform level, not in app code.
🧰 Attempted Workarounds
I tried the following without success:
Setting GCSupportsControllerUserInteraction = false in Info.plist
Setting UIRequiresFullScreen = true
Changing window styles (.plain, .volumetric)
Polling vs. handler-based input approaches
Various threading models (MainActor, separate thread)
Result: The only way to enable full controller support is to switch to ImmersiveSpace.
❓ Questions for Apple
Is this input reservation behavior in volumetric windows intended and documented?
Are game controllers expected to have limited functionality in volumetric windows while full functionality is reserved for ImmersiveSpace?
Is there a way to request full controller input access in a volumetric window, or is ImmersiveSpace the only option for complete controller support?
Where can I find official documentation about controller input differences between window types?
Are there any APIs or configuration options to disable system controller shortcuts in volumetric windows?
🎯 Impact
This limitation has a significant effect on game design and architecture:
Volumetric windows offer a multitasking-friendly, less immersive experience
ImmersiveSpace provides full controller support but may be more immersive than some games require
Games that only need basic D-Pad and button input can work fine in volumetric windows
Games requiring analog sticks or face buttons must currently use ImmersiveSpace
It would be very helpful if Apple could clarify or reference existing documentation regarding controller input handling in different visionOS window types. If such documentation doesn't exist yet, it might be valuable to include this information in future developer guides or best-practice documents.
🕹 Current Workaround
For now, I'm using:
D-Pad for character movement (digital 8-direction)
R3 (right stick click) as a substitute for the "X" button
This setup allows the game to function within a volumetric window, though full controller support still requires ImmersiveSpace.
📄 Request
If this is expected behavior, I may have simply missed the relevant documentation — could you please point me to any existing resources that explain this design?
If there isn't one yet, it would be great if future visionOS documentation could:
Clearly outline controller input behavior across window types
Provide guidance on when to use Volumetric Windows vs. ImmersiveSpace for games
Consider adding an API option to request full controller access when appropriate
If this is not expected behavior, I'm happy to file a detailed bug report with sample code.
💻 System Information
visionOS: Latest Simulator
Xcode: Latest version
Controller: Sony DualSense
Framework: GameController (standard extendedGamepad profile)
Test project: Minimal reproducible example available
Thank you for any clarification or guidance you can provide. This information would be valuable for many developers working on visionOS games.
The object tracking feature has been announced for visionOS 2.0+. Is there any support for it in Unity's PolySpatial, or is it only available in Swift and Xcode?
With iOS 26 unveiled, has anyone noticed or found any changes related to RoomPlan?
I can't find anything myself, which is disappointing.
Has anyone found any improvements or changes?
I'm using ARKitSession and PlaneDetectionProvider to detect planes. I have a basic process that creates an entity for each detected plane. Each one gets a random color for its material.
Each plane is sized based on the bounds of the anchor provided by ARKit.
let mesh = MeshResource.generatePlane(
    width: anchor.geometry.extent.width,
    depth: anchor.geometry.extent.height
)
Then I'm using this to position each entity.
entity.transform = Transform(matrix: anchor.originFromAnchorTransform)
This seems to be the right method, but many (not all) planes are not where they should be. The sizes look OK, but the X and Y positions are off.
Take this large green plane on the wall. It should span the entire wall, but it is offset along the X axis so that it is pushed to the left of where the center of the anchor is.
When I visualize surfaces using the Xcode debugging tools, that tool reports the planes where I'd expect them to be.
Can you see what I'm getting wrong here? Full code below
struct Example068: View {
    @State var session = ARKitSession()
    @State private var planeAnchors: [UUID: Entity] = [:]
    @State private var planeColors: [UUID: Color] = [:]

    var body: some View {
        RealityView { content in
        } update: { content in
            for (_, entity) in planeAnchors {
                if !content.entities.contains(entity) {
                    content.add(entity)
                }
            }
        }
        .task {
            try! await setupAndRunPlaneDetection()
        }
    }

    func setupAndRunPlaneDetection() async throws {
        let planeData = PlaneDetectionProvider(alignments: [.horizontal, .vertical, .slanted])
        if PlaneDetectionProvider.isSupported {
            do {
                try await session.run([planeData])
                for await update in planeData.anchorUpdates {
                    switch update.event {
                    case .added, .updated:
                        let anchor = update.anchor
                        if planeColors[anchor.id] == nil {
                            planeColors[anchor.id] = generatePastelColor()
                        }
                        let planeEntity = createPlaneEntity(for: anchor, color: planeColors[anchor.id]!)
                        planeAnchors[anchor.id] = planeEntity
                    case .removed:
                        let anchor = update.anchor
                        planeAnchors.removeValue(forKey: anchor.id)
                        planeColors.removeValue(forKey: anchor.id)
                    }
                }
            } catch {
                print("ARKit session error \(error)")
            }
        }
    }

    private func generatePastelColor() -> Color {
        let hue = Double.random(in: 0...1)
        let saturation = Double.random(in: 0.2...0.4)
        let brightness = Double.random(in: 0.8...1.0)
        return Color(hue: hue, saturation: saturation, brightness: brightness)
    }

    private func createPlaneEntity(for anchor: PlaneAnchor, color: Color) -> Entity {
        let mesh = MeshResource.generatePlane(
            width: anchor.geometry.extent.width,
            depth: anchor.geometry.extent.height
        )
        var material = PhysicallyBasedMaterial()
        material.baseColor.tint = UIColor(color)
        let entity = ModelEntity(mesh: mesh, materials: [material])
        entity.transform = Transform(matrix: anchor.originFromAnchorTransform)
        return entity
    }
}
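One variation I intend to test (an assumption on my part, not a confirmed fix) is to compose the anchor transform with the extent's own transform, on the theory that the extent's center can be offset from the anchor origin. A sketch, assuming PlaneAnchor.Geometry.Extent exposes anchorFromExtentTransform:
import ARKit
import RealityKit
import SwiftUI
import UIKit

// Hypothetical variant of createPlaneEntity: place the plane at the
// extent's center rather than at the anchor origin. The extra transform
// is an assumption to verify against the ARKit documentation.
private func createCenteredPlaneEntity(for anchor: PlaneAnchor, color: Color) -> ModelEntity {
    let mesh = MeshResource.generatePlane(
        width: anchor.geometry.extent.width,
        depth: anchor.geometry.extent.height
    )
    var material = PhysicallyBasedMaterial()
    material.baseColor.tint = UIColor(color)

    let entity = ModelEntity(mesh: mesh, materials: [material])
    entity.transform = Transform(
        matrix: anchor.originFromAnchorTransform * anchor.geometry.extent.anchorFromExtentTransform
    )
    return entity
}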
Hi, I downloaded a few files from the Apple developer website, and they are in the .reality file format. I wanted to see how they are constructed, but there is no way to open them and look at the content inside (3D models, shaders, animations, etc.). Is there a way to look at the contents of a .reality file?
Hello,
I'm trying to view the components of an Entity I'm creating in RealityKit by reading from a USDZ file. I have the following code snippet in my app.
if let appleEntity = try? Entity.loadModel(named: "apple_tile") {
    let c = appleEntity.components
    for comp in c { // <- compiler error here
        print(comp)
    }
}
The compiler error I'm receiving says "For-in loop requires 'Entity.ComponentSet' to conform to 'Sequence'". However, I thought it did conform, according to the documentation for Entity.ComponentSet?
Curious if anyone else has had this problem. I'm running Xcode 15.4, and my Swift version is:
xcrun swift -version
swift-driver version: 1.90.11.1 Apple Swift version 5.10 (swiftlang-5.10.0.13 clang-1500.3.9.4)
Target: x86_64-apple-macosx14.0
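For anyone hitting the same error: as a fallback I query individual component types by subscript instead of iterating the set (a sketch; the component types listed are just the ones I happen to care about):
import RealityKit

// Entity.ComponentSet can be queried by type even when iterating it
// doesn't compile; print the components we're interested in.
func dumpKnownComponents(of entity: Entity) {
    if let model = entity.components[ModelComponent.self] {
        print("ModelComponent:", model)
    }
    if let transform = entity.components[Transform.self] {
        print("Transform:", transform)
    }
    if let collision = entity.components[CollisionComponent.self] {
        print("CollisionComponent:", collision)
    }
}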
We were having an issue where the system rotate and scale gestures (two-handed gestures, i.e. RotateGesture3D and MagnifyGesture) were extremely difficult to register (make work) in the visionOS simulator.
The solution we found was to:
Launch your app in the simulator
Move the pointer on top of the 3D object for which you are testing rotation and scaling gestures.
Press and hold the Option key to display touch points (ie: the two-handed gesture points).
While keeping the Option key pressed, release the pointer and re-engage it. I am using a trackpad with tap-to-click enabled and three-finger drag enabled in Accessibility, so "release the pointer and re-enable it again" simply means lifting the three fingers and placing them on the trackpad again.
If you have maintained the option key pressed, then you should now be able to rotate and scale the 3D object.
Context if you are interested:
Our issue was also occurring in Apple's own gesture sample project, "Transforming RealityKit entities using gestures", at the link below.
Apple's article "Interacting with your app in the visionOS simulator" (also linked below) states, for two-handed gestures: "Press and hold the Option key to display touch points. Move the pointer while pressing the Option key to change the distance between the touch points. Move the pointer and hold the Shift and Option keys to reposition the touch points."
This simply did not work anymore for rotation and scaling gestures.
These gestures used to be a lot more responsive in Sonoma. Either the article should be updated to what I described above, or there is an issue. Our colleague who is using macOS Sonoma 14.6.1 with the latest release of Xcode is not having these issues.
Here is the list of configurations (troubleshooting we tried!) where it is difficult to achieve rotation and scaling gestures in the visionOS simulator:
macOS Sequoia 16.1 Beta, Xcode 16.1 RC w visionOS 2.1
macOS Sequoia 16.1 Beta, Xcode 16.1 RC w visionOS 2.0
macOS Sequoia 16.1 Beta, Xcode 16.2 Beta 1 w visionOS 2.1
macOS Sequoia 16.1 Beta, Xcode 16.2 Beta 1 w visionOS 2.0
macOS Sequoia 16.1 Beta, remove all Xcodes and installed the build from AppStore (Xcode 16.1)
macOS Sequoia 16.1 Beta, Xcode 16.0 w visionOS 2.0
Completely wiped and reset the entire development machine, re-installed the latest releases of Sequoia (15.1) and Xcode (15.1)
Throughout this troubleshooting I often:
restarted both xcode and sim
erased all derived data
erased all contents and settings from sims
performed fresh git clones
None of the above worked; only the workaround described above works at the moment. As you can probably deduce, finding the workaround was very time consuming, and we also wasted some development effort thinking our gesture code was at fault.
Hopefully this will help other devs.
Article Link:
https://developer.apple.com/documentation/xcode/interacting-with-your-app-in-the-visionos-simulator
Gesture sample project link:
https://developer.apple.com/documentation/realitykit/transforming-realitykit-entities-with-gestures
I am developing an ARKit based application that requires plane detection of the tabletop at which the user is seated. Early testing was with an iPhone 8 and iPhone 8+. With those devices, ARKit rapidly detected the plane of the tabletop when it was only 8 to 10 inches away. Using iPhone 15 with the same code, it seems to require me to move the phone more like 15 to 16 inches away before detecting the plane of the table. This is an awkward motion for a user seated at a table. To validate that it was not necessarily a feature of my code, I determined that the same behavior results with Apple's sample AR Interaction application. Has anyone else experienced this, and if so, have suggestions to improve the situation?
I am working on a project that requires access to the main camera on the Vision Pro. My main account holder applied for the necessary enterprise entitlement, and we were approved and received the Enterprise.license file by email. I have added the Enterprise.license file to my project, and manually added the com.apple.developer.arkit.main-camera-access.allow entitlement to the entitlements file and set it to true, since it was not available in the list when I tried to use the + Capability button in the Signing & Capabilities tab.
I am getting an error: Provisioning profile "iOS Team Provisioning Profile: " doesn't include the com.apple.developer.arkit.main-camera-access.allow entitlement. I have checked the provisioning profile settings online; there is no manual option for adding the main camera access entitlement, and it does not seem to pick up the approval from the license.
When I've made an animated USDZ, at what frame rate will the animation be rendered in QuickLook? Is it the same across all devices (iPhone, Apple Vision Pro, etc.) and viewing environments (QuickLook, inside an ARView, etc.)?
Suppose I export my file at 30 fps and the device draws at 60 fps: does the device interpolate between frames automatically, animate at the lower frame rate, or play the animation at twice the speed? What if it were 24 fps?
My primary concern with understanding frame rates is a bit of trouble I've had making perfectly looping animations. There always seems to be the slightest stutter between iterations.
Thanks in advance for any insights you're able to provide!