https://developer.apple.com/documentation/realitykit/videomaterial
The documentation says: "Video materials support transparency if the source video’s file format also supports transparency."
I have a transparent video (Hand.mov, HEVC with alpha). The video plays with a correctly transparent background in the visionOS simulator, but on a physical device it has a black background. I'm confident the video format is fine, because I can extract the texture from the video and display it on an UnlitMaterial.
How can I display the transparent video correctly with RealityKit's VideoMaterial?
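For context, a minimal sketch of the setup (assuming the video ships in the app bundle):

import RealityKit
import AVFoundation

// Minimal sketch: play an HEVC-with-alpha video on a plane via VideoMaterial.
let url = Bundle.main.url(forResource: "Hand", withExtension: "mov")!
let player = AVPlayer(url: url)
let material = VideoMaterial(avPlayer: player)
let plane = ModelEntity(
    mesh: .generatePlane(width: 0.5, height: 0.5),
    materials: [material]
)
player.play()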
Environment
visionOS 26.1, Xcode 26.1.1
Problem
When a WindowGroup opens an ImmersiveSpace and the user closes the window via the X button, the async Task in .onDisappear is cancelled before dismissImmersiveSpace() completes, leaving the ImmersiveSpace active with no way to exit.
Steps
WindowGroup opens ImmersiveSpace in .onAppear
User clicks X to close window
.onDisappear fires, but the async cleanup Task is cancelled
ImmersiveSpace remains active; the user is trapped
Expected
ImmersiveSpace dismissed when window closes
Actual
ImmersiveSpace remains active
Code
.onAppear {
    Task {
        await openImmersiveSpace(id: "VideoCallMainCamera")
    }
}
.onDisappear {
    Task {
        await dismissImmersiveSpace() // Gets cancelled
    }
}
What I've Tried
Task in .onDisappear ❌
scenePhase monitoring ❌
High priority Task ❌
.restorationBehavior(.disabled) + .defaultLaunchBehavior(.suppressed) ✅ (prevents restoration but doesn't fix immediate cleanup)
Question
What's the recommended pattern for ensuring ImmersiveSpace cleanup when WindowGroup closes? Is there a way to block window closure until async cleanup completes, or should ImmersiveSpaces automatically dismiss with their parent window?
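One pattern I'm considering, though I haven't verified it (so treat this as an untested assumption, not a fix): move the async cleanup into the immersive space itself, driven by a shared observable flag. The closing window's .onDisappear then only flips a Bool synchronously, which can't be cancelled, and the await happens in a scene that's still alive.

import SwiftUI

// Sketch (untested assumption): the immersive space dismisses itself when the
// main window reports that it has closed.
@Observable
final class SessionModel {
    var mainWindowOpen = false
}

struct ImmersiveContent: View {
    @Environment(\.dismissImmersiveSpace) private var dismissImmersiveSpace
    let model: SessionModel

    var body: some View {
        // Real immersive content goes here; Color.clear is a stand-in.
        Color.clear
            .onChange(of: model.mainWindowOpen) { _, isOpen in
                if !isOpen {
                    Task { await dismissImmersiveSpace() }
                }
            }
    }
}

// In the main window's view:
//   .onAppear    { model.mainWindowOpen = true }
//   .onDisappear { model.mainWindowOpen = false } // synchronous, can't be cancelled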
We have been using RoomPlan in our app for 2+ years. Through a combination of in-app and manual coaching on scanning best practices, most users are able to achieve high-quality scans on a consistent basis.
In recent weeks, however, we have observed an increase in reports of degraded scanning performance, even from veteran users who had not previously encountered issues. The RoomCaptureView overlay is jittery and crooked, and the resulting scan file has significant issues, even for simple, well-lit rectangular rooms.
It is difficult to troubleshoot these issues given the number of variables at play, and the overall volume of reports is still relatively low, but we'd appreciate any guidance on known issues or workarounds that could help unblock our users who are being affected by this. I noticed that this post includes an acknowledgement of FB14454922 and FB15035788. Our issues seem slightly different as the scans are simply inaccurate and jittery without failing outright. I haven't found any other threads on similar issues.
I posted https://developer.apple.com/forums/thread/809481 yesterday about an issue I discovered with pushWindow in visionOS 26.2 RC, but today I discovered a second problem with pushWindow.
If window A calls pushWindow to present window B, and the user pins window B to a wall, the following unexpected behaviors are observed:
Window B spontaneously disappears.
If the user re-launches the (still running) app from the visionOS home view, both window A and window B appear simultaneously. I assume only window B should be visible at this point, since window A pushed window B.
If the user closes window B, it's now impossible to present window B again. Calls to pushWindow appear to be ignored.
If the user force-quits the app and relaunches it, and pushWindow is called again, window B appears, but window A remains visible.
I also noticed this surprising behavior:
Once pushWindow enters this broken state, it affects every app on the system that subsequently calls pushWindow, not just the app whose pushed window was pinned.
A workaround is to reboot the device, and then the system will behave as expected until the next time the user pins a pushed window.
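For context, the setup is nothing exotic; a minimal sketch of window A's side (identifiers are illustrative):

import SwiftUI

// Window A pushes window B; pinning window B then triggers the behaviors above.
struct WindowAView: View {
    @Environment(\.pushWindow) private var pushWindow

    var body: some View {
        Button("Push Window B") {
            pushWindow(id: "window-b")
        }
    }
}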
After adding TextComponents to my Entities on visionOS, I have observed that visualBounds ignores the TextComponents.
The documentation states that it should render a rounded rectangle mesh. These meshes are visible on the device, but not visible in the debugger ("Capture Entity Hierarchy") and are ignored by visualBounds.
Am I missing something?
static func makeDirection(_ direction: Direction) -> Entity {
    let text = Entity()
    text.name = direction.rawValue
    text.setScale(SIMD3(repeating: 5), relativeTo: nil)
    text.transform.rotation = direction.rotation
    text.components.set(direction.textComponent)
    return text
}
My workaround is to add a disabled ModelEntity and take its bounds 😬
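Roughly, the workaround looks like this (a sketch building on the snippet above; generateText parameters simplified):

// Workaround sketch: generate an equivalent text mesh, keep it disabled,
// and measure its bounds instead of the TextComponent's.
let measurer = ModelEntity(mesh: .generateText(direction.rawValue))
measurer.isEnabled = false
text.addChild(measurer)
let bounds = measurer.visualBounds(relativeTo: nil)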
I recently added pushWindow to my app, and I discovered that in visionOS 26.2 RC (23N301), pushWindow followed by dismissWindow no longer works as expected.
Specifically, if the user moves the pushed window, then when the pushed window is later dismissed, the parent window's position isn't aligned with the pushed window's new position. Its original position is restored instead.
Curiously, the bug only happens when an app is launched from the visionOS home view, and not when an app is launched from Xcode. It also doesn't happen in the visionOS 26.2 simulator.
Another interesting detail is that while the parent window is hidden, if the user long-presses the Digital Crown and then dismisses the pushed window, the parent window's position seems to be immune from the Digital Crown scene reorientation. It's restored to its original real world position.
Demo video: https://youtu.be/zR3t2ON3Wz0
I've submitted feedback as FB21287011 with a sample app and detailed repro steps.
Has anyone else encountered this issue already and figured out a workaround? It would be nice if I could get pushWindow to work correctly in my app.
Thanks everybody! 😀
We're trying to switch from ARKit main camera access to screen capture with passthrough, but we're facing some issues that are proving difficult to debug.
We have set up a broadcast upload extension and added logging to the SampleHandler, but we get nothing in the console, and the recording never starts. We also set up the broadcast picker, and our extension appears in Control Center as one of the choices, but after tapping start, the broadcast stops in less than a second.
The only output we see in Console.app is this contradictory pair of messages, one right after the other:
[INFO] -[RPRecordingManager getSystemBroadcastExtensionInfo:]_block_invoke:1333 Extension has passthrough license
[INFO] -[RPRecordingManager getSystemBroadcastExtensionInfo:]_block_invoke:1336 Extension does not have passthrough license
We have the enterprise license and the capability, and I added the capability to the extension target as well.
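For reference, the SampleHandler is essentially the template code plus logging; a simplified sketch (the subsystem string is illustrative):

import ReplayKit
import CoreMedia
import os

class SampleHandler: RPBroadcastSampleHandler {
    private let logger = Logger(subsystem: "com.example.broadcast", category: "handler")

    override func broadcastStarted(withSetupInfo setupInfo: [String: NSObject]?) {
        logger.info("broadcastStarted") // never appears in the console
    }

    override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer,
                                      with sampleBufferType: RPSampleBufferType) {
        logger.info("sample buffer of type \(sampleBufferType.rawValue)")
    }
}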
Hi all,
I am currently developing a game in Unity for visionOS, and for my specific use case I'd prefer to use the PSVR2 controllers as the source of the raycast for menu selection instead of the default visionOS gaze. Is there a way to access the IMU of the PSVR2 controllers to do this, instead of just using eye gaze plus a controller click for selection? Is there a specific configuration for GCController from within Unity, maybe?
Thank you!
Is there any interest in this forum from those developing for the spatial web and Safari? I can't seem to find any relevant posts here.
I have this problem on visionOS. When I dismiss and reopen a window containing an ImagePresentationComponent, the window is missing the resize UI elements when I look at the window corners. The rest of the window UI elements (drag, close, ...) are there. Resizing was possible before the window was dismissed.
The code is something like this:
WindowGroup(id: "image-display-window", ...) {
    ...
}
.windowResizability(.automatic)
.windowStyle(.plain)
I call dismissWindow() from the window view and it is dismissed correctly.
Then I call openWindow(id: "image-display-window", value: data) from another view to reopen it. It reopens, but resizing is no longer possible.
Anyone knows how to fix this?
Thanks.
Environment Versions
・macOS 15.6.1
・visionOS 26.0.1
・Xcode 16.1 or 26.0.1
・Unity 6000.2.9f1
・Apple.core 3.2.0
・Apple.PHASE 1.2.7
・PolySpatial 2.4.2
With the above environment, after installing Apple.PHASE into Unity and building to a visionOS device, audio plays and distance attenuation works, but Early Reflection and Late Reverb produce no audible change, even when enabled and with their parameters adjusted.
What is required to make Early Reflection and Late Reverb take effect on a visionOS device build?
Actions taken
・created a SoundEvent.
・in composer, created a Sampler and a SpatialMixer; attached an AudioClip to the Sampler; enabled Direct Path, Early Reflection, and Late Reverb on the SpatialMixer.
・attached a PHASE Source to the object to be played, attached the created SoundEvent to it, and set non-zero values for Early Reflection and Late Reverb.
・attached a PHASE Listener to the mainCamera and set the ReverbPreset to a value other than None.
・in project settings > Audio, set Spatializer plugin to PHASE Spatializer.
・from there, build for visionOS.
Hi,
When viewing a spatial photo scene in the Apple Vision Pro Photos app, you can tap the immersive icon in the top right corner to transition from the window presenting the image as spatial3D to an immersive photo scene using spatial3DImmersive, where the window borders disappear. Could someone explain how to achieve that? I tried it, but once I transition from spatial3D to spatial3DImmersive I can still see a rectangle around the spatial image.
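For reference, this is roughly the switch I'm attempting (a sketch; I'm assuming desiredViewingMode is the right property to set):

// Sketch: switch the presentation from windowed spatial3D to immersive.
// Assumes `entity` already has an ImagePresentationComponent configured.
if var presentation = entity.components[ImagePresentationComponent.self] {
    presentation.desiredViewingMode = .spatial3DImmersive
    entity.components.set(presentation)
}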
Thanks.
https://developer.apple.com/cn/augmented-reality/tools/. Why is this page missing? Reality Converter used to be here; what should we use now to convert models?
Apple's WWDC video What’s new for the spatial web says the spatial-backdrop markup may change as it goes through the standards process (at the 27:26 mark).
I have started adding spatial-backdrops to web pages, so I want to keep an eye out for status updates by Apple and follow the standards progress.
Is there any place I can keep an eye on this standards process?
Has Apple announced any feature updates or news on spatial-backdrops?
After updating to visionOS 26.2 Beta 2 (and Beta 3), I'm unable to establish a spatial connection to Vision Pro. This was working fine before the update.
To test, I've created a fresh spatialApp project from the Xcode template with zero modifications, but I'm hitting the same issue: the Vision Pro is discovered but won't connect.
Am I forgetting to update the config somewhere? Any ideas what might be causing this and how to fix it?
Thanks!
Warning: -[NSWindow makeKeyWindow] called on <NSWindow: 0xa1f811900> windowNumber=1b9 which returned NO from -[NSWindow canBecomeKeyWindow].
((processConfiguration != nil && configuration != nil) || (processConfiguration == nil && configuration == nil)) - /AppleInternal/Library/BuildRoots/4~CBS0ugAIF7BrQZjLe6r0lhPXO4GJmNDTovxYoV0/Library/Caches/com.apple.xbs/Sources/ExtensionKit/ExtensionKit/Source/HostViewController/Internal/EXHostSessionDriver.m:80: `processConfiguration` and `configuration` must be both non-nil or both nil
Unable to obtain a task name port right for pid 415: (os/kern) failure (0x5)
CCContextDeviceGroup.mm(291):+[CCContextDeviceGroup checkBinaryArchivesForDevice:withBundle:]:
Failed to find any binary shader archive
Hi everyone,
I’m trying to verify something mentioned in the WWDC session “Explore enhancements to your spatial business app.”
At timestamp 3:36, the presenter states:
“You can now access your enterprise license files directly within your Apple Developer account.”
I’ve checked every section of my Developer account, including:
• Membership and Agreements
• Certificates, Identifiers & Profiles
• App Store Connect
• Additional Resources
• Account settings
…but no UI or section exposes these enterprise license files.
Since the Vision Entitlement Services framework actively checks these licenses (for example, mainCameraAccess entitlement approval), I need to confirm the location of the new license file.
Could someone from Apple or anyone who has seen this feature clarify:
1. Where exactly do these enterprise license files appear in the Developer account UI, or
2. Whether this feature has not rolled out yet?
Any guidance or screenshots from those who have access would be invaluable.
Thanks,
Apple's new Spatial Personas use Gaussian splatting, but I have not found any visionOS APIs for displaying a Gaussian splat, such as one from a PLY file.
Am I just missing the Apple documentation? If not, are there common practices developers are using for displaying Gaussian Splats in visionOS?
Summary
After updating to visionOS 26, we’ve encountered severe transparency rendering issues in RealityKit that did not exist in visionOS 2.6 and earlier.
These regressions affect applications that dynamically control scene opacity (via OpacityComponent).
Our app renders ultra-realistic apartment environments in real time, where users can walk or teleport inside 3D spaces. When the user moves above a speed threshold, we apply a global transparency effect to prevent physical collisions with real-world objects.
Everything worked perfectly in visionOS 2.6 — the problems appeared only after upgrading to 26.
Scene Setup Overview
The environment consists of multiple USDZ models (e.g., architecture, rooms, furniture).
We manage LODs manually for performance (e.g., walls and floors always visible in full-res, while rooms swap between low/high-res versions based on user position and field of view).
Transparency is achieved using OpacityComponent, applied dynamically when the user moves.
Some meshes (e.g., portals to skyboxes, glass windows) use alpha materials.
We also use OcclusionMaterials to prevent things from being seen through walls when the scene is transparent.
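The fade itself is a single component swap; a minimal sketch (the speed threshold and opacity value are illustrative):

import RealityKit

// Fade the whole scene while the user moves faster than a threshold.
func updateGlobalFade(speed: Float, sceneRoot: Entity) {
    let fading = speed > 0.5 // m/s, illustrative threshold
    sceneRoot.components.set(OpacityComponent(opacity: fading ? 0.35 : 1.0))
}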
Observed Behavior by Scenario
(I can share a video showing the results of each scenario if needed.)
Scenario 1 — Severe Flickering (Root Opacity)
Setup:
OpacityComponent applied to the root entity
NO ModelSortGroupComponent used
Symptoms:
Strong flickering when transparency is active
Triangles within the same mesh render at inconsistent opacity levels
Appears as if per-triangle alpha sorting is broken
Workaround:
Moving the OpacityComponent from the root to each individual USDZ entity removes the per-triangle flicker
Pros:
No conflicts with portals or alpha materials
Scenario 2 — Partially Stable, But Alpha Conflicts
Setup:
OpacityComponent applied per USDZ entity
ModelSortGroupComponent(planarUIAlwaysBehind) applied to portal meshes
Other entities have NO ModelSortGroupComponent
Symptoms:
Frequent alpha blending conflicts:
Transparent surfaces behind other transparent surfaces flicker or disappear
Example: Wine glasses behind glass doors — sometimes neither is rendered, or only one
Even opaque meshes behind glass flicker due to depth buffer confusion
Alpha materials sometimes render portals or the real world behind them, ignoring other geometry entirely
Analysis:
Appears related to internal changes in alpha sorting or depth pre-pass behavior introduced in visionOS 26
Pros:
Most stable setup so far
Cons:
Still unreliable when OpacityComponent is active
Scenario 3 — Layer Separation Attempt (Regression)
Setup:
Same as Scenario 2, but:
Entities with alpha materials moved to separate USDZs
Explicit ModelSortGroupComponent order set (alpha surfaces rendered last)
Symptoms:
Transparent surfaces behind other transparent surfaces flicker or disappear
Depth is completely broken when there's a large transparent surface
Alpha materials sometimes render portals or the real world behind them, ignoring other geometry entirely
Workaround Attempt:
Re-ordering and further separating models did not solve it
Pros:
None — this setup makes transparency unusable
Conclusion
There appears to be a regression in RealityKit’s handling of transparency and sorting in visionOS 26, particularly when:
OpacityComponent is applied dynamically, and
Scenes rely on multiple overlapping transparent materials.
These issues did not exist prior to 26, and the same project (no code changes) behaves correctly on previous versions.
Request
We’d appreciate any insight or confirmation from Apple engineers regarding:
Whether alpha sorting or opacity blending behavior changed in visionOS 26
If there are new recommended practices for combining OpacityComponent with transparent materials
If a bug report already exists for this regression
Thanks in advance!
Game Controller Input Limitations in visionOS Volumetric Windows
Hello Apple Developer Community,
I'm developing a game for visionOS and have encountered significant limitations with game controller input when using volumetric windows (WindowGroup with .volumetric style). I'd appreciate clarification on whether this is expected behavior and any guidance on best practices.
🧩 Issue Summary
When using a DualSense controller with a volumetric window in visionOS, only a subset of controller inputs are available to the app. The remaining inputs appear to be reserved by the system for UI navigation.
✅ Working Inputs (Volumetric Window)
D-Pad (all directions)
L3 (left thumbstick button click)
R3 (right thumbstick button click)
Menu button
Options button
❌ Not Working Inputs (Volumetric Window)
Left thumbstick analog movement (used for UI scrolling instead)
Right thumbstick analog movement (used for UI scrolling instead)
Face buttons (Cross, Circle, Square, Triangle / A, B, X, Y)
Shoulder buttons (L1, R1)
Triggers (L2, R2)
Key observation: When moving the left thumbstick in a volumetric window, the window's UI scrolls vertically instead of sending input to my app's GameController handlers. Similarly, face buttons seem to be reserved for system UI interactions.
⚙️ Implementation Details
I'm using the standard GameController framework:
Connect to controller via GCController.controllers()
Access extendedGamepad profile
Set up valueChangedHandler and pressedChangedHandler for all inputs
Handlers confirmed registered via logging
Working inputs (D-Pad, L3, R3) trigger immediately and consistently
Non-working inputs (thumbsticks, face buttons) never trigger
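For concreteness, the registration follows the standard pattern; a simplified sketch:

import GameController

// Simplified sketch of the handler registration described above.
func configure(controller: GCController) {
    guard let gamepad = controller.extendedGamepad else { return }

    gamepad.dpad.valueChangedHandler = { _, x, y in
        print("d-pad: \(x), \(y)")          // fires in both contexts
    }
    gamepad.leftThumbstick.valueChangedHandler = { _, x, y in
        print("left stick: \(x), \(y)")     // never fires in a volumetric window
    }
    gamepad.buttonA.pressedChangedHandler = { _, _, isPressed in
        print("A pressed: \(isPressed)")    // never fires in a volumetric window
    }
}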
🧠 Critical Finding: ImmersiveSpace Works Perfectly
When testing the exact same code in an ImmersiveSpace (.mixed immersion style), all controller inputs work perfectly:
✅ Both thumbsticks provide full analog input
✅ All face buttons trigger their handlers
✅ All shoulder buttons and triggers work correctly
✅ 100% success rate with no intermittent issues
This suggests the issue isn't with my code, but rather how visionOS handles controller input differently between Volumetric Windows and ImmersiveSpace.
🧪 Test Environment
I created a minimal test project (Controller-Playground) to isolate the issue:
A simple ControllerTester class that registers all GameController handlers
A visual UI showing real-time input state
No game logic, RealityKit physics, or other complexity
Results
In volumetric window: Only D-Pad, L3, R3, Menu, Options work
In ImmersiveSpace: All inputs work perfectly
This confirms the limitation exists at the visionOS platform level, not in app code.
🧰 Attempted Workarounds
I tried the following without success:
Setting GCSupportsControllerUserInteraction = false in Info.plist
Setting UIRequiresFullScreen = true
Changing window styles (.plain, .volumetric)
Polling vs. handler-based input approaches
Various threading models (MainActor, separate thread)
Result: The only way to enable full controller support is to switch to ImmersiveSpace.
❓ Questions for Apple
Is this input reservation behavior in volumetric windows intended and documented?
Are game controllers expected to have limited functionality in volumetric windows while full functionality is reserved for ImmersiveSpace?
Is there a way to request full controller input access in a volumetric window, or is ImmersiveSpace the only option for complete controller support?
Where can I find official documentation about controller input differences between window types?
Are there any APIs or configuration options to disable system controller shortcuts in volumetric windows?
🎯 Impact
This limitation has a significant effect on game design and architecture:
Volumetric windows offer a multitasking-friendly, less immersive experience
ImmersiveSpace provides full controller support but may be more immersive than some games require
Games that only need basic D-Pad and button input can work fine in volumetric windows
Games requiring analog sticks or face buttons must currently use ImmersiveSpace
It would be very helpful if Apple could clarify or reference existing documentation regarding controller input handling in different visionOS window types. If such documentation doesn't exist yet, it might be valuable to include this information in future developer guides or best-practice documents.
🕹 Current Workaround
For now, I'm using:
D-Pad for character movement (digital 8-direction)
R3 (right stick click) as a substitute for the "X" button
This setup allows the game to function within a volumetric window, though full controller support still requires ImmersiveSpace.
📄 Request
If this is expected behavior, I may have simply missed the relevant documentation — could you please point me to any existing resources that explain this design?
If there isn't one yet, it would be great if future visionOS documentation could:
Clearly outline controller input behavior across window types
Provide guidance on when to use Volumetric Windows vs. ImmersiveSpace for games
Consider adding an API option to request full controller access when appropriate
If this is not expected behavior, I'm happy to file a detailed bug report with sample code.
💻 System Information
visionOS: Latest Simulator
Xcode: Latest version
Controller: Sony DualSense
Framework: GameController (standard extendedGamepad profile)
Test project: Minimal reproducible example available
Thank you for any clarification or guidance you can provide. This information would be valuable for many developers working on visionOS games.