I'd like to have a little more control over the transparency of the VideoMaterial. Is there any way to prepare a Shader Graph unlit shader and use it with the VideoMaterial?
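To make the question concrete, here is a minimal sketch of the setup I mean (the file path and plane size are placeholders); OpacityComponent is only an assumed fallback for coarse whole-entity fading, not per-pixel control over the material itself:

import RealityKit
import AVFoundation

// A plane that plays video via VideoMaterial; VideoMaterial itself exposes no opacity control.
let videoURL = URL(fileURLWithPath: "/path/to/video.mp4")   // placeholder path
let player = AVPlayer(url: videoURL)
let videoMaterial = VideoMaterial(avPlayer: player)
let screen = ModelEntity(mesh: .generatePlane(width: 1.6, height: 0.9),
                         materials: [videoMaterial])

// Assumed fallback: fade the whole entity with OpacityComponent (coarse, not per-pixel).
screen.components.set(OpacityComponent(opacity: 0.5))
player.play()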
I posted https://developer.apple.com/forums/thread/809481 yesterday about an issue I discovered with pushWindow in visionOS 26.2 RC, but today I discovered a second problem with pushWindow.
If window A calls pushWindow to present window B, and the user pins window B to a wall, the following unexpected behaviors are observed:
Window B spontaneously disappears.
If the user re-launches the (still running) app from the visionOS home view, both window A and window B appear simultaneously. I assume only window B should be visible at this point, since window A pushed window B.
If the user closes window B, it's now impossible to present window B again. Calls to pushWindow appear to be ignored.
If the user force-quits the app and relaunches it, and pushWindow is called again, window B appears, but window A remains visible.
I also noticed this surprising behavior:
Once pushWindow is in this broken state, it affects every app on the system that subsequently calls pushWindow, not just the app whose pushed window was pinned as described above.
A workaround is to reboot the device, and then the system will behave as expected until the next time the user pins a pushed window.
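For anyone trying to reproduce this, the window setup involved boils down to something like the following sketch (IDs and view contents are simplified placeholders, not the actual app code):

import SwiftUI

@main
struct PushWindowDemoApp: App {
    var body: some Scene {
        // Window A: the main window that pushes window B.
        WindowGroup(id: "windowA") {
            WindowAView()
        }
        // Window B: presented via pushWindow, then pinned to a wall by the user.
        WindowGroup(id: "windowB") {
            Text("Window B")
        }
    }
}

struct WindowAView: View {
    @Environment(\.pushWindow) private var pushWindow

    var body: some View {
        Button("Push Window B") {
            // Hides window A and presents window B in its place.
            pushWindow(id: "windowB")
        }
    }
}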
After adding TextComponents to my Entities on visionOS, I have observed that visualBounds ignores the TextComponents.
The documentation states that TextComponent should render a rounded-rectangle mesh. These meshes are visible on the device, but they don't show up in the debugger ("Capture Entity Hierarchy") and are ignored by visualBounds.
Am I missing something?
static func makeDirection(_ direction: Direction) -> Entity {
    let text = Entity()
    text.name = direction.rawValue
    // Scale and orient the label; Direction is my own enum providing a rotation and a TextComponent.
    text.setScale(SIMD3(repeating: 5), relativeTo: nil)
    text.transform.rotation = direction.rotation
    text.components.set(direction.textComponent)
    return text
}
My workaround is to add a disabled ModelEntity and take its bounds 😬
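Roughly, that workaround looks like this (font size, extrusion depth, and material are arbitrary placeholders; only the bounds are used):

import RealityKit
import UIKit

// Build an invisible ModelEntity with the same string just to measure bounds,
// since visualBounds currently ignores TextComponent meshes.
func measuredBounds(for string: String, parent: Entity) -> BoundingBox {
    let mesh = MeshResource.generateText(
        string,
        extrusionDepth: 0.001,
        font: .systemFont(ofSize: 0.1)
    )
    let proxy = ModelEntity(mesh: mesh, materials: [UnlitMaterial()])
    proxy.isEnabled = false          // never rendered
    parent.addChild(proxy)
    return proxy.visualBounds(relativeTo: parent)
}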
I recently added pushWindow to my app, and I discovered that in visionOS 26.2 RC (23N301), pushWindow followed by dismissWindow no longer works as expected.
Specifically, if the user moves the pushed window, then when the pushed window is later dismissed, the parent window's position isn't aligned with the pushed window's new position. Its original position is restored instead.
Curiously, the bug only happens when an app is launched from the visionOS home view, and not when an app is launched from Xcode. It also doesn't happen in the visionOS 26.2 simulator.
Another interesting detail: while the parent window is hidden, if the user long-presses the Digital Crown and then dismisses the pushed window, the parent window's position seems to be immune to the Digital Crown scene reorientation; it's restored to its original real-world position.
Demo video: https://youtu.be/zR3t2ON3Wz0
I've submitted feedback as FB21287011 with a sample app and detailed repro steps.
Has anyone else encountered this issue already and figured out a workaround? It would be nice if I could get pushWindow to work correctly in my app.
Thanks everybody! 😀
We're trying to switch from main camera access with ARKit to screen capture with passthrough, but we're running into some issues that are proving difficult to debug.
We have set up a broadcast extension and added some logging in the SampleHandler, but nothing appears in the console and the recording never starts. We also set up the picker, and our extension shows up in Control Center as one of the choices, but tapping Start results in it stopping less than a second later.
The only, rather contradictory, messages we see in Console.app are the following:
[INFO] -[RPRecordingManager getSystemBroadcastExtensionInfo:]_block_invoke:1333 Extension has passthrough license
and immediately after:
[INFO] -[RPRecordingManager getSystemBroadcastExtensionInfo:]_block_invoke:1336 Extension does not have passthrough license
I'm trying to develop an app that broadcasts what the user sees (previously we were using main camera access), and now we'd like to investigate this option.
I have set up the broadcast extension and added the picker. When I tap my button, I can see my broadcast extension in the list of options in Control Center, but once I tap Start, it stops after roughly one second.
I'm not able to get anything in the console from my SampleHandler (no prints, no logs, nothing).
I can, however, see some contradictory information in Console.app (one message right after the other):
[INFO] -[RPRecordingManager getSystemBroadcastExtensionInfo:]_block_invoke:1333 Extension has passthrough license
[INFO] -[RPRecordingManager getSystemBroadcastExtensionInfo:]_block_invoke:1336 Extension does not have passthrough license
We have the enterprise license and the capability, and I added the capability to the extension target as well.
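For reference, the handler in the broadcast upload extension is essentially the Xcode template plus logging, and none of these log lines ever appear (the subsystem string is just a placeholder):

import ReplayKit
import CoreMedia
import OSLog

class SampleHandler: RPBroadcastSampleHandler {
    private let log = Logger(subsystem: "com.example.broadcast", category: "SampleHandler")

    override func broadcastStarted(withSetupInfo setupInfo: [String: NSObject]?) {
        log.info("broadcastStarted")          // never shows up in Console
    }

    override func broadcastFinished() {
        log.info("broadcastFinished")
    }

    override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer,
                                      with sampleBufferType: RPSampleBufferType) {
        if sampleBufferType == .video {
            log.debug("received video sample buffer")
        }
    }
}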
Hi all,
I am currently developing a game in Unity for visionOS, and for my specific use case I'd prefer to use the PSVR2 controllers as the source of the raycast for menu selection instead of the default visionOS gaze. Is there a way to access the IMU of the PSVR2 controllers to do this, rather than just using eye gaze + controller click for selection? Is there a specific configuration for GCController from within Unity, maybe?
Thank you!
I exported some USD assets from IsaacSim, but they are not showing up correctly on my Apple Vision Pro.
Even though the mesh appears to be the correct color in Finder and the Diffuse Color looks correct, the object still renders plain gray. It should be green!
Is there any interest in this forum from those developing for the spatial web and Safari? I can't seem to find any relevant posts here.
I'm capturing a room via the RoomPlan API and would like to access the depth map (sceneDepth) or smoothed depth map (smoothedSceneDepth) from my own ARSession provided to the RoomCaptureSession.
But both depth maps are empty when the delegates fire, and I haven't found a solution yet. Is this even possible? I haven't found any documentation of what RoomCaptureSession overrides in the ARSession when I provide my own ARSession instance.
Here is an example code snippet of what I'm trying to do:
import ARKit
import RoomPlan

final class RoomScanner: NSObject, RoomCaptureSessionDelegate, ARSessionDelegate {
    private let arSession = ARSession()
    private lazy var roomPlanCaptureSession = RoomCaptureSession(arSession: arSession)

    func start() {
        let arConfig = ARWorldTrackingConfiguration()

        // Create frame semantics for the ARConfiguration used by the ARSession.
        var semantics: ARWorldTrackingConfiguration.FrameSemantics = []
        if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
            semantics.insert(.sceneDepth)
        }
        if ARWorldTrackingConfiguration.supportsFrameSemantics(.smoothedSceneDepth) {
            semantics.insert(.smoothedSceneDepth)
        }
        arConfig.frameSemantics = semantics

        // Set delegates.
        roomPlanCaptureSession.delegate = self
        arSession.delegate = self

        // Check whether the device supports a depth map, then run the ARSession.
        if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
            arSession.run(arConfig)
        } else {
            print(".sceneDepth is unsupported.")
        }

        // Run the RoomPlan scan with a default configuration.
        let captureConfig = RoomCaptureSession.Configuration()
        roomPlanCaptureSession.run(configuration: captureConfig)
    }

    // Trying to get sceneDepth from the ARSession delegate.
    public func session(_ session: ARSession, didUpdate frame: ARFrame) {
        print("session delegate capture: sceneDepth: \(String(describing: frame.sceneDepth))")
        // Prints: session delegate capture: sceneDepth: nil
    }
}
Also, in this video from 2023 it is said that I can pass a custom ARSession to RoomPlan:
Explore enhancements to RoomPlan - Video
Quote 3:00: Here is the init and stop function in previous RoomPlan. And here is how you pass over a custom ARSession to init function. Any custom ARSession with ARWorldTrackingConfiguration will be honored inside RoomCaptureSession.
Anyway, I welcome any input. Maybe I'm doing something wrong. :)
I have this problem on visionOS: when I dismiss and reopen a window that uses an ImagePresentationComponent, the reopened window is missing the resize UI elements when I look at the window corners. The rest of the window UI elements (drag, close, ...) are there. Resizing was possible before the window was dismissed.
The code is something like this:
WindowGroup(id: "image-display-window", ...) { ...
}
.windowResizability(.automatic)
.windowStyle(.plain)
I call dismissWindow() from the window's view and it is dismissed correctly.
Then I call openWindow(id: "image-display-window", value: data) from another view to reopen it. It reopens, but the ability to resize is missing.
Does anyone know how to fix this?
Thanks.
Environment Versions
・macOS 15.6.1
・visionOS 26.0.1
・Xcode 16.1 or 26.0.1
・Unity 6000.2.9f1
・Apple.Core 3.2.0
・Apple.PHASE 1.2.7
・PolySpatial 2.4.2
With the above environment, after installing Apple.PHASE into Unity and building to a visionOS device, audio plays and distance attenuation works, but Early Reflection and Late Reverb produce no audible change even when they are enabled and their parameters are adjusted.
What is required to make Early Reflection and Late Reverb take effect in a visionOS device build?
Actions taken
・Created a SoundEvent.
・In Composer, created a Sampler and a SpatialMixer, attached an AudioClip to the Sampler, and enabled Direct Path, Early Reflection, and Late Reverb on the SpatialMixer.
・Attached a PHASE Source to the object to be played, attached the created SoundEvent to it, and set non-zero values for Early Reflection and Late Reverb.
・Attached a PHASE Listener to the Main Camera and set the ReverbPreset to a value other than None.
・In Project Settings > Audio, set the Spatializer Plugin to PHASE Spatializer.
・From there, built for visionOS.
I have an iOS app that can display a USDZ model downloaded from the Internet (and cached locally) via an ARView.
I would like to light that model with an image based light (IBL) also downloaded from the Internet.
However, as far as I can tell, ARView can only create an IBL from a resource that has been compiled into the Xcode project and loaded with EnvironmentResource(named:in:) or EnvironmentResource.load(named:in:).
Is there a way to create an EnvironmentResource from an HDRI via a file URL to use in ARView in iOS?
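For context, the bundle-based path that does work looks roughly like this; the resource name is a placeholder, and the comment marks what I'm actually after:

import RealityKit

func applyBundledIBL(to arView: ARView) throws {
    // Works today: an IBL compiled into the app bundle ("StudioHDRI" is a placeholder name).
    let ibl = try EnvironmentResource.load(named: "StudioHDRI")
    arView.environment.lighting.resource = ibl
    // What I'm after is an equivalent that accepts a file URL to a downloaded .hdr/.exr,
    // so the IBL doesn't have to ship inside the app bundle.
}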
Hi,
When viewing a spatial photo in the Apple Vision Pro Photos app, you can tap the immersive icon in the top right corner to transition from the window presenting the image as spatial3d to an immersive photo scene with spatial3DImmersive, where the window borders disappear. Could someone explain how to achieve that? I tried to do it, but once I transition from spatial3d to spatial3DImmersive I can still see a rectangle around the spatial image.
Thanks.
Hi team,
I believe I’ve found a registration issue between ARFrame.sceneDepth and ARFrame.capturedImage when using high-resolution frame capture on a 2022 iPad Pro (6th gen).
When enabling high-resolution capture:
if let highResFormat = ARWorldTrackingConfiguration.recommendedVideoFormatForHighResolutionFrameCapturing {
    config.videoFormat = highResFormat
}
…
arView.session.captureHighResolutionFrame { ... }
the depth map provided by ARFrame.sceneDepth no longer aligns correctly with the corresponding high-resolution capturedImage.
This misalignment results in consistently over-estimated distance measurements in my app (which relies on mapping depth to 2D pixel coordinates).
iPad Pro (6th gen): misalignment occurs only when capturing high-resolution frames.
iPhone 16 Pro: depth is correctly registered for both standard and high-resolution captures.
It appears the camera intrinsics, specifically the FOV, change between the “regular” resolution stream and the high-resolution capture on the iPad. My suspicion is that the depth data continues using the intrinsics of the lower resolution stream, resulting in an unregistered depth-to-RGB mapping.
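For context, the app's depth-to-distance math is just the standard pinhole back-projection, so it depends entirely on which intrinsics are paired with which image; here is a simplified sketch (not the exact app code):

import simd

// Simplified pinhole back-projection of a depth sample into camera space
// (ignores ARKit's axis sign conventions; only magnitudes matter for distances).
// If `intrinsics` comes from the low-res stream while (u, v) are pixel
// coordinates in the high-res capturedImage, the result is biased, which
// matches the over-estimated distances described above.
func cameraSpacePoint(u: Float, v: Float, depth: Float,
                      intrinsics: simd_float3x3) -> SIMD3<Float> {
    let fx = intrinsics[0][0], fy = intrinsics[1][1]
    let cx = intrinsics[2][0], cy = intrinsics[2][1]
    return SIMD3<Float>((u - cx) / fx * depth,
                        (v - cy) / fy * depth,
                        depth)
}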
Once I have the iPad in hand again, I will confirm whether camera.intrinsics or FOV differ between the low-res and high-res frames.
Is this a known issue with high-resolution frame capture on the 2022 iPad Pro? If not, I’m happy to provide some more thorough sample code.
Thanks for your time!
Why is https://developer.apple.com/cn/augmented-reality/tools/ no longer available? What should we use now, in place of Reality Converter, to convert models?
Apple's WWDC video What’s new for the spatial web says the spatial-backdrop markup may change as it goes through the standards process (at 27:26 mark).
I have started adding spatial-backdrops to web pages, so I want to keep an eye out for status updates by Apple and follow the standards progress.
Is there any place I can keep an eye on this standards process?
Has Apple announced any feature updates or news on spatial-backdrops?
After updating to visionOS 26.2 Beta 2 (and Beta 3), I'm unable to establish a spatial connection to Vision Pro. This was working fine before the update.
To test, I've created a fresh spatialApp project from the Xcode template with zero modifications, but I'm hitting the same issue - the Vision Pro is discovered but won't connect.
Am I forgetting to update the config somewhere? Any ideas what might be causing this and how to fix it?
Thanks!
Warning: -[NSWindow makeKeyWindow] called on <NSWindow: 0xa1f811900> windowNumber=1b9 which returned NO from -[NSWindow canBecomeKeyWindow].
((processConfiguration != nil && configuration != nil) || (processConfiguration == nil && configuration == nil)) - /AppleInternal/Library/BuildRoots/4~CBS0ugAIF7BrQZjLe6r0lhPXO4GJmNDTovxYoV0/Library/Caches/com.apple.xbs/Sources/ExtensionKit/ExtensionKit/Source/HostViewController/Internal/EXHostSessionDriver.m:80: `processConfiguration` and `configuration` must be both non-nil or both nil
Unable to obtain a task name port right for pid 415: (os/kern) failure (0x5)
CCContextDeviceGroup.mm(291):+[CCContextDeviceGroup checkBinaryArchivesForDevice:withBundle:]:
Failed to find any binary shader archive
My development team admin requested the Enterprise API for camera access on the Vision Pro. The request was granted, we received a license file, and we got instructions with the next steps for integrating it.
We did the following: added the capability, created a new provisioning profile with this access, added the entitlement to the Info.plist and the .entitlements file, replaced the dummy license file with the one we were sent, and confirmed that the bundle identifier and development certificate match. Camera access is still not granted for some reason.
"Main Camera Access" shows up in our Signing & Capabilities tab, and we also added the NSMainCameraDescription key to the Info.plist and allow access when opening the app. None of this works.
Even when I download and run the sample project for "Accessing the Main Camera" and follow all the exact instructions here: https://developer.apple.com/documentation/visionos/accessing-the-main-camera, I am simply unable to receive camera frames. Not in my app, and not in the sample app that I downloaded and ran on the Vision Pro after replacing the dummy license file.
Hi there,
I received an enterprise license file to enable the enhanced object tracking configuration for the Vision Pro. My account is part of the team that was granted permission by Apple to use this capability. Unfortunately, although I followed the guide, I cannot find the Object Tracking capability when I try to add it to my project. Other capabilities, such as Main Camera on the Vision Pro, are there, but not Object Tracking. I am using Xcode 26.1 and visionOS 26.1. What am I missing here?
Thanks in advance,
Matthias