Does visionOS 2 still prompt the user with a permission alert when a full immersive space is presented?
In visionOS 1, the first time an app presented an immersive space, the user was prompted with an alert to grant permission. openImmersiveSpace returned an error result if the user opted not to grant permission, so it was important to handle this case correctly.
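For context, here's a sketch of how my app handles that result (the space identifier is a placeholder):

import SwiftUI

struct EnterImmersiveSpaceButton: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace

    var body: some View {
        Button("Enter") {
            Task {
                switch await openImmersiveSpace(id: "MyImmersiveSpace") {
                case .opened:
                    break // the immersive space was presented
                case .userCancelled:
                    break // the user declined the permission alert (visionOS 1)
                case .error:
                    break // the space could not be opened for another reason
                @unknown default:
                    break
                }
            }
        }
    }
}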
In visionOS 1, the Settings > Developer menu also had an option to reset the user's immersive space permission prompting state so developers could test this interaction flow.
In visionOS 2, I no longer see the full immersive space permissions alert. I can't remember if I saw it once, the first time visionOS 2.0 beta was installed, or if I never saw it at all. The Settings > Developer menu no longer has an option to reset the permission prompting state. I can't find any way to test the interaction flow in my app to make sure that it will work correctly for users.
Does visionOS 2 no longer ask for full immersive space permission at all? I can't find this change documented anywhere.
If visionOS 2 does prompt the user for permission, is there any way to reproduce and test this interaction flow so I can make sure my app handles it correctly?
Thanks for taking the time to answer this question.
This question is about USD animations playing correctly in macOS Preview but not with RealityKit on visionOS.
I have a USD file created with 3D Studio Max that contains mesh-based smoke animation:
https://drive.google.com/file/d/1L7Jophgvw0u0USSv-_0fPGuCuJtapmzo/view
(5.6 MB)
Apple's macOS 14.5 Preview app is able to play the animation correctly:
However, when a visionOS app built with Xcode 16.0 beta 3 (16A5202i) uses RealityKit on visionOS 2.0 beta 4 to load that same USD file, and Entity/playAnimation is called, the animation does not play as expected:
This same app is able to successfully play animation of a hierarchy of solid objects read from a different USD file.
When I inspect the RealityKit entities loaded from the USD file, the ground plane entity is a ModelEntity, as expected, but the smoke entity is a plain Entity with no associated geometry.
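For reference, the playback code is essentially this sketch, assuming the file is bundled as RealityKit content (the scene name is a placeholder):

import RealityKit
import RealityKitContent

func loadAndPlaySmoke() async throws -> Entity {
    let entity = try await Entity(named: "SmokeScene", in: realityKitContentBundle)
    // Play every animation found in the USD file, looping indefinitely.
    for animation in entity.availableAnimations {
        entity.playAnimation(animation.repeat(duration: .infinity))
    }
    return entity
}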
Why is it that macOS Preview can play the animation in the file, but RealityKit cannot?
Thank you for considering this question.
In iOS 18.0, UICollectionView/scrollToItem(at:at:animated:) causes the collection view to scroll with an awkward, stuttering animation at about 4 frames per second on the device.
This behavior is not observed in iOS 15, 16, or 17.
I submitted FB13986989 about this issue 6 weeks ago when I first observed it in iOS 18 beta 1, but I have not received a response.
The issue has not been fixed in iOS 18.0 beta 2, 3, or 4.
Is this a known issue? Does Apple intend to fix it?
Assuming Apple has not prioritized fixing this bug, could a UIKit engineer suggest a workaround?
My app relies on scrollToItem(at:at:animated:) working properly and now looks awful on iOS 18.
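In the meantime, the workaround I'm experimenting with drives the scroll with a UIView animation instead of the built-in animation; I'd welcome a better suggestion (collectionView and indexPath are from my app):

// Perform the non-animated scroll inside a UIView animation block so the
// resulting contentOffset change is animated by Core Animation instead.
UIView.animate(withDuration: 0.3) {
    collectionView.scrollToItem(at: indexPath, at: .centeredVertically, animated: false)
    collectionView.layoutIfNeeded()
}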
Thank you for any assistance you can provide.
Using Reality Composer Pro 2.0, I created a simple shader graph that displays a texture on an unlit surface:
On visionOS 2 beta, I can successfully use ShaderGraphMaterial(named:from:in:) to load that shader graph material and assign it to a model entity.
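For reference, the loading code is roughly this (the material path and file name are placeholders):

import RealityKit
import RealityKitContent

// modelEntity: a previously loaded ModelEntity.
let material = try await ShaderGraphMaterial(
    named: "/Root/UnlitTextureMaterial",
    from: "Scene.usda",
    in: realityKitContentBundle
)
modelEntity.model?.materials = [material]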
However, on visionOS 1.2 and earlier, either in Simulator or on the device, ShaderGraphMaterial(named:from:in:) fails and I see the following logged to the console:
If I experimentally open the same project using Reality Composer Pro 1.0 and delete and recreate exactly the same nodes as above, then ShaderGraphMaterial(named:from:in:) works as expected on visionOS 1.2.
Is it a known issue that Reality Composer Pro 2 can't be used with visionOS 1?
Is this intentional behavior?
I've submitted feedback as FB14828873, including a sample project and repro steps.
If possible, I would appreciate guidance from an Apple engineer, like "This is a known issue for [list of node types]" or "Reality Composer Pro 2 is not supported for visionOS 1 development, please refer to [documentation]" or "We recommend [workaround]."
Thank you.
In my app, I have an ARView that has cameraMode set to nonAR.
I occasionally hide the ARView when it is not needed and reveal it again later.
While the ARView is hidden, I'd like to pause the animation to save iPhone battery life. I'd also like to do this when I know that animation in my scene has paused and the contents of the view, although still visible, are static.
This was possible using SceneKit, but I can't seem to find an equivalent way to do it using RealityKit.
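For comparison, here's the SceneKit equivalent I have in mind, where SCNView exposes this directly:

import SceneKit

// Pause or resume an SCNView's rendering; pausing playback stops the view
// from advancing and re-rendering the scene.
func setRenderingPaused(_ paused: Bool, for scnView: SCNView) {
    scnView.isPlaying = !paused
}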
At least as of iOS 18, a hidden ARView with an empty scene appears to use approximately 30% of the CPU.
How can I pause ARView so that it won't use the battery unnecessarily?
Thank you for considering this question.
My experience has been that ModelEntity(named:in:) can be used to load a USD file with a simple structure consisting of entities and model entities, and, critically, it will flatten the entity hierarchy down to a single ModelEntity, presumably reducing the number of draw calls.
However, can anyone verify that the following is true?
If ModelEntity(named:in:) is used to load a USD file from a RealityKit content bundle, it may fail when the USD file contains more complex data, such as shader graph material definitions, or perhaps for some other reason. I am not sure.
AND the error that ModelEntity(named:in:) throws in this case is
Cannot load RealityKitContent entity: Failed to find resource with name "<name>" in bundle
which would literally suggest that the file does not exist, instead of what I assume the error actually is, which is "the file exists but its entity hierarchy could not be flattened to a single ModelEntity" ?
Is that an accurate description of the known behavior of ModelEntity(named:in:)?
I understand that I could use Entity(named:in:) instead, without the flattening feature. My question is really more about the seemingly misleading error message.
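For concreteness, here's the comparison I mean (the scene name is a placeholder):

// Flattened load; this can throw the misleading "Failed to find resource
// with name "Scene" in bundle" error even though Scene.usda exists.
let model = try await ModelEntity(named: "Scene", in: realityKitContentBundle)

// Unflattened load of the same file; this succeeds.
let entity = try await Entity(named: "Scene", in: realityKitContentBundle)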
Thank you for any clarification you can provide.
In Reality Composer Pro 2.0 (448.120.2), if I use the "Create new scene in the project" button more than once, this feature doesn't seem to work.
The second time I use "Create new scene in the project", Reality Composer Pro displays a new empty USDA file in the project browser with the wrong icon (a yellow document icon instead of a 3D box icon).
Also, it doesn't create the new scene's USDA file on disk, display the scene in the entity hierarchy browser, or create a new tab.
Consequently, whenever I want to add more than one new scene to my project, I have to repeatedly quit and restart Reality Composer Pro.
Just now, I had to quit and restart RCP six times in order to create seven new scenes.
Is this a known issue?
Repro steps:
In a Reality Composer Pro project, create a new folder using the "add folder" icon button in the project browser.
Inside the new folder, click the "Create new scene in the project" icon button.
Click the "Create new scene in the project" icon button a second time.
Expected behavior:
A new USDA file is created on disk. The new USDA file's root entity appears in the entity hierarchy browser and a corresponding new tab is created.
Observed behavior:
The USDA file for this scene is not created on disk, and the scene does not appear in the entity hierarchy browser or in a new tab. In the project browser view, a yellow document icon is shown, but it does not correspond to an actual USDA file.
Thank you for any insight you can provide about this issue.
I'm watching the session video "What's new in Xcode 26", and when I try out the new #Playground macro (in the context of a local package in my project), I see only this activity indicator for about 10-15 minutes:
Eventually I do see this error:
Despite the error, my local package does not import SwiftUI and has no dependencies other than Foundation.
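For reference, the playground I'm testing is as minimal as this (following the usage shown in the session):

import Playgrounds

#Playground {
    let sum = 1 + 1
    print(sum)
}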
I checked the Xcode 26 release notes and I don't see this issue mentioned.
A clean build of my project takes 55 seconds. Is it expected behavior that #Playground would require 10+ minutes to spin up?
Is there anything I can do to make the new #Playground macro work correctly?
Thank you.
On iPadOS 26 beta, the navigation bar can appear inset underneath the status bar (FB18241928).
This bug does not happen on iOS 18.
This bug occurs when a full screen modal view controller without a status bar is presented, the device orientation changes, and then the full screen modal view controller is dismissed.
This bug appears to happen only on iPad, and not on iPhone.
This bug happens both in the simulator and on the device.
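Here's a sketch of the setup I'm describing, assuming the modal hides the status bar via prefersStatusBarHidden:

import UIKit

// A full screen modal view controller without a status bar.
final class FullScreenModalViewController: UIViewController {
    override var prefersStatusBarHidden: Bool { true }
}

// From the presenting view controller: present, rotate the device,
// then dismiss; afterwards the presenter's navigation bar can appear
// inset underneath the status bar.
let modal = FullScreenModalViewController()
modal.modalPresentationStyle = .fullScreen
present(modal, animated: true)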
Thank you for investigating this issue.
I've loaded a ShaderGraphMaterial from a RealityKit content bundle and I'm attempting to access the initial values of its parameters using getParameter(handle:), but this method appears to always return nil:
let shaderGraphMaterial = try await ShaderGraphMaterial(named: "MyMaterial", from: "MyFile")
let namedParameterValue = shaderGraphMaterial.getParameter(name: "myParameter")
// This prints the value of the `myParameter` parameter, as expected.
print("namedParameterValue = \(namedParameterValue)")
let handle = ShaderGraphMaterial.parameterHandle(name: "myParameter")
let handleParameterValue = shaderGraphMaterial.getParameter(handle: handle)
// Expected behavior: prints the value of the `myParameter` parameter, as above.
// Observed behavior: prints `nil`.
print("handleParameterValue = \(handleParameterValue)")
Is this expected behavior?
Based on the documentation at https://developer.apple.com/documentation/realitykit/shadergraphmaterial/getparameter(handle:), I'd expect getParameter(handle:) to return the value of the parameter, just as getParameter(name:) does.
I've tested this on iOS 18.5 and iOS 26.0 beta 2.
Assuming getParameter(handle:) is working as designed, is the following ShaderGraphMaterial extension an appropriate workaround, or can you recommend a better approach?
Thank you.
public extension ShaderGraphMaterial {
    /// Reassigns the values of all named material parameters using the handle-based API.
    ///
    /// This works around an issue where, at least as of RealityKit 26.0 beta 2 and
    /// earlier, `getParameter(handle:)` always returns `nil` when used to read the
    /// initial value of a parameter of a shader graph material loaded with
    /// `ShaderGraphMaterial(named:from:in:)`, whereas `getParameter(name:)` works
    /// as expected.
    private mutating func copyNamedParametersToHandles() {
        for parameterName in self.parameterNames {
            if let value = self.getParameter(name: parameterName) {
                let handle = ShaderGraphMaterial.parameterHandle(name: parameterName)
                do {
                    try self.setParameter(handle: handle, value: value)
                } catch {
                    assertionFailure("Cannot set parameter value")
                }
            }
        }
    }
}
TL;DR: RealityKit and Reality Composer Pro aren't forward or backward compatible with each other, and the resulting error message is terse and unhelpful. (FB14828873)
So far, I've been sticking with Xcode 16.4 for development and only using Xcode 26.0 beta experimentally.
Yesterday, I used xcode-select to switch to Xcode 26.0 beta 3 to test it, but I forgot to switch back.
Consequently, this morning I unintentionally used the future Reality Composer Pro (the version included with Xcode 26) to make a small change to a USD file.
Now I realize that, if I'm unlucky, Reality Composer Pro may have silently introduced a small change into the USD file that will make RealityKit fail to read the file on iOS 18 and visionOS 2. In the past, that kind of failure has cost me hours of debugging to track down its source, often a single line in the USD file, because RealityKit can't communicate the problem to me other than with the error "the operation couldn't be completed".
As an analogy, this situation is as if, during regular development (not involving Reality Composer Pro), Xcode didn't warn you about specific API version conflicts, but instead failed with a generic error message, without highlighting the line in your Swift file that was the source of the error.
During regular use, RealityKit generates an excessive amount of internal logging that is not actionable by third-party developers. When developing an iOS RealityKit/ARKit app, this makes the Xcode console challenging to use for everyday work.
(FB19173812)
See screenshots below.
Xcode does have an option for filtering out logging from specific SDKs, but enabling this feature to suppress the logging of RealityKit and related SDKs like PHASE is something developers have to do dozens of times each day. After a year of developing a RealityKit app, this process becomes frustrating.
If SDKs like Foundation, UIKit, and SwiftUI generated as much logging as RealityKit and related SDKs, Xcode's console would be unusable.
Is there any way to disable the logging of RealityKit and PHASE permanently?
Thank you for any help you provide.
In visionOS, is there a way to temporarily hide the window close/position handle at the bottom of a window?
The Safari app does this, so it must be possible.
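In SwiftUI, the closest thing I've found is hiding persistent system overlays, assuming the window bar counts as one, but I'm not sure this is the intended approach:

import SwiftUI

// Hide system overlays for this window's content; on visionOS this
// appears to include the window bar beneath the window.
WindowGroup {
    ContentView()
        .persistentSystemOverlays(.hidden)
}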
I'm writing a RealityKit/ARKit app that runs on iOS.
Starting with Xcode 16.0 beta 1, at least through Xcode 16.1 beta 2 (16B5014f), in the iOS 18 simulator, my app randomly crashes in about 20% of app sessions the first time it attempts to present an ARView.
The crashes seem to occur at multiple points within RealityKit and Metal.
Below, I've included screenshots of the call stacks of the crashes, which occur as a result of both EXC_BAD_ACCESS and assertion failures within RealityKit.
The app only crashes in the iOS 18 simulator, and does not crash in the iOS 17 simulator or earlier.
The app only crashes in the simulator, and does not crash on a device running iOS 18.
Before I investigate further, I'd appreciate it if an Apple engineer could give me a sense of whether these crashes are most likely the result of known issues within RealityKit and/or the simulator, or whether there are probably bugs in my app's code.
I've submitted several feedback issues in the past, and I'd love to submit this issue too, but I expect that I would spend many hours attempting to create a repro case in a sample app. Understandably, I'd rather not spend this time if an Apple engineer could tell me this is a known issue, for example.
Thank you.
New in iOS 17.5 is UIImpactFeedbackGenerator/impactOccurred(at:), which generates haptic feedback at a specific screen location.
https://developer.apple.com/documentation/uikit/uiimpactfeedbackgenerator/4403143-impactoccurred
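For context, I assume basic usage looks like this; per the documentation, the point is interpreted in the coordinate space of the window associated with the generator (locationInWindow is my placeholder):

import UIKit

let generator = UIImpactFeedbackGenerator(style: .medium)
generator.prepare()

// locationInWindow: a CGPoint in the generator's window coordinate space,
// e.g. derived from a touch or Apple Pencil interaction.
generator.impactOccurred(at: locationInWindow)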
Which devices support this?
How does this work if the Taptic Engine has a fixed physical location?