Post

Replies

Boosts

Views

Activity

Reply to Can't build old project on Xcode 26 beta5
I ran into something like this in Beta 1 back in June. At first I thought it was something related to RealityKitContent as that is where Xcode seemed to be stuck. It ended up not having anything to do with RealityKit. One of my example code files demonstrated how to use every ornament anchor. This worked well on previous versions, but Xcode 26 could not compile it. Can you look at your SwiftUI code and see if there is anything with lots of ornaments, toolbars, that sort of thing? You may want to start commenting out sections of the view hierarchy and see if you can get it to build. Once it builds, start re-activating views until you can cause the issue again. That may help you find the view that Xcode is getting stuck on.
Topic: Spatial Computing SubTopic: General Tags:
Aug ’25
Reply to RealityView content.add Called Twice Automatically
I can't reproduce this on my end. I just tried a few of my projects and examples and they are working fine. If I add a print line to the make closure of RealityView, I only see it print once. When I inspect the entity graph, I see what I expect to see. I'm using visionOS 26 Beta 5 and the latest Xcode. Some other things to think about: You can use a feature in Xcode to capture the entity hierarchy. When you do this, do you see duplicate entities in the same hierarchy, or duplicate hierarchies? https://www.gabrieluribe.me/blog/debugging-realitykit-apps-games-visionos Is there anything higher up in your SwiftUI stack that could be causing your RealityView to be created more than once? If you can post some example code showing the issue, I may be able to provide more useful help.
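A minimal sketch of the check described above, with a print in the make closure — the entity name and view name are placeholders:

```swift
import SwiftUI
import RealityKit

struct MakeClosureCheck: View {
    var body: some View {
        RealityView { content in
            // If things are behaving as expected, this prints exactly once
            // per RealityView instance.
            print("RealityView make closure ran")

            let sphere = ModelEntity(mesh: .generateSphere(radius: 0.1))
            sphere.name = "DebugSphere" // placeholder name
            content.add(sphere)
        } update: { _ in
            // The update closure can run many times. Adding entities here
            // unconditionally is a common way to end up with duplicates.
        }
    }
}
```

If the print fires more than once, something above this view in the SwiftUI hierarchy is recreating the RealityView.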
Topic: Spatial Computing SubTopic: General Tags:
Aug ’25
Reply to How to opt out of tinting widget content on visionOS
@DTS Engineer Can you please be specific? I don't see the answer to my question in either of those articles. How do I tell visionOS not to apply a tint color to the background or content of my view? The WWDC session mentions opting out of this when displaying photos, but I don't see how to do that. How do I tell visionOS: "always render this image as it appears, without ever trying to tint it"?
Aug ’25
Reply to Is it possible to load a WKWebView that has 3D rendering (like three.js) in a volumetric window?
We can load a Three.js scene in a WKWebView, but the web view itself will always be a 2D plane. It would work more like a window looking into the 3D space; the Three.js scene can't fill the volume. The only workaround I know is to download a model as a USDZ. If you need to let your users place models in their shared space, this could be an option. Three.js has features to export to USDZ. But the exported file would not be a live Three.js scene anymore, just a USDZ file that users could open with Quick Look.
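A minimal sketch of hosting web content in a SwiftUI window via a UIViewRepresentable wrapper — the URL is a placeholder:

```swift
import SwiftUI
import WebKit

// Minimal WKWebView wrapper. A visionOS window can host this, but the
// web content (including any three.js scene) stays on a flat 2D plane.
struct WebView: UIViewRepresentable {
    let url: URL

    func makeUIView(context: Context) -> WKWebView {
        let webView = WKWebView()
        webView.load(URLRequest(url: url))
        return webView
    }

    func updateUIView(_ uiView: WKWebView, context: Context) {}
}

struct ThreeJSWindow: View {
    var body: some View {
        // Placeholder URL for a page that renders a three.js scene.
        WebView(url: URL(string: "https://example.com/threejs-scene")!)
    }
}
```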
Topic: Spatial Computing SubTopic: General Tags:
Jul ’25
Reply to Look to Scroll
We can use scrollInputBehavior with the .look option. From the WWDC session "What's new in visionOS":

var body: some View {
    ScrollView {
        HikeDetails()
    }
    .scrollInputBehavior(.enabled, for: .look)
}
Topic: Spatial Computing SubTopic: General Tags:
Jul ’25
Reply to VisionOS26 PresentationComponent not working
Hey, I saw your comment on Step Into Vision too. I'm showing the presentation as a result of a tap gesture in my app. I'm not sure that just setting presentation.isPresented = true here is enough. You are essentially telling something to present before the component has been added; I have no idea if that should work. If you look again at the code from my devlog, I use the GestureComponent to set isPresented based on a user tap. I'm also using this in a volume, while you're working in an immersive space. When I tried your code, I saw this error in the Xcode console: "Presentations are not currently supported in Immersive contexts." I thought these were supported, but maybe something has changed in Beta 3.
Topic: Spatial Computing SubTopic: General Tags:
Jul ’25
Reply to GestureComponent does not support DragGesture
Just jumping in to confirm the issue that @eurobob reported. @Vision Pro Engineer suggested adding a minimumDistance, which works for me. There is something else that could be causing issues. As we know, an Entity can only have one instance of a component type assigned at a time. Adding a GestureComponent with Gesture B will overwrite a GestureComponent with Gesture A. I don't see a clear way to add more than one Gesture to a GestureComponent. The docs for this one are essentially empty, so no help there. Is it possible to use this component with more than one gesture?

struct Lab: View {
    var body: some View {
        RealityView { content in
            // Load an entity and set it up for input
            let subject = ModelEntity(
                mesh: .generateBox(size: 0.2, cornerRadius: 0.01),
                materials: [SimpleMaterial(color: .stepGreen, isMetallic: false)]
            )
            subject.name = "Subject"
            subject.components.set(InputTargetComponent())
            subject.components.set(HoverEffectComponent())
            subject.components
                .set(CollisionComponent(
                    shapes: [.generateBox(width: 0.2, height: 0.2, depth: 0.2)],
                    isStatic: false
                ))

            // This works as long as this is the only gesture we use
            let gesture = DragGesture(minimumDistance: 0.001)
                .onChanged { [weak subject] _ in
                    print("Drag Gesture onChanged for \(subject!.name)")
                }
                .onEnded { [weak subject] _ in
                    print("Drag Gesture onEnd for \(subject!.name)")
                }
            let gestureComponent = GestureComponent(gesture)
            subject.components.set(gestureComponent)

            let tapGesture = TapGesture()
                .onEnded { [weak subject] _ in
                    print("Tap Gesture Works for \(String(describing: subject))")
                }
            let gestureComponent2 = GestureComponent(tapGesture)
            subject.components.set(gestureComponent2)

            content.add(subject)
        }
    }
}

If I comment out the second gesture component, then the drag gesture works correctly.
Topic: Graphics & Games SubTopic: RealityKit Tags:
Jul ’25
Reply to Bouncy ball in RealityKit - game
Regarding restitution:

"Even in the last video here: https://stepinto.vision/example-code/collisions-physics-physics-material/ bounce of the ball is very unnatural - stops after 3-4 bounces."

That is because I never set the restitution to a max value. The wood base in that example has a restitution of 1, but the highest value I used on the ball was 0.7.

"Energy Loss: Despite restitution = 1.0 (perfect elasticity), the ball loses ~20-30% energy per bounce"

You could set the restitution to 1.0 on both the balls AND the surface(s) they are bouncing on. This won't necessarily be realistic; the balls will essentially bounce infinitely. You'll have to slightly adjust the restitution on one or both of the entity types to make them feel a bit more natural. Of course, a lot will also depend on your mass and friction values.
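A minimal sketch of setting restitution on both a ball and a floor — the specific friction, restitution, and mass values here are arbitrary starting points to tune by feel:

```swift
import RealityKit

// Restitution near 1.0 on both bodies keeps bounces lively; backing one
// of them off (here the ball, at 0.95) bleeds a little energy per bounce.
let ballMaterial = PhysicsMaterialResource.generate(
    friction: 0.5,
    restitution: 0.95
)
let floorMaterial = PhysicsMaterialResource.generate(
    friction: 0.5,
    restitution: 1.0
)

let ball = ModelEntity(mesh: .generateSphere(radius: 0.05))
ball.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.05)]))
ball.components.set(PhysicsBodyComponent(
    massProperties: .init(shape: .generateSphere(radius: 0.05), mass: 0.1),
    material: ballMaterial,
    mode: .dynamic
))

let floor = ModelEntity(mesh: .generateBox(width: 1, height: 0.02, depth: 1))
floor.components.set(CollisionComponent(shapes: [.generateBox(width: 1, height: 0.02, depth: 1)]))
floor.components.set(PhysicsBodyComponent(
    massProperties: .default,
    material: floorMaterial,
    mode: .static
))
```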
Topic: Spatial Computing SubTopic: General Tags:
Jul ’25
Reply to Can we access a "Locked in Place" value when a window has been locked without being snapped to a surface?
@Vision Pro Engineer I'm not talking about interrupting or overriding scene restoration. I'm talking about adapting the window that the user is currently using with other visionOS and SwiftUI features. In terms of this request, I'd still like to understand why this would be better. I could imagine use cases in which someone might prefer to have your application locked in place, but not in focus mode. This is about user choice. I want to give them the option. I'm not talking about limiting focus mode to locked windows only. But for a lot of uses, it would be a better user experience to auto-enable focus mode for locked windows. Without this, the user has to (1) lock the window, then (2) manually enable focus mode. Giving them the option to do both of these by performing one action is better than always requiring them to do both. I can already deliver the experience that I want with the existing APIs for snapped windows. I want to provide the same experience for free-floating windows. Regardless of intention, users will think about locked windows in exactly the same way they think about snapped windows. The only difference between the two is that one is bound to a surface and the other is free-floating. Other than that, they should behave exactly the same. Without access to this value, they will always have to be treated differently.
Topic: Spatial Computing SubTopic: General Tags:
Jun ’25
Reply to Can we access a "Locked in Place" value when a window has been locked without being snapped to a surface?
@Vision Pro Engineer "Locking in a window in place shouldn't result in visual changes within your application" "so locking-in-place should not be special-cased" These are just your opinions though, and I disagree. I have valid uses where this would make sense. I described one of them in my earlier comment. I have an app where a certain type of window has what I call "focus mode". With visionOS 26 window locking, I would like to enable focus mode for any locked window. Currently the user would have to lock the window, then use controls inside the window to enable focus mode. It would be better if I could offer them the option to enable focus mode when the window is locked. I don't understand why you're pushing back on this. It's a simple feature request for a value that we can already get for snapped windows. Why do you think we shouldn't be able to access the same value for unsnapped, free-floating windows?
Topic: Spatial Computing SubTopic: General Tags:
Jun ’25
Reply to Alternatives to SceneView
RealityKit and RealityView are my suggestions. I think you were on the right track with gestures + RealityView ("I tried to add Gestures to the RealityView on iOS - loading USDZ 3D models worked but the gestures didn't"). We can use SwiftUI gestures with RealityKit entities - things like TapGesture, DragGesture, etc. There is a bit of work needed to make these work with RealityKit:

1. Load your model in a RealityView as an entity
2. Add components to the entity: InputTargetComponent and CollisionComponent are both required to use system gestures with entities
3. The gesture code needs to target entities

Example using targetedToAnyEntity:

var tapExample: some Gesture {
    TapGesture()
        .targetedToAnyEntity() // 3. make sure to use this line to target entities
        .onEnded { value in
            if selected === value.entity {
                // If the same entity is tapped, lower it and deselect.
                selected?.position.y = 0
                selected = nil
            } else {
                // Lower the previously selected entity (if any).
                selected?.position.y = 0
                // Raise the new entity and select it.
                value.entity.position.y = 0.1
                selected = value.entity
            }
        }
}
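A sketch of the loading and component setup, assuming a hypothetical USDZ asset name ("toy_car") and a `selected` entity state like the gesture example uses:

```swift
import SwiftUI
import RealityKit

struct ModelGestureView: View {
    @State private var selected: Entity? = nil

    var body: some View {
        RealityView { content in
            // Load the model as an entity ("toy_car" is a placeholder name).
            if let model = try? await ModelEntity(named: "toy_car") {
                // Both components are required for system gestures to hit
                // the entity. generateCollisionShapes adds the
                // CollisionComponent for us based on the model's meshes.
                model.components.set(InputTargetComponent())
                model.generateCollisionShapes(recursive: true)
                content.add(model)
            }
        }
        .gesture(tapExample) // the targeted tap gesture defined elsewhere in the view
    }
}
```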
Topic: Spatial Computing SubTopic: General Tags:
Jun ’25
Reply to spatial-backdrop feature available yet?
I tried this just after WWDC and it works reasonably well. There is a Safari flag to enable this. Then, look for the option in the menu to the left of the URL. – Joseph
Aug ’25
Reply to Portal crossing causes inconsistent lighting and visual artifacts between virtual and real spaces (visionOS 2.0)
Have you tried using EnvironmentLightingConfigurationComponent? https://developer.apple.com/documentation/realitykit/environmentlightingconfigurationcomponent Apple talked about using this with portal crossing at WWDC 2024: https://developer.apple.com/videos/play/wwdc2024/10103?time=1473
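A minimal sketch of the component in use — the weight value is an arbitrary example to experiment with:

```swift
import RealityKit

// environmentLightingWeight ranges from 0 (no lighting contribution from
// the real-world environment) to 1 (full contribution). Adjusting it as an
// entity crosses the portal can smooth the lighting transition.
var lighting = EnvironmentLightingConfigurationComponent()
lighting.environmentLightingWeight = 0.3 // arbitrary example value

let subject = ModelEntity(mesh: .generateSphere(radius: 0.1))
subject.components.set(lighting)
```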
Topic: Spatial Computing SubTopic: General Tags:
Jul ’25
Reply to When placing a TextField within a RealityViewAttachment, the virtual keyboard does not appear in front of the user as expected.
Boosting this one. Hopefully an Apple engineer can tell us if there is a workaround. I've run into this too. It seems like visionOS places the keyboard relative to the host scene (a volume in my case) instead of near the attachment hosting the TextField.
Topic: Spatial Computing SubTopic: General Tags:
Jul ’25
Reply to Can we access a "Locked in Place" value when a window has been locked without being snapped to a surface?
@Vision Pro Engineer Feedback filed: FB18351716 There are a lot of uses for this. We could adapt the window to this state. For example, hiding other window elements, toolbars, and ornaments. Or provide alternatives to a "locked in place" state.
Topic: Spatial Computing SubTopic: General Tags:
Jun ’25