Reply to RealityView content.add Called Twice Automatically
I can't reproduce this on my end. I just tried a few of my projects and examples and they are working fine. If I add a print line to the make closure of RealityView, I only see it print once. When I inspect the entity graph, I see what I expect to see. I'm using visionOS 26 Beta 5 and the latest Xcode.

Some other things to think about:

You can use a feature in Xcode to capture the entity hierarchy. When you do this, do you see duplicate entities in the same hierarchy, or duplicate hierarchies? https://www.gabrieluribe.me/blog/debugging-realitykit-apps-games-visionos

Is there anything higher up in your SwiftUI stack that could be causing your RealityView to be created more than once?

If you can post some example code showing the issue, I may be able to provide more useful help.
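For reference, this is roughly the sanity check I ran, a minimal sketch with a hypothetical sphere entity; the print should fire exactly once per RealityView creation:

RealityView { content in
    // If the make closure runs twice, this prints twice.
    print("RealityView make closure called")

    // Hypothetical content for the test.
    let sphere = ModelEntity(mesh: .generateSphere(radius: 0.1))
    content.add(sphere)
}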
Topic: Spatial Computing SubTopic: General
Aug ’25
Reply to How to opt out of tinting widget content on visionOS
@DTS Engineer Can you please be specific? I don't see the answer to my question in either of those articles. How do I tell visionOS not to apply a tint color to the background or content of my view? The WWDC session mentions opting out of this when displaying photos, but I don't see how to do that. How do I tell visionOS: "always render this image as it appears, without ever trying to tint it"?
Aug ’25
Reply to Is it possible to load a WKWebView that has 3D rendering (like three.js) in a volumetric window?
We can load a Three JS scene in a WebView, but the WebView itself will always be a 2D plane. It would work more like a window looking into the 3D space. The Three JS scene can't fill the volume.

The only workaround I know of is to download a model as a USDZ. If you need to let your users place models in their shared space, this could be an option. Three JS has features to export to USDZ. But this exported file would not be a live Three JS scene anymore, just a USDZ file that users could open with QuickLook.
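On the native side, a sketch of that last step: once the exported USDZ is saved locally, SwiftUI's quickLookPreview modifier can hand it to QuickLook. ModelViewer and modelURL here are hypothetical stand-ins for however you download the file:

import QuickLook
import SwiftUI

struct ModelViewer: View {
    @State private var previewURL: URL? = nil
    let modelURL: URL // local file URL of the exported .usdz (hypothetical)

    var body: some View {
        Button("View Model") {
            // Setting the binding presents the QuickLook preview.
            previewURL = modelURL
        }
        .quickLookPreview($previewURL)
    }
}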
Topic: Spatial Computing SubTopic: General
Jul ’25
Reply to Look to Scroll
We can use scrollInputBehavior with the .look option. From the WWDC session "What's new in visionOS":

var body: some View {
    ScrollView {
        HikeDetails()
    }
    .scrollInputBehavior(.enabled, for: .look)
}
Topic: Spatial Computing SubTopic: General
Jul ’25
Reply to VisionOS26 PresentationComponent not working
Hey, I saw your comment on Step Into Vision too. I'm showing the presentation as a result of a tap gesture in my app. I'm not sure if just setting presentation.isPresented = true here is enough. You are essentially telling something to present before the component has been added. I have no idea if that should work.

If you look again at the code from my devlog, I use the GestureComponent to set isPresented based on a user tap. I'm also using this in a volume. You're working in an immersive space. When I tried your code, I saw this error in the Xcode console:

"Presentations are not currently supported in Immersive contexts."

I thought these were supported, but maybe something has changed in Beta 3.
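A rough sketch of the approach from my devlog, written from memory (treat the exact PresentationComponent initializer as an assumption, these APIs are still in beta). The key point is that the component is added first, closed, and a tap flips it later:

// Add the presentation up front, with nothing showing yet.
entity.components.set(
    PresentationComponent(
        configuration: .popover(arrowEdge: .bottom),
        content: Text("Hello").padding()
    )
)

// Flip isPresented only in response to a tap, after the component exists.
let tap = TapGesture().onEnded { [weak entity] _ in
    entity?.components[PresentationComponent.self]?.isPresented = true
}
entity.components.set(GestureComponent(tap))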
Topic: Spatial Computing SubTopic: General
Jul ’25
Reply to GestureComponent does not support DragGesture
Just jumping in to confirm the issue that @eurobob reported. @Vision Pro Engineer suggested adding a minimumDistance, which works for me.

There is something else that could be causing issues. As we know, an Entity can only have one instance of a component type assigned at a time. Adding a GestureComponent with Gesture B will overwrite a GestureComponent with Gesture A. I don't see a clear way to add more than one gesture to a GestureComponent. The docs for this one are essentially empty, so no help there. Is it possible to use this component with more than one gesture?

struct Lab: View {
    var body: some View {
        RealityView { content in
            // Load an entity and set it up for input
            let subject = ModelEntity(
                mesh: .generateBox(size: 0.2, cornerRadius: 0.01),
                materials: [SimpleMaterial(color: .stepGreen, isMetallic: false)]
            )
            subject.name = "Subject"
            subject.components.set(InputTargetComponent())
            subject.components.set(HoverEffectComponent())
            subject.components.set(
                CollisionComponent(
                    shapes: [.generateBox(width: 0.2, height: 0.2, depth: 0.2)],
                    isStatic: false
                )
            )

            // This works as long as this is the only gesture we use
            let gesture = DragGesture(minimumDistance: 0.001)
                .onChanged { [weak subject] _ in
                    print("Drag Gesture onChanged for \(subject!.name)")
                }
                .onEnded { [weak subject] _ in
                    print("Drag Gesture onEnd for \(subject!.name)")
                }
            let gestureComponent = GestureComponent(gesture)
            subject.components.set(gestureComponent)

            // Setting a second GestureComponent replaces the first one
            let tapGesture = TapGesture()
                .onEnded { [weak subject] _ in
                    print("Tap Gesture Works for \(String(describing: subject))")
                }
            let gestureComponent2 = GestureComponent(tapGesture)
            subject.components.set(gestureComponent2)

            content.add(subject)
        }
    }
}

If I comment out the second gesture component, then the drag gesture works correctly.
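One avenue I haven't verified: since GestureComponent appears to wrap a single gesture, SwiftUI's gesture composition might let you pack several into that one slot. A sketch, assuming GestureComponent accepts any Gesture type (untested; behavior may differ from two separate components):

// Compose both gestures into one, then wrap that in a single component.
let combined = DragGesture(minimumDistance: 0.001)
    .onChanged { _ in print("drag changed") }
    .simultaneously(with:
        TapGesture().onEnded { _ in print("tap ended") }
    )
subject.components.set(GestureComponent(combined))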
Topic: Graphics & Games SubTopic: RealityKit
Jul ’25
Reply to Bouncy ball in RealityKit - game
Regarding restitution:

"Even in the last video here: https://stepinto.vision/example-code/collisions-physics-physics-material/ bounce of the ball is very unnatural - stops after 3-4 bounces."

That is because I never set the restitution to a max value. The wood base in that example has a restitution of 1, but the highest value I used on the ball was 0.7.

"Energy Loss: Despite restitution = 1.0 (perfect elasticity), the ball loses ~20-30% energy per bounce"

You could set the restitution to 1.0 on both the balls AND the surface(s) they are bouncing on. This won't necessarily be realistic. The balls will essentially bounce infinitely. You'll have to slightly adjust the restitution on one or both of the entity types to make them feel a bit more natural. Of course, a lot will also depend on your mass and friction values.
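For context, restitution lives on the physics material. A minimal sketch with made-up mass and friction numbers (the ball entity and all values here are hypothetical):

// A very bouncy ball: restitution 1.0 means almost no energy loss per bounce.
let ballMaterial = PhysicsMaterialResource.generate(
    friction: 0.5,
    restitution: 1.0
)
ball.components.set(
    PhysicsBodyComponent(
        massProperties: PhysicsMassProperties(
            shape: .generateSphere(radius: 0.05),
            mass: 0.05
        ),
        material: ballMaterial,
        mode: .dynamic
    )
)
// Dial the ball's restitution back toward 0.7 for a more natural feel.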
Topic: Spatial Computing SubTopic: General
Jul ’25
Reply to Can we access a "Locked in Place" value when a window has been locked without being snapped to a surface?
@Vision Pro Engineer I'm not talking about interrupting or overriding scene restoration. I'm talking about adapting the window that the user is currently using with other visionOS and SwiftUI features.

In terms of this request:

"I'd still like to understand why this would be better. I could imagine use cases in which someone might prefer to have your application locked in place, but not in the focus mode."

This is about user choice. I want to give them the option. I'm not talking about limiting focus mode to locked windows only. But for a lot of uses, it would be a better user experience to auto-enable focus mode for locked windows. Without this, the user has to (1) lock the window, then (2) manually enable focus mode. Giving them the option to do both of these just by performing one action is better than always requiring them to do both.

I can already deliver the experience that I want with the existing APIs for snapped windows. I want to provide the same experience for free-floating windows. Regardless of intention, users will think about locked windows in exactly the same way they will think about snapped windows. The only difference between these two is one is bound to a surface and the other is free floating. Other than that, they should behave exactly the same. Without access to this value, they will always have to be treated differently.
Topic: Spatial Computing SubTopic: General
Jun ’25
Reply to Can we access a "Locked in Place" value when a window has been locked without being snapped to a surface?
@Vision Pro Engineer

"Locking in a window in place shouldn't result in visual changes within your application"

"so locking-in-place should not be special-cased"

These are just your opinions though, and I disagree. I have valid uses where this would make sense. I described one of them in my earlier comment. I have an app where a certain type of window has what I call "focus mode". With visionOS 26 window locking, I would like to enable focus mode for any locked window. Currently the user would have to lock the window, then use controls inside the window to enable focus mode. It would be better if I could offer them the option to enable focus mode when the window is locked.

I don't understand why you're pushing back on this. It's a simple feature request for a value that we can already get for snapped windows. Why do you think we shouldn't be able to access the same value for unsnapped, free-floating windows?
Topic: Spatial Computing SubTopic: General
Jun ’25
Reply to Alternatives to SceneView
RealityKit and RealityView are my suggestions. I think you were on the right track with gestures + RealityView:

"I tried to add Gestures to the RealityView on iOS - loading USDZ 3D models worked but the gestures didn't."

We can use SwiftUI gestures with RealityKit entities. Things like TapGesture, DragGesture, etc. There is a bit of work needed to make these work with RealityKit:

1. Load your model in a RealityView as an entity.
2. Add components to the entity: InputTargetComponent and CollisionComponent are both required to use system gestures with entities.
3. The gesture code needs to target entities. Example using targetedToAnyEntity:

var tapExample: some Gesture {
    TapGesture()
        .targetedToAnyEntity() // 3. make sure to use this line to target entities
        .onEnded { value in
            if selected === value.entity {
                // If the same entity is tapped, lower it and deselect.
                selected?.position.y = 0
                selected = nil
            } else {
                // Lower the previously selected entity (if any).
                selected?.position.y = 0
                // Raise the new entity and select it.
                value.entity.position.y = 0.1
                selected = value.entity
            }
        }
}
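And a minimal sketch of steps 1 and 2, assuming a hypothetical toy.usdz in the app bundle, wired up to the tapExample gesture above:

RealityView { content in
    // 1. Load the model as an entity.
    if let toy = try? await Entity(named: "toy") {
        // 2. Both components are required for system gestures to reach the entity.
        toy.components.set(InputTargetComponent())
        toy.components.set(CollisionComponent(shapes: [.generateBox(size: [0.2, 0.2, 0.2])]))
        content.add(toy)
    }
}
.gesture(tapExample)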
Topic: Spatial Computing SubTopic: General
Jun ’25
Reply to Vision OS: HUD mode windows
@DTS Engineer is correct that DeviceAnchor is the way to go. You asked about having windows track head movement. That isn't possible, but you can get pretty close to the behavior you are looking for by providing an attachment with the SwiftUI view, then using DeviceAnchor to move the attachment entity.

Bonus:

Use a BillboardComponent to make the attachment face the user head-on.

Use move(to:) with a slight delay to make the attachment entity smoothly move with the user. This will create a sort of "rubber band" effect that is a lot more pleasing than a HUD that is fixed in place. Fixed HUDs can feel both laggy and claustrophobic.
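A rough sketch of that setup, assuming an attachment entity called hud that is already added to the RealityView content; the 0.5 m offset and 0.3 s duration are made-up values to tune:

import ARKit
import QuartzCore
import RealityKit

// Somewhere in your async setup:
let session = ARKitSession()
let worldTracking = WorldTrackingProvider()
try await session.run([worldTracking])

// Keep the HUD facing the user.
hud.components.set(BillboardComponent())

// Call this from an update loop (a timer, or a RealityKit System).
func updateHUD() {
    guard let deviceAnchor = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else { return }
    let device = Transform(matrix: deviceAnchor.originFromAnchorTransform)

    // Place the target half a meter in front of the device.
    var target = device
    target.translation += device.rotation.act([0, 0, -0.5])

    // Animating toward the target (instead of snapping) creates the rubber band feel.
    hud.move(to: target, relativeTo: nil, duration: 0.3, timingFunction: .easeOut)
}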
Topic: Spatial Computing SubTopic: General
Jun ’25