Reply to RealityView content.add Called Twice Automatically
I made a standalone project based on your code. Check this out. The good news is that the entity is not being duplicated. Let's look at the code first.

```swift
struct ImmersiveView: View {
    @State var launchEntity = Entity()

    var body: some View {
        RealityView { content in
            print("RealityView Content Loaded!") // Called Once
            launchEntity = createLaunchEntity()
            content.add(launchEntity)
        }
    }

    func createLaunchEntity() -> Entity {
        let entity = Entity()
        entity.name = "Launch Entity"

        let launchViewComponent = ViewAttachmentComponent(
            rootView: LaunchView()
        )
        entity.components.set(launchViewComponent)

        print("createLaunchEntity was called") // Called Once
        return entity
    }
}

struct LaunchView: View {
    var body: some View {
        VStack {
            Text("Launch View")
        }
        .padding()
        .onAppear {
            print("Launch view appears!") // Called Twice
        }
    }
}
```

When I load this immersive space, I see four lines in the console:

RealityView Content Loaded!
createLaunchEntity was called
Launch view appears!
Launch view appears!

What happened:

- The RealityView make closure was called once
- createLaunchEntity was called once
- The onAppear closure in LaunchView was called twice

That made me wonder if ViewAttachmentComponent is evaluating LaunchView more than once, and yes, that appears to be the case. When I create the attachment using the attachments closure on RealityView, onAppear is only called once.

```swift
var body: some View {
    RealityView { content, attachments in
        print("RealityView Content Loaded!") // Called Once

        if let attachment = attachments.entity(for: "Test") {
            launchEntity = attachment
            content.add(launchEntity)
        }
    } update: { content, attachments in
    } attachments: {
        Attachment(id: "Test") {
            LaunchView()
        }
    }
}
```

So the good news is that we don't see duplicate entities in the graph. The bad news is that ViewAttachmentComponent is evaluating the SwiftUI view more than once. I suggest you file a bug with Feedback Assistant. In the meantime, you could use the attachments closure on RealityView to create attachments. If your view is just presenting data, then I suppose you could just ignore this.
Topic: Spatial Computing SubTopic: General
1w
Reply to Can't build old project on Xcode 26 beta5
I ran into something like this in Beta 1 back in June. At first I thought it was something related to RealityKitContent as that is where Xcode seemed to be stuck. It ended up not having anything to do with RealityKit. One of my example code files demonstrated how to use every ornament anchor. This worked well on previous versions, but Xcode 26 could not compile it. Can you look at your SwiftUI code and see if there is anything with lots of ornaments, toolbars, that sort of thing? You may want to start commenting out sections of the view hierarchy and see if you can get it to build. Once it builds, start re-activating views until you can cause the issue again. That may help you find the view that Xcode is getting stuck on.
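For context, here is a hypothetical sketch (the view and button labels are made up) of the kind of ornament-heavy body I'm describing; commenting these modifiers out one at a time is how I narrowed mine down:

```swift
import SwiftUI

// Hypothetical example of a view that stacks several ornaments.
// Bodies like this compiled fine on earlier toolchains but were the kind
// of thing Xcode 26 got stuck on for me.
struct OrnamentHeavyView: View {
    var body: some View {
        Text("Main content")
            .ornament(attachmentAnchor: .scene(.bottom)) {
                Button("Bottom") {}
            }
            .ornament(attachmentAnchor: .scene(.leading)) {
                Button("Leading") {}
            }
            .ornament(attachmentAnchor: .scene(.trailing)) {
                Button("Trailing") {}
            }
    }
}
```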
Topic: Spatial Computing SubTopic: General
1w
Reply to RealityView content.add Called Twice Automatically
I can't reproduce this on my end. I just tried a few of my projects and examples and they are working fine. If I add a print line to the make closure of RealityView, I only see it print once. When I inspect the entity graph, I see what I expect to see. I'm using visionOS 26 Beta 5 and the latest Xcode.

Some other things to think about:

You can use a feature in Xcode to capture the entity hierarchy. When you do this, do you see duplicate entities in the same hierarchy, or duplicate hierarchies? https://www.gabrieluribe.me/blog/debugging-realitykit-apps-games-visionos

Is there anything higher up in your SwiftUI stack that could be causing your RealityView to be created more than once?

If you can post some example code showing the issue, I may be able to provide more useful help.
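For reference, this is roughly the minimal check I ran; the view name is just a placeholder:

```swift
import SwiftUI
import RealityKit

// A minimal sketch of the diagnostic above: a print in the make closure.
// If this prints more than once, something higher up in the SwiftUI
// hierarchy is likely recreating the RealityView.
struct DiagnosticImmersiveView: View {
    var body: some View {
        RealityView { content in
            print("make closure ran") // I only see this once on my setup
            content.add(Entity())
        }
    }
}
```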
Topic: Spatial Computing SubTopic: General
2w
Reply to How to opt out of tinting widget content on visionOS
@DTS Engineer Can you please be specific? I don't see the answer to my question in either of those articles. How do I tell visionOS not to apply a tint color to the background or content of my view? The WWDC session mentions opting out of this when displaying photos, but I don't see how to do that. How do I tell visionOS "always render this image as it appears, without ever trying to tint it"?
2w
Reply to Is it possible to load a WKWebView that has 3D rendering (like three.js) in a volumetric window?
We can load a Three JS scene in a WKWebView, but the web view itself will always be a 2D plane. It would work more like a window looking into the 3D space. The Three JS scene can't fill the volume.

The only workaround I know of is to download a model as a USDZ. If you needed to let your users place models in their shared space, this could be an option. Three JS has a feature to export as USDZ, but the exported file would not be a live Three JS scene anymore, just a USDZ file that users could open with Quick Look.
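If it helps, this is roughly what the hosting side looks like in SwiftUI; a minimal sketch where the URL is a placeholder and the wrapper name is made up:

```swift
import SwiftUI
import WebKit

// Minimal sketch: a WKWebView wrapper for SwiftUI. The web content
// (including a Three JS scene) renders on a flat plane inside the window,
// not as volumetric content.
struct ThreeJSWebView: UIViewRepresentable {
    let url: URL

    func makeUIView(context: Context) -> WKWebView {
        let webView = WKWebView()
        webView.load(URLRequest(url: url))
        return webView
    }

    func updateUIView(_ webView: WKWebView, context: Context) {}
}
```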
Topic: Spatial Computing SubTopic: General
Jul ’25
Reply to Look to Scroll
We can use scrollInputBehavior with the .look option. From the WWDC session "What's new in visionOS":

```swift
var body: some View {
    ScrollView {
        HikeDetails()
    }
    .scrollInputBehavior(.enabled, for: .look)
}
```
Topic: Spatial Computing SubTopic: General
Jul ’25
Reply to VisionOS26 PresentationComponent not working
Hey, I saw your comment on Step Into Vision too. I'm showing the presentation as a result of a tap gesture in my app. I'm not sure if just setting presentation.isPresented = true here is enough. You are essentially telling something to present before the component has been added. I have no idea if that should work.

If you look again at the code from my devlog, I use the GestureComponent to set isPresented based on a user tap. I'm also using this in a volume. You're working in an immersive space. When I tried your code, I saw this error in the Xcode console:

"Presentations are not currently supported in Immersive contexts."

I thought these were supported, but maybe something has changed in Beta 3.
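For what it's worth, here is a minimal sketch of the pattern from my devlog, assuming the entity already has PresentationComponent, InputTargetComponent, and CollisionComponent set up; the function name is just for illustration:

```swift
import SwiftUI
import RealityKit

// Minimal sketch: flip isPresented from a tap handler, after the
// PresentationComponent already exists on the entity, rather than setting
// it to true while the entity is still being built.
func addTapToPresent(to entity: Entity) {
    let tap = TapGesture()
        .onEnded { [weak entity] _ in
            // Look up the live component and present it.
            entity?.components[PresentationComponent.self]?.isPresented = true
        }
    entity.components.set(GestureComponent(tap))
}
```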
Topic: Spatial Computing SubTopic: General
Jul ’25
Reply to GestureComponent does not support DragGesture
Just jumping in to confirm the issue that @eurobob reported. @Vision Pro Engineer suggested adding a minimumDistance, which works for me.

There is something else that could be causing issues. As we know, an Entity can only have one instance of a component type assigned at a time. Adding a GestureComponent with Gesture B will overwrite a GestureComponent with Gesture A. I don't see a clear way to add more than one gesture to a GestureComponent. The docs for this one are essentially empty, so no help there. Is it possible to use this component with more than one gesture?

```swift
struct Lab: View {
    var body: some View {
        RealityView { content in
            // Load an entity and set it up for input
            let subject = ModelEntity(
                mesh: .generateBox(size: 0.2, cornerRadius: 0.01),
                materials: [SimpleMaterial(color: .stepGreen, isMetallic: false)]
            )
            subject.name = "Subject"
            subject.components.set(InputTargetComponent())
            subject.components.set(HoverEffectComponent())
            subject.components
                .set(CollisionComponent(shapes: [.generateBox(width: 0.2, height: 0.2, depth: 0.2)], isStatic: false))

            // This works as long as this is the only gesture we use
            let gesture = DragGesture(minimumDistance: 0.001)
                .onChanged({ [weak subject] _ in
                    print("Drag Gesture onChanged for \(subject!.name)")
                })
                .onEnded({ [weak subject] _ in
                    print("Drag Gesture onEnd for \(subject!.name)")
                })
            let gestureComponent = GestureComponent(gesture)
            subject.components.set(gestureComponent)

            let tapGesture = TapGesture()
                .onEnded({ [weak subject] _ in
                    print("Tap Gesture Works for \(String(describing: subject))")
                })
            let gestureComponent2 = GestureComponent(tapGesture)
            subject.components.set(gestureComponent2)

            content.add(subject)
        }
    }
}
```

If I comment out the second gesture component, then the drag gesture works correctly.
Topic: Graphics & Games SubTopic: RealityKit
Jul ’25
Reply to Bouncy ball in RealityKit - game
Regarding restitution:

> Even in the last video here: https://stepinto.vision/example-code/collisions-physics-physics-material/ bounce of the ball is very unnatural - stops after 3-4 bounces.

That is because I never set the restitution to a max value. The wood base in that example has a restitution of 1, but the highest value I used on the ball was 0.7.

> Energy Loss: Despite restitution = 1.0 (perfect elasticity), the ball loses ~20-30% energy per bounce

You could set the restitution to 1.0 on both the balls AND the surface(s) they are bouncing on. This won't necessarily be realistic. The balls will essentially bounce infinitely. You'll have to slightly adjust the restitution on one or both of the entity types to make them feel a bit more natural. Of course, a lot will also depend on your mass and friction values.
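For reference, here is a minimal sketch of where those restitution values plug in; the helper name, radius, and friction numbers are just placeholders to tune:

```swift
import RealityKit

// Minimal sketch: a dynamic ball with an explicit physics material.
// Restitution closer to 1.0 keeps more energy per bounce; lower values
// make the bouncing settle sooner.
func makeBouncyBall(radius: Float = 0.05) -> ModelEntity {
    let ball = ModelEntity(mesh: .generateSphere(radius: radius))

    let material = PhysicsMaterialResource.generate(
        staticFriction: 0.5,
        dynamicFriction: 0.5,
        restitution: 0.9
    )

    ball.components.set(CollisionComponent(shapes: [.generateSphere(radius: radius)]))
    ball.components.set(PhysicsBodyComponent(material: material, mode: .dynamic))

    return ball
}
```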
Topic: Spatial Computing SubTopic: General
Jul ’25
Reply to Can we access a "Locked in Place" value when a window has been locked without being snapped to a surface?
@Vision Pro Engineer I'm not talking about interrupting or overriding scene restoration. I'm talking about adapting the window that the user is currently using with other visionOS and SwiftUI features.

In terms of this request, I'd still like to understand why this would be better. I could imagine use cases in which someone might prefer to have the application locked in place, but not in focus mode. This is about user choice. I want to give them the option. I'm not talking about limiting focus mode to locked windows only. But for a lot of uses, it would be a better user experience to auto-enable focus mode for locked windows. Without this, the user has to (1) lock the window and (2) manually enable focus mode. Giving them the option to do both of these by performing one action is better than always requiring them to do both.

I can already deliver the experience that I want with the existing APIs for snapped windows. I want to provide the same experience for free-floating windows. Regardless of intention, users will think about locked windows in exactly the same way they think about snapped windows. The only difference between the two is that one is bound to a surface and the other is free floating. Other than that, they should behave exactly the same. Without access to this value, they will always have to be treated differently.
Topic: Spatial Computing SubTopic: General
Jun ’25
Reply to Can we access a "Locked in Place" value when a window has been locked without being snapped to a surface?
@Vision Pro Engineer "Locking in a window in place shouldn't result in visual changes within your application" "so locking-in-place should not be special-cased" These are just your opinions though, and I disagree. I have valid uses where this would make sense. I described one of them in my earlier comment. I have an app where a certain type of window has what call "focus mode". With visionOS 26 window locking, I would like to enable focus mode for any locked window. Currently the user would have to lock the window, then use controls inside the window to enable focus mode. It would be better if I could offer them the option to enable focus mode when the window is locked. I don't understand why you're pushing back on this. It's a simple feature request for a value that we can already get for snapped windows. Why do you think we shouldn't be able to access the same value for unsnapped free floating windows?
Topic: Spatial Computing SubTopic: General
Jun ’25