Reply to Vision OS: HUD mode windows
@DTS Engineer is correct that DeviceAnchor is the way to go. You asked about having windows track head movement. That isn't possible, but you can get pretty close to the behavior you are looking for by providing an attachment with the SwiftUI view, then using DeviceAnchor to move the attachment entity. Bonus tips: Use a BillboardComponent to make the attachment face the user head-on. Use move(to:) with a slight delay to make the attachment entity move smoothly with the user. This creates a sort of "rubber band" effect that is a lot more pleasing than a HUD that is fixed in place. Fixed HUDs can feel both laggy and claustrophobic.
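Here is a rough sketch of that follow behavior, assuming a WorldTrackingProvider is already running in an ARKitSession. The offset values and timing are placeholders to tune for your app:

```swift
import RealityKit
import ARKit
import QuartzCore

// Sketch: periodically nudge the attachment entity toward the device pose.
// The short move duration is what creates the "rubber band" lag.
func follow(hud: Entity, worldTracking: WorldTrackingProvider) async {
    while !Task.isCancelled {
        if let device = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) {
            var target = Transform(matrix: device.originFromAnchorTransform)
            // Offset the HUD a bit below eye level and out in front of the user.
            target.translation += target.rotation.act(SIMD3<Float>(0, -0.15, -0.8))
            hud.move(to: target, relativeTo: nil, duration: 0.3, timingFunction: .easeOut)
        }
        try? await Task.sleep(for: .milliseconds(100))
    }
}
```

Adding BillboardComponent() to the HUD entity keeps it facing the user without any rotation math.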
Topic: Spatial Computing SubTopic: General Tags:
Jun ’25
Reply to Issue: Closing Bounded Volume Never Re-Opens
I'm not working with Unity, but it sounds like a similar issue we have with native windows and volumes. visionOS will open the last opened "scene". † You may want to check with Unity support to see if they have bridged the APIs needed to manage scene state. If you have access to the Swift file that creates each window and volume, or to the views at their root level, then you may be able to work with APIs like ScenePhase to monitor the status of each scene. I do this in one of my apps. The main content is in a volume, with a small utility window for editing. If the volume gets closed, the utility window can notice the change and offer a means to reopen the volume. Read about scene phase: https://developer.apple.com/documentation/swiftui/scenephase Basic example of using scene phase to track the state of two windows: https://stepinto.vision/example-code/how-to-use-scene-phase-to-track-and-manage-window-state/ † "Scene" in this case means windows, volumes, and spaces, not 3D scene graphs.
Topic: Spatial Computing SubTopic: General Tags:
Jun ’25
Reply to Launching a Unity fully immersive game from SwiftUI
I'm not working with Unity, so take this with a grain of salt. You might look at the Info.plist to make sure the app can open immersive spaces. Xcode sets this automatically when creating projects from one of the visionOS templates, but if you or Unity created the project manually, it may need to be added. This is what it looks like in a very simple app that supports multiple windows and immersive spaces. More info: https://developer.apple.com/documentation/BundleResources/Information-Property-List/UIApplicationSceneManifest/UISceneConfigurations/UISceneSessionRoleImmersiveSpaceApplication
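For reference, the relevant Info.plist entries look something like this (the mixed immersion style here is just an example default; use whichever style your app needs):

```xml
<key>UIApplicationSceneManifest</key>
<dict>
    <key>UIApplicationSupportsMultipleScenes</key>
    <true/>
    <key>UISceneConfigurations</key>
    <dict>
        <key>UISceneSessionRoleImmersiveSpaceApplication</key>
        <array>
            <dict>
                <key>UISceneInitialImmersionStyle</key>
                <string>UIImmersionStyleMixed</string>
            </dict>
        </array>
    </dict>
</dict>
```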
Topic: Spatial Computing SubTopic: General Tags:
Jun ’25
Reply to visionos Create a window, set the window size, tilt angle, and position
A few tips for you.

Placing windows

We can set the default position of a new window. We can position it relative to other windows, or use a utility placement. If you want to open a small window below the user, tilted up to face them, then you want .utilityPanel. Example:

```swift
WindowGroup(id: "Rocks") {
    Text("🪨🪨🪨")
        .font(.system(size: 48))
}
.defaultSize(CGSize(width: 300, height: 100))
.defaultWindowPlacement { _, context in
    return WindowPlacement(.utilityPanel)
}
```

If you want to place a window relative to another window, you can do something like this:

```swift
// Main window
WindowGroup {
    ContentView()
}
.defaultSize(width: 500, height: 500)

WindowGroup(id: "YellowFlower") {
    Text("🌼")
        .font(.system(size: 128))
}
.defaultSize(CGSize(width: 300, height: 200))
.defaultWindowPlacement { _, context in
    if let mainWindow = context.windows.first {
        return WindowPlacement(.leading(mainWindow))
    }
    return WindowPlacement(.none)
}
```

Unfortunately, as of visionOS 2, we can't move or position windows after they are already open. The features above work once, when we open a new window.

Sizing windows

You can control the default size like this:

```swift
WindowGroup(id: "YellowFlower") {
    YellowFlowerView()
}
.defaultSize(CGSize(width: 600, height: 600))
```

If you need more control, you can use .contentSize to make the window adapt to the size of the view/content. You can even adjust the size of the view and the window will adapt.

```swift
WindowGroup(id: "YellowFlower") {
    YellowFlowerView()
        .frame(minWidth: 500, maxWidth: 700, minHeight: 500, maxHeight: 700)
}
.windowResizability(.contentSize)
.defaultSize(CGSize(width: 600, height: 600))
```

Hope this helps!
Topic: Spatial Computing SubTopic: General Tags:
May ’25
Reply to Collision Detection Fails After Anchoring ModelEntity to Hand in VisionOS
By default, AnchorEntity and entities with an AnchoringComponent exist in their own physics simulation, so they don't collide with other physics objects. We can opt out of this by setting physicsSimulation to .none:

```swift
let handAnchor = AnchorEntity(.hand(.left, location: .palm), trackingMode: .continuous)

// Add this line
handAnchor.anchoring.physicsSimulation = .none

handAnchor.addChild(box)
content.add(handAnchor) // Add the hand anchor to the scene.
```
May ’25
Reply to Launching a timeline on a specific model via notification
Boosting this because I would love to know too. When creating timelines, all the actions in the timeline specify the entity they work on. I don't know if there is a way to do this, but I hope there is. In the meantime, if you want to perform these actions in code without Timelines, we can use Entity Actions. These let us create an action and run it on an entity. This has been my go-to workaround for now. It can still take a bit of work to chain multiple actions together.
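As a sketch of the Entity Actions approach (the function name and offset here are just illustrative; see the FromToByAction documentation for the full set of options):

```swift
import RealityKit

// Sketch: move a specific entity upward by playing a from-to action on it.
func floatUp(_ entity: Entity) throws {
    var target = entity.transform
    target.translation.y += 0.2

    // The action targets whatever entity plays the resulting animation.
    let action = FromToByAction<Transform>(to: target, timing: .easeInOut)
    let animation = try AnimationResource.makeActionAnimation(
        for: action,
        duration: 1.0,
        bindTarget: .transform
    )
    entity.playAnimation(animation)
}
```

To chain multiple actions, you can build a sequence or group from several AnimationResources and play that instead.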
May ’25
Reply to ARKit Planes do not appear where expected on visionOS
I think I figured out what I was doing wrong. I was using the extents of the anchor to create meshes, then placing the meshes at the transform for the anchor. I was expecting an anchor plane to be something I could turn into a geometric plane. Diving deeper into it, anchors are not planes in the sense that a plane mesh is. These anchors are actually n-gons that don't necessarily line up with the shape of a plane. Apple has an example project that creates these, but applies an occlusion material. I swapped that for a material with random colors so I could visualize what is happening. Each anchor has an n-gon, represented with meshVertices. The example project used some extensions to turn that data into shapes for the meshes. Personally, I found the example project difficult to understand. It has way too much abstraction and stashes the good stuff in extensions. Here is a modified version of the example from the first post, without the abstractions from the Apple example project. I'd love to hear any more ideas from you folks. How can we improve this?

```swift
import SwiftUI
import RealityKit
import RealityKitContent
import ARKit

struct Example068: View {
    @State var session = ARKitSession()
    @State private var planeAnchors: [UUID: Entity] = [:]
    @State private var planeColors: [UUID: Color] = [:]

    var body: some View {
        RealityView { content in
        } update: { content in
            for (_, entity) in planeAnchors {
                if !content.entities.contains(entity) {
                    content.add(entity)
                }
            }
        }
        .task {
            try! await setupAndRunPlaneDetection()
        }
    }

    func setupAndRunPlaneDetection() async throws {
        let planeData = PlaneDetectionProvider(alignments: [.horizontal, .vertical, .slanted])
        if PlaneDetectionProvider.isSupported {
            do {
                try await session.run([planeData])
                for await update in planeData.anchorUpdates {
                    switch update.event {
                    case .added, .updated:
                        let anchor = update.anchor
                        if planeColors[anchor.id] == nil {
                            planeColors[anchor.id] = generatePastelColor()
                        }
                        let planeEntity = createPlaneEntity(for: anchor, color: planeColors[anchor.id]!)
                        planeAnchors[anchor.id] = planeEntity
                    case .removed:
                        let anchor = update.anchor
                        planeAnchors.removeValue(forKey: anchor.id)
                        planeColors.removeValue(forKey: anchor.id)
                    }
                }
            } catch {
                print("ARKit session error \(error)")
            }
        }
    }

    private func generatePastelColor() -> Color {
        let hue = Double.random(in: 0...1)
        let saturation = Double.random(in: 0.2...0.4)
        let brightness = Double.random(in: 0.8...1.0)
        return Color(hue: hue, saturation: saturation, brightness: brightness)
    }

    private func createPlaneEntity(for anchor: PlaneAnchor, color: Color) -> Entity {
        let entity = Entity()
        entity.name = "Plane \(anchor.id)"
        entity.setTransformMatrix(anchor.originFromAnchorTransform, relativeTo: nil)

        // Generate a mesh from the anchor's n-gon geometry.
        var meshResource: MeshResource? = nil
        do {
            var contents = MeshResource.Contents()
            contents.instances = [MeshResource.Instance(id: "main", model: "model")]
            var part = MeshResource.Part(id: "part", materialIndex: 0)

            // Convert vertices to SIMD3<Float>
            let vertices = anchor.geometry.meshVertices
            var vertexArray: [SIMD3<Float>] = []
            for i in 0..<vertices.count {
                let vertex = vertices.buffer.contents()
                    .advanced(by: vertices.offset + vertices.stride * i)
                    .assumingMemoryBound(to: (Float, Float, Float).self).pointee
                vertexArray.append(SIMD3<Float>(vertex.0, vertex.1, vertex.2))
            }
            part.positions = MeshBuffers.Positions(vertexArray)

            // Convert faces to UInt32
            let faces = anchor.geometry.meshFaces
            var faceArray: [UInt32] = []
            let totalFaces = faces.count * faces.primitive.indexCount
            for i in 0..<totalFaces {
                let face = faces.buffer.contents()
                    .advanced(by: i * MemoryLayout<Int32>.size)
                    .assumingMemoryBound(to: Int32.self).pointee
                faceArray.append(UInt32(face))
            }
            part.triangleIndices = MeshBuffer(faceArray)

            contents.models = [MeshResource.Model(id: "model", parts: [part])]
            meshResource = try MeshResource.generate(from: contents)
        } catch {
            print("Failed to create a mesh resource for a plane anchor: \(error).")
        }

        var material = PhysicallyBasedMaterial()
        material.baseColor.tint = UIColor(color)
        if let meshResource {
            entity.components.set(ModelComponent(mesh: meshResource, materials: [material]))
        }
        return entity
    }
}

#Preview {
    Example068()
}
```
Apr ’25
Reply to [VisionPro] Placing virtual entities around my arm
Hi, we can use ARKit hand anchors to add items to users' hands. There are a ton of anchors for each finger, the palm, the wrist, etc. But I don't think we get access to the arms, other than the hands. If you need more information on hands, check out these two posts on the forum: https://developer.apple.com/forums/thread/774522?answerId=825213022#825213022 https://developer.apple.com/forums/thread/776079?answerId=828469022#828469022 As for the rendering issue you mentioned, where the arms occlude virtual content: can you try using the upperLimbVisibility modifier on your RealityView? Try setting it to hidden to see if that helps.

```swift
.upperLimbVisibility(.automatic) // default
.upperLimbVisibility(.hidden)
.upperLimbVisibility(.visible)
```
Topic: Spatial Computing SubTopic: ARKit Tags:
Mar ’25
Reply to I am developing a Immersive Video App for VisionOs but I got a issue regarding app and video player window
Please see my answer on this post; the user was having some of the same issues: https://developer.apple.com/forums/thread/777567 If you want to keep the main window but play a video in another window (with a space active or not), you could use pushWindow. This will "replace" the main window while it is open, then bring it back when you close it. You can see this behavior in Apple's Photos app. When you open a photo or video, the main window is hidden by the temporary pushed window. https://developer.apple.com/documentation/swiftui/environmentvalues/pushwindow
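A minimal sketch of what that looks like (the view and window id here are hypothetical; you'd declare a WindowGroup with a matching id):

```swift
import SwiftUI

struct LibraryView: View {
    // pushWindow hides the current window until the pushed one is dismissed.
    @Environment(\.pushWindow) private var pushWindow

    var body: some View {
        Button("Play Video") {
            pushWindow(id: "player")
        }
    }
}
```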
Topic: Spatial Computing SubTopic: General Tags:
Mar ’25
Reply to How to Move and Rotate WindowGroup with Code in Xcode
Unfortunately, that isn't possible in visionOS. We do not have programmatic access to window positions. Only the user can move or reposition windows and volumes. The closest we can get is setting the initial position using defaultWindowPlacement. Many of us have already filed feedback requesting the ability to move windows and volumes. Hopefully we'll see some changes in visionOS 3. As a workaround, you could close your main window and use attachments while inside the immersive space. You can position these just like entities. However, this comes with more work: you'll need to decide how to move from the window to the space and back (scene phase can help here).
Mar ’25
Reply to Unable to Retain Main App Window State When Transitioning to Immersive Space
Unfortunately, there aren't too many great options for this in visionOS right now. I've approached this in two ways so far.

Idea 1: manage the opacity/alpha of the main window. Apple did this in Hello World. This is only suitable for quick trips into a space where the user won't be interacting with much. They may still see the window bar even though the window contents are hidden. Some tips to improve this option: use the plain window style to remove the glass background, then add your own glass background when you want to show content; try using .persistentSystemOverlays(.hidden) to hide the window bar while you're in the space.

Idea 2: keep track of window state and restore it when reopening the main window. You could keep track of navigation history, scroll position, view state, etc. This could be a great option and is less of a hack, but if your window has a very complex view hierarchy, this could end up being very complex.
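A minimal sketch of Idea 1, assuming you track the space with your own isInSpace flag (the view and flag names are placeholders):

```swift
import SwiftUI

struct MainWindowView: View {
    // Assumed app state: true while the immersive space is open.
    var isInSpace: Bool

    var body: some View {
        Text("Main content")
            .opacity(isInSpace ? 0 : 1)
            // Hide the window bar too, so the "empty" window is less noticeable.
            .persistentSystemOverlays(isInSpace ? .hidden : .automatic)
    }
}
```

Pair this with .windowStyle(.plain) on the WindowGroup to drop the glass background entirely.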
Topic: Spatial Computing SubTopic: General Tags:
Mar ’25
Reply to The occulution relationship of virtual content and real object
We can use OcclusionMaterial to solve issues like this. Essentially, we use ARKit features to get meshes that describe the real-world environment, then assign OcclusionMaterial to them. Apple has a neat example app you can try and a short tutorial that describes the process: Obscuring virtual items in a scene behind real-world items. Bonus: You can also create your own material using ShaderGraph in Reality Composer Pro. There are two nodes we can use: Occlusion Surface, and Shadow Receiving Occlusion Surface - use the latter if your app needs to cast shadows or shine virtual lights on the environment.
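To get a quick feel for the effect before wiring up ARKit, you can assign OcclusionMaterial to any mesh. Here is a sketch of an invisible "wall" placed roughly where a real object might sit (the size and position are placeholders):

```swift
import RealityKit

// An entity with OcclusionMaterial renders nothing itself, but hides
// any virtual content positioned behind it.
let occluder = ModelEntity(
    mesh: .generateBox(width: 1.0, height: 2.0, depth: 0.05),
    materials: [OcclusionMaterial()]
)
occluder.position = SIMD3<Float>(0, 1, -1.5) // placeholder position
```

In the real approach, the mesh comes from ARKit scene reconstruction, as the linked sample shows.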
Topic: Spatial Computing SubTopic: ARKit Tags:
Mar ’25
Reply to Difference between Head and Device tracking on visionOS
Apple has a neat sample project that shows how to have an entity follow head movement. It touches on the difference between the head AnchorEntity and the DeviceAnchor. https://developer.apple.com/documentation/visionos/placing-entities-using-head-and-device-transform Hopefully, visionOS 3 will bring SpatialTrackingSession data to the head AnchorEntity position, just like we have with hand anchors now. (Feedback: FB16870381)
Topic: Spatial Computing SubTopic: ARKit Tags:
Mar ’25
Reply to App Window Closure Sequence Impacts Main Interface Reload Behavior
I had a similar issue in my app (Project Graveyard), which has a main volume and a utility window to edit content. I solved this by using some shared state (AppModel) and ScenePhase. What I ended up with was the ability to reopen the main window from the utility window OR open the utility window from the main window. The first thing to keep in mind is that ScenePhase works differently when used at the app level (some Scene) vs. when used in a view inside a window, volume, or space. visionOS has a lot of bugs (reported) around the app-level uses. I was able to create my solution by using ScenePhase in my views and sharing some state in the AppModel. Here is a breakdown.

Add to AppModel:

```swift
var mainWindowOpen: Bool = true
var yellowFlowerOpen: Bool = false
```

In the root view of the main window (ContentView in this case):

```swift
@Environment(\.scenePhase) private var scenePhase
```

Then listen for scenePhase changes using onChange, writing to the mainWindowOpen bool in appModel:

```swift
.onChange(of: scenePhase, initial: true) {
    switch scenePhase {
    case .inactive, .background:
        appModel.mainWindowOpen = false
    case .active:
        appModel.mainWindowOpen = true
    @unknown default:
        appModel.mainWindowOpen = false
    }
}
```

We do the same thing in the root view of the other window (or volume):

```swift
@Environment(\.scenePhase) private var scenePhase
```

Then listen to scene phase:

```swift
.onChange(of: scenePhase, initial: true) {
    switch scenePhase {
    case .inactive, .background:
        appModel.yellowFlowerOpen = false
    case .active:
        appModel.yellowFlowerOpen = true
    @unknown default:
        appModel.yellowFlowerOpen = false
    }
}
```

You can download this as an Xcode project if you want to try it out before reproducing it yourself: https://github.com/radicalappdev/Step-Into-Example-Projects/tree/main/Garden06 There is also a video available on my website (I really wish we could upload short videos here): https://stepinto.vision/example-code/how-to-use-scene-phase-to-track-and-manage-window-state/
Topic: Spatial Computing SubTopic: General Tags:
Mar ’25
Reply to macOS SwiftData app never syncs with CloudKit
Fixed, in case anyone else runs into this: Apparently, macOS builds don't include CloudKit by default, at least when starting from the multi-platform template project. Here is an article that explains this a bit more. https://fatbobman.com/en/snippet/fix-synchronization-issues-for-macos-apps-using-core-dataswiftdata/
May ’25