
Reply to How to Move and Rotate WindowGroup with Code in Xcode
Unfortunately, that isn't possible in visionOS. We do not have programmatic access to window positions. Only the user can move or reposition windows and volumes. The closest we can get is setting the initial position using defaultWindowPlacement. Many of us have already filed feedback requesting access to move windows and volumes, so hopefully we'll see some changes in visionOS 3. As a workaround, you could close your main window and use attachments when inside the immersive space. You can position these just like entities. However, this approach takes more work: you'll need to decide how to move from the window to the space and back (scene phase can help here).
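Here's a minimal sketch of the attachment approach (the "controls" id and the view contents are placeholders):

import SwiftUI
import RealityKit

struct ImmersiveControls: View {
    var body: some View {
        RealityView { content, attachments in
            if let panel = attachments.entity(for: "controls") {
                // Position the attachment just like any other entity.
                panel.position = [0, 1.2, -1]
                content.add(panel)
            }
        } attachments: {
            Attachment(id: "controls") {
                Text("Controls")
                    .padding()
                    .glassBackgroundEffect()
            }
        }
    }
}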
Mar ’25
Reply to I am developing a Immersive Video App for VisionOs but I got a issue regarding app and video player window
Please see my answer on this post; the user was having some of the same issues: https://developer.apple.com/forums/thread/777567 If you want to keep the main window but play a video in another window (with a space active or not), you could use pushWindow. This will "replace" the main window while the pushed window is open, then bring it back when you close it. You can see this behavior in Apple's Photos app: when you open a photo or video, the main window is hidden by the temporary pushed window. https://developer.apple.com/documentation/swiftui/environmentvalues/pushwindow
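A minimal sketch of how that can look (the "Player" window id and the views are placeholders for your app):

import SwiftUI

struct LibraryView: View {
    @Environment(\.pushWindow) private var pushWindow

    var body: some View {
        Button("Play Video") {
            // Temporarily replaces the current window with the "Player" window.
            pushWindow(id: "Player")
        }
    }
}

// In your App, declare the window that gets pushed:
// WindowGroup(id: "Player") { PlayerView() }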
Mar ’25
Reply to [VisionPro] Placing virtual entities around my arm
Hi, we can use ARKit hand anchors to add items to users' hands. There are a ton of anchors for each finger, palm, wrist, etc., but I don't think we get access to arms other than hands. If you need more information on hands, check out these two posts on the forum:
https://developer.apple.com/forums/thread/774522?answerId=825213022#825213022
https://developer.apple.com/forums/thread/776079?answerId=828469022#828469022
As for the rendering issue you mentioned, where the arms are occluding virtual content: can you try the upperLimbVisibility modifier on your RealityView? Try setting it to hidden to see if that helps.

.upperLimbVisibility(.automatic) // default
.upperLimbVisibility(.hidden)
.upperLimbVisibility(.visible)
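For the first part (placing entities on the hands), here is a minimal sketch using an AnchorEntity on the left palm; the sphere is just a placeholder for your content:

import SwiftUI
import RealityKit

struct PalmMarkerView: View {
    var body: some View {
        RealityView { content in
            // Anchor that continuously tracks the left palm.
            let palmAnchor = AnchorEntity(.hand(.left, location: .palm), trackingMode: .continuous)

            // Placeholder content: a small sphere attached to the palm.
            let marker = ModelEntity(mesh: .generateSphere(radius: 0.02),
                                     materials: [SimpleMaterial(color: .cyan, isMetallic: false)])
            palmAnchor.addChild(marker)
            content.add(palmAnchor)
        }
    }
}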
Mar ’25
Reply to ARKit Planes do not appear where expected on visionOS
I think I figured out what I was doing wrong. I was using the extents of the anchor to create meshes, then placing the meshes at the transform for the anchor. I was expecting an anchor plane to be something I could turn into a geometric plane. Diving deeper into it, anchors are not planes in the sense that a plane mesh is. These anchors are actually n-gons that don't necessarily line up with the shape of a plane. Apple has an example project that creates these, but applies an occlusion material. I swapped that for a material with random colors so I could visualize what is happening. Each anchor has an n-gon, represented with meshVertices. The example project uses some extensions to turn that data into shapes for the meshes. Personally, I found the example project difficult to understand. It has way too much abstraction and stashes the good stuff in extensions. Here is a modified version of the example from the first post, without the abstractions from the Apple example project. I'd love to hear any more ideas from you folks. How can we improve this?

import SwiftUI
import RealityKit
import RealityKitContent
import ARKit

struct Example068: View {
    @State var session = ARKitSession()
    @State private var planeAnchors: [UUID: Entity] = [:]
    @State private var planeColors: [UUID: Color] = [:]

    var body: some View {
        RealityView { content in
        } update: { content in
            for (_, entity) in planeAnchors {
                if !content.entities.contains(entity) {
                    content.add(entity)
                }
            }
        }
        .task {
            try! await setupAndRunPlaneDetection()
        }
    }

    func setupAndRunPlaneDetection() async throws {
        let planeData = PlaneDetectionProvider(alignments: [.horizontal, .vertical, .slanted])
        if PlaneDetectionProvider.isSupported {
            do {
                try await session.run([planeData])
                for await update in planeData.anchorUpdates {
                    switch update.event {
                    case .added, .updated:
                        let anchor = update.anchor
                        if planeColors[anchor.id] == nil {
                            planeColors[anchor.id] = generatePastelColor()
                        }
                        let planeEntity = createPlaneEntity(for: anchor, color: planeColors[anchor.id]!)
                        planeAnchors[anchor.id] = planeEntity
                    case .removed:
                        let anchor = update.anchor
                        planeAnchors.removeValue(forKey: anchor.id)
                        planeColors.removeValue(forKey: anchor.id)
                    }
                }
            } catch {
                print("ARKit session error \(error)")
            }
        }
    }

    private func generatePastelColor() -> Color {
        let hue = Double.random(in: 0...1)
        let saturation = Double.random(in: 0.2...0.4)
        let brightness = Double.random(in: 0.8...1.0)
        return Color(hue: hue, saturation: saturation, brightness: brightness)
    }

    private func createPlaneEntity(for anchor: PlaneAnchor, color: Color) -> Entity {
        let entity = Entity()
        entity.name = "Plane \(anchor.id)"
        entity.setTransformMatrix(anchor.originFromAnchorTransform, relativeTo: nil)

        // Generate a mesh for the plane.
        var meshResource: MeshResource? = nil
        do {
            var contents = MeshResource.Contents()
            contents.instances = [MeshResource.Instance(id: "main", model: "model")]
            var part = MeshResource.Part(id: "part", materialIndex: 0)

            // Convert vertices to SIMD3<Float>
            let vertices = anchor.geometry.meshVertices
            var vertexArray: [SIMD3<Float>] = []
            for i in 0..<vertices.count {
                let vertex = vertices.buffer.contents()
                    .advanced(by: vertices.offset + vertices.stride * i)
                    .assumingMemoryBound(to: (Float, Float, Float).self).pointee
                vertexArray.append(SIMD3<Float>(vertex.0, vertex.1, vertex.2))
            }
            part.positions = MeshBuffers.Positions(vertexArray)

            // Convert faces to UInt32
            let faces = anchor.geometry.meshFaces
            var faceArray: [UInt32] = []
            let totalFaces = faces.count * faces.primitive.indexCount
            for i in 0..<totalFaces {
                let face = faces.buffer.contents()
                    .advanced(by: i * MemoryLayout<Int32>.size)
                    .assumingMemoryBound(to: Int32.self).pointee
                faceArray.append(UInt32(face))
            }
            part.triangleIndices = MeshBuffer(faceArray)

            contents.models = [MeshResource.Model(id: "model", parts: [part])]
            meshResource = try MeshResource.generate(from: contents)
        } catch {
            print("Failed to create a mesh resource for a plane anchor: \(error).")
        }

        var material = PhysicallyBasedMaterial()
        material.baseColor.tint = UIColor(color)

        if let meshResource {
            entity.components.set(ModelComponent(mesh: meshResource, materials: [material]))
        }

        return entity
    }
}

#Preview {
    Example068()
}
Apr ’25
Reply to Launching a timeline on a specific model via notification
Boosting this because I would love to know too. When creating timelines, all the actions in the timeline specify the entity they work on. I don't know if there is a way to do this, but I hope there is. In the meantime, if you want to perform these actions in code without Timelines, we can use Entity Actions. These let us create an action and play it on an entity. This has been my go-to workaround for now, though it can still take a bit of work to chain multiple actions together.
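A rough sketch of what an Entity Action can look like; I'm using SpinAction here as a stand-in, so double-check the exact initializer parameters against the RealityKit docs for your target SDK:

import RealityKit

// Spin an entity once around its local Y axis over two seconds.
func spin(_ entity: Entity) throws {
    let action = SpinAction(revolutions: 1, localAxis: [0, 1, 0])
    let animation = try AnimationResource.makeActionAnimation(for: action,
                                                              duration: 2.0,
                                                              bindTarget: .transform)
    entity.playAnimation(animation)
}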
May ’25
Reply to Collision Detection Fails After Anchoring ModelEntity to Hand in VisionOS
By default, AnchorEntity and entities with an AnchoringComponent exist in their own physics simulation space, so they don't collide with other physics objects. We can opt out of this by setting physicsSimulation to .none:

let handAnchor = AnchorEntity(.hand(.left, location: .palm), trackingMode: .continuous)

// Add this line
handAnchor.anchoring.physicsSimulation = .none

handAnchor.addChild(box)
content.add(handAnchor) // Add the hand anchor to the scene.
May ’25
Reply to visionos Create a window, set the window size, tilt angle, and position
A few tips for you.

Placing windows

We can set the default position of a new window. We can position it relative to other windows or use the utility panel placement. If you want to open a small window below the user, tilted up to face them, then you want .utilityPanel. Example:

WindowGroup(id: "Rocks") {
    Text("🪨🪨🪨")
        .font(.system(size: 48))
}
.defaultSize(CGSize(width: 300, height: 100))
.defaultWindowPlacement { _, context in
    return WindowPlacement(.utilityPanel)
}

If you want to place a window relative to another window, you can do something like this:

// Main window
WindowGroup {
    ContentView()
}
.defaultSize(width: 500, height: 500)

WindowGroup(id: "YellowFlower") {
    Text("🌼")
        .font(.system(size: 128))
}
.defaultSize(CGSize(width: 300, height: 200))
.defaultWindowPlacement { _, context in
    if let mainWindow = context.windows.first {
        return WindowPlacement(.leading(mainWindow))
    }
    return WindowPlacement(.none)
}

Unfortunately, as of visionOS 2, we can't move or reposition windows after they are already open. The placements above are applied once, when a new window opens.

Sizing windows

You can control the default size like this:

WindowGroup(id: "YellowFlower") {
    YellowFlowerView()
}
.defaultSize(CGSize(width: 600, height: 600))

If you need more control, you can use .contentSize to make the window adapt to the size of the view/content. You can even adjust the size of the view and the window will adapt.

WindowGroup(id: "YellowFlower") {
    YellowFlowerView()
        .frame(minWidth: 500, maxWidth: 700, minHeight: 500, maxHeight: 700)
}
.windowResizability(.contentSize)
.defaultSize(CGSize(width: 600, height: 600))

Hope this helps!
May ’25
Reply to Launching a Unity fully immersive game from SwiftUI
I'm not working with Unity, so take this with a grain of salt. You might look at the Info.plist to make sure the app can open immersive spaces. Xcode sets this automatically when creating projects from one of the visionOS templates, but if you or Unity created the project manually, this may need to be added. This is roughly what it looks like in a very simple app that can support multiple windows and immersive spaces. More info: https://developer.apple.com/documentation/BundleResources/Information-Property-List/UIApplicationSceneManifest/UISceneConfigurations/UISceneSessionRoleImmersiveSpaceApplication
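A rough sketch of the relevant Info.plist entries, based on the keys in the linked documentation; the contents of the immersive space session role entry may differ for your project:

<key>UIApplicationSceneManifest</key>
<dict>
    <key>UIApplicationSupportsMultipleScenes</key>
    <true/>
    <key>UISceneConfigurations</key>
    <dict>
        <key>UISceneSessionRoleImmersiveSpaceApplication</key>
        <array>
            <dict>
                <!-- Placeholder: initial immersive space style; adjust as needed -->
                <key>UISceneInitialImmersiveSpaceStyle</key>
                <string>UIImmersiveSpaceStyleMixed</string>
            </dict>
        </array>
    </dict>
</dict>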
Jun ’25
Reply to Issue: Closing Bounded Volume Never Re-Opens
I'm not working with Unity, but it sounds like a similar issue to one we have with native windows and volumes: visionOS will open the last opened "scene". † You may want to check with Unity support to see if they have bridged the APIs needed to manage scene state. If you have access to the Swift file that creates each window and volume, or the views at their root level, then you may be able to work with APIs like ScenePhase to monitor the status of each scene. I do this in one of my apps. The main content is in a volume, with a small utility window for editing. If the volume gets closed, the utility window can notice the change and offer a means to reopen the volume. Read about scene phase: https://developer.apple.com/documentation/swiftui/scenephase Basic example of using scene phase to track the state of two windows: https://stepinto.vision/example-code/how-to-use-scene-phase-to-track-and-manage-window-state/ † scene in this case means Windows, Volumes, and Spaces, not 3D scene graphs
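A minimal sketch of that pattern, with a hypothetical AppModel (injected via .environment) used to share the volume's state with the utility window:

import SwiftUI
import Observation

@Observable
class AppModel {
    var volumeIsOpen = false
}

struct VolumeRootView: View {
    @Environment(\.scenePhase) private var scenePhase
    @Environment(AppModel.self) private var appModel

    var body: some View {
        Text("Volume content")
            .onChange(of: scenePhase) { _, newPhase in
                // When the user closes the volume, its scene phase leaves .active.
                appModel.volumeIsOpen = (newPhase == .active)
            }
    }
}

The utility window can then watch appModel.volumeIsOpen and show an "Open volume" button that calls openWindow when it becomes false.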
Jun ’25
Reply to Vision OS: HUD mode windows
@DTS Engineer is correct that DeviceAnchor is the way to go. You asked about having windows track head movement. That isn't possible, but you can get pretty close to the behavior you are looking for by providing an attachment with the SwiftUI view, then using DeviceAnchor to move the attachment entity. Bonus tips:
Use BillboardComponent to make the attachment face the user head-on.
Use move(to:) with a slight delay so the attachment entity moves smoothly with the user. This creates a sort of "rubber band" effect that is a lot more pleasing than a HUD that is fixed in place. Fixed HUDs can feel both laggy and claustrophobic.
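A minimal sketch of the DeviceAnchor part, assuming you already have a running ARKitSession with a WorldTrackingProvider and an attachment entity in the scene (hudEntity is a placeholder name):

import ARKit
import RealityKit
import QuartzCore

// Call this periodically (for example, from a RealityView update closure or a timer).
func updateHUD(worldTracking: WorldTrackingProvider, hudEntity: Entity) {
    guard let deviceAnchor = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else { return }

    // Start from the device pose, then push the target half a meter forward (-Z).
    var target = Transform(matrix: deviceAnchor.originFromAnchorTransform)
    let forward = -deviceAnchor.originFromAnchorTransform.columns.2
    target.translation += SIMD3<Float>(forward.x, forward.y, forward.z) * 0.5

    // Animate toward the target so the HUD lags slightly, giving the rubber-band feel.
    hudEntity.move(to: target, relativeTo: nil, duration: 0.4, timingFunction: .easeInOut)
}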
Jun ’25
Reply to Alternatives to SceneView
RealityKit and RealityView are my suggestions. I think you were on the right track with gestures + RealityView ("I tried to add Gestures to the RealityView on iOS - loading USDZ 3D models worked but the gestures didn't"). We can use SwiftUI gestures with RealityKit entities: TapGesture, DragGesture, etc. There is a bit of work needed to make these work with RealityKit.

1. Load your model in a RealityView as an entity.
2. Add components to the entity: InputTargetComponent and CollisionComponent are both required to use system gestures with entities.
3. The gesture code needs to target entities.

Example using targetedToAnyEntity:

var tapExample: some Gesture {
    TapGesture()
        .targetedToAnyEntity() // 3. make sure to use this line to target entities
        .onEnded { value in
            if selected === value.entity {
                // If the same entity is tapped, lower it and deselect.
                selected?.position.y = 0
                selected = nil
            } else {
                // Lower the previously selected entity (if any).
                selected?.position.y = 0
                // Raise the new entity and select it.
                value.entity.position.y = 0.1
                selected = value.entity
            }
        }
}
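And a minimal sketch of steps 1 and 2, with "MyModel" as a placeholder asset name and a simple box collision shape that you would size to your model:

import SwiftUI
import RealityKit

struct GestureSetupExample: View {
    var body: some View {
        RealityView { content in
            // 1. Load the model as an entity.
            if let model = try? await Entity(named: "MyModel") {
                // 2. Both components are required for system gestures to hit the entity.
                model.components.set(InputTargetComponent())
                model.components.set(CollisionComponent(shapes: [.generateBox(size: [0.2, 0.2, 0.2])]))
                content.add(model)
            }
        }
        // 3. Attach the gesture from above, e.g. .gesture(tapExample)
    }
}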
Jun ’25
Reply to Can we access a "Locked in Place" value when a window has been locked without being snapped to a surface?
@Vision Pro Engineer
"Locking in a window in place shouldn't result in visual changes within your application"
"so locking-in-place should not be special-cased"
These are just your opinions though, and I disagree. I have valid uses where this would make sense, and I described one of them in my earlier comment. I have an app where a certain type of window has what I call "focus mode". With visionOS 26 window locking, I would like to enable focus mode for any locked window. Currently, the user has to lock the window and then use controls inside the window to enable focus mode. It would be better if I could offer them the option to enable focus mode when the window is locked. I don't understand why you're pushing back on this. It's a simple feature request for a value that we can already get for snapped windows. Why do you think we shouldn't be able to access the same value for unsnapped, free-floating windows?
Jun ’25
Reply to Can we access a "Locked in Place" value when a window has been locked without being snapped to a surface?
@Vision Pro Engineer I'm not talking about interrupting or overriding scene restoration. I'm talking about adapting the window that the user is currently using with other visionOS and SwiftUI features. In terms of this request, "I'd still like to understand why this would be better. I could imagine use cases in which someone might prefer to have your application locked in place, but not in the focus mode." This is about user choice. I want to give them the option. I'm not talking about limiting focus mode to locked windows only. But for a lot of uses, it would be a better user experience to auto-enable focus mode for locked windows. Without this, the user has to (1) lock the window, then (2) manually enable focus mode. Giving them the option to do both of these by performing one action is better than always requiring them to do both. I can already deliver the experience that I want with the existing APIs for snapped windows. I want to provide the same experience for free-floating windows. Regardless of intention, users will think about locked windows in exactly the same way they think about snapped windows. The only difference between the two is that one is bound to a surface and the other is free-floating. Other than that, they should behave exactly the same. Without access to this value, they will always have to be treated differently.
Jun ’25