Reply to Launching a timeline on a specific model via notification
Boosting this because I would love to know too. When creating timelines, all the actions in the timeline specify the entity they work on. I don't know if there is a way to do this, but I hope there is. In the meantime, if you want to perform these actions in code without Timelines, we can use Entity Actions. These let us create an action and call it on an entity, as sketched below. This has been my go-to workaround for now, though it can still take a bit of work to chain multiple actions together.
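For reference, here is a minimal sketch of that workaround. It assumes RealityKit's built-in FromToByAction and AnimationResource.makeActionAnimation APIs; the exact parameters are from memory, so double-check them against the docs:

import RealityKit

// Move an entity up by half a meter using an Entity Action instead of a timeline.
func raiseEntity(_ entity: Entity) throws {
    // FromToByAction describes the change we want applied to the transform.
    let raise = FromToByAction<Transform>(by: Transform(translation: [0, 0.5, 0]))

    // Wrap the action in an animation resource and play it on the target entity.
    let animation = try AnimationResource.makeActionAnimation(for: raise, duration: 1.0, bindTarget: .transform)
    entity.playAnimation(animation)
}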
May ’25
Reply to ARKit Planes do not appear where expected on visionOS
I think I figured out what I was doing wrong. I was using the extents of the anchor to create meshes, then placing the meshes at the transform for the anchor. I was expecting an anchor plane to be something I could turn into a geometric plane. Diving deeper into it, these anchors are not planes in the sense that a plane mesh is. They are actually n-gons that don't necessarily line up with the shape of a plane. Apple has an example project that creates these but applies an occlusion material. I swapped that for a material with random colors so I could visualize what is happening. Each anchor has an n-gon, represented with meshVertices. The example project used some extensions to turn that data into shapes for the meshes. Personally, I found the example project difficult to understand. It has way too much abstraction and stashes the good stuff in extensions. Here is a modified version of the example from the first post, without the abstractions from the Apple example project. I'd love to hear any more ideas from you folks. How can we improve this?

import SwiftUI
import RealityKit
import RealityKitContent
import ARKit

struct Example068: View {
    @State var session = ARKitSession()
    @State private var planeAnchors: [UUID: Entity] = [:]
    @State private var planeColors: [UUID: Color] = [:]

    var body: some View {
        RealityView { content in
        } update: { content in
            for (_, entity) in planeAnchors {
                if !content.entities.contains(entity) {
                    content.add(entity)
                }
            }
        }
        .task {
            try! await setupAndRunPlaneDetection()
        }
    }

    func setupAndRunPlaneDetection() async throws {
        let planeData = PlaneDetectionProvider(alignments: [.horizontal, .vertical, .slanted])
        if PlaneDetectionProvider.isSupported {
            do {
                try await session.run([planeData])
                for await update in planeData.anchorUpdates {
                    switch update.event {
                    case .added, .updated:
                        let anchor = update.anchor
                        if planeColors[anchor.id] == nil {
                            planeColors[anchor.id] = generatePastelColor()
                        }
                        let planeEntity = createPlaneEntity(for: anchor, color: planeColors[anchor.id]!)
                        planeAnchors[anchor.id] = planeEntity
                    case .removed:
                        let anchor = update.anchor
                        planeAnchors.removeValue(forKey: anchor.id)
                        planeColors.removeValue(forKey: anchor.id)
                    }
                }
            } catch {
                print("ARKit session error \(error)")
            }
        }
    }

    private func generatePastelColor() -> Color {
        let hue = Double.random(in: 0...1)
        let saturation = Double.random(in: 0.2...0.4)
        let brightness = Double.random(in: 0.8...1.0)
        return Color(hue: hue, saturation: saturation, brightness: brightness)
    }

    private func createPlaneEntity(for anchor: PlaneAnchor, color: Color) -> Entity {
        let entity = Entity()
        entity.name = "Plane \(anchor.id)"
        entity.setTransformMatrix(anchor.originFromAnchorTransform, relativeTo: nil)

        // Generate a mesh from the anchor's n-gon geometry.
        var meshResource: MeshResource? = nil
        do {
            var contents = MeshResource.Contents()
            contents.instances = [MeshResource.Instance(id: "main", model: "model")]
            var part = MeshResource.Part(id: "part", materialIndex: 0)

            // Convert the anchor's vertices to SIMD3<Float>
            let vertices = anchor.geometry.meshVertices
            var vertexArray: [SIMD3<Float>] = []
            for i in 0..<vertices.count {
                let vertex = vertices.buffer.contents()
                    .advanced(by: vertices.offset + vertices.stride * i)
                    .assumingMemoryBound(to: (Float, Float, Float).self).pointee
                vertexArray.append(SIMD3<Float>(vertex.0, vertex.1, vertex.2))
            }
            part.positions = MeshBuffers.Positions(vertexArray)

            // Convert the face indices to UInt32
            let faces = anchor.geometry.meshFaces
            var faceArray: [UInt32] = []
            let totalFaces = faces.count * faces.primitive.indexCount
            for i in 0..<totalFaces {
                let face = faces.buffer.contents()
                    .advanced(by: i * MemoryLayout<Int32>.size)
                    .assumingMemoryBound(to: Int32.self).pointee
                faceArray.append(UInt32(face))
            }
            part.triangleIndices = MeshBuffer(faceArray)

            contents.models = [MeshResource.Model(id: "model", parts: [part])]
            meshResource = try MeshResource.generate(from: contents)
        } catch {
            print("Failed to create a mesh resource for a plane anchor: \(error).")
        }

        // Use a random pastel color instead of an occlusion material so the planes are visible.
        var material = PhysicallyBasedMaterial()
        material.baseColor.tint = UIColor(color)

        if let meshResource {
            entity.components.set(ModelComponent(mesh: meshResource, materials: [material]))
        }

        return entity
    }
}

#Preview {
    Example068()
}
Apr ’25
Reply to [VisionPro] Placing virtual entities around my arm
Hi, we can use ARKit hand anchors to attach items to the user's hands. There are a ton of anchors: each finger, the palm, the wrist, etc. But I don't think we get access to the arms, only the hands. If you need more information on hands, check out these two posts on the forum:
https://developer.apple.com/forums/thread/774522?answerId=825213022#825213022
https://developer.apple.com/forums/thread/776079?answerId=828469022#828469022

As for the rendering issue you mentioned, where the arms occlude virtual content: can you try using the upperLimbVisibility modifier on your RealityView? Try setting it to hidden to see if that helps.

.upperLimbVisibility(.automatic) // default
.upperLimbVisibility(.hidden)
.upperLimbVisibility(.visible)
Topic: Spatial Computing SubTopic: ARKit
Mar ’25
Reply to I am developing a Immersive Video App for VisionOs but I got a issue regarding app and video player window
Please see my answer on this post, where the user was having some of the same issues: https://developer.apple.com/forums/thread/777567 If you want to keep the main window but play a video in another window (with a space active or not), you could use pushWindow. This will "replace" the main window while the pushed window is open, then bring it back when you close it. You can see this behavior in Apple's Photos app: when you open a photo or video, the main window is hidden by the temporary pushed window. https://developer.apple.com/documentation/swiftui/environmentvalues/pushwindow
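Here is a minimal sketch of how that can look. The "video-player" id and PlayerView are placeholder names for illustration:

import SwiftUI

// In the App: declare the window that will be pushed over the main window.
// WindowGroup(id: "video-player") { PlayerView() }

// In a view inside the main window:
struct PlayButton: View {
    @Environment(\.pushWindow) private var pushWindow

    var body: some View {
        Button("Play Video") {
            // The main window hides while this window is open and comes back when it closes.
            pushWindow(id: "video-player")
        }
    }
}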
Topic: Spatial Computing SubTopic: General
Mar ’25
Reply to How to Move and Rotate WindowGroup with Code in Xcode
Unfortunately, that isn't possible in visionOS. We do not have programmatic access to window positions. Only the user can move or reposition windows and volumes. The closest we can get is setting an initial position using defaultWindowPlacement (sketched below). Many of us have already filed feedback requesting access to move windows and volumes. Hopefully we'll see some changes in visionOS 3. As a workaround, you could close your main window and use attachments when inside the immersive space. You can position attachments just like entities. However, this comes with more work: you'll need to decide how to move from the window to the space and back (scene phase can help here).
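A minimal sketch of that initial-placement approach, assuming visionOS 2; the "Main" window id and the utilityPanel position are just examples:

import SwiftUI

@main
struct ExampleApp: App {
    var body: some Scene {
        WindowGroup(id: "Main") {
            Text("Hello")
        }
        .defaultWindowPlacement { content, context in
            // Only the initial placement can be suggested; after that, positioning is up to the user.
            WindowPlacement(.utilityPanel)
        }
    }
}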
Mar ’25
Reply to Unable to Retain Main App Window State When Transitioning to Immersive Space
Unfortunately, there aren't too many great options for this in visionOS right now. I've approached this in two ways so far.

Idea 1: manage the opacity / alpha of the main window. Apple did this in the Hello World sample. This is only suitable for quick trips into a space where the user won't be interacting with much. They may still see the window bar even though the window contents are hidden. Some tips to improve this option:
- Use the plain window style to remove the glass background, then add your own glass background when you want to show content.
- Try using .persistentSystemOverlays(.hidden) to hide the window bar while you're in the space.
A rough sketch of this idea follows below.

Idea 2: keep track of window state and restore it when reopening the main window. You could keep track of navigation history, scroll position, view state, etc. This could be a great option and is less of a hack, but if your window has very complex views and hierarchy, it could end up being a lot of work.
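Here is a minimal sketch of Idea 1, assuming a shared observable AppModel with an immersiveSpaceOpen flag (both names are hypothetical):

import SwiftUI
import Observation

@Observable
class AppModel {
    var immersiveSpaceOpen = false
}

struct MainWindowView: View {
    @Environment(AppModel.self) private var appModel

    var body: some View {
        VStack {
            Text("Main window content")
        }
        // Hide the window contents while the immersive space is open.
        .opacity(appModel.immersiveSpaceOpen ? 0 : 1)
        // Also hide the window bar so the "empty" window is less noticeable.
        .persistentSystemOverlays(appModel.immersiveSpaceOpen ? .hidden : .automatic)
    }
}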
Topic: Spatial Computing SubTopic: General
Mar ’25
Reply to The occulution relationship of virtual content and real object
We can use OcclusionMaterial to solve issues like this. Essentially, we use ARKit features to get meshes that describe the real-world environment, then assign OcclusionMaterial to them (a small sketch is at the end of this reply). Apple has a neat example app you can try and a short tutorial that describes the process: Obscuring virtual items in a scene behind real-world items.

Bonus: you can also create your own material using ShaderGraph in Reality Composer Pro. There are two nodes we can use:
- Occlusion Surface
- Shadow Receiving Occlusion Surface - use this one if your app needs to cast shadows or shine virtual lights on the environment.
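Here is a minimal sketch of the OcclusionMaterial approach. The box mesh is only a stand-in for geometry you would normally get from ARKit (plane detection or scene reconstruction):

import RealityKit

// An invisible entity that hides virtual content positioned behind it;
// passthrough shows through where the occluder is.
let occluder = ModelEntity(
    mesh: .generateBox(width: 1.0, height: 0.05, depth: 1.0),  // stand-in for a real surface, e.g. a table top
    materials: [OcclusionMaterial()]
)
occluder.position = [0, 0.8, -1]  // roughly where the real surface is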
Topic: Spatial Computing SubTopic: ARKit
Mar ’25
Reply to Difference between Head and Device tracking on visionOS
Apple has a neat sample project that shows how to have an entity follow head movement. It touches on the differences between the AnchorEntity and the DeviceAnchor. https://developer.apple.com/documentation/visionos/placing-entities-using-head-and-device-transform Hopefully, visionOS 3 will bring SpatialTrackingSession data to the head AnchorEntity position, just like we have with hand anchors now. (Feedback: FB16870381)
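For quick reference, here is a rough sketch of the two approaches as I understand them; treat the details as assumptions and lean on the sample project for specifics:

import SwiftUI
import RealityKit
import ARKit
import QuartzCore

// 1. RealityKit: a head AnchorEntity keeps content attached to the head,
//    but it doesn't currently expose its transform the way hand anchors do.
func addHeadAnchoredEntity(_ child: Entity, to content: RealityViewContent) {
    let headAnchor = AnchorEntity(.head)
    headAnchor.addChild(child)
    content.add(headAnchor)
}

// 2. ARKit: query the DeviceAnchor for the actual head transform
//    (the WorldTrackingProvider must already be running in an ARKitSession).
func currentDeviceTransform(from worldTracking: WorldTrackingProvider) -> simd_float4x4? {
    guard let deviceAnchor = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
        return nil
    }
    return deviceAnchor.originFromAnchorTransform
}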
Topic: Spatial Computing SubTopic: ARKit
Mar ’25
Reply to App Window Closure Sequence Impacts Main Interface Reload Behavior
I had a similar issue in my app (Project Graveyard), which has a main volume and a utility window for editing content. I solved this by using some shared state (AppModel) and ScenePhase. What I ended up with was the ability to reopen the main window from the utility window OR open the utility window from the main window.

The first thing to keep in mind is that ScenePhase works differently when used at the app level (some Scene) vs. when using it in a view inside a window, volume, or space. visionOS has a lot of bugs (reported) around the app-level use. I was able to create my solution by using ScenePhase in my views and sharing some state in the AppModel. Here is a breakdown.

Add to AppModel:

var mainWindowOpen: Bool = true
var yellowFlowerOpen: Bool = false

In the root view of the main window (ContentView in this case):

@Environment(\.scenePhase) private var scenePhase

Then listen for scenePhase using onChange, writing to the mainWindowOpen bool on appModel:

.onChange(of: scenePhase, initial: true) {
    switch scenePhase {
    case .inactive, .background:
        appModel.mainWindowOpen = false
    case .active:
        appModel.mainWindowOpen = true
    @unknown default:
        appModel.mainWindowOpen = false
    }
}

We do the same thing in the root view for the other window (or volume):

@Environment(\.scenePhase) private var scenePhase

Then listen to scene phase:

.onChange(of: scenePhase, initial: true) {
    switch scenePhase {
    case .inactive, .background:
        appModel.yellowFlowerOpen = false
    case .active:
        appModel.yellowFlowerOpen = true
    @unknown default:
        appModel.yellowFlowerOpen = false
    }
}

You can download this as an Xcode project if you want to try it out before reproducing it yourself: https://github.com/radicalappdev/Step-Into-Example-Projects/tree/main/Garden06 There is also a video available on my website (I really wish we could upload short videos here): https://stepinto.vision/example-code/how-to-use-scene-phase-to-track-and-manage-window-state/
Topic: Spatial Computing SubTopic: General
Mar ’25
Reply to ARKit hand tracking
Hands on visionOS are represented as a series of anchors, and there are a lot of them! Each anchor has a transform. These anchors are located around the hand at positions like the palm, wrist, index finger tip, etc. There are two main ways we can work with these anchors.

Option 1: Anchoring Component (or AnchorEntity) + Spatial Tracking Session

This is a simple way to start working with the transforms for one or more AnchorEntities.

Start a Spatial Tracking Session. This will enable access to the transforms of our anchors.

let configuration = SpatialTrackingSession.Configuration(tracking: [.hand])
let session = SpatialTrackingSession()
await session.run(configuration)

Add an anchor. This example adds an anchor to the left hand index finger.

if let leftHandSphere = scene.findEntity(named: "LeftHand") {
    let leftHand = AnchorEntity(.hand(.left, location: .indexFingerTip))
    leftHand.addChild(leftHandSphere)
    content.add(leftHand)
}

Access the transform:

let leftIndexTransform = Transform(matrix: anchor.transformMatrix(relativeTo: nil))

Important: if you are using collisions or physics, you will also want to disable the default physics simulation for the anchor. Without this, the hand anchor won't be able to collide with other entities in the scene.

leftIndexAnchor.anchoring.physicsSimulation = .none

Option 2: Use ARKit directly

This is a bit more involved, but gives us more control. This example uses an ARKitSession and adds a sphere to each finger tip.

struct Example017: View {
    let arSession = ARKitSession()
    let handTrackingProvider = HandTrackingProvider()
    let leftCollection = Entity()
    let rightCollection = Entity()

    let tipJoints: [HandSkeleton.JointName] = [
        .thumbTip, .indexFingerTip, .middleFingerTip, .ringFingerTip, .littleFingerTip
    ]

    var body: some View {
        RealityView { content in
            content.add(leftCollection)
            content.add(rightCollection)

            if let scene = try? await Entity(named: "HandTrackingLabs", in: realityKitContentBundle) {
                content.add(scene)

                if let leftHandSphere = scene.findEntity(named: "StepSphereBlue") {
                    // Create clones of the left hand sphere for each joint
                    for jointName in tipJoints {
                        let sphere = leftHandSphere.clone(recursive: true)
                        sphere.name = jointName.description
                        leftCollection.addChild(sphere)
                    }
                    leftHandSphere.isEnabled = false
                }

                if let rightHandSphere = scene.findEntity(named: "StepSphereGreen") {
                    // Create clones of the right hand sphere for each joint
                    for jointName in tipJoints {
                        let sphere = rightHandSphere.clone(recursive: true)
                        sphere.name = jointName.description
                        rightCollection.addChild(sphere)
                    }
                    rightHandSphere.isEnabled = false
                }
            }
        }
        .persistentSystemOverlays(.hidden)
        .task {
            try! await arSession.run([handTrackingProvider])
        }
        // Left Hand: Receive updates from the provider and process them over time
        .task {
            for await update in handTrackingProvider.anchorUpdates where update.anchor.chirality == .left {
                let handAnchor = update.anchor
                for jointName in tipJoints {
                    if let joint = handAnchor.handSkeleton?.joint(jointName),
                       let sphere = leftCollection.findEntity(named: jointName.description) {
                        let transform = handAnchor.originFromAnchorTransform
                        let jointTransform = joint.anchorFromJointTransform
                        sphere.setTransformMatrix(transform * jointTransform, relativeTo: nil)
                    }
                }
            }
        }
        // Right Hand: Receive updates from the provider and process them over time
        .task {
            for await update in handTrackingProvider.anchorUpdates where update.anchor.chirality == .right {
                let handAnchor = update.anchor
                for jointName in tipJoints {
                    if let joint = handAnchor.handSkeleton?.joint(jointName),
                       let sphere = rightCollection.findEntity(named: jointName.description) {
                        let transform = handAnchor.originFromAnchorTransform
                        let jointTransform = joint.anchorFromJointTransform
                        sphere.setTransformMatrix(transform * jointTransform, relativeTo: nil)
                    }
                }
            }
        }
    }
}

Resources:
- AnchorEntity https://developer.apple.com/documentation/realitykit/anchorentity
- SpatialTrackingSession https://developer.apple.com/documentation/RealityKit/SpatialTrackingSession
- ARKit Hand Tracking from Apple https://developer.apple.com/documentation/visionos/tracking-and-visualizing-hand-movement

I have several examples using AnchorEntity and Spatial Tracking Session on my site: https://stepinto.vision/learn-visionos/#hands
Topic: Spatial Computing SubTopic: ARKit
Mar ’25
Reply to ECS and array of gestures
You can use gestures in your systems. Apple has a couple of example projects that show methods for creating components and systems that work with SwiftUI gestures:
- Transforming RealityKit entities using gestures: https://developer.apple.com/documentation/realitykit/transforming-realitykit-entities-with-gestures
- Simulating particles in your visionOS app (gestures aren't the focus here, but it has an interesting system): https://developer.apple.com/documentation/realitykit/simulating-particles-in-your-visionos-app
Both of these have been helpful in learning how to use gestures from within a system.
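For what it's worth, the general pattern I took from those samples looks roughly like this. Every type and property name below is hypothetical; the idea is that a SwiftUI gesture writes into a component and a System consumes it each frame:

import RealityKit

// A component the gesture writes into.
struct DragTargetComponent: Component {
    var targetPosition: SIMD3<Float>? = nil
}

// A system that eases any entity with the component toward the last reported gesture position.
struct DragFollowSystem: System {
    static let query = EntityQuery(where: .has(DragTargetComponent.self))

    init(scene: RealityKit.Scene) {}

    func update(context: SceneUpdateContext) {
        for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
            guard let target = entity.components[DragTargetComponent.self]?.targetPosition else { continue }
            // Ease toward the target instead of snapping to it.
            entity.position += (target - entity.position) * 0.2
        }
    }
}

// At startup: DragTargetComponent.registerComponent(); DragFollowSystem.registerSystem()
// In a view, the DragGesture's onChanged only updates the component, e.g.:
//   value.entity.components[DragTargetComponent.self]?.targetPosition =
//       value.convert(value.location3D, from: .global, to: value.entity.parent!)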
Topic: Spatial Computing SubTopic: General
Feb ’25
Reply to A question about interacting with entity
We can use ARKit hand tracking, or use AnchorEntity with SpatialTrackingSession. Here is an example with SpatialTrackingSession. It adds some anchors to the user's hands, then enables those anchors to collide with other entities in the scene. Once you detect the collisions, you can execute some code to show your window or attachment. See the docs for Spatial Tracking Session and Anchor Entity.

Important: make sure to set this value to .none or the anchor will not be able to interact with other entities.

leftIndexAnchor.anchoring.physicsSimulation = .none

This example uses trigger collisions instead of physics. The entities were created in Reality Composer Pro, then loaded in the RealityView.

struct Example021: View {
    var body: some View {
        RealityView { content in
            if let scene = try? await Entity(named: "HandTrackingLabs", in: realityKitContentBundle) {
                content.add(scene)

                // 1. Set up a Spatial Tracking Session with hand tracking.
                // This will add ARKit features to our Anchor Entities, enabling collisions.
                let configuration = SpatialTrackingSession.Configuration(tracking: [.hand])
                let session = SpatialTrackingSession()
                await session.run(configuration)

                if let subject = scene.findEntity(named: "StepSphereRed"),
                   let stepSphereBlue = scene.findEntity(named: "StepSphereBlue"),
                   let stepSphereGreen = scene.findEntity(named: "StepSphereGreen") {

                    content.add(subject)

                    // 2. Create an anchor for the left index finger
                    let leftIndexAnchor = AnchorEntity(.hand(.left, location: .indexFingerTip), trackingMode: .continuous)

                    // 3. Disable the default physics simulation on the anchor
                    leftIndexAnchor.anchoring.physicsSimulation = .none

                    // 4. Add the sphere to the anchor and add the anchor to the scene graph
                    leftIndexAnchor.addChild(stepSphereBlue)
                    content.add(leftIndexAnchor)

                    // Repeat the same steps for the right index finger
                    let rightIndexAnchor = AnchorEntity(.hand(.right, location: .indexFingerTip), trackingMode: .continuous)
                    rightIndexAnchor.anchoring.physicsSimulation = .none
                    rightIndexAnchor.addChild(stepSphereGreen)
                    content.add(rightIndexAnchor)

                    // Example 1: Any entity can collide with any entity. Fire a particle burst.
                    // Allow collision between the hand anchors.
                    // Allow collision between a hand anchor and the subject.
                    _ = content.subscribe(to: CollisionEvents.Began.self) { collisionEvent in
                        print("Collision unfiltered \(collisionEvent.entityA.name) and \(collisionEvent.entityB.name)")
                        collisionEvent.entityA.components[ParticleEmitterComponent.self]?.burst()
                    }

                    // Example 2: Only track collisions on the subject. Swap the color of the material based on left or right hand.
                    _ = content.subscribe(to: CollisionEvents.Began.self, on: subject) { collisionEvent in
                        print("Collision Subject Color Change \(collisionEvent.entityA.name) and \(collisionEvent.entityB.name)")
                        if collisionEvent.entityB.name == "StepSphereBlue" {
                            swapColorEntity(subject, color: .stepBlue)
                        } else if collisionEvent.entityB.name == "StepSphereGreen" {
                            swapColorEntity(subject, color: .stepGreen)
                        }
                    }
                }
            }
        }
    }

    func swapColorEntity(_ entity: Entity, color: UIColor) {
        if var mat = entity.components[ModelComponent.self]?.materials.first as? PhysicallyBasedMaterial {
            mat.baseColor = .init(tint: color)
            entity.components[ModelComponent.self]?.materials[0] = mat
        }
    }
}
Topic: Spatial Computing SubTopic: ARKit
Feb ’25
Reply to DragGesture that pivots with the user in visionOS
I found an alternative method for this in the particle example project. Instead of using value.gestureValue.translation3D to move the entity, this version uses value.location3D and value.startLocation3D. It's not quite as good as the gesture Apple uses on windows and volumes, but it is far better than what I've been using until now. I'd love to hear any ideas for how to improve this.

struct Example046: View {
    var body: some View {
        RealityView { content in
            if let scene = try? await Entity(named: "GestureLabs", in: realityKitContentBundle) {
                content.add(scene)

                // Lower the entire scene to the bottom of the volume
                scene.position = [1, 1, -1.5]
            }
        }
        .modifier(DragGestureWithPivot046())
    }
}

fileprivate struct DragGestureWithPivot046: ViewModifier {
    @State var isDragging: Bool = false
    @State var initialPosition: SIMD3<Float> = .zero

    func body(content: Content) -> some View {
        content
            .gesture(
                DragGesture()
                    .targetedToAnyEntity()
                    .onChanged { value in
                        // When we start the gesture, cache the entity position
                        if !isDragging {
                            isDragging = true
                            initialPosition = value.entity.position
                        }

                        guard let entityParent = value.entity.parent else { return }

                        // The current location: where we are in the gesture
                        let gesturePosition = value.convert(value.location3D, from: .global, to: entityParent)

                        // Minus the start location of the gesture
                        let deltaPosition = gesturePosition - value.convert(value.startLocation3D, from: .global, to: entityParent)

                        // Plus the initial position of the entity
                        let newPos = initialPosition + deltaPosition

                        // Optional: using move(to:) to smooth out the movement
                        let newTransform = Transform(
                            scale: value.entity.scale,
                            rotation: value.entity.orientation,
                            translation: newPos
                        )
                        value.entity.move(to: newTransform, relativeTo: entityParent, duration: 0.1)

                        // Or set the position directly
                        // value.entity.position = newPos
                    }
                    .onEnded { value in
                        // Clean up when the gesture has ended
                        isDragging = false
                        initialPosition = .zero
                    }
            )
    }
}
Topic: Spatial Computing SubTopic: General
Feb ’25
Reply to Reading scenePhase from custom Scene
Are you using scene phase in the extra window too? You have to implement it separately in each window. The code above only showed it in the MyScene window. I like to set up a central bit of state to track the open status of my scenes. I made an example of this a while back. Hope it helps! https://github.com/radicalappdev/Step-Into-Example-Projects/tree/main/Garden06
Topic: Spatial Computing SubTopic: General
Feb ’25