
Reply to How can we move the player within a RealityKit/RealityView scene?
I haven't found a way to gain access to the camera or player entity from the context of RealityView. In the meantime, I put together a quick demo that moves content in a USDA scene to the player. Move the world instead of the player.

```swift
struct Lab5017: View {
    @State var selected: String = "Tap Something"
    @State var sceneContent: Entity?
    @State var sceneContentPosition: SIMD3<Float> = [0, 0, 0]

    var tap: some Gesture {
        SpatialTapGesture()
            .targetedToAnyEntity()
            .onEnded { value in
                selected = value.entity.name

                // Calculate the vector from the origin to the tapped position
                let vectorToTap = value.entity.position

                // Normalize the vector to get a direction from the origin to the tapped position
                let direction = normalize(vectorToTap)

                // Calculate the distance (or magnitude) between the origin and the tapped position
                let distance = length(vectorToTap)

                // Calculate the new position by inverting the direction multiplied by the distance
                let newPosition = -direction * distance

                // Update sceneContentPosition's X and Z components, leave Y as it is
                sceneContentPosition.x = newPosition.x
                sceneContentPosition.z = newPosition.z
            }
    }

    var body: some View {
        RealityView { content, attachments in
            if let model = try? await Entity(named: "5017Move", in: realityKitContentBundle) {
                content.add(model)

                // Get the scene content and stash it in state
                if let floorParent = model.findEntity(named: "SceneContent") {
                    sceneContent = floorParent
                    sceneContentPosition = floorParent.position
                }
            }

            // Position the attachment somewhere we can see it
            if let attachmentEntity = attachments.entity(for: "SelectedLabel") {
                attachmentEntity.position = [0.8, 1.5, -2]
                attachmentEntity.scale = [5, 5, 5]
                content.add(attachmentEntity)
            }
        } update: { content, attachments in
            // Update the position of scene content anytime we get a new position
            sceneContent?.position = sceneContentPosition
        } attachments: {
            Attachment(id: "SelectedLabel") {
                Text(selected)
                    .font(.largeTitle)
                    .padding(18)
                    .background(.black)
                    .cornerRadius(12)
            }
        }
        // The floor child entities can receive input, so this gesture will fire when we tap them
        .gesture(tap)
    }
}
```
Topic: Graphics & Games SubTopic: RealityKit Tags:
Jan ’24
Reply to How to get the floor plane with Spatial Tracking Session and Anchor Entity
@Vision Pro Engineer Hi, thanks for the response. I have a few questions and responses.

"it's usually better to put the code that configures the SpatialTrackingSession into a class instead of an @State property on your view"

Sure, I would do that in most apps. This is just an example where I was trying to keep everything in one file. Do you have any details on why it is better to place SpatialTrackingSession in an observable class instead of state on a view? Several of the WWDC sessions and examples store the session in the view, and I was using them as a starting point.

"SpatialTapGesture"

Just to clarify, that was just an example from "Deep dive into volumes and immersive spaces" (WWDC 2024). They showed using the tap gesture on an anchor but didn't show how the collision was created. I wasn't using a gesture in my scene at all. I was just using this as an example of something that obviously needed a collision shape, but the session obscured the details.

"this CollisionComponent(shapes: .init()) is creating a collision component with an empty array"

Yes, I was trying to create an empty collision component, then populate it later with a call to generateCollisionShapes:

```swift
event.anchor.generateCollisionShapes(recursive: true)
```

If I understand correctly, this doesn't work because the AnchorEntity is a point on a plane, not the plane itself. Is that correct?

Your hard-coded ShapeResource is interesting, but it doesn't help me create a collision shape that matches the physical floor in my office. It results in an arbitrary shape, positioned by a system I can't predict, to create a floor that may or may not cover the floor in the real room.

Is it possible to use an AnchorEntity (with SpatialTrackingSession) to get the plane/bounds/rect of the floor that visionOS detected? So far my guess is no. It seems like AnchorEntity is actually an arbitrary point/transform on the detected plane.
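For anyone landing here with the same question: one possible fallback (not what I was hoping AnchorEntity would provide) is to drop down to ARKit's PlaneDetectionProvider and read the extent of the detected floor plane yourself, then size a collision shape to match. A minimal sketch, assuming you already handle the world-sensing permission and pass in the entity you want to use as the floor collider:

```swift
import ARKit
import RealityKit

// Sketch: watch for the detected floor plane and fit a thin collision box to its extent.
// Requires world-sensing authorization (NSWorldSensingUsageDescription).
func trackFloorPlane(into floorEntity: Entity) async throws {
    let session = ARKitSession()
    let planeDetection = PlaneDetectionProvider(alignments: [.horizontal])
    try await session.run([planeDetection])

    for await update in planeDetection.anchorUpdates {
        let anchor = update.anchor
        guard anchor.classification == .floor else { continue }

        // Size a thin collision box to the detected plane's extent
        let extent = anchor.geometry.extent
        let shape = ShapeResource.generateBox(width: extent.width, height: 0.01, depth: extent.height)
        floorEntity.components.set(CollisionComponent(shapes: [shape], isStatic: true))

        // Position the entity using the anchor and extent transforms
        floorEntity.setTransformMatrix(
            anchor.originFromAnchorTransform * extent.anchorFromExtentTransform,
            relativeTo: nil
        )
    }
}
```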
Topic: Spatial Computing SubTopic: ARKit Tags:
Jan ’25
Reply to Cast virtual light on real-world environments in VisionOS/RealityKit?
By default, these dynamic lights will not affect the passthrough environment, but there is an interesting workaround we can use: give an entity a shader graph material that uses the "ShadowReceivingOcclusionSurface" node. This small demo scene has a 1 x 0.1 x 1 cube that is using that material. Then I dropped in a spotlight and some other cubes to block the light. The spotlight can only shine on surfaces that use the occlusion material. This works with shadows cast by virtual objects too, and it doesn't require the grounding shadow component. The challenge will be determining what mesh/object to use for the occlusion material. Depending on your use case, simple shapes may work, or you may want to use planes or the room mesh from ARKit.
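Here is a rough sketch of the RealityKit side, assuming the slab with the "ShadowReceivingOcclusionSurface" shader graph material was authored in Reality Composer Pro (the scene name below is a placeholder):

```swift
import SwiftUI
import RealityKit
import RealityKitContent

struct LitPassthroughView: View {
    var body: some View {
        RealityView { content in
            // Placeholder scene containing a thin slab whose shader graph material
            // uses the "ShadowReceivingOcclusionSurface" node
            guard let scene = try? await Entity(named: "PassthroughLightLab", in: realityKitContentBundle) else { return }
            content.add(scene)

            // Add a spotlight above the slab so the light pools on the occlusion surface
            let spotlight = Entity()
            var light = SpotLightComponent()
            light.intensity = 5000
            spotlight.components.set(light)
            spotlight.components.set(SpotLightComponent.Shadow())
            spotlight.look(at: [0, 0, -1], from: [0, 1.5, -1], relativeTo: nil)
            content.add(spotlight)
        }
    }
}
```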
Topic: Spatial Computing SubTopic: General Tags:
Jan ’25
Reply to How to move a camera in immersive space and render its output on 2D window using RealityKit
My understanding is that those cameras are not relevant for visionOS development. Those are used on iOS and other platforms. When I tried to do this last winter, it seemed that the only answer was to move the world around the user instead of moving the user around the world. Here is an excerpt from the code I came up with. This allows a user to tap on an entity and move to a new position, sort of like the "waypoint" teleportation that was common in VR games circa 2016-2017. This could be improved in lots of ways, for example by using SpatialEventGesture or SpatialTapGesture to get a more precise location.

```swift
struct Lab5017: View {
    @State var selected: String = "Tap Something"
    @State var sceneContent: Entity?
    @State var sceneContentPosition: SIMD3<Float> = [0, 0, 0]

    var tap: some Gesture {
        SpatialTapGesture()
            .targetedToAnyEntity()
            .onEnded { value in
                selected = value.entity.name

                // Calculate the vector from the origin to the tapped position
                let vectorToTap = value.entity.position

                // Normalize the vector to get a direction from the origin to the tapped position
                let direction = normalize(vectorToTap)

                // Calculate the distance (or magnitude) between the origin and the tapped position
                let distance = length(vectorToTap)

                // Calculate the new position by inverting the direction multiplied by the distance
                let newPosition = -direction * distance

                // Update sceneContentPosition's X and Z components, leave Y as it is
                sceneContentPosition.x = newPosition.x
                sceneContentPosition.z = newPosition.z
            }
    }

    var body: some View {
        RealityView { content, attachments in
            if let model = try? await Entity(named: "5017Move", in: realityKitContentBundle) {
                content.add(model)

                // Get the scene content and stash it in state
                if let floorParent = model.findEntity(named: "SceneContent") {
                    sceneContent = floorParent
                    sceneContentPosition = floorParent.position
                }
            }

            // Position the attachment somewhere we can see it
            if let attachmentEntity = attachments.entity(for: "SelectedLabel") {
                attachmentEntity.position = [0.8, 1.5, -2]
                attachmentEntity.scale = [5, 5, 5]
                content.add(attachmentEntity)
            }
        } update: { content, attachments in
            // Update the position of scene content anytime we get a new position
            sceneContent?.position = sceneContentPosition
        } attachments: {
            Attachment(id: "SelectedLabel") {
                Text(selected)
                    .font(.largeTitle)
                    .padding(18)
                    .background(.black)
                    .cornerRadius(12)
            }
        }
        // The floor child entities can receive input, so this gesture will fire when we tap them
        .gesture(tap)
    }
}
```
Topic: Spatial Computing SubTopic: General Tags:
Feb ’25
Reply to VisionOS hands replacement inside an Immersive Space
We can create features like this using hand anchors. I can show you a few details on the RealityKit side, but creating the 3D hand assets would need to be done in Blender or some other 3D modeling app. There are two ways to access these anchors:

Using ARKit: this is a good place to start.
Using AnchorEntity or AnchoringComponent to access hands: if all you need is to attach visual items to the hands, this option is great. You don't need to request permission to use these anchors unless you want additional tracking data like transforms, physics, or collisions.

Example: Create and customize entities for each primary location on the left hand

```swift
if let leftHandSphere = scene.findEntity(named: "StepSphereBlue") {

    let indexTipAnchor = AnchorEntity(.hand(.left, location: .indexFingerTip), trackingMode: .continuous)
    indexTipAnchor.addChild(leftHandSphere)
    content.add(indexTipAnchor)

    let palmAnchor = AnchorEntity(.hand(.left, location: .palm), trackingMode: .continuous)
    palmAnchor.addChild(leftHandSphere.clone(recursive: true))
    palmAnchor.position = [0, 0.05, 0]
    palmAnchor.scale = [3, 3, 3]
    content.add(palmAnchor)

    let thumbTipAnchor = AnchorEntity(.hand(.left, location: .thumbTip), trackingMode: .continuous)
    thumbTipAnchor.addChild(leftHandSphere.clone(recursive: true))
    content.add(thumbTipAnchor)

    let wristAnchor = AnchorEntity(.hand(.left, location: .wrist), trackingMode: .continuous)
    wristAnchor.addChild(leftHandSphere.clone(recursive: true))
    wristAnchor.scale = [3, 3, 3]
    content.add(wristAnchor)

    let aboveHandAnchor = AnchorEntity(.hand(.left, location: .aboveHand), trackingMode: .continuous)
    aboveHandAnchor.addChild(leftHandSphere.clone(recursive: true))
    aboveHandAnchor.scale = [2, 2, 2]
    content.add(aboveHandAnchor)
}
```

Example: Create an entity for each joint anchor on the right hand

```swift
if let rightHandSphere = scene.findEntity(named: "StepSphereGreen") {

    // In ARKit, joints are available as an enum: HandSkeleton.JointName.allCases
    // But in RealityKit we are not so lucky. Create an array of all joints to iterate over.
    let joints: [AnchoringComponent.Target.HandLocation.HandJoint] = [
        .forearmArm, .forearmWrist,
        .indexFingerIntermediateBase, .indexFingerIntermediateTip, .indexFingerKnuckle, .indexFingerMetacarpal, .indexFingerTip,
        .littleFingerIntermediateBase, .littleFingerIntermediateTip, .littleFingerKnuckle, .littleFingerMetacarpal, .littleFingerTip,
        .middleFingerIntermediateBase, .middleFingerIntermediateTip, .middleFingerKnuckle, .middleFingerMetacarpal, .middleFingerTip,
        .ringFingerIntermediateBase, .ringFingerIntermediateTip, .ringFingerKnuckle, .ringFingerMetacarpal, .ringFingerTip,
        .thumbIntermediateBase, .thumbIntermediateTip, .thumbKnuckle, .thumbTip,
        .wrist
    ]

    for joint in joints {
        let anchor = AnchorEntity(
            .hand(.right, location: .joint(for: joint)),
            trackingMode: .continuous
        )
        anchor.addChild(rightHandSphere.clone(recursive: true))
        anchor.position = rightHandSphere.position
        content.add(anchor)
    }
}
```
Topic: Spatial Computing SubTopic: ARKit Tags:
Feb ’25
Reply to Reading scenePhase from custom Scene
Are you using scene phase in the extra window too? You have to implement it separately in each window; the code above only showed it in the MyScene window. I like to set up a central bit of state to track the open status of my scenes. I made an example of this a while back. Hope it helps!

https://github.com/radicalappdev/Step-Into-Example-Projects/tree/main/Garden06
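For reference, a minimal sketch of what that looks like in the extra window (the AppModel type, its flag, and the view names here are placeholders, not code from the project above):

```swift
import SwiftUI

// Shared state, injected into each window with .environment(appModel)
@Observable
class AppModel {
    var mainWindowOpen = false
    var extraWindowOpen = false
}

// Each window's root view reads its own scenePhase and writes into the shared state
struct ExtraWindowRoot: View {
    @Environment(AppModel.self) private var appModel
    @Environment(\.scenePhase) private var scenePhase

    var body: some View {
        Text("Extra window")
            .onChange(of: scenePhase, initial: true) {
                appModel.extraWindowOpen = (scenePhase == .active)
            }
    }
}
```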
Topic: Spatial Computing SubTopic: General Tags:
Feb ’25
Reply to DragGesture that pivots with the user in visionOS
I found an alternative method for this in the particle example project. Instead of using value.gestureValue.translation3D to move the entity, this version uses value.location3D and value.startLocation3D. It's not quite as good as the gesture Apple uses on windows and volumes, but it is far better than what I've been using until now. I'd love to hear any ideas for how to improve this.

```swift
struct Example046: View {
    var body: some View {
        RealityView { content in
            if let scene = try? await Entity(named: "GestureLabs", in: realityKitContentBundle) {
                content.add(scene)
                // Lower the entire scene to the bottom of the volume
                scene.position = [1, 1, -1.5]
            }
        }
        .modifier(DragGestureWithPivot046())
    }
}

fileprivate struct DragGestureWithPivot046: ViewModifier {
    @State var isDragging: Bool = false
    @State var initialPosition: SIMD3<Float> = .zero

    func body(content: Content) -> some View {
        content
            .gesture(
                DragGesture()
                    .targetedToAnyEntity()
                    .onChanged { value in
                        // When we start the gesture, cache the entity position
                        if !isDragging {
                            isDragging = true
                            initialPosition = value.entity.position
                        }

                        guard let entityParent = value.entity.parent else { return }

                        // The current location: where we are in the gesture
                        let gesturePosition = value.convert(value.location3D, from: .global, to: entityParent)

                        // Minus the start location of the gesture
                        let deltaPosition = gesturePosition - value.convert(value.startLocation3D, from: .global, to: entityParent)

                        // Plus the initial position of the entity
                        let newPos = initialPosition + deltaPosition

                        // Optional: using move(to:) to smooth out the movement
                        let newTransform = Transform(
                            scale: value.entity.scale,
                            rotation: value.entity.orientation,
                            translation: newPos
                        )
                        value.entity.move(to: newTransform, relativeTo: entityParent, duration: 0.1)

                        // Or set the position directly
                        // value.entity.position = newPos
                    }
                    .onEnded { value in
                        // Clean up when the gesture has ended
                        isDragging = false
                        initialPosition = .zero
                    }
            )
    }
}
```
Topic: Spatial Computing SubTopic: General Tags:
Feb ’25
Reply to A question about interacting with entity
We can use ARKit hand tracking or use AnchorEntity with SpatialTrackingSession. Here is an example with SpatialTrackingSession. This adds some anchors to the user's hands, then enables those anchors to collide with other entities in the scene. Once you detect the collisions, you can execute some code to show your window or attachment.

Spatial Tracking Session
Anchor Entity

Important: make sure to set this value to .none or the anchor will not be able to interact with other entities.

```swift
leftIndexAnchor.anchoring.physicsSimulation = .none
```

This example uses trigger collisions instead of physics. The entities were created in Reality Composer Pro, then loaded in the RealityView.

```swift
struct Example021: View {
    var body: some View {
        RealityView { content in
            if let scene = try? await Entity(named: "HandTrackingLabs", in: realityKitContentBundle) {
                content.add(scene)

                // 1. Set up a Spatial Tracking Session with hand tracking.
                // This will add ARKit features to our Anchor Entities, enabling collisions.
                let configuration = SpatialTrackingSession.Configuration(tracking: [.hand])
                let session = SpatialTrackingSession()
                await session.run(configuration)

                if let subject = scene.findEntity(named: "StepSphereRed"),
                   let stepSphereBlue = scene.findEntity(named: "StepSphereBlue"),
                   let stepSphereGreen = scene.findEntity(named: "StepSphereGreen") {

                    content.add(subject)

                    // 2. Create an anchor for the left index finger
                    let leftIndexAnchor = AnchorEntity(.hand(.left, location: .indexFingerTip), trackingMode: .continuous)

                    // 3. Disable the default physics simulation on the anchor
                    leftIndexAnchor.anchoring.physicsSimulation = .none

                    // 4. Add the sphere to the anchor and add the anchor to the scene graph
                    leftIndexAnchor.addChild(stepSphereBlue)
                    content.add(leftIndexAnchor)

                    // Repeat the same steps for the right index finger
                    let rightIndexAnchor = AnchorEntity(.hand(.right, location: .indexFingerTip), trackingMode: .continuous)
                    rightIndexAnchor.anchoring.physicsSimulation = .none
                    rightIndexAnchor.addChild(stepSphereGreen)
                    content.add(rightIndexAnchor)

                    // Example 1: Any entity can collide with any entity. Fire a particle burst.
                    // Allow collision between the hand anchors
                    // Allow collision between a hand anchor and the subject
                    _ = content.subscribe(to: CollisionEvents.Began.self) { collisionEvent in
                        print("Collision unfiltered \(collisionEvent.entityA.name) and \(collisionEvent.entityB.name)")
                        collisionEvent.entityA.components[ParticleEmitterComponent.self]?.burst()
                    }

                    // Example 2: Only track collisions on the subject. Swap the color of the material based on left or right hand.
                    _ = content.subscribe(to: CollisionEvents.Began.self, on: subject) { collisionEvent in
                        print("Collision Subject Color Change \(collisionEvent.entityA.name) and \(collisionEvent.entityB.name)")
                        if collisionEvent.entityB.name == "StepSphereBlue" {
                            swapColorEntity(subject, color: .stepBlue)
                        } else if collisionEvent.entityB.name == "StepSphereGreen" {
                            swapColorEntity(subject, color: .stepGreen)
                        }
                    }
                }
            }
        }
    }

    func swapColorEntity(_ entity: Entity, color: UIColor) {
        if var mat = entity.components[ModelComponent.self]?.materials.first as? PhysicallyBasedMaterial {
            mat.baseColor = .init(tint: color)
            entity.components[ModelComponent.self]?.materials[0] = mat
        }
    }
}
```
Topic: Spatial Computing SubTopic: ARKit Tags:
Feb ’25
Reply to ECS and array of gestures
You can use gestures in your systems. Apple has a couple of example projects that show some methods for creating components and systems that use SwiftUI gestures.

Transforming RealityKit entities using gestures
https://developer.apple.com/documentation/realitykit/transforming-realitykit-entities-with-gestures

While not the focus of the example, this particle example also has an interesting system.

Simulating particles in your visionOS app
https://developer.apple.com/documentation/realitykit/simulating-particles-in-your-visionos-app

Both of these have been helpful in learning how to use gestures from within a system.
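To give a feel for the general pattern (this is my own minimal sketch, not code from either sample): the SwiftUI gesture only writes into a component, and a System reads that component every update. The component and system names below are placeholders.

```swift
import SwiftUI
import RealityKit

// A component the gesture writes into; the system reads it each frame.
struct DragTargetComponent: Component {
    var targetPosition: SIMD3<Float>?
}

// Register both at app launch:
// DragTargetComponent.registerComponent(); DragFollowSystem.registerSystem()
struct DragFollowSystem: System {
    static let query = EntityQuery(where: .has(DragTargetComponent.self))

    init(scene: RealityKit.Scene) {}

    func update(context: SceneUpdateContext) {
        for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
            guard let target = entity.components[DragTargetComponent.self]?.targetPosition else { continue }
            // Ease the entity toward the position the gesture wrote into the component
            entity.position += (target - entity.position) * 0.2
        }
    }
}

// The SwiftUI side: the gesture never moves the entity directly.
struct DragWritesComponent: ViewModifier {
    func body(content: Content) -> some View {
        content.gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    guard let parent = value.entity.parent else { return }
                    var drag = value.entity.components[DragTargetComponent.self] ?? DragTargetComponent()
                    drag.targetPosition = value.convert(value.location3D, from: .global, to: parent)
                    value.entity.components.set(drag)
                }
                .onEnded { value in
                    var drag = value.entity.components[DragTargetComponent.self]
                    drag?.targetPosition = nil
                    if let drag { value.entity.components.set(drag) }
                }
        )
    }
}
```

One nice side effect of this split is that the gesture code stays generic while the per-entity behavior (easing, constraints, etc.) lives entirely in the system.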
Topic: Spatial Computing SubTopic: General Tags:
Feb ’25
Reply to ARKit hand tracking
Hands on visionOS are represented as a series of anchors, and there are a lot of them! Each anchor has a transform. These anchors are located around the hand at positions like the palm, wrist, index finger tip, etc. There are two main ways we can work with these anchors.

Option 1: Anchoring Component (or AnchorEntity) + Spatial Tracking Session

This is a simple way to start working with the transforms for one or more AnchorEntities.

Start a Spatial Tracking Session. This will enable access to the transform of our anchors.

```swift
let configuration = SpatialTrackingSession.Configuration(tracking: [.hand])
let session = SpatialTrackingSession()
await session.run(configuration)
```

Add an anchor. This example adds an anchor to the left hand index finger.

```swift
if let leftHandSphere = scene.findEntity(named: "LeftHand") {
    let leftHand = AnchorEntity(.hand(.left, location: .indexFingerTip))
    leftHand.addChild(leftHandSphere)
    content.add(leftHand)
}
```

Access the transform:

```swift
let leftIndexTransform = Transform(matrix: anchor.transformMatrix(relativeTo: nil))
```

Important: if you are using collisions or physics, then you will also want to disable the default physics simulation for the anchor. Without this, the hand anchor won't be able to collide with other entities in the scene.

```swift
leftIndexAnchor.anchoring.physicsSimulation = .none
```

Option 2: Use ARKit directly

This is a bit more involved, but gives us more control. This example uses an ARKitSession and adds a sphere to each finger tip.

```swift
struct Example017: View {
    let arSession = ARKitSession()
    let handTrackingProvider = HandTrackingProvider()

    let leftCollection = Entity()
    let rightCollection = Entity()

    let tipJoints: [HandSkeleton.JointName] = [
        .thumbTip, .indexFingerTip, .middleFingerTip, .ringFingerTip, .littleFingerTip
    ]

    var body: some View {
        RealityView { content in
            content.add(leftCollection)
            content.add(rightCollection)

            if let scene = try? await Entity(named: "HandTrackingLabs", in: realityKitContentBundle) {
                content.add(scene)

                if let leftHandSphere = scene.findEntity(named: "StepSphereBlue") {
                    // Create clones of the left hand sphere for each joint
                    for jointName in tipJoints {
                        let sphere = leftHandSphere.clone(recursive: true)
                        sphere.name = jointName.description
                        leftCollection.addChild(sphere)
                    }
                    leftHandSphere.isEnabled = false
                }

                if let rightHandSphere = scene.findEntity(named: "StepSphereGreen") {
                    // Create clones of the right hand sphere for each joint
                    for jointName in tipJoints {
                        let sphere = rightHandSphere.clone(recursive: true)
                        sphere.name = jointName.description
                        rightCollection.addChild(sphere)
                    }
                    rightHandSphere.isEnabled = false
                }
            }
        }
        .persistentSystemOverlays(.hidden)
        .task {
            try! await arSession.run([handTrackingProvider])
        }
        // Left Hand: Receive updates from the provider and process them over time
        .task {
            for await update in handTrackingProvider.anchorUpdates where update.anchor.chirality == .left {
                let handAnchor = update.anchor
                for jointName in tipJoints {
                    if let joint = handAnchor.handSkeleton?.joint(jointName),
                       let sphere = leftCollection.findEntity(named: jointName.description) {
                        let transform = handAnchor.originFromAnchorTransform
                        let jointTransform = joint.anchorFromJointTransform
                        sphere.setTransformMatrix(transform * jointTransform, relativeTo: nil)
                    }
                }
            }
        }
        // Right Hand: Receive updates from the provider and process them over time
        .task {
            for await update in handTrackingProvider.anchorUpdates where update.anchor.chirality == .right {
                let handAnchor = update.anchor
                for jointName in tipJoints {
                    if let joint = handAnchor.handSkeleton?.joint(jointName),
                       let sphere = rightCollection.findEntity(named: jointName.description) {
                        let transform = handAnchor.originFromAnchorTransform
                        let jointTransform = joint.anchorFromJointTransform
                        sphere.setTransformMatrix(transform * jointTransform, relativeTo: nil)
                    }
                }
            }
        }
    }
}
```

Resources

AnchorEntity
https://developer.apple.com/documentation/realitykit/anchorentity

SpatialTrackingSession
https://developer.apple.com/documentation/RealityKit/SpatialTrackingSession

ARKit Hand Tracking from Apple
https://developer.apple.com/documentation/visionos/tracking-and-visualizing-hand-movement

I have several examples using AnchorEntity and Spatial Tracking Session on my site.
https://stepinto.vision/learn-visionos/#hands
Topic: Spatial Computing SubTopic: ARKit Tags:
Mar ’25
Reply to App Window Closure Sequence Impacts Main Interface Reload Behavior
I had a similar issue in my app (Project Graveyard), which has a main volume and a utility window to edit content. I solved this by using some shared state (AppModel) and ScenePhase. What I ended up with was the ability to reopen the main window from the utility window OR open the utility window from the main window.

The first thing to keep in mind is that ScenePhase works differently when used at the app level (some Scene) vs. when used in a view inside a window, volume, or space. visionOS has a lot of bugs (reported) around the app-level uses. I was able to create my solution by using ScenePhase in my views and sharing some state in the AppModel. Here is a breakdown.

Add to AppModel:

```swift
var mainWindowOpen: Bool = true
var yellowFlowerOpen: Bool = false
```

The root view of the main window (ContentView in this case):

```swift
@Environment(\.scenePhase) private var scenePhase
```

Then listen for scenePhase using onChange, writing to the mainWindowOpen bool from appModel.

```swift
.onChange(of: scenePhase, initial: true) {
    switch scenePhase {
    case .inactive, .background:
        appModel.mainWindowOpen = false
    case .active:
        appModel.mainWindowOpen = true
    @unknown default:
        appModel.mainWindowOpen = false
    }
}
```

We do the same thing in the root view for the other window (or volume):

```swift
@Environment(\.scenePhase) private var scenePhase
```

Then listen to scene phase:

```swift
.onChange(of: scenePhase, initial: true) {
    switch scenePhase {
    case .inactive, .background:
        appModel.yellowFlowerOpen = false
    case .active:
        appModel.yellowFlowerOpen = true
    @unknown default:
        appModel.yellowFlowerOpen = false
    }
}
```

You can download this as an Xcode project if you want to try it out before reproducing it yourself.
https://github.com/radicalappdev/Step-Into-Example-Projects/tree/main/Garden06

There is also a video available on my website (I really wish we could upload short videos here).
https://stepinto.vision/example-code/how-to-use-scene-phase-to-track-and-manage-window-state/
Topic: Spatial Computing SubTopic: General Tags:
Mar ’25
Reply to Difference between Head and Device tracking on visionOS
Apple has a neat sample project that shows how to have an entity follow the user's head movements. It touches on the difference between the AnchorEntity and the DeviceAnchor.

https://developer.apple.com/documentation/visionos/placing-entities-using-head-and-device-transform

Hopefully, visionOS 3 will bring SpatialTrackingSession data to the head AnchorEntity position, just like we have with hand anchors now. (Feedback: FB16870381)
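To make the difference concrete, here is a minimal sketch (my own, not from the sample; function names are placeholders): the head AnchorEntity moves content for you without reporting a transform back, while ARKit's DeviceAnchor hands you the actual device transform.

```swift
import RealityKit
import ARKit
import QuartzCore

// RealityKit side: a head-anchored AnchorEntity. Content follows the head,
// but (per the feedback above) the anchor's transform isn't reported back to your code.
func makeHeadAnchoredContent(_ content: Entity) -> AnchorEntity {
    let headAnchor = AnchorEntity(.head)
    content.position = [0, 0, -0.6]   // park the content 60 cm in front of the user
    headAnchor.addChild(content)
    return headAnchor
}

// ARKit side: the DeviceAnchor exposes the actual device/head transform.
// Assumes you have already called `try await session.run([worldTracking])`.
func currentDeviceTransform(from worldTracking: WorldTrackingProvider) -> simd_float4x4? {
    guard let deviceAnchor = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else { return nil }
    return deviceAnchor.originFromAnchorTransform
}
```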
Topic: Spatial Computing SubTopic: ARKit Tags:
Mar ’25
Reply to The occulution relationship of virtual content and real object
We can use OcclusionMaterial to solve issues like this. Essentially, we use ARKit features to get meshes that describe the real-world environment, then assign OcclusionMaterial to them. Apple has a neat example app you can try and a short tutorial that describes the process.

Obscuring virtual items in a scene behind real-world items

Bonus: You can also create your own material using ShaderGraph in Reality Composer Pro. There are two nodes we can use:

Occlusion Surface
Shadow Receiving Occlusion Surface - use this one if your app needs to cast shadows or shine virtual lights on the environment.
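For the simplest case, where you already roughly know where the real object sits, you can skip scene reconstruction and place a primitive with OcclusionMaterial over it. A minimal sketch (the size and position below are placeholders):

```swift
import RealityKit

// Sketch: a box that hides any virtual content behind it,
// sized and positioned by hand to roughly cover a real-world object.
func makeOccluder() -> ModelEntity {
    let occluder = ModelEntity(
        mesh: .generateBox(width: 0.6, height: 0.8, depth: 0.4),
        materials: [OcclusionMaterial()]
    )
    occluder.position = [0, 0.4, -1.0]   // placeholder: roughly where the real object sits
    return occluder
}
```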
Topic: Spatial Computing SubTopic: ARKit Tags:
Mar ’25
Reply to Unable to Retain Main App Window State When Transitioning to Immersive Space
Unfortunately, there aren't too many great options for this in visionOS right now. I've approached this in two ways so far.

Idea 1: Manage the opacity/alpha of the main window. Apple did this in Hello World. This is only suitable for quick trips into a space where the user won't be interacting with much. They may still see the window bar even though the window contents are hidden. Some tips to improve this option (see the sketch below):

Use the plain window style to remove the glass background, then add your own glass background when you want to show content.
Try using .persistentSystemOverlays(.hidden) to hide the window bar while you're in the space.

Idea 2: Keep track of window state and restore it when reopening the main window. You could keep track of navigation history, scroll position, view state, etc. This could be a great option and is less of a hack, but if your window has very complex views and hierarchy, this could end up being very complex.
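Here's a rough sketch of Idea 1, assuming some shared app state with a flag for whether the immersive space is open (AppModel, the flag, and the view names are placeholders):

```swift
import SwiftUI

// Sketch of Idea 1: keep the main window alive but fade it out while the space is open.
// AppModel, `immersiveSpaceOpen`, and ContentView are placeholders for your own types.
struct MainWindowRoot: View {
    @Environment(AppModel.self) private var appModel

    var body: some View {
        ContentView()
            .glassBackgroundEffect()   // add our own glass, since the window style is .plain
            .opacity(appModel.immersiveSpaceOpen ? 0 : 1)
            .animation(.easeInOut(duration: 0.3), value: appModel.immersiveSpaceOpen)
            .persistentSystemOverlays(appModel.immersiveSpaceOpen ? .hidden : .automatic)
    }
}

// In the App declaration, use the plain style so the hidden window has no glass panel of its own:
// WindowGroup { MainWindowRoot() }.windowStyle(.plain)
```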
Topic: Spatial Computing SubTopic: General Tags:
Mar ’25