Reply to VisionOS hands replacement inside an Immersive Space
We can create features like this using hand anchors. I can show you a few details on the RealityKit side, but creating the 3D hand assets would need to be done in Blender or some other 3D modeling app.

There are two ways to access these anchors:

- Using ARKit: this is a good place to start (see the sketch after the second example below).
- Using AnchorEntity or AnchoringComponent to access hands. If all you need is to attach visual items to the hands, this option is great. You don't need to request permission to use these anchors unless you want additional tracking data like transforms, physics, or collisions.

Example: Create and customize entities for each primary location on the left hand

```swift
if let leftHandSphere = scene.findEntity(named: "StepSphereBlue") {

    let indexTipAnchor = AnchorEntity(.hand(.left, location: .indexFingerTip), trackingMode: .continuous)
    indexTipAnchor.addChild(leftHandSphere)
    content.add(indexTipAnchor)

    let palmAnchor = AnchorEntity(.hand(.left, location: .palm), trackingMode: .continuous)
    palmAnchor.addChild(leftHandSphere.clone(recursive: true))
    palmAnchor.position = [0, 0.05, 0]
    palmAnchor.scale = [3, 3, 3]
    content.add(palmAnchor)

    let thumbTipAnchor = AnchorEntity(.hand(.left, location: .thumbTip), trackingMode: .continuous)
    thumbTipAnchor.addChild(leftHandSphere.clone(recursive: true))
    content.add(thumbTipAnchor)

    let wristAnchor = AnchorEntity(.hand(.left, location: .wrist), trackingMode: .continuous)
    wristAnchor.addChild(leftHandSphere.clone(recursive: true))
    wristAnchor.scale = [3, 3, 3]
    content.add(wristAnchor)

    let aboveHandAnchor = AnchorEntity(.hand(.left, location: .aboveHand), trackingMode: .continuous)
    aboveHandAnchor.addChild(leftHandSphere.clone(recursive: true))
    aboveHandAnchor.scale = [2, 2, 2]
    content.add(aboveHandAnchor)
}
```

Example: Create an entity for each joint anchor on the right hand

```swift
if let rightHandSphere = scene.findEntity(named: "StepSphereGreen") {

    // In ARKit, the joints are available as an enum: HandSkeleton.JointName.allCases
    // But in RealityKit we are not so lucky. Create an array of all joints to iterate over.
    let joints: [AnchoringComponent.Target.HandLocation.HandJoint] = [
        .forearmArm, .forearmWrist,
        .indexFingerIntermediateBase, .indexFingerIntermediateTip, .indexFingerKnuckle, .indexFingerMetacarpal, .indexFingerTip,
        .littleFingerIntermediateBase, .littleFingerIntermediateTip, .littleFingerKnuckle, .littleFingerMetacarpal, .littleFingerTip,
        .middleFingerIntermediateBase, .middleFingerIntermediateTip, .middleFingerKnuckle, .middleFingerMetacarpal, .middleFingerTip,
        .ringFingerIntermediateBase, .ringFingerIntermediateTip, .ringFingerKnuckle, .ringFingerMetacarpal, .ringFingerTip,
        .thumbIntermediateBase, .thumbIntermediateTip, .thumbKnuckle, .thumbTip,
        .wrist
    ]

    for joint in joints {
        let anchor = AnchorEntity(
            .hand(.right, location: .joint(for: joint)),
            trackingMode: .continuous
        )
        anchor.addChild(rightHandSphere.clone(recursive: true))
        anchor.position = rightHandSphere.position
        content.add(anchor)
    }
}
```
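And here is a minimal sketch of the ARKit route mentioned above (my example, not part of the original reply; the function and entity names are placeholders, and it assumes an ImmersiveSpace is open). Unlike the AnchorEntity approach, HandTrackingProvider prompts for hand-tracking permission, but in exchange it gives you the joint transforms directly.

```swift
import ARKit
import RealityKit

/// Sketch: attach one entity to the left index fingertip using ARKit hand tracking.
func runHandTracking(root: Entity, fingertipEntity: Entity) async throws {
    let session = ARKitSession()
    let handTracking = HandTrackingProvider()
    try await session.run([handTracking]) // Prompts for hand-tracking authorization.

    root.addChild(fingertipEntity)

    for await update in handTracking.anchorUpdates {
        let handAnchor = update.anchor
        guard handAnchor.chirality == .left,
              let skeleton = handAnchor.handSkeleton else { continue }

        // World transform of the joint = hand anchor transform * joint transform.
        let joint = skeleton.joint(.indexFingerTip)
        guard joint.isTracked else { continue }
        let worldTransform = handAnchor.originFromAnchorTransform * joint.anchorFromJointTransform
        fingertipEntity.setTransformMatrix(worldTransform, relativeTo: nil)
    }
}
```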
Topic: Spatial Computing SubTopic: ARKit Tags:
Feb ’25
Reply to How to move a camera in immersive space and render its output on 2D window using RealityKit
My understanding is that those cameras are not relevant for visionOS development; they are used on iOS and other platforms. When I tried to do this last winter, it seemed that the only answer was to move the world around the user instead of moving the user around the world.

Here is an excerpt from the code I came up with. It allows a user to tap on an entity and move to a new position, sort of like the "waypoint" teleportation that was common in VR games circa 2016-2017. This could be improved in lots of ways, for example by using SpatialEventGesture or SpatialTapGesture to get a more precise location (see the sketch after the code).

```swift
struct Lab5017: View {
    @State var selected: String = "Tap Something"

    @State var sceneContent: Entity?
    @State var sceneContentPosition: SIMD3<Float> = [0, 0, 0]

    var tap: some Gesture {
        SpatialTapGesture()
            .targetedToAnyEntity()
            .onEnded { value in
                selected = value.entity.name

                // Calculate the vector from the origin to the tapped position
                let vectorToTap = value.entity.position

                // Normalize the vector to get a direction from the origin to the tapped position
                let direction = normalize(vectorToTap)

                // Calculate the distance (or magnitude) between the origin and the tapped position
                let distance = length(vectorToTap)

                // Calculate the new position by inverting the direction multiplied by the distance
                let newPosition = -direction * distance

                // Update sceneContentPosition's X and Z components, leave Y as it is
                sceneContentPosition.x = newPosition.x
                sceneContentPosition.z = newPosition.z
            }
    }

    var body: some View {
        RealityView { content, attachments in
            if let model = try? await Entity(named: "5017Move", in: realityKitContentBundle) {
                content.add(model)

                // Get the scene content and stash it in state
                if let floorParent = model.findEntity(named: "SceneContent") {
                    sceneContent = floorParent
                    sceneContentPosition = floorParent.position
                }
            }

            // Position the attachment somewhere we can see it
            if let attachmentEntity = attachments.entity(for: "SelectedLabel") {
                attachmentEntity.position = [0.8, 1.5, -2]
                attachmentEntity.scale = [5, 5, 5]
                content.add(attachmentEntity)
            }
        } update: { content, attachments in
            // Update the position of scene content anytime we get a new position
            sceneContent?.position = sceneContentPosition
        } attachments: {
            Attachment(id: "SelectedLabel") {
                Text(selected)
                    .font(.largeTitle)
                    .padding(18)
                    .background(.black)
                    .cornerRadius(12)
            }
        }
        .gesture(tap) // The floor child entities can receive input, so this gesture will fire when we tap them
    }
}
```
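For the "more precise location" idea, here is a minimal sketch (my assumption, not code from the original reply). It assumes the same view and @State properties as above, and converts the tap's 3D location into RealityKit scene space instead of using the tapped entity's origin:

```swift
var preciseTap: some Gesture {
    SpatialTapGesture()
        .targetedToAnyEntity()
        .onEnded { value in
            // Convert the tap location from the gesture's local space into scene space.
            let tapInScene: SIMD3<Float> = value.convert(value.location3D, from: .local, to: .scene)

            // Move the world so the tapped point ends up under the user, keeping the current height.
            sceneContentPosition.x = -tapInScene.x
            sceneContentPosition.z = -tapInScene.z
        }
}
```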
Topic: Spatial Computing SubTopic: General Tags:
Feb ’25
Reply to Cast virtual light on real-world environments in VisionOS/RealityKit?
By default, these dynamic lights will not affect the passthrough environment, but there is an interesting workaround we can use: give an entity a shader graph material that uses the "ShadowReceivingOcclusionSurface" node. This small demo scene has a 1 x 0.1 x 1 cube that is using that material. Then I dropped in a spotlight and some other cubes to block the light. The spotlight can shine only on the surface of the object using the occlusion material. This works with shadows cast by virtual objects too, and it doesn't require the grounding shadow component. A sketch of the setup is below.

The challenge will be determining what mesh/object to use for the occlusion material. Depending on your use case, simple shapes may work, or you may want to use planes or the room mesh from ARKit.
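Here is a minimal sketch of that setup (my example, not from the original reply). It assumes a Reality Composer Pro package exposed as realityKitContentBundle, with a "Scene.usda" containing a material named "OcclusionMaterial" whose shader graph outputs a ShadowReceivingOcclusionSurface node; the material path and names are placeholders.

```swift
import RealityKit
import RealityKitContent

func makeOcclusionFloor() async throws -> Entity {
    // Load the shader graph material authored in Reality Composer Pro.
    let occlusion = try await ShaderGraphMaterial(named: "/Root/OcclusionMaterial",
                                                  from: "Scene.usda",
                                                  in: realityKitContentBundle)

    // A thin 1 x 0.1 x 1 box stands in for the real-world surface that should
    // receive the virtual light and shadows.
    let floor = ModelEntity(mesh: .generateBox(width: 1, height: 0.1, depth: 1),
                            materials: [occlusion])

    // A spotlight aimed down at the box; entities placed between the light and
    // the box will block the light and cast shadows onto the occlusion surface.
    let light = Entity()
    light.components.set(SpotLightComponent())
    floor.addChild(light)
    light.look(at: .zero, from: [0, 1.5, 0], relativeTo: floor)

    return floor
}
```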
Topic: Spatial Computing SubTopic: General Tags:
Jan ’25
Reply to How to get the floor plane with Spatial Tracking Session and Anchor Entity
@Vision Pro Engineer Hi, thanks for the response. I have a few questions and responses.

> it's usually better to put the code that configures the SpatialTrackingSession into a class instead of an @State property on your view

Sure, I would do that in most apps. This is just an example where I was trying to keep everything in one file. Do you have any details on why it is better to place SpatialTrackingSession in an observable class instead of state on a view? Several of the WWDC sessions and examples store the session in the view, and I was using them as a starting point.

> SpatialTapGesture

Just to clarify, that was just an example from "Deep dive into volumes and immersive spaces" (WWDC 2024). They showed using the tap gesture on an anchor but didn't show how the collision was created. I wasn't using a gesture in my scene at all; I was just using this as an example of something that obviously needed a collision shape, but the session obscured the details.

> this CollisionComponent(shapes: .init()) is creating a collision component with an empty array

Yes, I was trying to create a collision, then populate it later with a call to generateCollisionShapes:

```swift
event.anchor.generateCollisionShapes(recursive: true)
```

If I understand correctly, this doesn't work because the AnchorEntity is a point on a plane, not the plane itself. Is that correct?

Your hard-coded ShapeResource is interesting, but it doesn't help me create a collision shape that matches the physical floor in my office. It results in an arbitrary shape, positioned by a system I can't predict, to create a floor that may or may not cover the floor in the real room.

Is it possible to use an AnchorEntity (with SpatialTrackingSession) to get the plane/bounds/rect of the floor that visionOS detected? So far my guess is no. It seems like AnchorEntity is actually an arbitrary point/transform on that detected plane. If that's the case, I assume the fallback is ARKit plane detection, something like the sketch below.
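For reference, here is a minimal sketch of that ARKit plane-detection fallback (my assumption, not something discussed in the thread; all names are placeholders). It reads the detected floor's extent and builds a collision shape that matches it, which is the part AnchorEntity doesn't seem to expose.

```swift
import ARKit
import RealityKit

func trackFloorExtent(into floorEntity: Entity) async throws {
    // Plane detection requires world-sensing permission, unlike a plain AnchorEntity.
    let session = ARKitSession()
    let planes = PlaneDetectionProvider(alignments: [.horizontal])
    try await session.run([planes])

    for await update in planes.anchorUpdates {
        let anchor = update.anchor
        guard anchor.classification == .floor else { continue }

        // Position the entity at the detected plane...
        floorEntity.setTransformMatrix(anchor.originFromAnchorTransform, relativeTo: nil)

        // ...and size a thin box collision to the plane's reported extent.
        let extent = anchor.geometry.extent
        let shape = ShapeResource.generateBox(width: extent.width, height: 0.01, depth: extent.height)
        floorEntity.components.set(CollisionComponent(shapes: [shape]))
    }
}
```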
Topic: Spatial Computing SubTopic: ARKit Tags:
Jan ’25
Reply to How can we move the player within a RealityKit/RealityView scene?
I haven't found a way to gain access to the camera or player entity from the context of RealityView. In the meantime, I put together a quick demo that moves content in a USDA scene to the player: move the world instead of the player. (One possible refinement, animating the move instead of snapping, is sketched after the code.)

```swift
struct Lab5017: View {
    @State var selected: String = "Tap Something"

    @State var sceneContent: Entity?
    @State var sceneContentPosition: SIMD3<Float> = [0, 0, 0]

    var tap: some Gesture {
        SpatialTapGesture()
            .targetedToAnyEntity()
            .onEnded { value in
                selected = value.entity.name

                // Calculate the vector from the origin to the tapped position
                let vectorToTap = value.entity.position

                // Normalize the vector to get a direction from the origin to the tapped position
                let direction = normalize(vectorToTap)

                // Calculate the distance (or magnitude) between the origin and the tapped position
                let distance = length(vectorToTap)

                // Calculate the new position by inverting the direction multiplied by the distance
                let newPosition = -direction * distance

                // Update sceneContentPosition's X and Z components, leave Y as it is
                sceneContentPosition.x = newPosition.x
                sceneContentPosition.z = newPosition.z
            }
    }

    var body: some View {
        RealityView { content, attachments in
            if let model = try? await Entity(named: "5017Move", in: realityKitContentBundle) {
                content.add(model)

                // Get the scene content and stash it in state
                if let floorParent = model.findEntity(named: "SceneContent") {
                    sceneContent = floorParent
                    sceneContentPosition = floorParent.position
                }
            }

            // Position the attachment somewhere we can see it
            if let attachmentEntity = attachments.entity(for: "SelectedLabel") {
                attachmentEntity.position = [0.8, 1.5, -2]
                attachmentEntity.scale = [5, 5, 5]
                content.add(attachmentEntity)
            }
        } update: { content, attachments in
            // Update the position of scene content anytime we get a new position
            sceneContent?.position = sceneContentPosition
        } attachments: {
            Attachment(id: "SelectedLabel") {
                Text(selected)
                    .font(.largeTitle)
                    .padding(18)
                    .background(.black)
                    .cornerRadius(12)
            }
        }
        .gesture(tap) // The floor child entities can receive input, so this gesture will fire when we tap them
    }
}
```
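As a possible refinement (my sketch, not part of the original reply): instead of snapping the content into place via the update closure, the tap handler could animate the world to its new position with Entity.move(to:relativeTo:duration:timingFunction:). This assumes the same sceneContent and sceneContentPosition state as above.

```swift
// Inside the tap handler, after computing the new position:
if let sceneContent {
    var target = sceneContent.transform
    target.translation = sceneContentPosition

    // Ease the content to its new position over half a second instead of snapping.
    sceneContent.move(to: target,
                      relativeTo: sceneContent.parent,
                      duration: 0.5,
                      timingFunction: .easeInOut)
}
```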
Topic: Graphics & Games SubTopic: RealityKit Tags:
Jan ’24