Hello,
I'm curious if anyone has some useful debug tools for out-of-bounds issues with Volumes.
I am opening a volume with a size of 1m, 1m, 10cm.
I am adding a RealityView with a ModelEntity that is 0.5m tall and I am seeing the model clip at the top and bottom. I find this odd, because I feel like it should be within the size of the Volume....
I was curious what size SwiftUI says the Volume's size is so I tried using a GeometryReader3D to tell me....
GeometryReader3D { proxy in
    VStack {
        Text("\(proxy.size.width)")
        Text("\(proxy.size.height)")
        Text("\(proxy.size.depth)")
    }
    .padding()
    .glassBackgroundEffect()
}
Unfortunately I get 680, 1360, and 68. I'm guessing these units are in points, but that's not very helpful. The documentation says to use real-world units for Volumes, but none of the SwiftUI frame setters and getters appear to support different units.
Is there a way to convert between the two? I'm not clear if this is a bug or a feature suggestion.
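In the meantime, here is the conversion I've been experimenting with. It's only a sketch: it assumes the physicalMetrics environment value (a PhysicalMetricsConverter) converts a GeometryProxy3D's point-based size into real-world units the way I expect.

import SwiftUI

struct VolumeSizeReadout: View {
    // Assumed API: the visionOS environment value for converting points to physical units.
    @Environment(\.physicalMetrics) private var physicalMetrics

    var body: some View {
        GeometryReader3D { proxy in
            // Convert the proxy's point-based Size3D into meters (assumed overload).
            let size = physicalMetrics.convert(proxy.size, to: .meters)
            Text("\(size.width) x \(size.height) x \(size.depth) m")
                .padding()
                .glassBackgroundEffect()
        }
    }
}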
I've noticed that the bounds of my ModelEntity is not impacted when I transform one of the mesh's Joints.
I've attached an image that demonstrates this. It appears it will cause issues with the collision bounding box as well.
Is this a bug or user error? Not many of this framework's methods are documented.
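For reference, this is roughly the repro I'm working from (a sketch; the asset and entity names are placeholders from my own scene):

import RealityKit

func checkJointBounds() throws {
    // Load a skinned model; names are placeholders.
    let loaded = try Entity.load(named: "RiggedModel")
    guard let model = loaded.findEntity(named: "riggedMesh") as? ModelEntity else { return }

    let before = model.visualBounds(relativeTo: nil)

    // Rotate the first joint as an example.
    var joints = model.jointTransforms
    joints[0].rotation = simd_quatf(angle: .pi / 4, axis: [0, 0, 1])
    model.jointTransforms = joints

    let after = model.visualBounds(relativeTo: nil)
    // I expected the bounds to change with the deformed mesh,
    // but `before` and `after` come back identical for me.
    print(before.extents, after.extents)
}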
Disclaimer: I am new to all things 3D. There could be a variety of things wrong with what I'm doing that are not unique to RealityKit. Any domain info would be appreciated.
So I'm following what I think are the recommended steps to import a shader-graph material from Reality Composer Pro and apply it to another ModelEntity.
I do the following:
guard let entity = try? Entity.load(named: "Materials", in: RealityKitContent.realityKitContentBundle) else { return model }
let materialEntity = entity.findEntity(named: "materialModel") as? ModelEntity
guard let materialEntity else { return model }
I then configure a property on it like so:
guard var material = materialEntity.model?.materials[0] as? ShaderGraphMaterial else { return model }
try material.setParameter(name: "BaseColor", value: .color(matModel.matCoreUIColor))
I then apply it.
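Where "apply it" means roughly this (model here is the ModelEntity I'm re-texturing):

// Replace the target model's materials with the configured shader-graph material.
model.model?.materials = [material]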
This is what my texture looks like in RealityComposer:
I notice that my rendered object has distortions in the actual RealityView. Note the diagonal lines that appear "stretched".
What could be causing this? I thought node-based shaders were supposed to be more resilient to distortions like this? I'm not sure if I've got a bug or if I'm using it wrong.
FWIW, this is a shader based on Apple's felt material shader. My graph looks like this:
Thanks
While PlaneAnchors are still not generated by the PlaneDetectionProvider in the simulator, I am still brainstorming how to detect a tap on one of the planes.
In an iOS ARKit application I could use a raycastQuery on existingPlaneGeometry to make an anchor with the raycast result's world transform.
I've not yet found the visionOS replacement for this.
A possible hunch is that I need to install my own mesh-less PlaneModelEntities for each PlaneAnchor returned by the PlaneDetectionProvider. From there I could use a TapGesture targeted to those models, and then build a WorldAnchor from the tap location on those entities.
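Something along these lines is what I have in mind (a rough sketch; the extent-based sizing and the exact axes are assumptions I haven't verified):

import ARKit
import RealityKit

func installPlaneTargets(into content: Entity) async throws {
    let session = ARKitSession()
    let planeDetection = PlaneDetectionProvider(alignments: [.vertical, .horizontal])
    try await session.run([planeDetection])

    for await update in planeDetection.anchorUpdates where update.event == .added {
        let planeAnchor = update.anchor

        // An invisible, tappable stand-in for the detected plane.
        let target = Entity()
        target.name = "PLANE-\(planeAnchor.id)"
        target.transform = Transform(matrix: planeAnchor.originFromAnchorTransform)

        // Assumption: a thin collision box sized from the plane's extent.
        // The box may still need rotating via anchorFromExtentTransform.
        let shape = ShapeResource.generateBox(
            width: planeAnchor.geometry.extent.width,
            height: planeAnchor.geometry.extent.height,
            depth: 0.01)
        target.components.set(CollisionComponent(shapes: [shape]))
        target.components.set(InputTargetComponent())

        content.addChild(target)
    }
}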
Anyone have any ideas?
World Anchor from SpatialTapGesture ??
At 19:56 in the video, it's mentioned that we can use a SpatialTapGesture to "identify a position in the world" to make a world anchor.
Which API calls are utilized to make this happen?
World anchors are created with 4x4 matrices, and a SpatialTapGesture doesn't seem to generate one of those.
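The best I've come up with so far is building the matrix myself from the converted tap location (a sketch, translation only; I'm not certain this is what the session intended):

SpatialTapGesture()
    .targetedToAnyEntity()
    .onEnded { value in
        // Convert the tap into scene space and wrap it in a translation-only 4x4.
        let position: SIMD3<Float> = value.convert(value.location3D, from: .local, to: .scene)
        let matrix = Transform(translation: position).matrix
        // Assumed initializer name; the anchor would then be persisted via the world tracking provider.
        let anchor = WorldAnchor(originFromAnchorTransform: matrix)
        // ...
    }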
Any ideas?
I am trying to make a world anchor where a user taps a detected plane.
How am I trying this?
First, I add an entity to a RealityView like so:
let anchor = AnchorEntity(.plane(.vertical, classification: .wall, minimumBounds: [2.0, 2.0]), trackingMode: .continuous)
anchor.transform.rotation *= simd_quatf(angle: -.pi / 2, axis: SIMD3<Float>(1, 0, 0))
let interactionEntity = Entity()
interactionEntity.name = "PLANE"
let collisionComponent = CollisionComponent(shapes: [ShapeResource.generateBox(width: 2.0, height: 2.0, depth: 0.02)])
interactionEntity.components.set(collisionComponent)
interactionEntity.components.set(InputTargetComponent())
anchor.addChild(interactionEntity)
content.add(anchor)
This:
Declares an anchor that requires a wall 2 meters by 2 meters to appear in the scene with continuous tracking
Makes an empty entity and gives it a 2m by 2m by 2cm collision box
Attaches the collision entity to the anchor
Finally then adds the anchor to the scene
It appears in the scene like this:
Great! Appears to sit right on the wall.
I then add a tap gesture recognizer like this:
SpatialTapGesture()
    .targetedToAnyEntity()
    .onEnded { value in
        guard value.entity.name == "PLANE" else { return }
        let worldPosition: SIMD3<Float> = value.convert(value.location3D, from: .local, to: .scene)
        let pose = Pose3D(position: worldPosition, rotation: value.entity.transform.rotation)
        let worldAnchor = WorldAnchor(transform: simd_float4x4(pose))
        let model = ModelEntity(mesh: .generateBox(size: 0.1, cornerRadius: 0.03), materials: [SimpleMaterial(color: .blue, isMetallic: true)])
        model.transform = Transform(matrix: worldAnchor.transform)
        realityViewContent?.add(model)
    }
I ASSUME This:
Makes a world position from where the tap connects with the collision entity.
Integrates the position and the collision plane's rotation to create a Pose3D.
Makes a world anchor from that pose (So it can be persisted in a world tracking provider)
Then I make a basic cube entity and give it that transform.
Weird stuff: it doesn't appear on the plane... it appears behind it...
Why? What have I done wrong?
The X and Y of the tap location appears spot on, but something is "off" about the z position.
Also, is there a recommended way to debug this with the available tools?
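One thing I've been trying, to sanity-check the conversion, is dropping a tiny marker directly at the converted tap point, before any Pose3D / WorldAnchor math is involved:

// Place a small marker exactly at the converted tap point to verify the hit location.
let marker = ModelEntity(
    mesh: .generateSphere(radius: 0.01),
    materials: [SimpleMaterial(color: .red, isMetallic: false)])
marker.position = worldPosition
realityViewContent?.add(marker)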
I'm guessing I'll have to file a DTS about this because feedback on the forum has been pretty low since labs started.
Hello,
With the advent of widget interactivity, in order to support state management, I'd like to differentiate one widget from another, even if they share the same configuration.
Is this possible? Many of my search results are turning up iOS 15 era information, and I am not sure if that's still valid.
Thank you
Attempting to launch a widget in Debug mode on Sonoma from Xcode 15 is failing with the following message:
attach failed (Not allowed to attach to process. Look in the console messages (Console.app), near the debugserver entries, when the attach failed. The subsystem that denied the attach permission will likely have logged an informative message about why it was denied.)
Looking in console I see this message:
macOSTaskPolicy: (com.apple.debugserver) may not get the task control port of (MacGalleryWidget) (pid: 1851): (MacGalleryWidget) is hardened, (MacGalleryWidget) doesn't have get-task-allow, (com.apple.debugserver) is a declared debugger(com.apple.debugserver) is not a declared read-only debugger
What Xcode settings should I be looking at to rectify this? I suspect I may have something that's out of whack.
The Goal
My goal is to place an item where the user taps on a plane, and have that item match the outward facing normal-vector where the user tapped.
In beta 3 a 3D Spatial Tap Gesture now returns an accurate Location3D, so determining the position to place an item is working great. I simply do:
let worldPosition: SIMD3<Float> = value.convert(value.location3D, from: .local, to: .scene)
The Problem
Now, I notice that my entities aren't oriented correctly:
The placed item always 'faces' the camera. So if the camera isn't looking straight on the target plane, then the orientation of the new entity is off.
If I retrieve the transform of my newly placed item, it says the rotation relative to 'nil' is (0, 0, 0), which... doesn't look correct?
I know I'm dealing with different Coordinate systems of the plane being tapped, the world coordinate system, and the item being placed and I'm getting a bit lost in it all. Not to mention my API intuition is still pretty low, so quats are still new to me.
So, I'm curious, what rotation information can I use to "correct" the placed entity's orientation?
What I tried:
I've tried investigating the tap-target-entity like so:
let rotationRelativeToWorld = value.entity.convert(transform: value.entity.transform, to: nil).rotation
I believe this returns the rotation of the "plane entity" the user tapped, relative to the world.
While that gets me the following, I'm not sure if it's useful:
rotationRelativeToWorld:
▿ simd_quatf(real: 0.7071068, imag: SIMD3<Float>(-0.7071067, 6.600024e-14, 6.600024e-14))
▿ vector : SIMD4<Float>(-0.7071067, 6.600024e-14, 6.600024e-14, 0.7071068)
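One idea I'm leaning toward is borrowing the tapped entity's orientation directly (a sketch; placedEntity is the new model, and this assumes the collision entity's own orientation matches the wall it sits on):

// Position from the tap, orientation borrowed from the tapped plane entity.
let worldPosition: SIMD3<Float> = value.convert(value.location3D, from: .local, to: .scene)
let worldOrientation = value.entity.orientation(relativeTo: nil)

placedEntity.position = worldPosition
placedEntity.orientation = worldOrientation
// A further fixed rotation may be needed if the model's "forward" axis
// doesn't line up with the plane's local axes; I haven't verified that part.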
If anyone has better intuition than me about the coordinate spaces involved, I would appreciate some help. Thanks!
The Location3D that is returned by a SpatialTapGesture does not return normal vector information. This can make it difficult to orient an object that's placed at that location.
Am I misusing this gesture or is this indeed the case?
As an alternative I was thinking I could manually raycast toward the location the user tapped, but to do that I need two points. One of those points needs to be the location of the device / user's head in world space, and I'm not familiar with how to get that information.
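The closest thing I've found for the device / head position is querying the device anchor from a running WorldTrackingProvider (a sketch; worldTracking is assumed to be a provider I've already started in an ARKitSession):

// Query the device's current pose from a running WorldTrackingProvider.
guard let deviceAnchor = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else { return }
let devicePosition = Transform(matrix: deviceAnchor.originFromAnchorTransform).translation
// devicePosition and the converted tap location would then be the two points for the raycast.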
Has anyone achieved something like this?
On Xcode 15.1.0b2, when raycasting to a collision surface, the collisions appear to be inconsistent.
Here are my results. Green cylinders are hits, and red cylinders are raycasts that returned no collision results.
NOTE: This raycast is triggered by a tap gesture recognizer registering on the cube... so it's weird to me that the tap would work, but the raycast not collide with anything.
Is this something that just performs poorly in the simulator?
My RayCasting command is:
guard let pose = self.arSessionController.worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
    print("FAILED TO GET POSITION")
    return
}
let transform = Transform(matrix: pose.originFromAnchorTransform)
let locationOfDevice = transform.translation
let raycastResult = scene.raycast(from: locationOfDevice, to: destination, relativeTo: nil)
where destination is retrieved in a tap gesture handler via:
let worldPosition: SIMD3<Float> = value.convert(value.location3D, from: .local, to: .scene)
Any findings would be appreciated.
Is it possible to edit a SwiftData document in an immersive scene? If so... how?
At the moment I see that the modelContext is available in the content view of a DocumentGroup, but can document data be made available to an immersive scene's content?
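The only workaround I can think of so far (very much an assumption on my part, not a documented pattern) is to stash the document's ModelContext in a shared observable object when the DocumentGroup's content view appears, and read it back from the immersive scene's content:

import SwiftUI
import SwiftData
import Observation

// Shared state that both the document window and the immersive scene can reach.
@Observable
final class SharedDocumentState {
    var modelContext: ModelContext?
}

struct DocumentContentView: View {
    @Environment(\.modelContext) private var modelContext
    let shared: SharedDocumentState

    var body: some View {
        Text("Document")
            .onAppear {
                // Hand the document's context to the immersive scene.
                shared.modelContext = modelContext
            }
    }
}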
Hello,
I’ve got a few questions about drag gestures on VisionOS in Immersive scenes.
Once a user initiates a drag gesture, are their eyes involved in the gesture anymore?
If not and the user is dragging something farther away, how far can they move it using indirect gestures? I assume the user’s range of motion is limited because their hands are in their lap, so could they move something multiple meters along a distant wall?
How can the user cancel the gesture if they don't like the anticipated / telegraphed result?
I’m trying to craft a good experience and it’s difficult without some of these details. I have still not heard back on my devkit application.
Thank you for any help.
I want to place a ModelEntity at an AnchorEntity's location, but not as a child of the AnchorEntity. (I want to be able to raycast to it and have collisions work.)
I've placed an AnchorEntity in my scene like so:
AnchorEntity(.plane(.vertical, classification: .wall, minimumBounds: [2.0, 2.0]), trackingMode: .continuous)
In my RealityView update closure, I print out this entity's position relative to "nil" like so:
wallAnchor.position(relativeTo: nil)
Unfortunately, this position doesn't make sense. It's very close to zero, even though it appears several meters away.
I believe this is because AnchorEntities have their own self-contained coordinate spaces, independent of the scene's coordinate space, and the entity is reporting its position relative to its own space.
How can I bridge the gap between these two?
WorldAnchor has an originFromAnchorTransform property that helps with this, but I'm not seeing something similar for AnchorEntity.
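The workaround I keep coming back to (a sketch that sidesteps AnchorEntity entirely) is to get the wall from ARKit's PlaneDetectionProvider instead, since PlaneAnchor does expose originFromAnchorTransform, and then position an ordinary entity myself; planeDetection here is assumed to be a provider that's already running:

// Instead of an AnchorEntity, take the transform from a PlaneAnchor directly.
for await update in planeDetection.anchorUpdates where update.anchor.classification == .wall {
    let model = ModelEntity(
        mesh: .generateBox(size: 0.1),
        materials: [SimpleMaterial(color: .blue, isMetallic: true)])
    model.components.set(CollisionComponent(shapes: [.generateBox(size: [0.1, 0.1, 0.1])]))

    // A plain scene-space transform, so raycasts and position(relativeTo:) behave as expected.
    model.transform = Transform(matrix: update.anchor.originFromAnchorTransform)
    realityViewContent?.add(model)
}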
Thank you