Although PlaneAnchors are still not generated by the PlaneDetectionProvider in the simulator, I've been brainstorming how to detect a tap on one of those planes.
In an iOS ARKit application I could use a raycastQuery on existingPlaneGeometry to make an anchor with the raycast result's world transform.
I've not yet found the VisionOS replacement for this.
My hunch is that I need to install my own mesh-less plane model entities for each PlaneAnchor returned by the PlaneDetectionProvider. From there I could use a TapGesture targeted to those models, and then build a WorldAnchor from the tap location on those entities.
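Concretely, the sketch I have in mind looks something like this (untested; planeRoot is a hypothetical container entity in my scene):
import RealityKit
import ARKit

// Untested sketch of the hunch above: for each PlaneAnchor, install an
// invisible, collision-only entity sized to the plane so a tap gesture
// can target it. No ModelComponent, so nothing is rendered.
func addTapTarget(for planeAnchor: PlaneAnchor, to planeRoot: Entity) {
    let target = Entity()
    target.transform = Transform(matrix: planeAnchor.originFromAnchorTransform)

    // Collision is required for gesture targeting.
    let size = SIMD3<Float>(planeAnchor.geometry.extent.width, 0.001, planeAnchor.geometry.extent.height)
    target.components.set(CollisionComponent(shapes: [.generateBox(size: size)]))
    target.components.set(InputTargetComponent())

    planeRoot.addChild(target)
}
A SpatialTapGesture().targetedToAnyEntity() on the RealityView would then hand me a tap location on those targets to build the WorldAnchor from.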
Anyone have any ideas?
Hello,
With the advent of widget interactivity, in order to support state management, I'd like to differentiate one widget from another, even if they share the same configuration.
Is this possible? Many of my search results are turning up iOS 15 era information, and I am not sure if that's still valid.
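For concreteness, here's the shape of configuration I mean (a minimal sketch; the intent and parameter names are mine):
import WidgetKit
import AppIntents

// Minimal sketch: a configuration intent whose parameters are the only thing
// distinguishing one widget from another. If two widgets share the same
// parameter values, I have no way to tell the instances apart.
struct CounterConfigurationIntent: WidgetConfigurationIntent {
    static var title: LocalizedStringResource = "Counter"

    @Parameter(title: "Label", default: "Counter")
    var label: String
}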
Thank you
Attempting to launch a widget in Debug mode on Sonoma from Xcode 15 is failing with the following message:
attach failed (Not allowed to attach to process. Look in the console messages (Console.app), near the debugserver entries, when the attach failed. The subsystem that denied the attach permission will likely have logged an informative message about why it was denied.)
Looking in console I see this message:
macOSTaskPolicy: (com.apple.debugserver) may not get the task control port of (MacGalleryWidget) (pid: 1851): (MacGalleryWidget) is hardened, (MacGalleryWidget) doesn't have get-task-allow, (com.apple.debugserver) is a declared debugger(com.apple.debugserver) is not a declared read-only debugger
What Xcode settings should I be looking at to rectify this? I suspect I may have something that's out of whack.
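For reference, these are the Debug-configuration settings I believe are in play (standard Xcode build settings; whether one of them is the culprit here is my guess):
// Debug.xcconfig-style sketch of the settings to inspect
CODE_SIGN_INJECT_BASE_ENTITLEMENTS = YES  // injects get-task-allow into Debug builds
ENABLE_HARDENED_RUNTIME = NO              // a hardened binary refuses debugger attach
CODE_SIGN_IDENTITY = Apple Development    // a Distribution identity strips get-task-allow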
The Goal
My goal is to place an item where the user taps on a plane, and have that item match the outward-facing normal vector where the user tapped.
In beta 3 a 3D Spatial Tap Gesture now returns an accurate Location3D, so determining the position to place an item is working great. I simply do:
let worldPosition: SIMD3<Float> = value.convert(value.location3D, from: .local, to: .scene)
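For context, that line lives in a gesture roughly like this (a sketch of my setup; placeItem is a hypothetical helper):
RealityView { content in
    // ... scene setup ...
}
.gesture(
    SpatialTapGesture()
        .targetedToAnyEntity()
        .onEnded { value in
            let worldPosition: SIMD3<Float> = value.convert(value.location3D, from: .local, to: .scene)
            placeItem(at: worldPosition) // hypothetical helper that spawns the entity
        }
)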
The Problem
Now, I notice that my entities aren't oriented correctly:
The placed item always 'faces' the camera. So if the camera isn't looking straight at the target plane, the orientation of the new entity is off.
If I retrieve the transform of my newly placed item, it says the rotation relative to nil is (0, 0, 0), which doesn't look correct.
I know I'm dealing with several coordinate systems here: that of the plane being tapped, the world coordinate system, and that of the item being placed, and I'm getting a bit lost in it all. Not to mention my API intuition is still pretty low, and quaternions are still new to me.
So, I'm curious, what rotation information can I use to "correct" the placed entity's orientation?
What I tried:
I've tried investigating the tap-target-entity like so:
let rotationRelativeToWorld = value.entity.convert(transform: value.entity.transform, to: nil).rotation
I believe this returns the rotation of the "plane entity" the user tapped, relative to the world.
While that gets me the following, I'm not sure if it's useful:
rotationRelativeToWorld:
▿ simd_quatf(real: 0.7071068, imag: SIMD3<Float>(-0.7071067, 6.600024e-14, 6.600024e-14))
▿ vector : SIMD4<Float>(-0.7071067, 6.600024e-14, 6.600024e-14, 0.7071068)
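For what it's worth, the closest I've gotten is something like this (untested sketch): read the tapped entity's world-space orientation and apply it to the placed entity.
// Untested sketch: inherit the tapped plane entity's world orientation
// instead of letting the new entity face the camera.
let worldPosition: SIMD3<Float> = value.convert(value.location3D, from: .local, to: .scene)
let worldOrientation = value.entity.orientation(relativeTo: nil)

let placed = ModelEntity(mesh: .generateBox(size: 0.05))
placed.position = worldPosition
placed.orientation = worldOrientation
rootEntity.addChild(placed) // rootEntity is a hypothetical scene root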
If anyone has better intuition than I do about the coordinate spaces involved, I would appreciate some help. Thanks!
The Location3D returned by a SpatialTapGesture does not include normal-vector information. This can make it difficult to orient an object placed at that location.
Am I misusing this gesture or is this indeed the case?
As an alternative I was thinking I could manually raycast toward the location the user tapped, but to do that I need two points. One of them needs to be the location of the device / the user's head in world space, and I'm not familiar with how to get that information.
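The piece I'm missing presumably looks something like this (sketch; assumes a running ARKitSession with a WorldTrackingProvider named worldTracking):
import ARKit
import QuartzCore

// Sketch: query the device (head) pose, then build a ray toward the tap.
guard let deviceAnchor = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else { return }
let headTransform = deviceAnchor.originFromAnchorTransform
let headPosition = SIMD3<Float>(headTransform.columns.3.x,
                                headTransform.columns.3.y,
                                headTransform.columns.3.z)
// tappedWorldPosition is the tap location from the gesture.
let rayDirection = simd_normalize(tappedWorldPosition - headPosition)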
Has anyone achieved something like this?
Is it possible to edit a SwiftData document in an immersive scene? If so... how?
At the moment I see that the modelContext is available in the contentView of a DocumentGroup, but can document data be made available to an immersive scene's content?
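The only workaround I can think of is bridging through an app-level observable (a sketch with hypothetical names; I don't know if this is the intended pattern):
import SwiftUI
import SwiftData

// Hypothetical bridge: the DocumentGroup's ContentView stores the document's
// modelContext here, and the immersive scene's content reads it back out.
@Observable
final class DocumentBridge {
    var modelContext: ModelContext?
}
ContentView would assign its @Environment(\.modelContext) into the bridge, and the immersive scene would edit through it, though I don't know whether that's safe for a document-backed context.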
My Xcode is only showing me the "Designed for iPad" Vision Pro archive destination, despite the native Vision Pro destination being chosen in my target settings.
Any thoughts on how to fix this?
How persistent is the storage of the WorldTrackingProvider and its underlying world map reconstruction?
The documentation mentions town-to-town anchor recovery and recovery between sessions, but does that include device restarts and app quits? There are no clues about how persistent it all is.
Hello,
I've noticed that when my ARSession runs the sceneReconstruction provider and the world tracking provider at the same time, I receive no scene reconstruction mesh updates. My catch closure doesn't receive any errors; the anchor-update sequence simply never yields anything.
If I run just the scene reconstruction provider by itself, then I do get mesh updates.
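Here's a simplified version of what I'm running:
// Simplified repro of my setup (names are mine).
let session = ARKitSession()
let sceneReconstruction = SceneReconstructionProvider()
let worldTracking = WorldTrackingProvider()

do {
    try await session.run([sceneReconstruction, worldTracking])
} catch {
    print("Failed to run session: \(error)")
}

for await update in sceneReconstruction.anchorUpdates {
    // With both providers running, this never fires.
    // With sceneReconstruction alone, mesh updates arrive as expected.
    print("Mesh update: \(update.anchor.id)")
}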
Is this a bug? Is it expected that it's not possible to do this?
Thank you
What I want to do:
I want to turn only the walls of a room into RealityKit Entities that I can collide with, or turn into occlusion surfaces.
This requires adding and maintaining RealityKit entities built with mesh information from the RoomAnchor. It also requires creating a "collision shape" from the mesh information.
What I've explored:
A RoomAnchor can provide me with MeshAnchor.Geometry values that match only the "wall" portions of a room.
I can use this mesh information to create RealityKit entities and add them to my immersive view.
But those meshes don't come with UUIDs, so I'm not sure how I could know which entities' meshes need to be updated as the RoomAnchor is updated.
As such I just keep adding duplicate wall entities.
A RoomAnchor also provides me with the UUIDs of its plane anchors, but so far I've found no way to connect those to the provided meshes.
Here is how I add the green walls from the RoomAnchor wall meshes.
Note: I don't like that I need to wrap this in a Task to satisfy the async nature of making a shape from a mesh. I could be stuck with it, though.
Warning: this code will keep adding walls, even if there are duplicates, and will likely cause performance issues :D.
func updateRoom(_ anchor: RoomAnchor) async throws {
    print("ROOM ID: \(anchor.id)")
    anchor.geometries(of: .wall).forEach { mesh in
        Task {
            let newEntity = Entity()
            newEntity.components.set(InputTargetComponent())
            realityViewContent?.addEntity(newEntity)
            newEntity.components.set(PlacementUtilities.PlacementSurfaceComponent())
            // Assumed intent: store the new entity so the lookups below configure it.
            // (This assignment appears to have been lost when excerpting the code.)
            collisionEntities[anchor.id] = newEntity
            collisionEntities[anchor.id]?.components.set(OpacityComponent(opacity: 0.2))
            collisionEntities[anchor.id]?.transform = Transform(matrix: anchor.originFromAnchorTransform)

            // Generate a mesh for the wall.
            do {
                let contents = MeshResource.Contents(planeGeometry: mesh)
                let meshResource = try MeshResource.generate(from: contents)
                // Make this plane occlude virtual objects behind it:
                // entity.components.set(ModelComponent(mesh: meshResource, materials: [OcclusionMaterial()]))
                collisionEntities[anchor.id]?.components.set(
                    ModelComponent(mesh: meshResource,
                                   materials: [SimpleMaterial(color: .green, roughness: 1.0, isMetallic: false)]))
            } catch {
                print("Failed to create a mesh resource for a plane anchor: \(error).")
                return
            }

            // Generate a collision shape for the wall (for object placement and physics).
            var shape: ShapeResource? = nil
            do {
                let vertices = anchor.geometry.vertices.asSIMD3(ofType: Float.self)
                shape = try await ShapeResource.generateStaticMesh(
                    positions: vertices,
                    faceIndices: anchor.geometry.faces.asUInt16Array())
            } catch {
                print("Failed to create a static mesh for a plane anchor: \(error).")
                return
            }

            if let shape {
                let collisionGroup = PlaneAnchor.verticalCollisionGroup
                collisionEntities[anchor.id]?.components.set(
                    CollisionComponent(shapes: [shape], isStatic: true,
                                       filter: CollisionFilter(group: collisionGroup, mask: .all)))
                // The wall needs to be a static physics body so objects come to rest on it.
                let physicsMaterial = PhysicsMaterialResource.generate()
                let physics = PhysicsBodyComponent(shapes: [shape], mass: 0.0,
                                                   material: physicsMaterial, mode: .static)
                collisionEntities[anchor.id]?.components.set(physics)
            }

            collisionEntities[anchor.id]?.components.set(InputTargetComponent())
        }
    }
}
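In the absence of per-mesh IDs, the best workaround I can think of is to rebuild the walls wholesale on every update rather than diff them. A sketch (wallContainers is a hypothetical [UUID: Entity] store of mine):
// Sketch: drop and rebuild all wall entities whenever the RoomAnchor updates,
// since the meshes carry no identifiers to diff against.
let container: Entity
if let existing = wallContainers[anchor.id] {
    container = existing
    while let stale = container.children.first {
        stale.removeFromParent() // clear last update's walls
    }
} else {
    container = Entity()
    wallContainers[anchor.id] = container
    realityViewContent?.addEntity(container)
}
// ...then add one child entity per wall mesh, as in updateRoom(_:) above.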
Background:
The app that I am working on lets the user place things in their surroundings and recovers those placements the next time they enter the immersive scene.
From the documentation and discussions I have had, world-tracked anchors are local to the device.
My questions are:
What happens to these anchors when the user updates their device to the next generation?
What happens to these anchors if the user gets an Apple Care replacement?
Are they backed up and restored via iCloud?
If not, I filed a feedback about it a few months back :D
FB13613066
Are there selection capabilities built into the new container APIs?
I would like to ensure that I can spawn a context menu for multiple selected items in my custom-layout container.
If one would like to perform network debugging that involves toggling wireless on and off while still remaining connected to the debugger, how can one disable a device's wireless connection to Xcode?
I want the result of an "If greater" node to return a boolean, but the best I can seem to get is a float of 0.00 or 1.00. I then can't seem to convert these to a boolean so that I can use the "AND" node.
Am I holding this wrong?