
Disable occlusion for collaboration debugging?
Hello, I'm noticing that during a collaborative session, anchors created on the host device appear in a different location on client devices, which makes it challenging to test other collaboration logic. For example, when the client places a textured plane mesh at an anchor created by the host, the placement is sometimes treated as being behind a real-world surface and gets clipped by RealityKit's mesh occlusion. While testing, I'd prefer to see it floating in space so I can tell that something is happening. I'm drawing a blank on whether there are any debugging options that could help me out; nothing in the render or debug options jumped out at me. Thoughts?
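One option worth trying (my assumption, not something confirmed in this thread) is to leave scene-understanding occlusion out of the ARView's environment options while debugging, so misplaced content stays visible instead of being clipped. A minimal sketch, assuming an existing `ARView` named `arView` and a hypothetical `debugCollaboration` flag:

```swift
import RealityKit

// While debugging collaboration, remove scene-understanding occlusion so that
// misplaced content stays visible instead of being hidden by real-world geometry.
func configureOcclusion(for arView: ARView, debugCollaboration: Bool) {
    if debugCollaboration {
        arView.environment.sceneUnderstanding.options.remove(.occlusion)
    } else {
        arView.environment.sceneUnderstanding.options.insert(.occlusion)
    }
}
```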
Replies: 1 · Boosts: 0 · Views: 1.2k · Feb ’22
ARView "showAnchorGeometry" interpretation help?
When using showAnchorGeometry I see lots of green surface anchors in my scene, and it has been really helpful for debugging the placement of objects when I tap the screen of my device. But I also get some blue shapes, and I'm not quite sure what those mean... Is there a document that explains what showAnchorGeometry is actually... showing? Googling "showAnchorGeometry blue" wasn't helpful! (I promise I tried)
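For reference, a minimal sketch of enabling the option alongside anchor origins, which may make it easier to match each colored shape to a specific anchor; pairing the two options is my assumption, not something from the thread:

```swift
import RealityKit

// Show anchor geometry plus each anchor's origin so the colored shapes can be
// correlated with individual anchors while debugging.
func enableAnchorDebugging(on arView: ARView) {
    arView.debugOptions.insert(.showAnchorGeometry)
    arView.debugOptions.insert(.showAnchorOrigins)
}
```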
Replies: 1 · Boosts: 0 · Views: 1.4k · Dec ’22
Consistent spacing on a grid of ContainerRelativeShapes?
Hello, I'm trying to make a grid of container-relative shapes where the outside gutters match the gutters between the items. The stickiest part of this problem is that calling .inset on a ContainerRelativeShape doubles the gutter between items. I've tried LazyVGrid and an HStack of VStacks, and both have this doubled gutter in between. I think I could move forward with some gnarly frame math, but I was curious whether I'm missing some SwiftUI layout feature that could make this easier and more maintainable.
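For context, a minimal sketch of the kind of layout being described, assuming a two-column grid and a hypothetical `gutter` value; insetting each ContainerRelativeShape by the gutter is what produces double spacing between neighboring cells while the outer edges only get a single inset:

```swift
import SwiftUI

// Each cell insets itself by the gutter, so adjacent cells end up separated by
// twice the gutter while the outer edge only gets a single inset.
struct ShapeGrid: View {
    let gutter: CGFloat = 12

    var body: some View {
        LazyVGrid(columns: [GridItem(spacing: 0), GridItem(spacing: 0)], spacing: 0) {
            ForEach(0..<4) { _ in
                ContainerRelativeShape()
                    .inset(by: gutter)
                    .fill(.blue)
                    .aspectRatio(1, contentMode: .fit)
            }
        }
    }
}
```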
Replies: 1 · Boosts: 0 · Views: 1k · Apr ’22
AppIntent with Input from share sheet or previous step?
I have the following parameter:

```swift
@Parameter(title: "Image",
           description: "Image to copy",
           supportedTypeIdentifiers: ["com.image"],
           inputConnectionBehavior: .connectToPreviousIntentResult)
var imageFile: IntentFile?
```

When I drop my AppIntent into a shortcut, though, I am unable to connect this parameter to the output of the previous step. Going by the documentation, I have no idea how to achieve this if the above is not the correct way to do so.
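For comparison, a hedged sketch of a complete intent around that parameter; the intent name and perform body are hypothetical, and it uses "public.image" (the standard image uniform type identifier) where the snippet above has "com.image":

```swift
import AppIntents

// Hypothetical intent wrapping the file parameter from the question.
struct CopyImageIntent: AppIntent {
    static var title: LocalizedStringResource = "Copy Image"

    @Parameter(title: "Image",
               description: "Image to copy",
               supportedTypeIdentifiers: ["public.image"],
               inputConnectionBehavior: .connectToPreviousIntentResult)
    var imageFile: IntentFile?

    func perform() async throws -> some IntentResult {
        // The actual copy work would go here.
        return .result()
    }
}
```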
Replies: 2 · Boosts: 0 · Views: 1.8k · Aug ’22
Opening a SwiftUI app window to a particular place via intent?
It is possible, via the new AppIntents framework, to open your app from a Shortcuts intent, but I am currently very confused about how to ensure that a particular window is opened in a SwiftUI-lifecycle app. If the user says "Open View A" via Shortcuts or Siri, I'd like to make sure it opens the window for "View A", though a duplicate window would be acceptable too. The WWDC22 presentation has the following:

```swift
@MainActor
func perform() async throws -> some IntentResult {
    Navigator.shared.openShelf(.currentlyReading)
    return .result()
}
```

Here, from the perform method of the intent structure, they tell an arbitrary Navigator (code not provided) to just open a view of the app. (How convenient!) But for a multi-window SwiftUI app, I'm not sure how to make this work. @Environment variables are not available within the intent struct, and even if I did have a "Navigator singleton", I'm not sure how it could get the @Environment value for openWindow, since that's a View environment; AppIntents exist outside the View environment tree, AFAIK. Any ideas? I'd be a little shocked if this is a UIKit-only sort of thing, but at the same time... ya never know.
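One hedged approach (my assumption, not a pattern confirmed by the session) is to have the intent write to a shared observable model and let a view that is inside the environment call openWindow in response. A sketch with hypothetical names:

```swift
import SwiftUI
import AppIntents

// A shared model the intent can reach from outside the view hierarchy.
final class WindowRouter: ObservableObject {
    static let shared = WindowRouter()
    @Published var requestedWindowID: String?
}

struct OpenViewAIntent: AppIntent {
    static var title: LocalizedStringResource = "Open View A"
    static var openAppWhenRun = true  // Bring the app forward so a window can actually open.

    @MainActor
    func perform() async throws -> some IntentResult {
        WindowRouter.shared.requestedWindowID = "viewA"
        return .result()
    }
}

struct RootView: View {
    @ObservedObject private var router = WindowRouter.shared
    @Environment(\.openWindow) private var openWindow

    var body: some View {
        Text("Main window content")
            .onChange(of: router.requestedWindowID) { id in
                guard let id else { return }
                // Opens (or focuses) the scene declared with this id; a WindowGroup may create a duplicate window.
                openWindow(id: id)
                router.requestedWindowID = nil
            }
    }
}
```

The obvious trade-off is that the window can only open once the app is frontmost and a view observing the router is alive, which is what openAppWhenRun is meant to help with.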
Replies: 2 · Boosts: 0 · Views: 2.6k · Aug ’22
Debugging Max Volume size - GeometryReader3D units?
Hello, I'm curious if anyone has some useful debug tools for out-of-bounds issues with volumes. I am opening a volume with a size of 1m x 1m x 10cm. I am adding a RealityView with a ModelEntity that is 0.5m tall, and I am seeing the model clip at the top and bottom. I find this odd, because I feel like it should be within the size of the volume. I was curious what size SwiftUI thinks the volume is, so I tried using a GeometryReader3D to tell me:

```swift
GeometryReader3D { proxy in
    VStack {
        Text("\(proxy.size.width)")
        Text("\(proxy.size.height)")
        Text("\(proxy.size.depth)")
    }
    .padding()
    .glassBackgroundEffect()
}
```

Unfortunately I get 680, 1360, and 68. I'm guessing these units are points, but that's not very helpful. The documentation says to use real-world units for volumes, yet none of the SwiftUI frame setters and getters appear to support different units. Is there a way to convert between the two? I'm not clear if this is a bug or a feature suggestion.
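On the conversion question, one direction to explore is the physicalMetrics environment converter on visionOS. A hedged sketch; I'm assuming the converter accepts GeometryProxy3D's Size3D directly, which should be verified against the current SDK:

```swift
import SwiftUI

struct VolumeSizeReadout: View {
    @Environment(\.physicalMetrics) private var physicalMetrics

    var body: some View {
        GeometryReader3D { proxy in
            // Convert the point-based proxy size into meters (assumed overload).
            let meters = physicalMetrics.convert(proxy.size, to: .meters)
            VStack {
                Text("width: \(meters.width, specifier: "%.2f") m")
                Text("height: \(meters.height, specifier: "%.2f") m")
                Text("depth: \(meters.depth, specifier: "%.2f") m")
            }
            .padding()
            .glassBackgroundEffect()
        }
    }
}
```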
Replies: 1 · Boosts: 0 · Views: 882 · Oct ’23
[Newbie] Why does my ShaderGraphMaterial appear distorted?
Disclaimer: I am new to all things 3D, so there could be a variety of things wrong with what I'm doing that are not unique to RealityKit. Any domain info would be appreciated. I'm following what I think are the recommended steps to import a shader-graph material from Reality Composer Pro and apply it to another ModelEntity. I do the following:

```swift
guard let entity = try? Entity.load(named: "Materials", in: RealityKitContent.realityKitContentBundle) else { return model }
let materialEntity = entity.findEntity(named: "materialModel") as? ModelEntity
guard let materialEntity else { return model }
```

I then configure a property on it like so:

```swift
guard var material = materialEntity.model?.materials[0] as? ShaderGraphMaterial else { return model }
try material.setParameter(name: "BaseColor", value: .color(matModel.matCoreUIColor))
```

I then apply it. This is what my texture looks like in Reality Composer Pro: I notice that my rendered object has distortions in the actual RealityView. Note the diagonal lines that appear "stretched". What could be doing this? I thought node shaders were supposed to be more resilient to distortions like this? I'm not sure if I've got a bug or if I'm using it wrong. FWIW, this is a shader based on Apple's felt material shader. My graph looks like this: Thanks
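As a way to isolate the problem (my suggestion, not something from the thread), the material can also be loaded straight from the Reality Composer Pro package rather than through a placeholder entity. A hedged sketch, where the material path and file name are guesses that would need to match the package:

```swift
import RealityKit
import RealityKitContent

// Load the shader-graph material directly from the package; the path and the
// "Materials.usda" file name are assumptions.
func loadGraphMaterial() async throws -> ShaderGraphMaterial {
    try await ShaderGraphMaterial(named: "/Root/materialModel/MyMaterial",
                                  from: "Materials.usda",
                                  in: RealityKitContent.realityKitContentBundle)
}
```

If the stretching persists with this path too, that would point at the target mesh's texture coordinates rather than the import steps.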
Replies: 2 · Boosts: 0 · Views: 1.1k · Aug ’23
Detecting Tap Location on detected "PlaneAnchor"? Replacement for Raycast?
While PlaneAnchors are still not generated by the PlaneDetectionProvider in the simulator, I am still brainstorming how to detect a tap on one of the planes. In an iOS ARKit application I could use a raycast query against existingPlaneGeometry and make an anchor from the raycast result's world transform. I've not yet found the visionOS replacement for this. A possible hunch is that I need to install my own mesh-less plane ModelEntities for each PlaneAnchor returned by the PlaneDetectionProvider. From there I could use a TapGesture targeted to those models, and then build a WorldAnchor from the tap location on those entities. Anyone have any ideas?
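Sketching the hunch above (all names are hypothetical, and the extent's own offset transform is ignored for brevity): keep one invisible, tappable collision entity per PlaneAnchor so a SpatialTapGesture targeted to it can be converted into a world position:

```swift
import SwiftUI
import RealityKit
import ARKit

@MainActor
final class PlaneTapModel {
    // One invisible, tappable entity per detected plane, keyed by the anchor's id.
    var planeEntities: [UUID: Entity] = [:]

    func upsert(_ anchor: PlaneAnchor, in content: RealityViewContent) {
        let entity = planeEntities[anchor.id] ?? {
            let newEntity = Entity()
            newEntity.components.set(InputTargetComponent())
            content.add(newEntity)
            planeEntities[anchor.id] = newEntity
            return newEntity
        }()

        // Follow the plane and give it a thin collision box sized from the plane's extent.
        entity.transform = Transform(matrix: anchor.originFromAnchorTransform)
        let shape = ShapeResource.generateBox(width: anchor.geometry.extent.width,
                                              height: 0.01,
                                              depth: anchor.geometry.extent.height)
        entity.components.set(CollisionComponent(shapes: [shape]))
    }
}
```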
Replies: 1 · Boosts: 0 · Views: 1.1k · Aug ’23
Why does this entity appear behind spatial tap collision location?
I am trying to make a world anchor where a user taps a detected plane. How am I trying this? First, I add an entity to a RealityView like so:

```swift
let anchor = AnchorEntity(.plane(.vertical, classification: .wall, minimumBounds: [2.0, 2.0]), trackingMode: .continuous)
anchor.transform.rotation *= simd_quatf(angle: -.pi / 2, axis: SIMD3<Float>(1, 0, 0))
let interactionEntity = Entity()
interactionEntity.name = "PLANE"
let collisionComponent = CollisionComponent(shapes: [ShapeResource.generateBox(width: 2.0, height: 2.0, depth: 0.02)])
interactionEntity.components.set(collisionComponent)
interactionEntity.components.set(InputTargetComponent())
anchor.addChild(interactionEntity)
content.add(anchor)
```

This:
- Declares an anchor that requires a 2m x 2m wall to appear in the scene, with continuous tracking
- Makes an empty entity and gives it a 2m x 2m x 2cm collision box
- Attaches the collision entity to the anchor
- Finally adds the anchor to the scene

It appears in the scene like this: Great! It appears to sit right on the wall. I then add a tap gesture recognizer like this:

```swift
SpatialTapGesture()
    .targetedToAnyEntity()
    .onEnded { value in
        guard value.entity.name == "PLANE" else { return }
        var worldPosition: SIMD3<Float> = value.convert(value.location3D, from: .local, to: .scene)
        let pose = Pose3D(position: worldPosition, rotation: value.entity.transform.rotation)
        let worldAnchor = WorldAnchor(transform: simd_float4x4(pose))
        let model = ModelEntity(mesh: .generateBox(size: 0.1, cornerRadius: 0.03), materials: [SimpleMaterial(color: .blue, isMetallic: true)])
        model.transform = Transform(matrix: worldAnchor.transform)
        realityViewContent?.add(model)
    }
```

I ASSUME this:
- Makes a world position from where the tap connects with the collision entity.
- Combines that position and the collision plane's rotation into a Pose3D.
- Makes a world anchor from that pose (so it can be persisted in a world-tracking provider).

Then I make a basic cube entity and give it that transform. Weird stuff: it doesn't appear on the plane... it appears behind it. Why? What have I done wrong? The X and Y of the tap location appear spot on, but something is "off" about the Z position. Also, is there a recommended way to debug this with the available tools? I'm guessing I'll have to file a DTS about this, because feedback on the forum has been pretty low since labs started.
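On the debugging question, a hedged aid (my own suggestion, not an official tool) is to temporarily give the invisible interaction entity a see-through model with the same dimensions as its collision box, so you can see exactly where taps are being resolved:

```swift
import RealityKit

// Visualize the 2m x 2m x 2cm collision box from the setup above.
func addDebugVisual(to interactionEntity: Entity) {
    let mesh = MeshResource.generateBox(width: 2.0, height: 2.0, depth: 0.02)
    let material = SimpleMaterial(color: .red, isMetallic: false)
    interactionEntity.components.set(ModelComponent(mesh: mesh, materials: [material]))
    // Make it see-through so it doesn't hide the real wall.
    interactionEntity.components.set(OpacityComponent(opacity: 0.3))
}
```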
Replies: 2 · Boosts: 0 · Views: 1.5k · Oct ’23
Placing Item on plane, position is great, but trouble with rotation
The Goal
My goal is to place an item where the user taps on a plane, and have that item match the outward-facing normal vector where the user tapped. In beta 3, a 3D SpatialTapGesture now returns an accurate Location3D, so determining the position to place an item is working great. I simply do:

```swift
let worldPosition: SIMD3<Float> = value.convert(value.location3D, from: .local, to: .scene)
```

The Problem
Now, I notice that my entities aren't oriented correctly: the placed item always 'faces' the camera. So if the camera isn't looking straight at the target plane, the orientation of the new entity is off. If I retrieve the transform of my newly placed item, it says the rotation relative to 'nil' is 0,0,0, which... doesn't look correct? I know I'm dealing with the different coordinate systems of the plane being tapped, the world coordinate system, and the item being placed, and I'm getting a bit lost in it all. Not to mention my API intuition is still pretty low, so quaternions are still new to me. So, I'm curious: what rotation information can I use to "correct" the placed entity's orientation?

What I tried
I've tried investigating the tap-target entity like so:

```swift
let rotationRelativeToWorld = value.entity.convert(transform: value.entity.transform, to: nil).rotation
```

I believe this returns the rotation of the "plane entity" the user tapped, relative to the world. While that gets me the following, I'm not sure if it's useful:

```
rotationRelativeToWorld: simd_quatf(real: 0.7071068, imag: SIMD3<Float>(-0.7071067, 6.600024e-14, 6.600024e-14))
```

If anyone has better intuition than I do about the coordinate spaces involved, I would appreciate some help. Thanks!
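A hedged sketch of one way to line up the placed entity with the tapped plane; the `place` helper is hypothetical, and a model whose 'forward' axis follows a different convention may still need an extra fixed rotation:

```swift
import SwiftUI
import RealityKit

// Place a model at the tap location, oriented like the tapped plane entity.
func place(_ model: ModelEntity,
           from value: EntityTargetValue<SpatialTapGesture.Value>,
           in content: RealityViewContent) {
    // Same position conversion as above.
    let worldPosition: SIMD3<Float> = value.convert(value.location3D, from: .local, to: .scene)
    // Orientation of the tapped plane entity in world space.
    let worldRotation = value.entity.orientation(relativeTo: nil)

    model.position = worldPosition
    model.orientation = worldRotation
    content.add(model)
}
```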
Replies: 1 · Boosts: 0 · Views: 1.4k · Oct ’23
Retrieve Normal Vector of Tap location? RayCast from device(head)?
The Location3D returned by a SpatialTapGesture does not include normal-vector information, which can make it difficult to orient an object placed at that location. Am I misusing this gesture, or is this indeed the case? As an alternative I was thinking I could manually raycast toward the location the user tapped, but to do that I need two points. One of those points needs to be the location of the device (the user's head) in world space, and I'm not familiar with how to get that information. Has anyone achieved something like this?
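On the device-position half of this, a hedged sketch using ARKit's WorldTrackingProvider on visionOS; it assumes the provider has already been started in an ARKitSession:

```swift
import ARKit
import QuartzCore
import simd

// Returns the device (head) position in world space, or nil if tracking isn't ready.
func devicePosition(from worldTracking: WorldTrackingProvider) -> SIMD3<Float>? {
    guard let deviceAnchor = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
        return nil
    }
    // The translation column of the 4x4 transform is the head's world position.
    let transform = deviceAnchor.originFromAnchorTransform
    return SIMD3<Float>(transform.columns.3.x, transform.columns.3.y, transform.columns.3.z)
}
```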
Replies: 1 · Boosts: 0 · Views: 1.3k · Nov ’23
View count of open SecurityScoped Resources?
Hello, I'm trying to determine whether my application is failing to release all of its security-scoped resources, and I'm curious if there's a way to view the count of all currently accessed URLs. I am balancing every startAccessingSecurityScopedResource call that returns true with a stopAccessingSecurityScopedResource call, but sometimes my application is unresponsive when my Mac wakes from sleep, and Console logs indicate some sandboxing issues. The unresponsiveness is resolved by a force quit and restart of the application. I'd like to observe what's going on with the number of security-scoped resources to get to the bottom of this. Is it possible?
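I'm not aware of a system-provided count, but here is a hedged sketch of app-side bookkeeping (the `ScopedAccessTracker` type is hypothetical) that tallies balanced start/stop calls so the open count can be logged:

```swift
import Foundation

// Counts currently open security-scoped URLs so they can be logged while debugging.
final class ScopedAccessTracker {
    static let shared = ScopedAccessTracker()
    private var openCounts: [URL: Int] = [:]
    private let lock = NSLock()

    func startAccess(to url: URL) -> Bool {
        guard url.startAccessingSecurityScopedResource() else { return false }
        lock.lock(); defer { lock.unlock() }
        openCounts[url, default: 0] += 1
        return true
    }

    func stopAccess(to url: URL) {
        url.stopAccessingSecurityScopedResource()
        lock.lock(); defer { lock.unlock() }
        openCounts[url, default: 1] -= 1
        if openCounts[url] == 0 { openCounts.removeValue(forKey: url) }
    }

    var currentlyOpen: Int {
        lock.lock(); defer { lock.unlock() }
        return openCounts.values.reduce(0, +)
    }
}
```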
Replies: 2 · Boosts: 0 · Views: 658 · May ’24
How do I use RoomAnchor?
What I want to do: I want to turn only the walls of a room into RealityKit entities that I can collide with, or turn into occlusion surfaces. This requires adding and maintaining RealityKit entities with mesh information from the RoomAnchor, and it also requires creating a collision shape from that mesh information.

What I've explored: A RoomAnchor can provide me with MeshAnchor.Geometry values that match only the "wall" portions of a room. I can use this mesh information to create RealityKit entities and add them to my immersive view. But those meshes don't come with UUIDs, so I'm not sure how I can know which entities' meshes need to be updated as the RoomAnchor is updated; as a result I just keep adding duplicate wall entities. A RoomAnchor also provides me with the UUIDs of its plane anchors, but no way that I've discovered so far to connect those to the provided meshes.

Here is how I add the green walls from the RoomAnchor wall meshes. Note: I don't like that I need to wrap this in a Task to satisfy the async nature of making a shape from a mesh; I could be stuck with it, though. Warning: this code will keep adding walls, even if there are duplicates, and will likely cause performance issues :D

```swift
func updateRoom(_ anchor: RoomAnchor) async throws {
    print("ROOM ID: \(anchor.id)")
    anchor.geometries(of: .wall).forEach { mesh in
        Task {
            let newEntity = Entity()
            newEntity.components.set(InputTargetComponent())
            realityViewContent?.addEntity(newEntity)
            newEntity.components.set(PlacementUtilities.PlacementSurfaceComponent())

            collisionEntities[anchor.id]?.components.set(OpacityComponent(opacity: 0.2))
            collisionEntities[anchor.id]?.transform = Transform(matrix: anchor.originFromAnchorTransform)

            // Generate a mesh for the plane
            do {
                let contents = MeshResource.Contents(planeGeometry: mesh)
                let meshResource = try MeshResource.generate(from: contents)
                // Make this plane occlude virtual objects behind it.
                // entity.components.set(ModelComponent(mesh: meshResource, materials: [OcclusionMaterial()]))
                collisionEntities[anchor.id]?.components.set(ModelComponent(mesh: meshResource, materials: [SimpleMaterial.init(color: .green, roughness: 1.0, isMetallic: false)]))
            } catch {
                print("Failed to create a mesh resource for a plane anchor: \(error).")
                return
            }

            // Generate a collision shape for the plane (for object placement and physics).
            var shape: ShapeResource? = nil
            do {
                let vertices = anchor.geometry.vertices.asSIMD3(ofType: Float.self)
                shape = try await ShapeResource.generateStaticMesh(positions: vertices, faceIndices: anchor.geometry.faces.asUInt16Array())
            } catch {
                print("Failed to create a static mesh for a plane anchor: \(error).")
                return
            }

            if let shape {
                let collisionGroup = PlaneAnchor.verticalCollisionGroup
                collisionEntities[anchor.id]?.components.set(CollisionComponent(shapes: [shape], isStatic: true, filter: CollisionFilter(group: collisionGroup, mask: .all)))
                // The plane needs to be a static physics body so that objects come to rest on the plane.
                let physicsMaterial = PhysicsMaterialResource.generate()
                let physics = PhysicsBodyComponent(shapes: [shape], mass: 0.0, material: physicsMaterial, mode: .static)
                collisionEntities[anchor.id]?.components.set(physics)
            }
            collisionEntities[anchor.id]?.components.set(InputTargetComponent())
        }
    }
}
```
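To avoid the duplicate walls, one hedged approach (not from the thread) is to keep a single container entity per RoomAnchor id and rebuild its wall children on every update, since the individual wall meshes carry no identifiers. A sketch that reuses the MeshResource.Contents(planeGeometry:) helper from the code above; the `WallMaintainer` type and `roomWallContainers` property are hypothetical:

```swift
import RealityKit
import ARKit

@MainActor
final class WallMaintainer {
    // One container entity per room, keyed by the RoomAnchor's id.
    var roomWallContainers: [UUID: Entity] = [:]

    func rebuildWalls(for anchor: RoomAnchor, in content: RealityViewContent) {
        // Reuse (or create) a single container entity for this room.
        let container = roomWallContainers[anchor.id] ?? {
            let entity = Entity()
            content.add(entity)
            roomWallContainers[anchor.id] = entity
            return entity
        }()

        // Throw away the previous wall children and rebuild from the latest meshes.
        Array(container.children).forEach { $0.removeFromParent() }
        container.transform = Transform(matrix: anchor.originFromAnchorTransform)

        for mesh in anchor.geometries(of: .wall) {
            let contents = MeshResource.Contents(planeGeometry: mesh)
            guard let resource = try? MeshResource.generate(from: contents) else { continue }
            let wall = ModelEntity(mesh: resource,
                                   materials: [SimpleMaterial(color: .green, roughness: 1.0, isMetallic: false)])
            container.addChild(wall)
        }
    }
}
```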
Replies: 1 · Boosts: 0 · Views: 920 · Jun ’24
Questions about WorldTrackedAnchor Resiliency
Background: The app that I am working on lets the user place things in their surroundings and recovers those placements the next time they enter the immersive scene. From the documentation and the discussions I have had, world-tracked anchors are local to the device. My questions are:
- What happens to these anchors when the user upgrades to the next generation of device?
- What happens to these anchors if the user gets an AppleCare replacement?
- Are they backed up and restored via iCloud? If not, I filed a feedback about it a few months back :D FB13613066
Replies: 1 · Boosts: 0 · Views: 886 · Jun ’24