
Unable to launch + Attach to Mac Widget (Sonoma + Xcode 15)
Attempting to launch a widget in Debug mode on Sonoma from Xcode 15 fails with the following message:

    attach failed (Not allowed to attach to process. Look in the console messages (Console.app), near the debugserver entries, when the attach failed. The subsystem that denied the attach permission will likely have logged an informative message about why it was denied.)

Looking in Console I see this message:

    macOSTaskPolicy: (com.apple.debugserver) may not get the task control port of (MacGalleryWidget) (pid: 1851): (MacGalleryWidget) is hardened, (MacGalleryWidget) doesn't have get-task-allow, (com.apple.debugserver) is a declared debugger, (com.apple.debugserver) is not a declared read-only debugger

What Xcode settings should I be looking at to rectify this? I suspect I may have something that's out of whack.
Replies: 1 · Boosts: 4 · Views: 1.1k · Activity: May ’25
Placing Item on plane, position is great, but trouble with rotation
The Goal

My goal is to place an item where the user taps on a plane, and have that item match the outward-facing normal vector where the user tapped. In beta 3, a 3D spatial tap gesture now returns an accurate Location3D, so determining the position to place an item is working great. I simply do:

    let worldPosition: SIMD3<Float> = value.convert(value.location3D, from: .local, to: .scene)

The Problem

Now I notice that my entities aren't oriented correctly: the placed item always 'faces' the camera. So if the camera isn't looking straight at the target plane, then the orientation of the new entity is off. If I retrieve the transform of my newly placed item, it says the rotation relative to 'nil' is 0,0,0, which... doesn't look correct? I know I'm dealing with different coordinate systems (the plane being tapped, the world coordinate system, and the item being placed) and I'm getting a bit lost in it all. Not to mention my API intuition is still pretty low, so quats are still new to me. So I'm curious: what rotation information can I use to "correct" the placed entity's orientation?

What I tried

I've tried investigating the tap-target entity like so:

    let rotationRelativeToWorld = value.entity.convert(transform: value.entity.transform, to: nil).rotation

I believe this returns the rotation of the "plane entity" the user tapped, relative to the world. While that gets me the following, I'm not sure if it's useful:

    rotationRelativeToWorld:
    ▿ simd_quatf(real: 0.7071068, imag: SIMD3<Float>(-0.7071067, 6.600024e-14, 6.600024e-14))
      ▿ vector : SIMD4<Float>(-0.7071067, 6.600024e-14, 6.600024e-14, 0.7071068)

If anyone has better intuition than me about the coordinate spaces involved, I would appreciate some help. Thanks!
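A minimal sketch of one way to approach the orientation, assuming the tapped entity is the plane entity itself (so its world-space orientation already encodes the plane's facing); `placedEntity` is an illustrative name for the newly created entity:

    // Position from the tap, as in the post.
    let worldPosition: SIMD3<Float> = value.convert(value.location3D, from: .local, to: .scene)
    placedEntity.setPosition(worldPosition, relativeTo: nil)

    // Orientation: adopt the tapped plane's world-space orientation so the placed
    // entity lines up with the plane rather than with the camera.
    let planeOrientation = value.entity.orientation(relativeTo: nil)
    placedEntity.setOrientation(planeOrientation, relativeTo: nil)

If the placed model's own "up" axis doesn't match the plane's, an extra fixed rotation (for example, a quarter turn about X) would still be needed on top of this.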
Replies: 1 · Boosts: 0 · Views: 1.4k · Activity: Oct ’23
Retrieve Normal Vector of Tap location? RayCast from device(head)?
The Location3D that is returned by a SpatialTapGesture does not return normal vector information. This can make it difficult to orient an object that's placed at that location. Am I misusing this gesture or is this indeed the case? As an alternative I was thinking I could manually raycast toward the location the user tapped, but to do that, I need two points. One of those points needs to be the location of the device / user's head in world space and I'm not familiar how to get that information. Has anyone achieved something like this?
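As a sketch of the raycast idea under stated assumptions: on visionOS the device (head) pose can be queried from a running WorldTrackingProvider, and RealityKit's Scene.raycast can return a surface normal, provided the tapped geometry has collision shapes. The tap position and the scene reference are assumed to come from elsewhere in the app:

    import ARKit
    import QuartzCore
    import RealityKit

    // Assumption: `worldTracking` is a WorldTrackingProvider already running in an ARKitSession.
    func surfaceNormal(towardTapAt tapWorldPosition: SIMD3<Float>,
                       in scene: RealityKit.Scene,
                       using worldTracking: WorldTrackingProvider) -> SIMD3<Float>? {
        // Head (device) pose in world space.
        guard let device = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
            return nil
        }
        let t = device.originFromAnchorTransform
        let headPosition = SIMD3<Float>(t.columns.3.x, t.columns.3.y, t.columns.3.z)

        // Cast a ray from the head toward the tapped point; the hit carries a normal,
        // but only entities with collision shapes will be hit.
        let direction = simd_normalize(tapWorldPosition - headPosition)
        let hits = scene.raycast(origin: headPosition, direction: direction,
                                 length: 5.0, query: .nearest)
        return hits.first?.normal
    }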
Replies: 1 · Boosts: 0 · Views: 1.3k · Activity: Nov ’23
SceneReconstruction alongside WorldTracking silently fails?
Hello, I've noticed that when I have my ARSession run the sceneReconstruction provider and the world tracking provider at the same time, I receive no scene reconstruction mesh updates. My catch closure doesn't receive any errors; nothing ever arrives on the async update sequence. If I run just the scene reconstruction provider by itself, then I do get mesh updates. Is this a bug? Is it expected that it's not possible to do this? Thank you
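For reference, a minimal sketch of the setup being described (one ARKitSession running both providers and consuming the scene-reconstruction updates); the function name is illustrative:

    import ARKit

    // A minimal reproduction of the setup described in the post.
    func runProviders() async {
        let session = ARKitSession()
        let sceneReconstruction = SceneReconstructionProvider()
        let worldTracking = WorldTrackingProvider()

        do {
            // Both providers are passed to the same run call.
            try await session.run([sceneReconstruction, worldTracking])
        } catch {
            print("Failed to start ARKitSession: \(error)")
            return
        }

        // With only sceneReconstruction running, this loop receives mesh anchors;
        // the post reports that it goes silent when worldTracking is added.
        for await update in sceneReconstruction.anchorUpdates {
            print("Mesh anchor \(update.anchor.id): \(update.event)")
        }
    }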
Replies: 1 · Boosts: 0 · Views: 740 · Activity: Feb ’24
How do I use RoomAnchor?
What I want to do: I want to turn only the walls of a room into RealityKit entities that I can collide with, or turn into occlusion surfaces. This requires adding and maintaining RealityKit entities built from the RoomAnchor's mesh information. It also requires creating a "collision shape" from that mesh information.

What I've explored: A RoomAnchor can provide me MeshAnchor.Geometry values that match only the "wall" portions of a room. I can use this mesh information to create RealityKit entities and add them to my immersive view. But those meshes don't come with UUIDs, so I'm not sure how I can know which entities' meshes need to be updated as the RoomAnchor is updated. As a result, I just keep adding duplicate wall entities. A RoomAnchor also provides me with the UUIDs of its plane anchors, but no way that I've discovered so far to connect those to the provided meshes.

Here is how I add the green walls from the RoomAnchor wall meshes. Note: I don't like that I need to wrap this in a Task to satisfy the async nature of making a shape from a mesh; I could be stuck with it, though. Warning: this code will keep adding walls, even if there are duplicates, and will likely cause performance issues :D.

    func updateRoom(_ anchor: RoomAnchor) async throws {
        print("ROOM ID: \(anchor.id)")
        anchor.geometries(of: .wall).forEach { mesh in
            Task {
                let newEntity = Entity()
                newEntity.components.set(InputTargetComponent())
                realityViewContent?.addEntity(newEntity)
                newEntity.components.set(PlacementUtilities.PlacementSurfaceComponent())

                collisionEntities[anchor.id]?.components.set(OpacityComponent(opacity: 0.2))
                collisionEntities[anchor.id]?.transform = Transform(matrix: anchor.originFromAnchorTransform)

                // Generate a mesh for the plane
                do {
                    let contents = MeshResource.Contents(planeGeometry: mesh)
                    let meshResource = try MeshResource.generate(from: contents)
                    // Make this plane occlude virtual objects behind it.
                    // entity.components.set(ModelComponent(mesh: meshResource, materials: [OcclusionMaterial()]))
                    collisionEntities[anchor.id]?.components.set(ModelComponent(mesh: meshResource, materials: [SimpleMaterial.init(color: .green, roughness: 1.0, isMetallic: false)]))
                } catch {
                    print("Failed to create a mesh resource for a plane anchor: \(error).")
                    return
                }

                // Generate a collision shape for the plane (for object placement and physics).
                var shape: ShapeResource? = nil
                do {
                    let vertices = anchor.geometry.vertices.asSIMD3(ofType: Float.self)
                    shape = try await ShapeResource.generateStaticMesh(positions: vertices,
                                                                       faceIndices: anchor.geometry.faces.asUInt16Array())
                } catch {
                    print("Failed to create a static mesh for a plane anchor: \(error).")
                    return
                }

                if let shape {
                    let collisionGroup = PlaneAnchor.verticalCollisionGroup
                    collisionEntities[anchor.id]?.components.set(CollisionComponent(shapes: [shape], isStatic: true, filter: CollisionFilter(group: collisionGroup, mask: .all)))
                    // The plane needs to be a static physics body so that objects come to rest on the plane.
                    let physicsMaterial = PhysicsMaterialResource.generate()
                    let physics = PhysicsBodyComponent(shapes: [shape], mass: 0.0, material: physicsMaterial, mode: .static)
                    collisionEntities[anchor.id]?.components.set(physics)
                }

                collisionEntities[anchor.id]?.components.set(InputTargetComponent())
            }
        }
    }
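As a sketch of one way to avoid accumulating duplicate wall entities (not a confirmed answer to the missing-UUID question): keep a single container entity per RoomAnchor id and rebuild its children on each update, so the previous update's wall meshes are discarded rather than kept alongside the new ones. `wallContainers` and `freshWallEntities` are illustrative names:

    // Illustrative storage owned by the caller, keyed by RoomAnchor.id.
    var wallContainers: [UUID: Entity] = [:]

    func replaceWalls(for anchor: RoomAnchor, with freshWallEntities: [Entity]) {
        let container = wallContainers[anchor.id] ?? {
            let entity = Entity()
            wallContainers[anchor.id] = entity
            // (Add `entity` to the RealityView content when it is first created.)
            return entity
        }()
        container.transform = Transform(matrix: anchor.originFromAnchorTransform)

        // Drop the previous update's wall meshes, then attach the fresh ones.
        for child in Array(container.children) {
            child.removeFromParent()
        }
        for wall in freshWallEntities {
            container.addChild(wall)
        }
    }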
Replies: 1 · Boosts: 0 · Views: 923 · Activity: Jun ’24
Questions about WorldTrackedAnchor Resiliency
Background: The app that I am working on lets the user place things in their surroundings and recovers those placements the next time they enter the immersive scene. From the documentation and discussions I have had, world-tracked anchors are local to the device. My questions are: What happens to these anchors when the user upgrades to the next-generation device? What happens to these anchors if the user gets an AppleCare replacement? Are they backed up and restored via iCloud? If not, I filed a feedback about it a few months back :D FB13613066
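For context, a minimal sketch of the place/recover flow being described, using WorldTrackingProvider's anchor APIs; `restorePlacedContent` is an illustrative helper, not a real API, and `worldTracking` is assumed to be a provider already running in an ARKitSession:

    import ARKit

    // Placing: persist an anchor at the placement transform (stored on-device).
    func persistPlacement(at placementTransform: simd_float4x4,
                          using worldTracking: WorldTrackingProvider) async throws {
        let anchor = WorldAnchor(originFromAnchorTransform: placementTransform)
        try await worldTracking.addAnchor(anchor)
    }

    // Recovering: on a later launch, previously added anchors arrive as .added updates.
    func recoverPlacements(using worldTracking: WorldTrackingProvider) async {
        for await update in worldTracking.anchorUpdates {
            guard update.event == .added else { continue }
            restorePlacedContent(for: update.anchor.id,
                                 at: update.anchor.originFromAnchorTransform)
        }
    }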
Replies: 1 · Boosts: 0 · Views: 886 · Activity: Jun ’24
[NewbQs] Is this possible with AppIntentDomains?
As a user, when viewing a photo or image, I want to be able to tell Siri, “add this to …”, similar to the example from the WWDC presentation where a photo is added to a note in the Notes app. Is this... possible with app domains as they are documented? I see domains like open-file and open-photo, but I don't know whether those are appropriate for this kind of functionality.
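Not an answer about the assistant-schema domains themselves, but as a sketch of the general App Intents shape such a command would take; the intent, parameter, and `PhotoStore` helper are all illustrative, not real APIs:

    import AppIntents

    struct AddCurrentPhotoIntent: AppIntent {
        static var title: LocalizedStringResource = "Add This Photo"

        // The destination the user names in the Siri request, e.g. a collection or note.
        @Parameter(title: "Destination")
        var destination: String

        func perform() async throws -> some IntentResult {
            // Illustrative app-side helper: add the currently viewed photo to the destination.
            try await PhotoStore.shared.addCurrentPhoto(to: destination)
            return .result()
        }
    }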
Replies: 1 · Boosts: 0 · Views: 588 · Activity: Sep ’24
Page-Curl Shader -- Pixel transparency check is wrong?
Given that I do not understand much at all about how to write shaders, and I do not understand the math associated with page-curl effects, I am trying to implement a page-curl shader for use on SwiftUI views. I've lifted a shader from HIROKI IKEUCHI that I believe they lifted from a non-Metal shader resource online, and I'm trying to digest it.

One thing I want to do is paint the "underside" of the view with a given color and maintain the transparency of rounded corners when they are flipped over. So, if an underside pixel is "clear", then I want to sample the pixel at that position on the original layer instead of the "curl effect" pixel. There are two comments in the shader below where I check the alpha and underside flags and paint the color red as a debug test.

The shader gives this result: the outside of those rounded corners is appropriately red, and the white border pixels are detected as "not clear". But the "inner" portion of the border is... mistakenly red? I don't get it. Any help would be appreciated. I feel tapped out and I don't have any IRL resources I can ask.

    //
    //  PageCurl.metal
    //  ShaderDemo3
    //
    //  Created by HIROKI IKEUCHI on 2023/10/17.
    //

    #include <metal_stdlib>
    #include <SwiftUI/SwiftUI_Metal.h>
    using namespace metal;

    #define pi float(3.14159265359)
    #define blue half4(0.0, 0.0, 1.0, 1.0)
    #define red half4(1.0, 0.0, 0.0, 1.0)
    #define radius float(0.4)

    // Returns the color for this pixel
    [[ stitchable ]] half4 pageCurl
    (
        float2 _position,
        SwiftUI::Layer layer,
        float4 bounds,
        float2 _clickedPoint,
        float2 _mouseCursor
    ) {
        half4 undersideColor = half4(0.5, 0.5, 1.0, 1.0);

        float2 originalPosition = _position;

        // Correct the y coordinate
        float2 position = float2(_position.x, bounds.w - _position.y);
        float2 clickedPoint = float2(_clickedPoint.x, bounds.w - _clickedPoint.y);
        float2 mouseCursor = float2(_mouseCursor.x, bounds.w - _mouseCursor.y);

        float aspect = bounds.z / bounds.w;

        float2 uv = position * float2(aspect, 1.) / bounds.zw;
        float2 mouse = mouseCursor.xy * float2(aspect, 1.) / bounds.zw;
        float2 mouseDir = normalize(abs(clickedPoint.xy) - mouseCursor.xy);
        float2 origin = clamp(mouse - mouseDir * mouse.x / mouseDir.x, 0., 1.);

        float mouseDist = clamp(length(mouse - origin)
                                + (aspect - (abs(clickedPoint.x) / bounds.z) * aspect) / mouseDir.x,
                                0., aspect / mouseDir.x);

        if (mouseDir.x < 0.) {
            mouseDist = distance(mouse, origin);
        }

        float proj = dot(uv - origin, mouseDir);
        float dist = proj - mouseDist;

        float2 linePoint = uv - dist * mouseDir;

        half4 pixel = layer.sample(position);

        if (dist > radius) {
            pixel = half4(0.0, 0.0, 0.0, 0.0); // background behind curling layer (note: 0.0 opacity)
            pixel.rgb *= pow(clamp(dist - radius, 0., 1.) * 1.5, .2);
        } else if (dist >= 0.0) {
            // THIS PORTION HANDLES THE CURL SHADED PORTION OF THE RESULT

            // map to cylinder point
            float theta = asin(dist / radius);
            float2 p2 = linePoint + mouseDir * (pi - theta) * radius;
            float2 p1 = linePoint + mouseDir * theta * radius;
            bool underside = (p2.x <= aspect && p2.y <= 1. && p2.x > 0. && p2.y > 0.);
            uv = underside ? p2 : p1;
            uv = float2(uv.x, 1.0 - uv.y); // invert y
            pixel = layer.sample(uv * float2(1. / aspect, 1.) * float2(bounds[2], bounds[3])); // ME <----

            if (underside && pixel.a == 0.0) { // <---- PIXEL.A IS 0.0 WHYYYYY
                pixel = red;
            }

            // Commented out while debugging alpha issues
            // if (underside && pixel.a == 0.0) {
            //     pixel = layer.sample(originalPosition);
            // } else if (underside) {
            //     pixel = undersideColor; // underside
            // }

            // Shadow the pixel being returned
            pixel.rgb *= pow(clamp((radius - dist) / radius, 0., 1.), .2);
        } else {
            // THIS PORTION HANDLES THE NON-CURL-SHADED PORTION OF THE SAMPLING.
            float2 p = linePoint + mouseDir * (abs(dist) + pi * radius);
            bool underside = (p.x <= aspect && p.y <= 1. && p.x > 0. && p.y > 0.);
            uv = underside ? p : uv;
            uv = float2(uv.x, 1.0 - uv.y); // invert y
            pixel = layer.sample(uv * float2(1. / aspect, 1.) * float2(bounds[2], bounds[3])); // ME

            if (underside && pixel.a == 0.0) { // <---- PIXEL.A IS 0.0 WHYYYYY
                pixel = red;
            }

            // Commented out while debugging alpha issues
            // if (underside && pixel.a == 0.0) {
            //     // If the new underside pixel is clear, we should sample the original image's pixel.
            //     pixel = layer.sample(originalPosition);
            // } else if (underside) {
            //     pixel = undersideColor;
            // }
        }

        return pixel;
    }
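For context, a shader like this is typically attached with SwiftUI's layerEffect modifier. The following is a minimal sketch of that wiring, assuming `myPageView` is the view being curled and `clickedPoint` / `mouseCursor` are CGPoint values tracked in view state (those names are illustrative, not from the original post):

    myPageView
        .layerEffect(
            ShaderLibrary.pageCurl(
                .boundingRect,          // fills the shader's `bounds` float4 parameter
                .float2(clickedPoint),  // assumed CGPoint: where the drag started
                .float2(mouseCursor)    // assumed CGPoint: current drag location
            ),
            // The curl samples far from the current pixel, so the offset must cover the view.
            maxSampleOffset: CGSize(width: 800, height: 800)
        )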
Replies: 1 · Boosts: 0 · Views: 735 · Activity: Oct ’24