Post · Replies · Boosts · Views · Activity

How to opt out of tinting content in visionOS widgets
During the WWDC session "Design widgets for visionOS", the presenter says:

"You can choose whether the background of your widget participates in tinting. If you opted out, for example to preserve a photo or illustration, make sure it still looks good alongside the selected color palette."

https://developer.apple.com/videos/play/wwdc2025/255

Unfortunately, this session has no example code. Can someone point me to the correct way to do this? Is there a modifier we can use on views? When a user selects one of the tint colors in the configuration screen, we would like to prevent some views from being tinted.
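For discussion, a guess grounded in the iOS accented-mode APIs rather than anything confirmed for visionOS: WidgetKit already lets individual views opt in or out of accent tinting, and images can request full-color rendering. A sketch, assuming visionOS widget tinting honors the same modifiers:

```swift
import SwiftUI
import WidgetKit

// Sketch only - verify these modifiers behave the same under visionOS tinting.
struct MyWidgetView: View {
    var body: some View {
        VStack {
            Text("Tinted with the palette")
                .widgetAccentable() // opts this subtree into the accent group

            Image("illustration") // assumed asset name
                .widgetAccentedRenderingMode(.fullColor) // asks to preserve original colors
        }
    }
}
```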
0
0
84
Jul ’25
How to combine Occlusion nodes with soft edges.
At a recent community meeting we were wondering how Apple creates this soft-edge effect around occlusion cutouts. We see this effect on keyboard cutouts, iPhone cutouts, and in progressive spaces. An example:

[Image: notice the soft edges around the occlusion cutout for the keyboard]

One of our members created some Shader Graph materials to explore soft edges. These work by sending data into the opacity channel of the PreviewSurface node. Unfortunately, the Occlusion Surface nodes lack any sort of input. If you know how to blend these concepts with RealityKit occlusion, please let us know!
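For reference, the code side has the same limitation: RealityKit's OcclusionMaterial takes no parameters that would feather its edges, so the missing inputs on the ShaderGraph Occlusion Surface nodes have a direct RealityKit parallel. A minimal sketch (the keyboard-sized box is illustrative):

```swift
import RealityKit

// A standard occlusion cutout. OcclusionMaterial punches a hole in rendered
// content so passthrough shows through, but exposes no opacity or
// edge-softness inputs.
let cutout = ModelEntity(
    mesh: .generateBox(width: 0.4, height: 0.02, depth: 0.2), // roughly keyboard-sized
    materials: [OcclusionMaterial()]
)
```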
0
1
891
Sep ’25
Values for SIMD3 and SIMD2 not showing up in Reality Composer Pro
Reality Composer Pro question related to custom components: my custom component defines some properties to edit in RCP. Simple ones work fine, but SIMD3 and SIMD2 do not. I'd expect to see default values, but instead I get all 0s. If I try to run the scene as-is, it doesn't load. Once I enter some values in RCP, it builds and runs fine.

More generally, does Apple have documentation on creating properties for components? The only examples I've seen show simple strings and floats. There are no details about vectors, conditional options, grouping properties, etc.

```swift
public struct EntitySpawnerComponent: Component, Codable {
    public enum SpawnShape: String, Codable {
        case domeUpper
        case domeLower
        case sphere
        case box
        case plane
        case circle
    }

    // These properties get their default values in RCP.

    /// The number of clones to create
    public var Copies: Int = 12

    /// The shape to spawn entities in
    public var SpawnShape: SpawnShape = .domeUpper

    /// Radius for spherical shapes (dome, sphere, circle)
    public var Radius: Float = 5.0

    // These properties DO NOT get their default values in RCP. They all show 0.

    /// Dimensions for box spawning (width, height, depth)
    public var BoxDimensions: SIMD3<Float> = SIMD3(2.0, 2.0, 2.0)

    /// Dimensions for plane spawning (width, depth)
    public var PlaneDimensions: SIMD2<Float> = SIMD2(2.0, 2.0)

    /// Track if we've already spawned copies
    public var HasSpawned: Bool = false

    public init() {
    }
}
```
1
0
529
Dec ’24
DragGesture that pivots with the user in visionOS
Apple published a set of examples for using system gestures to interact with RealityKit entities. I've been using DragGesture a lot in my apps and noticed an issue when using it in an immersive space. When dragging an entity, if I turn my body to face another direction, the dragged entity does not stay relative to my hand. This can lead to situations where the entity is pulled very close to me, pushed far away, or even ends up behind me.

In the examples linked above, there are two versions of drag handling:

handleFixedDrag: This is similar to what I'm doing now. It uses the value from value.gestureValue.translation3D as the basis for the drag.

handlePivotDrag: This version aims to solve the problem I described above by using value.inputDevicePose3D as the basis of the gesture.

I've tried the example from handlePivotDrag, but it has one limitation. Using this version, I can move the entity around me as if it were on the inside of an arc or sphere. However, I can no longer move the entity farther or closer. It stays at a similar (though not exact) distance from me while I drag.

Is there a way to combine these concepts? Ideally, I would like a gesture that behaves the way visionOS windows do. When we drag windows, we can move them around relative to ourselves, pull them closer, and push them farther, all while avoiding the issues described above.

Example from handleFixedDrag:

```swift
mutating private func handleFixedDrag(value: EntityTargetValue<DragGesture.Value>) {
    let state = EntityGestureState.shared
    guard let entity = state.targetedEntity else { fatalError("Gesture contained no entity") }

    if !state.isDragging {
        state.isDragging = true
        state.dragStartPosition = entity.scenePosition
    }

    let translation3D = value.convert(value.gestureValue.translation3D, from: .local, to: .scene)
    let offset = SIMD3<Float>(x: Float(translation3D.x),
                              y: Float(translation3D.y),
                              z: Float(translation3D.z))

    entity.scenePosition = state.dragStartPosition + offset

    if let initialOrientation = state.initialOrientation {
        state.targetedEntity?.setOrientation(initialOrientation, relativeTo: nil)
    }
}
```

Example from handlePivotDrag:

```swift
mutating private func handlePivotDrag(value: EntityTargetValue<DragGesture.Value>) {
    let state = EntityGestureState.shared
    guard let entity = state.targetedEntity else { fatalError("Gesture contained no entity") }

    // The transform that the pivot will be moved to.
    var targetPivotTransform = Transform()

    // Set the target pivot transform depending on the input source.
    if let inputDevicePose = value.inputDevicePose3D {
        // If there is an input device pose, use it for positioning and rotating the pivot.
        targetPivotTransform.scale = .one
        targetPivotTransform.translation = value.convert(inputDevicePose.position, from: .local, to: .scene)
        targetPivotTransform.rotation = value.convert(AffineTransform3D(rotation: inputDevicePose.rotation), from: .local, to: .scene).rotation
    } else {
        // If there is not an input device pose, use the location of the drag for positioning the pivot.
        targetPivotTransform.translation = value.convert(value.location3D, from: .local, to: .scene)
    }

    if !state.isDragging {
        // If this drag just started, create the pivot entity.
        let pivotEntity = Entity()

        guard let parent = entity.parent else { fatalError("Non-root entity is missing a parent.") }

        // Add the pivot entity into the scene.
        parent.addChild(pivotEntity)

        // Move the pivot entity to the target transform.
        pivotEntity.move(to: targetPivotTransform, relativeTo: nil)

        // Add the targeted entity as a child of the pivot without changing
        // the targeted entity's world transform.
        pivotEntity.addChild(entity, preservingWorldTransform: true)

        // Store the pivot entity.
        state.pivotEntity = pivotEntity

        // Indicate that a drag has started.
        state.isDragging = true
    } else {
        // If this drag is ongoing, move the pivot entity to the target transform.
        // The animation duration smooths the noise in the target transform across frames.
        state.pivotEntity?.move(to: targetPivotTransform, relativeTo: nil, duration: 0.2)
    }

    if preserveOrientationOnPivotDrag, let initialOrientation = state.initialOrientation {
        state.targetedEntity?.setOrientation(initialOrientation, relativeTo: nil)
    }
}
```
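For discussion, here is a rough, untested sketch of one possible hybrid, not from Apple's sample: keep handlePivotDrag's pivot-follow behavior for the arc motion, and add a push/pull term by projecting hand movement onto the pivot-to-entity direction. The stored properties dragStartHandPosition and dragStartLocalOffset are assumed additions to EntityGestureState, captured when the drag begins.

```swift
// Hypothetical addition inside handlePivotDrag's "ongoing drag" branch.
if let inputDevicePose = value.inputDevicePose3D, let pivot = state.pivotEntity {
    let hand = value.convert(inputDevicePose.position, from: .local, to: .scene)
    let handDelta = hand - state.dragStartHandPosition // assumed stored at drag start

    // How much of the hand's movement points along the line from the pivot
    // out to the entity: positive pushes away, negative pulls closer.
    // (Guard against a zero-length direction in real code.)
    let outward = normalize(entity.scenePosition - pivot.scenePosition)
    let push = dot(handDelta, outward)

    // Slide the entity along its original local offset from the pivot.
    let offsetDirection = normalize(state.dragStartLocalOffset) // assumed stored at drag start
    entity.position = state.dragStartLocalOffset + offsetDirection * push
}
```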
1
0
490
Feb ’25
Cannot remove final World Anchor
I've been having some issues removing anchors. I can add anchors with no issue; they will be there the next time I run the scene. I also get updates when ARKit sends them. I can remove anchors, but not all the time. The method I'm using is to call removeAnchor() on the data provider.

```swift
worldTracking.removeAnchor(forID: uuid)
// Yes, I have also tried `removeAnchor(_ worldAnchor: WorldAnchor)`
```

This works when there is more than one anchor in a scene. When I'm down to one remaining anchor, removal seems to succeed (it does not raise an error), but the next time I run the scene the removed anchor is back. This only happens when there is one remaining anchor.

```swift
do {
    // This always runs, but the removal doesn't seem to "save" when there is only one anchor left.
    try await worldTracking.removeAnchor(forID: uuid)
} catch {
    // I have never seen this block fire!
    print("Failed to remove world anchor \(uuid) with error: \(error).")
}
```

I posted a video on my website if you want to see it happening: https://stepinto.vision/labs/lab-051-issues-with-world-tracking/

Here is the full code. Can you see if I'm doing something wrong? Is this a bug?

```swift
struct Lab051: View {
    @State var session = ARKitSession()
    @State var worldTracking = WorldTrackingProvider()
    @State var worldAnchorEntities: [UUID: Entity] = [:]
    @State var placement = Entity()
    @State var subject: ModelEntity = {
        let subject = ModelEntity(
            mesh: .generateSphere(radius: 0.06),
            materials: [SimpleMaterial(color: .stepRed, isMetallic: false)])
        subject.setPosition([0, 0, 0], relativeTo: nil)

        let collision = CollisionComponent(shapes: [.generateSphere(radius: 0.06)])
        let input = InputTargetComponent()
        subject.components.set([collision, input])
        return subject
    }()

    var body: some View {
        RealityView { content in
            guard let scene = try? await Entity(named: "WorldTracking", in: realityKitContentBundle) else { return }
            content.add(scene)
            if let placementEntity = scene.findEntity(named: "PlacementPreview") {
                placement = placementEntity
            }
        } update: { content in
            for (_, entity) in worldAnchorEntities {
                if !content.entities.contains(entity) {
                    content.add(entity)
                }
            }
        }
        .modifier(DragGestureImproved())
        .gesture(tapGesture)
        .task {
            try! await setupAndRunWorldTracking()
        }
    }

    var tapGesture: some Gesture {
        TapGesture()
            .targetedToAnyEntity()
            .onEnded { value in
                if value.entity.name == "PlacementPreview" {
                    // If we tapped the placement preview cube, create an anchor
                    Task {
                        let anchor = WorldAnchor(originFromAnchorTransform: value.entity.transformMatrix(relativeTo: nil))
                        try await worldTracking.addAnchor(anchor)
                    }
                } else {
                    Task {
                        // Get the UUID we stored on the entity
                        let uuid = UUID(uuidString: value.entity.name) ?? UUID()
                        do {
                            try await worldTracking.removeAnchor(forID: uuid)
                        } catch {
                            print("Failed to remove world anchor \(uuid) with error: \(error).")
                        }
                    }
                }
            }
    }

    func setupAndRunWorldTracking() async throws {
        if WorldTrackingProvider.isSupported {
            do {
                try await session.run([worldTracking])
                for await update in worldTracking.anchorUpdates {
                    switch update.event {
                    case .added:
                        let subjectClone = subject.clone(recursive: true)
                        subjectClone.isEnabled = true
                        subjectClone.name = update.anchor.id.uuidString
                        subjectClone.transform = Transform(matrix: update.anchor.originFromAnchorTransform)
                        worldAnchorEntities[update.anchor.id] = subjectClone
                        print("🟢 Anchor added \(update.anchor.id)")
                    case .updated:
                        guard let entity = worldAnchorEntities[update.anchor.id] else {
                            print("No entity found to update for anchor \(update.anchor.id)")
                            return
                        }
                        entity.transform = Transform(matrix: update.anchor.originFromAnchorTransform)
                        print("🔵 Anchor updated \(update.anchor.id)")
                    case .removed:
                        worldAnchorEntities[update.anchor.id]?.removeFromParent()
                        worldAnchorEntities.removeValue(forKey: update.anchor.id)
                        print("🔴 Anchor removed \(update.anchor.id)")
                        if let remainingAnchors = await worldTracking.allAnchors {
                            print("Remaining Anchors: \(remainingAnchors.count)")
                        }
                    }
                }
            } catch {
                print("ARKit session error \(error)")
            }
        }
    }
}
```
1
2
182
May ’25
macOS SwiftData app never syncs with CloudKit
I'm using SwiftData with CloudKit in a very simple app. Data syncs between iOS, iPadOS, and visionOS, but not macOS. From what I can tell, macOS never gets CloudKit messages unless I'm running the app from Xcode.

I can listen for the CloudKit messages and show a line in a debug overlay. This works perfectly when I run from Xcode: I can see the notifications and see updates in my app. However, if I launch the app outside of Xcode I never see any changes or notifications. It is as if the Mac app never even tries to contact CloudKit.

The schema has been deployed in the CloudKit console. The app is based on the multi-platform Xcode template. Again, only the macOS version has this issue. Is there some extra permission or setting I need in order to use CloudKit on macOS?

```swift
@State private var publisher = NotificationCenter.default
    .publisher(for: NSPersistentCloudKitContainer.eventChangedNotification)
    .receive(on: DispatchQueue.main)

...

.onReceive(publisher) { notification in
    // Listen for changes in CK events
    if let userInfo = notification.userInfo,
       let event = userInfo[NSPersistentCloudKitContainer.eventNotificationUserInfoKey] as? NSPersistentCloudKitContainer.Event {
        let message = "CloudKit Sync: \(event.type.rawValue) - \(event.succeeded ? "Success" : "Failed") - \(event.description)"
        // Store for UI display
        syncNotifications.append(message)
        if syncNotifications.count > 10 {
            syncNotifications.removeFirst()
        }
    }
}
.overlay(alignment: .topTrailing) {
    if !syncNotifications.isEmpty {
        VStack(alignment: .leading) {
            ForEach(syncNotifications, id: \.self) { notification in
                Text(notification)
                    .padding(8)
            }
        }
        .frame(width: 800, height: 500)
        .cornerRadius(8)
        .background(Color.secondary.opacity(0.2))
        .padding()
        .transition(.move(edge: .top))
    }
}
```
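Not an answer so much as things worth ruling out, since a few macOS-only requirements match this exact symptom (works from Xcode, silent when launched standalone): a sandboxed Mac app can't reach CloudKit without the outgoing-connections entitlement, and push-driven sync needs the macOS push entitlement, which only survives in copies signed with a provisioning profile that carries it. A sketch of the relevant .entitlements keys, assuming App Sandbox is enabled:

```xml
<!-- Hypothetical additions to the macOS target's .entitlements file -->
<key>com.apple.security.app-sandbox</key>
<true/>
<!-- Without this, a sandboxed app cannot open outgoing network connections -->
<key>com.apple.security.network.client</key>
<true/>
<!-- macOS spelling of the push entitlement used for CloudKit change notifications -->
<key>com.apple.developer.aps-environment</key>
<string>production</string>
```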
1
1
163
May ’25
How to use defaultSize with visionOS window restoration?
One of the most common ways to provide a window size in visionOS is the defaultSize scene modifier.

```swift
WindowGroup(id: "someID") {
    SomeView()
}
.defaultSize(CGSize(width: 600, height: 600))
```

Starting in visionOS 26, using this has a side effect. visionOS 26 restores windows that have been locked in place or snapped to surfaces. If a user has manually adjusted the size of a locked/snapped window, that size is only restored in some cases.

Manual resize respected:

Leaving a room and returning later

Taking the headset off and putting it back on later

Manual resize NOT respected:

Device restart. In this case, the window is reopened where it was locked, but the size is set back to the values passed to defaultSize. The manual resizing adjustments the user made are lost. This is counter to how all other windows and widgets work.

I reported this last month (FB18429638) but haven't heard back whether this is a bug or intended behavior.

Questions:

What is the best way to provide a default window size that will only be used when opening new windows, and not during scene restoration?

Should we keep track of window sizes after users adjust them and save them somewhere (see the sketch below)?

If this is intended behavior, can someone please update the docs accordingly?
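For the second question, a minimal sketch of what self-tracking could look like, assuming the stored size should feed back into defaultSize on the next launch (the @AppStorage keys are illustrative):

```swift
import SwiftUI

@main
struct MyApp: App {
    // Illustrative keys; fall back to the original 600x600 default.
    @AppStorage("lastWindowWidth") private var lastWidth: Double = 600
    @AppStorage("lastWindowHeight") private var lastHeight: Double = 600

    var body: some Scene {
        WindowGroup(id: "someID") {
            SomeView()
                // Record the user's size whenever it changes.
                .onGeometryChange(for: CGSize.self) { proxy in
                    proxy.size
                } action: { newSize in
                    lastWidth = newSize.width
                    lastHeight = newSize.height
                }
        }
        // Seeds new windows with the last user-chosen size. Note this does not
        // fix the restoration-after-restart behavior described above.
        .defaultSize(CGSize(width: lastWidth, height: lastHeight))
    }
}
```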
1
0
444
Jul ’25
Entities moved with Manipulation Component in visionOS Beta 4 are clipped by volume bounds
In Betas 1, 2, and 3, we could pick up and inspect entities, bringing them closer while moving them outside the bounds of a volume. As of Beta 4, these entities are clipped by the bounds of the volume. I'm not sure if this is a bug or an intended change, but I filed a Feedback report (FB19005083). The release notes don't mention a change in behavior, at least not that I can find. Is this an intentional change or a bug?

Here is a video that shows the issue: https://youtu.be/ajBAaSxLL2Y

In previous versions of visionOS 26, I could move these entities out of the volume and inspect them up close. Releasing would return them to the volume. Now they are clipped as soon as they reach the edge of the volume. I haven't had a chance to test with windows or with the SwiftUI modifier version of manipulation.
1
4
383
Jul ’25
What's in Reality Composer Pro 26
That title would have made a great WWDC session. Unfortunately, it seems like nothing is new in Reality Composer Pro this year. I've noticed that all versions of the Xcode beta this summer have shipped with Reality Composer Pro version 2.0. There have been slight bumps in the build number, but I haven't found any new features or seen any documentation indicating that anything has changed.

So the question is: what is the state of Reality Composer Pro? Should we continue to use this tool, or start doing everything in code? A huge number of sample projects use Reality Composer Pro, so it seems like Apple is still using it even if they didn't update it this year.
1
3
456
Aug ’25
Do you retain a reference to your content events in RealityView?
Do you retain a reference to your content (RealityViewContent) events? For example, the Manipulation Events docs from Apple use _ to discard the result. In theory the event should keep working while the content is alive.

```swift
_ = content.subscribe(to: ManipulationEvents.WillBegin.self) { event in
    event.entity.components[ModelComponent.self]?.materials[0] = SimpleMaterial(color: .blue, isMetallic: false)
}

_ = content.subscribe(to: ManipulationEvents.WillEnd.self) { event in
    event.entity.components[ModelComponent.self]?.materials[0] = SimpleMaterial(color: .red, isMetallic: false)
}
```

We could instead store these subscriptions in state. I've seen this in a few samples and apps.

```swift
@State var beginSubscription: EventSubscription?

...

beginSubscription = content.subscribe(to: ManipulationEvents.WillBegin.self) { event in
    event.entity.components[ModelComponent.self]?.materials[0] = SimpleMaterial(color: .blue, isMetallic: false)
}
```

The main advantage I see is that we can be more explicit about when we remove the subscription. Are there other reasons to keep a reference to these events?
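On that explicit-removal point, a small sketch of what deterministic teardown might look like: EventSubscription provides cancel(), so a stored subscription can be ended when the view goes away (the onDisappear placement is just one option).

```swift
// Sketch: cancel a stored subscription when the view disappears, rather than
// relying on the content's lifetime to end it.
.onDisappear {
    beginSubscription?.cancel()
    beginSubscription = nil
}
```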
1
0
576
Sep ’25
How can we move the player within a RealityKit/RealityView scene?
How can we move the player within a RealityKit/RealityView scene? I am not looking for any animation or gradual movement, just instantaneous position changes. I am unsure how to access the player (the person wearing the headset) and its transform within the context of a RealityView. The goal is to allow the player to enter a full space in immersive mode and explore it with various objects. They should be able to select an object and move closer to it.
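One common workaround, offered as a sketch rather than a definitive answer: visionOS doesn't let you set the wearer's transform directly, so you move the scene root the opposite way. This assumes a running WorldTrackingProvider in a full space; the function and parameter names are illustrative.

```swift
import ARKit
import RealityKit
import QuartzCore

// Instantly "teleport" the player to a target position by shifting the
// whole scene so the target lands at the user's head (keeping height).
func teleport(to target: SIMD3<Float>, root: Entity, worldTracking: WorldTrackingProvider) {
    guard let device = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else { return }
    let headPosition = Transform(matrix: device.originFromAnchorTransform).translation
    let delta = SIMD3<Float>(headPosition.x - target.x, 0, headPosition.z - target.z)
    root.position += delta
}
```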
2
1
1.4k
Jan ’24
RealityKit - Change Material Color or other properties in RealityView
In a RealityView, I have a scene loaded from Reality Composer Pro. The entity I'm interacting with has a PhysicallyBasedMaterial with a diffuse color. I want to change that color on long press. I can get the entity and even get a reference to the material, but I can't seem to change anything about it. What is the best way to change the color of a material at runtime?

```swift
var longPress: some Gesture {
    LongPressGesture(minimumDuration: 0.5)
        .targetedToAnyEntity()
        .onEnded { value in
            value.entity.position.y = value.entity.position.y + 0.01

            if var shadow = value.entity.components[GroundingShadowComponent.self] {
                shadow.castsShadow = true
                value.entity.components.set(shadow)
            }

            if let model = value.entity.components[ModelComponent.self] {
                print("material", model)
                if let mat = model.materials.first {
                    print("material", mat)
                    // I have a material here but I can't set any properties?
                    // mat.diffuseColor does not exist
                }
            }
        }
}
```

Here is the full code:

```swift
struct Lab5026: View {
    var body: some View {
        RealityView { content in
            if let root = try? await Entity(named: "GestureLab", in: realityKitContentBundle) {
                root.position = [0, -0.45, 0]
                if let subject = root.findEntity(named: "Cube") {
                    subject.components.set(HoverEffectComponent())
                    subject.components.set(GroundingShadowComponent(castsShadow: false))
                }
                content.add(root)
            }
        }
        .gesture(longPress.sequenced(before: dragGesture))
    }

    var longPress: some Gesture {
        LongPressGesture(minimumDuration: 0.5)
            .targetedToAnyEntity()
            .onEnded { value in
                value.entity.position.y = value.entity.position.y + 0.01

                if var shadow = value.entity.components[GroundingShadowComponent.self] {
                    shadow.castsShadow = true
                    value.entity.components.set(shadow)
                }

                if let model = value.entity.components[ModelComponent.self] {
                    print("material", model)
                    if let mat = model.materials.first {
                        print("material", mat)
                        // I have a material here but I can't set any properties?
                        // mat.diffuseColor does not exist
                        // PhysicallyBasedMaterial
                    }
                }
            }
    }

    var dragGesture: some Gesture {
        DragGesture()
            .targetedToAnyEntity()
            .onChanged { value in
                let newPosition = value.convert(value.location3D, from: .global, to: value.entity.parent!)
                let limit: Float = 0.175
                value.entity.position.x = min(max(newPosition.x, -limit), limit)
                value.entity.position.z = min(max(newPosition.z, -limit), limit)
            }
            .onEnded { value in
                value.entity.position.y = value.entity.position.y - 0.01
                if var shadow = value.entity.components[GroundingShadowComponent.self] {
                    shadow.castsShadow = false
                    value.entity.components.set(shadow)
                }
            }
    }
}
```
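For what it's worth, a sketch of the usual pattern: RealityKit materials are value types, so the trick is to downcast to PhysicallyBasedMaterial, mutate a copy, and write the whole ModelComponent back. The .green tint is just an example.

```swift
// Sketch: mutate the first material by round-tripping the component.
if var model = value.entity.components[ModelComponent.self],
   var pbr = model.materials.first as? PhysicallyBasedMaterial {
    pbr.baseColor = .init(tint: .green) // baseColor plays the "diffuse" role in PBR
    model.materials[0] = pbr
    value.entity.components.set(model)  // reassign so RealityKit picks up the change
}
```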
2
0
1.9k
Mar ’24
Can we create custom SurroundingsEffect in visionOS?
SwiftUI in visionOS has a modifier called preferredSurroundingsEffect that takes a SurroundingsEffect. From what I can tell, there is only a single effect available: .systemDark.

```swift
ImmersiveSpace(id: "MyView") {
    MyView()
        .preferredSurroundingsEffect(.systemDark)
}
```

I'd like to create another effect to tint the color of passthrough video if possible. Does anyone know how to create custom SurroundingsEffects?
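Custom effects don't appear to be supported, but if memory serves, visionOS 2 added a built-in tinting effect that covers this exact use case; verify the availability annotation in your SDK before relying on it.

```swift
// Assumed newer API (check availability): SurroundingsEffect.colorMultiply(_:)
// multiplies the passthrough with a color, approximating a tint.
ImmersiveSpace(id: "MyView") {
    MyView()
        .preferredSurroundingsEffect(.colorMultiply(.blue))
}
```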
2
0
699
Jun ’24
How to get the floor plane with Spatial Tracking Session and Anchor Entity
In the WWDC session titled "Deep dive into volumes and immersive spaces", the developers discussed adding a SpatialTrackingSession and an AnchorEntity to detect the floor. They then glossed over some important details. They added a spatial tap gesture to let the user place content relative to the floor anchor, but they left out a lot of information.

```swift
.gesture(
    SpatialTapGesture(coordinateSpace: .immersiveSpace)
        .targetedToAnyEntity()
        .onEnded { value in
            handleTapOnFloor(value: value)
        }
)
```

My understanding is that an entity has to have input and collision components for gestures like this to work. How can we add a collision to an AnchorEntity when we don't know its size or shape?

I've been trying for days to understand what is happening here and I just don't get it. It is even more frustrating that the example project Apple released does not contain any of these features. I would like to be able to:

Detect the floor plane

Get the position/transform of the floor plane

Add a collider to the floor plane

Enable collisions and physics on the floor plane

Enable gestures on the floor plane

It seems to me that the AnchorEntity is placed at an entirely arbitrary position. It has absolutely no relationship to the rectangle with the floor label that I can see in the Xcode visualization. It is just a point, not a plane or rect that I can use.

I've tried manually calculating the collision shape after the anchor is detected, but nothing I have tried works. I can't tap on the floor with gestures. I can't drop entities onto the floor. I can't seem to do ANYTHING at all with this floor anchor other than place an entity at a totally arbitrary location somewhere on the floor.

Is there any way at all, with SpatialTrackingSession and AnchorEntity, to get the actual plane that was detected?

```swift
struct FloorExample: View {
    @State var trackingSession: SpatialTrackingSession = SpatialTrackingSession()
    @State var subject: Entity?
    @State var floor: AnchorEntity?

    var body: some View {
        RealityView { content, attachments in
            let session = SpatialTrackingSession()
            let configuration = SpatialTrackingSession.Configuration(tracking: [.plane])
            _ = await session.run(configuration)
            self.trackingSession = session

            let floorAnchor = AnchorEntity(.plane(.horizontal, classification: .floor, minimumBounds: SIMD2(x: 0.1, y: 0.1)))
            floorAnchor.anchoring.physicsSimulation = .none
            floorAnchor.name = "FloorAnchorEntity"
            floorAnchor.components.set(InputTargetComponent())
            floorAnchor.components.set(CollisionComponent(shapes: .init()))
            content.add(floorAnchor)
            self.floor = floorAnchor

            // This is just here to let me see where visionOS decided to "place" the floor anchor.
            let floorPlaced = ModelEntity(
                mesh: .generateSphere(radius: 0.1),
                materials: [SimpleMaterial(color: .black, isMetallic: false)])
            floorAnchor.addChild(floorPlaced)

            if let scene = try? await Entity(named: "AnchorLabsFloor", in: realityKitContentBundle) {
                content.add(scene)

                if let subject = scene.findEntity(named: "StepSphereRed") {
                    self.subject = subject
                }

                // I can see when the anchor is added
                _ = content.subscribe(to: SceneEvents.AnchoredStateChanged.self) { event in
                    event.anchor.generateCollisionShapes(recursive: true) // this doesn't seem to work
                    print("**anchor changed** \(event)")
                    print("**anchor** \(event.anchor)")
                }

                // Place the reset button near the user
                if let panel = attachments.entity(for: "Panel") {
                    panel.position = [0, 1, -0.5]
                    content.add(panel)
                }
            }
        } update: { content, attachments in

        } attachments: {
            Attachment(id: "Panel", {
                Button(action: {
                    print("**button pressed**")
                    if let subject = self.subject {
                        subject.position = [-0.5, 1.5, -1.5]
                        // Remove the physics body and assign a new one - hack to remove momentum
                        if let physics = subject.components[PhysicsBodyComponent.self] {
                            subject.components.remove(PhysicsBodyComponent.self)
                            subject.components.set(physics)
                        }
                    }
                }, label: {
                    Text("Reset Sphere")
                })
            })
        }
    }
}
```
2
0
826
Jan ’25