Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Posts under the Spatial Computing topic. Each entry below shows the post title and excerpt, followed by its reply count, boost count, view count, and most recent activity.

Weird Reality Composer Pro Orbit animation bug
Behavior: The Orbit animation never plays, for either the OnTap trigger or the OnAddedToScene trigger. It is not an issue with my code, because I tested with an Emphasize float animation and that works perfectly.
Environment: ARKit + RealityKit, iOS 18.
My animation timeline settings: a simple Orbit animation block with a target and a pivot entity; 1 s duration, clockwise orbit direction, axis (0, 1, 0), 1 revolution, and blend layer 300.
My behavior setting: OnTap -> play the animation. (A diagnostic sketch follows this entry.)
1
0
525
Sep ’24
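A diagnostic worth trying for the post above, offered as a sketch under assumptions rather than a known fix: play every animation on the Reality Composer Pro scene's entities directly, bypassing the behavior triggers. If the orbit plays this way but not via OnTap, the problem is in the trigger wiring rather than in the animation block itself.

```swift
import RealityKit

// Diagnostic sketch: recursively play every animation baked into an RCP
// scene, bypassing the OnTap / OnAddedToScene behaviors entirely.
func playAllAnimations(on entity: Entity) {
    for animation in entity.availableAnimations {
        entity.playAnimation(animation, transitionDuration: 0, startsPaused: false)
    }
    for child in entity.children {
        playAllAnimations(on: child)
    }
}
```

Called on the scene's root entity inside the RealityView make closure, this should start the orbit clip immediately if the clip itself is intact.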
Hand Tracking Palm towards face or not
Hi all, I’m quite new to XR development in general and need some guidance. I want to create a function that simply tells me whether my palm is facing me or not (returning a Bool), but I honestly have no idea where to start. I saw a Reddit post from about 6 months ago that wanted essentially the same thing, but the only response was this:

Consider a triangle made up of the wrist, thumb knuckle, and little finger metacarpal (see here for the joints, and note that naming has changed slightly since this WWDC video): the orientation of this triangle (i.e., whether the front or back is visible) seen from the device location should be a very exact indication of whether the user’s palm is showing or not.

While I really like this solution, I genuinely have no idea how to code it, and no further code was provided. I’m not asking for the entire implementation, just enough to get me on the right track. Here's basically all I have so far, completed along the lines of the quoted answer (the normal direction and sign convention below are assumptions and may need flipping for the other hand):

```swift
func isPalmFacingDevice(hand: HandSkeleton, devicePosition: SIMD3<Float>) -> Bool {
    // Positions from anchorFromJointTransform are in the hand anchor's space,
    // so devicePosition must be expressed in that same space (or the joints
    // converted to world space via the anchor's originFromAnchorTransform).
    func position(of joint: HandSkeleton.JointName) -> SIMD3<Float> {
        let column = hand.joint(joint).anchorFromJointTransform.columns.3
        return SIMD3<Float>(column.x, column.y, column.z)
    }
    let wristPos = position(of: .wrist)
    let thumbKnucklePos = position(of: .thumbKnuckle)
    let littleFingerPos = position(of: .littleFingerMetacarpal)

    // Normal of the wrist / thumb-knuckle / little-finger triangle.
    let normal = simd_normalize(simd_cross(thumbKnucklePos - wristPos,
                                           littleFingerPos - wristPos))
    // Direction from the wrist towards the device (the viewer).
    let towardsDevice = simd_normalize(devicePosition - wristPos)
    // The palm faces the device when the triangle's front points at it.
    return simd_dot(normal, towardsDevice) > 0
}
```
1
0
511
Sep ’24
VisionOS 2.0 simulator takes several minutes to run
Does anyone have a fix for this, or is it a bug? I just updated to Xcode 16 and the visionOS 2.0 simulator yesterday; running my app on 1.2 previously took just a few seconds to load, but with 2.0 it takes several minutes. Even if I launch any of the small developer apps available in the Vision samples section, they all take at least 5 minutes to run. How do I fix this? If I run on device it still launches in less than 10 seconds, but that's not always convenient for me. MacBook Pro, M3, 18 GB, Sonoma 14.6.1.
0
0
430
Sep ’24
Multilayer visionOS app icon not working
I tried to use the application icon from the sample project https://developer.apple.com/documentation/visionos/diorama, but the 3 layers of the app icon are not separated when I hover over the icon in the Vision Pro simulator. Could you please advise how to fix this? I am using the latest Xcode, Version 15.4 (15F31d). Thank you. (A sketch of the expected asset layout follows this entry.)
2
0
493
Sep ’24
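For reference, a hedged sketch of how a layered visionOS icon is laid out in an asset catalog; the layer names here are illustrative, and the exact file layout may differ by Xcode version. Hover separation only works when each layer is its own image-stack layer rather than one flattened image.

```
Assets.xcassets/
  AppIcon.solidimagestack/
    Contents.json                  // lists the layers, front to back
    Front.solidimagestacklayer/
      Content.imageset/            // topmost layer, typically with transparency
    Middle.solidimagestacklayer/
      Content.imageset/
    Back.solidimagestacklayer/
      Content.imageset/            // opaque background layer
```

If the Diorama icon was copied file by file, it is worth checking that the whole .solidimagestack folder structure (not just the PNGs) came across, and that the target's App Icon build setting points at this asset.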
Don't understand RealityKit's entity.move in visionOS immersive space
The setup: I am developing a visionOS app that uses an immersive space. The user sees a board with entities placed on it. My app places the board in front of the default camera, and the entities at certain positions and orientations relative to the board. Placement and rotation should be animated.

The problem: If I place the entities by assigning a Transform to the entity's transform property directly, i.e. without animation, the result is correct. However, I have to use the entity's move(to:) function to animate it, and move(to:) works in an unexpected way. I therefore wrote a little test app, based on Apple's visionOS immersive app template (below). It covers the following 5 cases:

1. Set the transform directly (without animation). This gives the correct result and works as expected (without animation).
2. Set the transform using move relative to the world (without animation). This gives the correct result, although not as I expected: I expected "relative to world" to mean translation and rotation are relative to the world, which seems wrong for translation and right for rotation.
3. Set the transform using move relative to parentEntity (without animation). This gives a wrong result, although translation and rotation are defined relative to parentEntity.
4. Set the transform using move relative to the world with animation. This also gives a wrong result, and no animation plays.
5. Set the transform using move relative to parentEntity with animation. This also gives a wrong result, and no animation plays.

Here are the screenshots for cases 1-5: Cases 1 & 2, Case 3, Cases 4 & 5.

The question: Obviously I don't understand what move(to:) does. I would be happy to get any advice about what is wrong and how to do it right. Here is the code (two hedged hypotheses follow this entry):

```swift
import SwiftUI
import RealityKit
import RealityKitContent

struct ImmersiveView: View {
    @Environment(AppModel.self) var appModel

    let boardHeight: Float = 0.1
    let boxHeight: Float = 0.3

    var body: some View {
        RealityView { content in
            let boardEntity = makeBoard()
            content.add(boardEntity)

            let boxEntity = makeBox(parentEntity: boardEntity)
            boardEntity.addChild(boxEntity)
        }
    }

    func makeBoard() -> ModelEntity {
        let mesh = MeshResource.generateBox(width: 1.0, height: boardHeight, depth: 1.0)
        var material = UnlitMaterial()
        material.color.tint = .red
        let boardEntity = ModelEntity(mesh: mesh, materials: [material])
        boardEntity.transform.translation = [0, 0, -3]
        return boardEntity
    }

    func makeBox(parentEntity: Entity) -> ModelEntity {
        let mesh = MeshResource.generateBox(width: 0.3, height: boxHeight, depth: 0.3)
        var material = UnlitMaterial()
        material.color.tint = .green
        let boxEntity = ModelEntity(mesh: mesh, materials: [material])

        // Set position and orientation of the box.
        // To put the box onto the board, move it up by half the height
        // of the board plus half the height of the box.
        let y_up = boardHeight / 2.0 + boxHeight / 2.0
        let translation = SIMD3<Float>(0, y_up, 0)

        // Turn the box by 45 degrees around the y axis.
        let rotationY = simd_quatf(angle: Float(45.0 * .pi / 180.0), axis: SIMD3(x: 0, y: 1, z: 0))
        let transform = Transform(rotation: rotationY, translation: translation)

        // Do the actual move.

        // 1) Set transform directly (without animation)
        boxEntity.transform = transform // Translation and rotation correct

        // 2) Set transform using move relative to world (without animation)
        // boxEntity.move(to: transform, relativeTo: nil) // Translation and rotation correct

        // 3) Set transform using move relative to parentEntity (without animation)
        // boxEntity.move(to: transform, relativeTo: parentEntity) // Translation incorrect, rotation correct

        // 4) Set transform using move relative to world with animation
        // boxEntity.move(to: transform,
        //                relativeTo: nil,
        //                duration: 1.0,
        //                timingFunction: .linear) // Translation incorrect, rotation incorrect, no animation

        // 5) Set transform using move relative to parentEntity with animation
        // boxEntity.move(to: transform,
        //                relativeTo: parentEntity,
        //                duration: 1.0,
        //                timingFunction: .linear) // Translation incorrect, rotation incorrect, no animation

        return boxEntity
    }
}
```
2
0
630
Sep ’24
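Two hedged hypotheses about the test above, neither confirmed in the post. First, makeBox calls move(to:) before boardEntity.addChild(boxEntity) has run, so the box has no parent at the moment the move is resolved; move(to:relativeTo:) converts the target into the entity's parent space, which then changes when the box is re-parented, and an animation started before the entity is part of a scene may not play at all. Second, a parent-space transform can be converted to world space explicitly with Entity.convert(transform:to:) and then applied relative to the world. A sketch of both, reusing the post's names:

```swift
// (a) Parent first, then move, so that move(to:relativeTo:) resolves
//     coordinate spaces against the final hierarchy.
boardEntity.addChild(boxEntity)
boxEntity.move(to: transform,
               relativeTo: boardEntity,
               duration: 1.0,
               timingFunction: .linear)

// (b) Or convert the parent-space transform to world space explicitly
//     and move relative to the world (relativeTo: nil = world space).
let worldTransform = boardEntity.convert(transform: transform, to: nil)
boxEntity.move(to: worldTransform, relativeTo: nil,
               duration: 1.0, timingFunction: .linear)
```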
[NetworkComponent] Cannot find component's entity (guid=***, typeID=***, type=CustomComponentRCPInputTargetComponent, entity=xxxx).
Hi. I recently added SwiftUI context menus and picker menus to my app, but when they are activated they flicker rapidly, and it is impossible to select anything (there is no hover effect either). When these menus are activated, the console prints many warning messages similar to this:

[NetworkComponent] Cannot find component's entity (guid=14395713952467043328, typeID=295756909031380028, type=CustomComponentRCPInputTargetComponent, entity=0x1047c6750).

This issue doesn't seem to happen on the visionOS 1.2 simulator, but it is reliably reproducible on the visionOS 2.0 simulator and on device. Any idea what this might be related to? I am trying to narrow down the issue, but it's challenging without knowing what the error is about. Thanks!
1
0
566
Sep ’24
Position of Game Center dashboard
When opening the Game Center dashboard via the Access Point, the dashboard appears BEHIND any content that has z-depth in the window (a default window, not volumetric). The content obscures the dashboard, which makes it unusable. Alerts have the same placement. The new defaultWindowPlacement would probably suffice, but I don't think there's a way to apply it to the Game Center window. What to do? Thanks.
0
0
467
Sep ’24
Multiple floor levels in one story
Many houses have different levels that are still on the same story. What I mean is that there are things like a few steps at the entrance that would basically still count as the same story. RoomPlan already does a nice job recognizing them during the scan, but after the StructureBuilder or the optimization step the result is not really satisfying. Has anyone managed to handle those cases? Or do you have to scan in a specific way to capture such small height differences within a level?
0
0
536
Sep ’24
Add new joint to ARKit skeleton
Hi everyone, I want to add a new joint in addition to the joints provided by ARKit. For example: extract the positions of the wrist and elbow, then add a new joint between them, in the middle of the arm. I can't find good documentation that explains ARKit well enough for this. If there is other information I can use, please share it with me (see the sketch after this entry). Thanks.
0
0
392
Sep ’24
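ARKit's skeleton is fixed, so a "new joint" ends up being a derived position computed and tracked by the app each frame. A minimal sketch under assumptions: body tracking via ARBodyAnchor, and the raw joint names "left_hand_joint" and "left_forearm_joint", which are illustrative and may differ across ARKit versions.

```swift
import ARKit

// Derive a synthetic mid-arm "joint" as the midpoint between the wrist
// and elbow joints of an ARKit body skeleton, in the anchor's space.
func midArmPosition(bodyAnchor: ARBodyAnchor) -> SIMD3<Float>? {
    let skeleton = bodyAnchor.skeleton
    guard
        let wrist = skeleton.modelTransform(for: ARSkeleton.JointName(rawValue: "left_hand_joint")),
        let elbow = skeleton.modelTransform(for: ARSkeleton.JointName(rawValue: "left_forearm_joint"))
    else { return nil }

    // Joint model transforms are relative to the body anchor;
    // the translation lives in the last column.
    let wristPos = SIMD3<Float>(wrist.columns.3.x, wrist.columns.3.y, wrist.columns.3.z)
    let elbowPos = SIMD3<Float>(elbow.columns.3.x, elbow.columns.3.y, elbow.columns.3.z)

    // The synthetic joint: halfway along the forearm segment.
    return (wristPos + elbowPos) / 2
}
```

To place content there (e.g. a RealityKit entity), transform this anchor-space position to world space with the body anchor's transform each time the anchor updates.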
Use visionOS environments during active immersive space?
Hello, Would it be possible to use any of the available visionOS environments when I use an app that requires me to be in an immersive space? I'm developing an app where users can start the immersive space experience by pressing a button. In my case, it would be helpful if the user could still choose a visionOS environment using the Digital Crown, but currently, it seems to be unavailable after opening an immersive space. Thank you very much in advance!
1
0
553
Sep ’24
Does Vision Pro support facial recognition apps?
Hello everyone, I'm a Computer Science student. My supervisor has given me some topics for my final-year project, and one of them involves using Vision Pro for facial recognition: specifically, identifying a designated face in order to display specific information. As a developer, my understanding of Vision Pro is quite limited. I've done some research online and found that Unity and Xcode are used as development tools. Traditionally, facial recognition is done using OpenCV. However, I've come across articles stating that Apple, for security reasons, does not allow facial recognition to be implemented. I'd like to ask if that's true. Also, with visionOS 2 featuring object tracking and image tracking, could those methods potentially replace facial recognition?
1
1
611
Sep ’24
How to perform real-time specular reflection in visionOS
In RealityKit, I know that an HDR image is pre-calculated and that, through the settings of the ImageBasedLight component, a specified specular object can reflect the content of that HDR image. But if a mirror-like object is very large, such as a large continuous glass door, then after assigning an IBL image to it, the reflection is visibly distorted as the object moves through space. This is because an IBL is a picture of the surrounding environment captured at a single point, while the glass door is an extended surface. Is there a truly real-time specular reflection setup in RealityKit that can reflect the models on the opposite side of the glass door?
0
0
610
Sep ’24
Issue: ARKit Camera Frame Provider Not Authorized in visionOS App
I’m developing a visionOS app using EnterpriseKit, and I need access to the main camera for QR code detection. I’m using the ARKit CameraFrameProvider and ARKitSession to capture frames, but I’m encountering this error when trying to start the camera stream:

ar_camera_frame_provider_t: Failed to start camera stream with error: <ar_error_t Error Domain=com.apple.arkit Code=100 "App not authorized.">

Context: visionOS using EnterpriseKit for camera access and QR code scanning. My Info.plist includes the necessary permissions, like NSCameraUsageDescription and NSWorldSensingUsageDescription. I’ve added the com.apple.developer.arkit.main-camera-access.allow entitlement as per the official documentation here. My app is allowed camera access as shown in the logs (Authorization status: [cameraAccess: allowed]), but the camera stream still fails to start with the "App not authorized" error. I followed Apple’s WWDC 2024 sample code for accessing the main camera in visionOS from this session.

Sample of my code:

```swift
import ARKit
import Vision

class QRCodeScanner: ObservableObject {
    private var arKitSession = ARKitSession()
    private var cameraFrameProvider = CameraFrameProvider()
    private var pixelBuffer: CVPixelBuffer?

    init() {
        Task {
            await requestCameraAccess()
        }
    }

    private func requestCameraAccess() async {
        await arKitSession.queryAuthorization(for: [.cameraAccess])

        do {
            try await arKitSession.run([cameraFrameProvider])
        } catch {
            print("Failed to start ARKit session: \(error)")
            return
        }

        let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [.left])
        guard let cameraFrameUpdates = cameraFrameProvider.cameraFrameUpdates(for: formats[0]) else { return }

        Task {
            for await cameraFrame in cameraFrameUpdates {
                guard let mainCameraSample = cameraFrame.sample(for: .left) else { continue }
                self.pixelBuffer = mainCameraSample.pixelBuffer
                // QR code detection code here
            }
        }
    }
}
```

Things I’ve tried:

- Verified entitlements in both Info.plist and .entitlements files; the com.apple.developer.arkit.main-camera-access.allow entitlement is added.
- Confirmed camera permissions in the privacy settings.
- Followed the official documentation and the WWDC 2024 sample code.
- Checked my provisioning profile to ensure it supports ARKit camera access.

Request: Has anyone encountered this "App not authorized" error when accessing the main camera via ARKit in visionOS using EnterpriseKit? Are there additional entitlements or provisioning profile configurations I might be missing? Any help would be greatly appreciated! I haven't seen any official examples using the new API for main camera access, and no open-source examples either. (An authorization-check sketch follows this entry.)
9
1
1.1k
Sep ’24
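A small, hedged addition to the code above: queryAuthorization returns a status dictionary that the posted code discards, so inspecting it (and stopping early when access isn't .allowed) separates an authorization failure from a session-start failure. The note about the license file is an assumption worth verifying: visionOS enterprise APIs such as main-camera access also require the license file from the enterprise program to be bundled with the app, in addition to the entitlement.

```swift
// Sketch: check the authorization result instead of discarding it.
let result = await arKitSession.queryAuthorization(for: [.cameraAccess])
guard result[.cameraAccess] == .allowed else {
    print("Main camera access not granted: \(String(describing: result[.cameraAccess]))")
    return
}
// If this passes but arKitSession.run([cameraFrameProvider]) still fails
// with "App not authorized", check that the enterprise license file is
// bundled with the app and that the provisioning profile carries the
// com.apple.developer.arkit.main-camera-access.allow entitlement.
```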
Recording video and using RoomPlan at the same time
Hi, in the 2023 WWDC video on RoomPlan, they mention that it should be possible to integrate photo/video capture with RoomPlan: https://developer.apple.com/videos/play/wwdc2023/10192/ (at ~2:30). However, when I attempt to use AVFoundation and AVCaptureSession together with RoomPlan, I get the simple error "Cannot Record". So I'm not sure whether there is something wrong with my setup/code, or whether these two libraries are actually incompatible. Are there any guides for doing things like this? Am I going in the right direction, or should I try a different approach (see the sketch after this entry)? Happy to share code if necessary. Thanks.
2
0
826
Sep ’24
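One hedged possibility, an assumption rather than a confirmed answer to the post above: a second AVCaptureSession cannot open the camera while RoomPlan's underlying ARSession owns it, which would explain "Cannot Record". RoomCaptureSession exposes that ARSession, so frames can be pulled from it instead of running a competing capture session.

```swift
import RoomPlan
import ARKit

// Sketch: read camera frames from RoomPlan's own ARSession instead of
// opening a second capture session. Encoding the frames to a video file
// (e.g. with AVAssetWriter) is left out.
final class ScanRecorder: NSObject, ARSessionDelegate {
    let roomCaptureView = RoomCaptureView(frame: .zero)

    func start() {
        // RoomCaptureSession exposes the ARSession that drives the scan.
        roomCaptureView.captureSession.arSession.delegate = self
        roomCaptureView.captureSession.run(configuration: RoomCaptureSession.Configuration())
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // The camera image for this frame, ready to hand to an encoder.
        let pixelBuffer = frame.capturedImage
        _ = pixelBuffer
    }
}
```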
Need to increase 10' x 10' safe area for VR
Hello! I am working on some cool project reconstructions for a client. They design lobby-sized installations with LED walls, and I am tasked with converting these over to AR at scale. I've got my first test in the headset and it looks great! However, the desire to walk around more than the 10' x 10' safe-area zone totally takes one out of the immersive VR experience, which is pretty counterintuitive. Is there any way for us developers to bypass this hard limit, so that clients who request more room-scale options can actually enjoy this in VR? Alternatively, is there a way to hook up a PS5 controller to a "player start" so I can navigate inside the VR volume (see the sketch after this entry)? I'm really trying to embrace Reality Composer Pro, but it seems extremely limiting as I wait for Unreal Engine to get its act together. Sigh. Thanks for any help or suggestions.
1
0
622
Sep ’24
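The safe-area boundary is system-enforced, but the controller half of the question above is approachable through the GameController framework. A minimal sketch under assumptions: a DualSense pairs with Vision Pro over Bluetooth, and "playerRoot" is a hypothetical entity that parents all world content, so shifting it opposite to the stick acts as artificial locomotion.

```swift
import GameController
import RealityKit

// Sketch: thumbstick locomotion by shifting a root entity that parents
// the whole scene. playerRoot is an assumption of this sketch, not an API.
final class ControllerLocomotion {
    private let playerRoot: Entity
    private var gamepad: GCExtendedGamepad?
    private var observer: NSObjectProtocol?

    init(playerRoot: Entity) {
        self.playerRoot = playerRoot
        observer = NotificationCenter.default.addObserver(
            forName: .GCControllerDidConnect, object: nil, queue: .main
        ) { [weak self] note in
            guard let controller = note.object as? GCController else { return }
            self?.gamepad = controller.extendedGamepad
        }
    }

    // Call once per frame, e.g. from a RealityKit scene-update subscription.
    func update(deltaTime: Float) {
        guard let stick = gamepad?.leftThumbstick else { return }
        let speed: Float = 1.5 // meters per second, arbitrary choice
        // Move the world opposite to the stick so the player appears to move.
        playerRoot.position -= SIMD3<Float>(stick.xAxis.value, 0, -stick.yAxis.value) * speed * deltaTime
    }
}
```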