Discuss Spatial Computing on Apple Platforms.

Posts under General subtopic

Post | Replies | Boosts | Views | Activity

RealityKit Trace Metric Max/Range for VisionOS app
Hi Nathaniel, I spoke with you yesterday in the WWDC lab. Thanks for chatting with me! Is it possible to get a link to a doc that lists the key metrics I'd find in a RealityKit trace, so I can tell whether a metric is exceeding limits and probably causing a problem? Right now, I just see numbers and have no idea if a metric is high or low :). This is specifically for a visionOS app. Thanks, Bob
3
0
142
Jun ’25
Setting clip shape of a RealityView
I am following this example to create a stereoscopic image: https://developer.apple.com/documentation/visionos/creating-stereoscopic-image-in-visionos

I would also like to add a corner radius to the stereoscopic RealityView. With ordinary SwiftUI views, we typically just use .clipShape(RoundedRectangle(cornerRadius: 32)):

```swift
struct StereoImage: View {
    var body: some View {
        let spacing: CGFloat = 10.0
        let padding: CGFloat = 40.0
        VStack(spacing: spacing) {
            Text("Stereoscopic Image Example")
                .font(.largeTitle)
            RealityView { content in
                let creator = StereoImageCreator()
                guard let entity = await creator.createImageEntity() else {
                    print("Failed to create the stereoscopic image entity.")
                    return
                }
                content.add(entity)
            }
            .frame(depth: .zero)
        }
        .padding(padding)
        .clipShape(RoundedRectangle(cornerRadius: 32)) // <= HERE!
    }
}
```

This doesn't actually clip the RealityView shown in the sample above. I am guessing this is because the box in the RealityView has a non-zero z scale, which means it isn't on the same "layer" as its SwiftUI containers, and thus isn't clipped by the modifiers applied to the containers. How can I properly apply a clip shape to RealityViews like this? Thanks!
3
0
491
Feb ’25
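One possible angle on the clipping question above: rather than clipping the SwiftUI container, round the image geometry itself, since RealityKit's plane generator accepts a corner radius. A minimal sketch, assuming the stereo image ends up textured onto a plane mesh (the material parameter is left abstract; it would come from the poster's StereoImageCreator):

```swift
import RealityKit

// Sketch: give the image entity rounded corners at the mesh level,
// since .clipShape only affects flat SwiftUI layers.
func makeRoundedImageEntity(material: any Material) -> ModelEntity {
    // generatePlane(width:height:cornerRadius:) builds a plane whose
    // corners are already rounded, so no SwiftUI clipping is needed.
    let mesh = MeshResource.generatePlane(width: 1.0,
                                          height: 0.5,
                                          cornerRadius: 0.05)
    return ModelEntity(mesh: mesh, materials: [material])
}
```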
[WWDC25] For GuessTogether, can you initiate a FaceTime call via the custom SharePlay button?
Hello, In the GuessTogether source code, it seems the code assumes that you're already in a FaceTime call before pressing the custom SharePlay button (labeled "Play Guess Together"). If I'm not already on a FaceTime call, my Apple Vision Pro and the visionOS simulator both do nothing beyond logging warnings. Is this intended behavior? If so, how do I make it so that pressing the button can also initiate FaceTime calls? Is this allowed? Thank you!
3
0
132
Sep ’25
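On the FaceTime question above: in general SharePlay code, activating a GroupActivity asks the system to start or join a session, and outside a FaceTime call visionOS presents a system sheet offering to start one. Whether GuessTogether's button is wired this way is not confirmed here; a minimal sketch with a hypothetical activity type:

```swift
import GroupActivities

// Hypothetical activity standing in for GuessTogether's real one.
struct GuessingActivity: GroupActivity {
    static let activityIdentifier = "com.example.guessing"
    var metadata: GroupActivityMetadata {
        var meta = GroupActivityMetadata()
        meta.title = "Guess Together"
        meta.type = .generic
        return meta
    }
}

// Activating the activity asks the system to start or join a session;
// outside a FaceTime call, the system can offer to start one rather
// than silently failing.
func startGame() async {
    do {
        _ = try await GuessingActivity().activate()
    } catch {
        print("Activation failed: \(error)")
    }
}
```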
MainActor attribute on RealityKit APIs is causing problems
Hello, A lot of the RealityKit APIs (e.g., LowLevelMesh, LowLevelTexture) are marked with MainActor, so they need to be accessed on the main thread. This creates issues when we need to perform expensive GPU-related operations, since those must now happen on the main thread, resulting in bottlenecks and hangs in our application. We would like to use a multi-threaded approach to solve these problems, which is difficult to do here. We are constantly streaming data, whether the app is just appearing or the user is interacting with it, so we need to be able to perform these operations on a separate thread. Any advice on how to achieve this using RealityKit? Thank you.
3
8
216
Mar ’25
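A common pattern for the MainActor constraint described above is to keep the expensive computation off the main actor and hop to MainActor only for the RealityKit mutation itself. A sketch; computeVertexData and applyVertexData are hypothetical stand-ins for the app's streaming and mesh-update code:

```swift
import RealityKit

// Expensive, thread-safe work (decoding, simulation, ...) goes here.
// A nonisolated async function runs on the global concurrent executor,
// away from the main actor.
func computeVertexData() async -> [SIMD3<Float>] {
    []
}

// Only this cheap handoff (creating/updating LowLevelMesh, components,
// etc.) needs main-actor isolation.
@MainActor
func applyVertexData(_ data: [SIMD3<Float>], to entity: ModelEntity) {
}

func streamFrame(into entity: ModelEntity) async {
    let data = await computeVertexData()   // off the main actor
    await MainActor.run {                  // main actor only for the update
        applyVertexData(data, to: entity)
    }
}
```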
Is there a way to scale a RealityKit ShapeResource?
I can generate a ShapeResource from a RealityKit entity's extents. Can I apply some scaling to the generated shape? Is there a way to do that?

```swift
// model is a ModelResource and bounds is a BoundingBox
var shape = ShapeResource.generateConvex(from: model.mesh)
shape = shape.offsetBy(translation: bounds.center)
// How can I scale the shape to fit within the bounds?
```

The following API only provides rotation and translation support; I cannot find scale support:

```swift
offsetBy(rotation: simd_quatf = simd_quatf(ix: 0, iy: 0, iz: 0, r: 1),
         translation: SIMD3<Float> = SIMD3<Float>())
```

I can put the ShapeResource on an entity and scale the entity. But I would like to know if it is possible to scale the ShapeResource itself without attaching it to an entity.
3
0
589
Feb ’25
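On the scaling question above: ShapeResource exposes no scale transform, so one workaround, when a box-shaped approximation of the collision volume is acceptable, is to regenerate the shape at the target size instead of scaling the convex hull. A sketch, with bounds and scale as assumed inputs:

```swift
import RealityKit

// Sketch: rebuild the shape at the desired size. A box fitted to the
// scaled bounds loses the convex-hull precision, but is often enough
// for collision purposes.
func scaledBoxShape(bounds: BoundingBox, scale: SIMD3<Float>) -> ShapeResource {
    ShapeResource
        .generateBox(size: bounds.extents * scale)  // elementwise scale
        .offsetBy(translation: bounds.center)
}
```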
WorldTrackingProvider stops working on device
After re-launching the immersive space in my app 5-10 times, the WorldTrackingProvider stops working. Only restarting the app will allow it to start working again. Only on device, not the simulator. I get these errors when it happens:

```
The device_anchor can only be queried when the world tracking provider is running.
ARPredictorRemoteService <0x107cbb5e0>: Service configured with error: Error Domain=com.apple.arkit.error Code=501 "(null)"
Remote Service was invalidated: <ARPredictorRemoteService: 0x107cbb5e0>, will stop all data_providers.
ARRemoteService: remote object proxy failed with error: Error Domain=NSCocoaErrorDomain Code=4099 "The connection to service with pid 81 named com.apple.arkit.service.session was invalidated from this process." UserInfo={NSDebugDescription=The connection to service with pid 81 named com.apple.arkit.service.session was invalidated from this process.}
ARRemoteService: weak self released before invalidation
```

```swift
@Observable
class VisionPro {
    let session = ARKitSession()
    let worldTracking = WorldTrackingProvider()

    func transformMatrix() async -> simd_float4x4 {
        guard let deviceAnchor = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime())
        else { return .init() }
        return deviceAnchor.originFromAnchorTransform
    }

    func runArkitSession() async {
        Task {
            try? await session.run([worldTracking])
        }
    }
}
```

which I call from my RealityView:

```swift
.task {
    await visionPro.runArkitSession()
}
```
3
0
441
Feb ’25
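A possible direction for the WorldTrackingProvider question above: ARKitSession exposes an events sequence that reports data-provider state changes, and a provider that reaches the stopped state cannot be run again; a fresh instance is required. Guarding queryDeviceAnchor behind worldTracking.state == .running also avoids the "can only be queried when the world tracking provider is running" error. A minimal sketch under those assumptions, extending the poster's VisionPro class:

```swift
import ARKit

extension VisionPro {
    // Watch for the provider being stopped so the app can recover
    // instead of silently querying a dead provider.
    func monitorSessionEvents() async {
        for await event in session.events {
            if case .dataProviderStateChanged(_, let newState, let error) = event,
               newState == .stopped {
                print("World tracking stopped: \(String(describing: error))")
                // A stopped DataProvider cannot be re-run; a new
                // WorldTrackingProvider must be created and passed to
                // session.run (which means storing it as `var`, not `let`).
            }
        }
    }
}
```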
Reading scenePhase from custom Scene
Hi, I've encountered a thread where an Apple engineer points out that there are 2 possible ways to anchor scenePhase, either an App or a View implementation: https://developer.apple.com/forums/thread/757429

This thread also links to documentation which states: "If you read the phase from within a custom Scene instance, the value similarly reflects an aggregation of all the scenes that make up the custom scene."

This doesn't seem to be the case on visionOS 2. I tried the following code, starting from an empty app template:

```swift
import SwiftUI

@main
struct SceneTestApp: App {
    var body: some Scene {
        MyScene()
        WindowGroup(id: "extra") {
            Text("Extra window")
        }
    }
}

struct MyScene: Scene {
    @Environment(\.scenePhase) private var scenePhase
    @Environment(\.openWindow) private var openWindow

    var body: some Scene {
        WindowGroup {
            ContentView()
                .onAppear {
                    openWindow(id: "extra")
                }
        }
        .onChange(of: scenePhase) { oldValue, newValue in
            print("scenePhase changed")
        }
    }
}
```

The result was that I didn't get the onChange callback if I only closed the extra window; the callback only came after I closed both windows and the whole app was suspended. Is this expected behavior?
3
0
382
Feb ’25
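A sketch of the View-level alternative mentioned in the linked thread: a scenePhase read inside a view reflects only the scene containing that view, so each window can be observed individually rather than as an aggregate:

```swift
import SwiftUI

// Per-window phase observation: place a view like this at the root of
// each WindowGroup whose lifecycle you want to track.
struct PhaseAwareRoot: View {
    @Environment(\.scenePhase) private var scenePhase

    var body: some View {
        Text("Hello")
            .onChange(of: scenePhase) { _, newPhase in
                print("This window's phase: \(newPhase)")
            }
    }
}
```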
When placing a TextField within a RealityViewAttachment, the virtual keyboard does not appear in front of the user as expected.
Hello, Thank you for your time. I have a question regarding visionOS app development. When placing a SwiftUI TextField inside RealityView.attachments, we found that focusing on the field does not bring up the virtual keyboard in front of the user. Instead, the keyboard appears around the user’s lower abdomen area. However, when placing the same TextField in a regular SwiftUI layer outside of RealityView, the keyboard appears in the correct position as expected. This suggests that the issue is specific to RealityView.attachments. We are currently exploring ways to have the virtual keyboard appear directly in front of the user when using TextField inside RealityViewAttachments. If there is any method to explicitly control the keyboard position or any known workarounds—including alternative UI approaches—we would greatly appreciate your guidance. Best regards, Sadao Tokuyama
3
1
639
Jul ’25
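One workaround worth sketching for the keyboard-position issue above (a sketch, not a confirmed fix): host the text input in a regular SwiftUI window, where the system positions the keyboard correctly, and open it from the immersive content. The window id "textEntry" is hypothetical and would need to be registered as a WindowGroup in the App:

```swift
import SwiftUI

// A plain SwiftUI window hosting the TextField; the virtual keyboard
// is positioned correctly for ordinary windows.
struct TextEntryWindow: View {
    @State private var text = ""
    var body: some View {
        TextField("Enter text", text: $text)
            .textFieldStyle(.roundedBorder)
            .padding()
    }
}

// From the RealityView attachment, open that window instead of
// embedding the TextField directly.
struct InputButton: View {
    @Environment(\.openWindow) private var openWindow
    var body: some View {
        Button("Edit text") { openWindow(id: "textEntry") }
    }
}
```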
Unity on VisionOS development - best practice on structuring a project
Hello, I am experimenting with Unity to develop a mixed reality (MR) application for visionOS. I would like to understand the best approach for structuring my project: Should I build the entire experience in Unity (both Windows and Volumes)? Or is it better to create only certain elements (e.g., Volumes) in Unity while managing Windows separately in Xcode? Also, how well do interactions (e.g., pinch, grab, …) created in Unity integrate with Xcode? If I use the PolySpatial plugin, does that allow me to manage all interactions entirely within Unity, or would I still need to handle/integrate part of it in Xcode? What's worked best for you? Please let me know if you have any recommendations. Thanks!
3
0
165
Apr ’25
Loading USDZ asset into Model3D causes visionOS 2.0 beta 5 to crash
We've recently discovered that our app crashes on startup on the latest visionOS 2.0 beta 5 (22N5297g) build. In fact, the entire field of view would dim down and visionOS would then restart, showing the Apple logo. Interestingly, no app crash is reported by Xcode during debug. After investigation, we have isolated the issue to a specific USDZ asset in our app. Loading it in a sample, blank project also causes visionOS to reliably crash, or become extremely unresponsive with rendering artifacts everywhere. This looks like a potentially serious issue. Even if the asset is problematic, loading it should not crash the entire OS. We have filed feedback FB14756285, along with a demo project. Hopefully someone can take a look. Thanks!
3
1
584
Jul ’25
Unable to Create a Fully Immersive Experience That Hides Other Windows in visionOS App
Description: I'm developing a travel/panorama viewing app for visionOS that allows users to view 360° panoramic images in an immersive space. When users enter panorama viewing mode, I want to provide a fully immersive experience where the main interface window and the Earth 3D globe window are hidden. I've implemented the app following Apple's documentation on Creating Fully Immersive Experiences, but when users enter the immersive space, both the main window and the Earth 3D window remain visible, diminishing the immersive experience.

Implementation Details: My app has three main components:

- A main content window showing panorama thumbnails
- A 3D globe window (volumetric) showing locations
- An immersive space for viewing 360° panoramas

I'm using .immersionStyle(selection: $panoImageView, in: .full) to create a fully immersive experience, but other windows remain visible.

Relevant Code:

```swift
@main
struct Travel_ImmersiveApp: App {
    @StateObject private var appModel = AppModel()
    @State private var panoImageView: ImmersionStyle = .full

    var body: some Scene {
        WindowGroup {
            ContentView()
                .environmentObject(appModel)
        }
        .windowStyle(.automatic)
        .defaultSize(width: 1280, height: 825)

        WindowGroup(id: "Earth") {
            Globe3DView()
                .environmentObject(appModel)
                .onAppear {
                    appModel.isGlobeWindowOpen = true
                    appModel.globeWindowOpen = true
                }
                .onDisappear {
                    if !appModel.shouldCloseApp {
                        appModel.handleGlobeWindowClose()
                    }
                }
        }
        .windowStyle(.volumetric)
        .defaultSize(width: 0.8, height: 0.8, depth: 0.8, in: .meters)
        .windowResizability(.contentSize)

        ImmersiveSpace(id: "ImmersiveView") {
            ImmersiveView()
                .environmentObject(appModel)
        }
        .immersionStyle(selection: $panoImageView, in: .full)
    }
}
```

Opening the Immersive Space:

```swift
func getPanoImageAndOpenImmersiveSpace() async {
    appModel.clearMemoryCache()
    do {
        let canView = appModel.canViewImage(image)
        if canView {
            let downloadedImage = try await appModel.getPanoramaImage(for: image) { progress in
                Task { @MainActor in
                    cardState = .loading(progress: progress)
                }
            }
            await MainActor.run {
                appModel.updateCurrentImage(image, panoramaImage: downloadedImage)
            }
            if !appModel.immersiveSpaceOpened {
                try await openImmersiveSpace(id: "ImmersiveView")
                await MainActor.run {
                    appModel.immersiveSpaceOpened = true
                    cardState = .normal
                }
            } else {
                await MainActor.run {
                    appModel.updateImmersiveView = true
                    cardState = .normal
                }
            }
        } else {
            await MainActor.run {
                appModel.errorMessage = "You do not have permission to view this image."
                cardState = .normal
            }
        }
    } catch {
        // Error handling
    }
}
```

Immersive View Implementation:

```swift
struct ImmersiveView: View {
    @EnvironmentObject var appModel: AppModel

    var body: some View {
        RealityView { content in
            let rootEntity = Entity()
            content.add(rootEntity)
            Task {
                if let selectedImage = appModel.selectedImage,
                   appModel.canViewImage(selectedImage) {
                    await loadPanorama(for: rootEntity)
                }
            }
        } update: { content in
            if appModel.updateImmersiveView,
               let selectedImage = appModel.selectedImage,
               appModel.canViewImage(selectedImage),
               let rootEntity = content.entities.first {
                Task {
                    await loadPanorama(for: rootEntity)
                    appModel.updateImmersiveView = false
                }
            }
        }
        .onAppear {
            print("ImmersiveView appeared")
        }
        .onDisappear {
            appModel.resetImmersiveState()
        }
    }

    // loadPanorama implementation...
}
```

What I've Tried:

- Set immersionStyle to .full as recommended in the documentation
- Confirmed that the immersive space is properly opened and displaying panoramas
- Verified that the state management for the immersive space is working correctly

Questions:

1. How can I ensure that when the user enters the immersive panorama viewing experience, all other windows (main interface and Earth 3D globe) are automatically hidden?
2. Is there a specific API or approach I'm missing to properly implement a fully immersive experience that hides all other windows?
3. Do I need to manually dismiss the windows when opening the immersive space, and if so, what's the best approach for doing this?

Any guidance or sample code would be greatly appreciated. Thank you!
3
0
202
Apr ’25
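On question 3 above: a .full immersion style hides other apps, but not your own app's windows, so dismissing them explicitly when the space opens is a reasonable approach. A sketch using the ids from the post; note the main WindowGroup has no id in the post, so the "main" id used here is hypothetical and would need to be added:

```swift
import SwiftUI

// Sketch: dismiss the app's own windows once the immersive space opens.
struct EnterPanoramaButton: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace
    @Environment(\.dismissWindow) private var dismissWindow

    var body: some View {
        Button("View panorama") {
            Task {
                let result = await openImmersiveSpace(id: "ImmersiveView")
                if case .opened = result {
                    // Hide the other windows once the space is up.
                    dismissWindow(id: "Earth")
                    dismissWindow(id: "main") // hypothetical id for the main window
                }
            }
        }
    }
}
```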
Cannot extract imagePair from generated Spatial Photos
Hi, I am trying to implement something simple: letting people share their Spatial Photos with others (just like this post). I encountered the same issue as that poster, but his answer doesn't help me out here. Briefly speaking, I am using CGImageSource to extract the paired leftImage and rightImage from one fetched spatial photo:

```swift
let photos = PHAsset.fetchAssets(with: .image, options: nil)
// enumerating photos ....
if asset.mediaSubtypes.contains(PHAssetMediaSubtype.spatialMedia) {
    spatialAsset = asset
}
// other code shown below
```

I can fetch left and right images from a native Spatial Photo (taken by Apple Vision Pro or iPhone 15+), but it didn't work on a generated spatial photo (the 2D -> 3D feature in Photos):

```swift
// imageCount is 1 when it comes to a generated spatial photo
let imageCount = CGImageSourceGetCount(source)
```

I searched over the net and someone says the generated version has a depth image instead of a left/right pair. But still I cannot extract any depth image from the imageSource. The full code is below; the imagePair extraction stops at "no groups found":

```swift
func extractPairedImage(phAsset: PHAsset, completion: @escaping (StereoImagePair?) -> Void) {
    let options = PHImageRequestOptions()
    options.isNetworkAccessAllowed = true
    options.deliveryMode = .highQualityFormat
    options.resizeMode = .none
    options.version = .original
    PHImageManager.default().requestImageDataAndOrientation(for: phAsset, options: options) { imageData, _, _, _ in
        guard let imageData,
              let imageSource = CGImageSourceCreateWithData(imageData as CFData, nil) else {
            completion(nil)
            return
        }
        let stereoImagePair = stereoImagePair(from: imageSource)
        completion(stereoImagePair)
    }
}

func stereoImagePair(from source: CGImageSource) -> StereoImagePair? {
    guard let properties = CGImageSourceCopyProperties(source, nil) as? [CFString: Any] else {
        return nil
    }
    let imageCount = CGImageSourceGetCount(source)
    print(String(format: "%d images found", imageCount))
    guard let groups = properties[kCGImagePropertyGroups] as? [[CFString: Any]] else {
        // function returns here
        print("no groups found")
        return nil
    }
    guard let stereoGroup = groups.first(where: {
        let groupType = $0[kCGImagePropertyGroupType] as! CFString
        return groupType == kCGImagePropertyGroupTypeStereoPair
    }) else {
        return nil
    }
    guard let leftIndex = stereoGroup[kCGImagePropertyGroupImageIndexLeft] as? Int,
          let rightIndex = stereoGroup[kCGImagePropertyGroupImageIndexRight] as? Int,
          let leftImage = CGImageSourceCreateImageAtIndex(source, leftIndex, nil),
          let rightImage = CGImageSourceCreateImageAtIndex(source, rightIndex, nil),
          let leftProperties = CGImageSourceCopyPropertiesAtIndex(source, leftIndex, nil),
          let rightProperties = CGImageSourceCopyPropertiesAtIndex(source, rightIndex, nil)
    else {
        return nil
    }
    return (leftImage, rightImage, self.identifier)
}
```

Any suggestion? Thanks!

visionOS 2.4
3
0
183
Jun ’25
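One exploratory step for the extraction question above: a converted 2D -> 3D photo may carry its depth as auxiliary data on the single image rather than as a stereo-pair group, and ImageIO can probe for that. A sketch, not a confirmed description of the generated format:

```swift
import ImageIO

// Probe the single image for auxiliary depth/disparity data to see
// what the converted file actually contains.
func auxiliaryDepthInfo(from source: CGImageSource) -> [AnyHashable: Any]? {
    for type in [kCGImageAuxiliaryDataTypeDepth, kCGImageAuxiliaryDataTypeDisparity] {
        if let info = CGImageSourceCopyAuxiliaryDataInfoAtIndex(source, 0, type)
            as? [AnyHashable: Any] {
            return info  // contains the raw data, description, and metadata
        }
    }
    return nil
}
```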
Looking for a way to implement the video display effect in Apple's 'Spatial Gallery'
Hi guys, I noticed that Apple created a really engaging visual effect for browsing spatial videos in the Spatial Gallery app. The video appears embedded in a glass panel with glowing edges, and even shows a parallax effect as you move around. When I tried to display a stereo video using RealityView, however, the video entity always floats above the panel. May I ask how visionOS implements this effect? Is there any approach to achieve it, or example code I can use in my own project? Thanks!
3
1
223
Jun ’25
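On the Spatial Gallery question above: there is no public API I'm aware of for the full embedded-parallax treatment, but the glowing glass panel itself can be approximated with SwiftUI's glassBackgroundEffect. A partial sketch only; VideoPlayerView is a hypothetical stand-in for the poster's stereo video view:

```swift
import SwiftUI

// Approximates the glass-panel styling (not the parallax embedding).
struct GlassVideoPanel: View {
    var body: some View {
        VideoPlayerView()
            .frame(width: 800, height: 450)
            .glassBackgroundEffect(in: .rect(cornerRadius: 24))
    }
}

// Hypothetical placeholder for the actual video content.
struct VideoPlayerView: View {
    var body: some View { Color.black }
}
```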
Degraded RoomPlan performance
We have been using RoomPlan in our app for 2+ years. Through a combination of in-app and manual coaching on scanning best practices, most users are able to achieve high-quality scans on a consistent basis. In recent weeks, however, we have observed an increase in reports of degraded scanning performance, even from veteran users who had not previously encountered issues. The RoomCaptureView overlay is jittery and crooked, and the resulting scan file has significant issues, even for simple, well-lit rectangular rooms. It is difficult to troubleshoot these issues given the number of variables at play, and the overall volume of reports is still relatively low, but we'd appreciate any guidance on known issues or workarounds that could help unblock our users who are being affected by this. I noticed that this post includes an acknowledgement of FB14454922 and FB15035788. Our issues seem slightly different as the scans are simply inaccurate and jittery without failing outright. I haven't found any other threads on similar issues.
3
2
2.1k
2w
visionOS pushWindow being dismissed on app foreground
We seem to have found an issue when using the pushWindow action on visionOS. The issue occurs if the app is backgrounded, then reopened by selecting the app's icon on the home screen. Any window that was opened via the pushWindow action is then dismissed. We've been able to replicate the issue in a small sample project.

Replication steps:

1. Open the app
2. Open a window via the push action
3. Press the Digital Crown
4. On the home screen, select the app's icon again

The pushed window will now be dismissed. There is a sample project linked here that shows off the issue, including a video of the bug in progress.
3
1
785
3w
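Until the pushWindow issue above is fixed, one possible mitigation (a sketch under the assumption that re-pushing on foregrounding is acceptable UX; the "detail" window id and state flag are hypothetical):

```swift
import SwiftUI

// Remember that a window was pushed, and push it again when the app
// returns to the foreground if the system dismissed it.
struct RootView: View {
    @Environment(\.pushWindow) private var pushWindow
    @Environment(\.scenePhase) private var scenePhase
    @State private var detailWasPushed = false

    var body: some View {
        Button("Show detail") {
            pushWindow(id: "detail")
            detailWasPushed = true
        }
        .onChange(of: scenePhase) { _, newPhase in
            // Re-push so the user lands back where they were.
            if newPhase == .active && detailWasPushed {
                pushWindow(id: "detail")
            }
        }
    }
}
```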
RealityKit Entity ComponentSet does not conform to Sequence?
Hello, I'm trying to view the components of an Entity I'm creating in RealityKit by reading from a USDZ file. I have the following code snippet in my app:

```swift
if let appleEntity = try? Entity.loadModel(named: "apple_tile") {
    let c = appleEntity.components
    for comp in c { // <- compiler error here
        print(comp)
    }
}
```

The compiler error I'm receiving says "For-in loop requires 'Entity.ComponentSet' to conform to 'Sequence'". However, I thought this was the case, according to the documentation for Entity.ComponentSet? Curious if anyone else has had this problem.

Running Xcode 15.4; my Swift version is:

```
xcrun swift -version
swift-driver version: 1.90.11.1 Apple Swift version 5.10 (swiftlang-5.10.0.13 clang-1500.3.9.4)
Target: x86_64-apple-macosx14.0
```
3
0
419
Mar ’25
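For the ComponentSet question above: if the SDK you're building against doesn't expose Sequence conformance on Entity.ComponentSet, a workaround is to query the component types you care about explicitly via subscript. A sketch; the component list here is illustrative, not exhaustive:

```swift
import RealityKit

// Inspect known component types directly instead of iterating the set.
func dumpKnownComponents(of entity: Entity) {
    if let model = entity.components[ModelComponent.self] {
        print("ModelComponent:", model)
    }
    if let transform = entity.components[Transform.self] {
        print("Transform:", transform)
    }
    if let collision = entity.components[CollisionComponent.self] {
        print("CollisionComponent:", collision)
    }
    print("Total components:", entity.components.count)
}
```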
visionOS Simulator Rotate and Scale gestures difficult to register (capture)
We were having an issue whereby the system rotate and scale gestures (two-handed gestures / RotateGesture3D and MagnifyGesture) were extremely difficult to register (make work) in the visionOS simulator. The solution we found was to:

1. Launch your app in the simulator.
2. Move the pointer on top of the 3D object for which you are testing rotation and scaling gestures.
3. Press and hold the Option key to display touch points (i.e., the two-handed gesture points).
4. While keeping the Option key pressed, release the pointer and re-enable it again.

I am using a trackpad with tap-to-click enabled and three-finger drag enabled in Accessibility, so "release the pointer and re-enable it again" translates simply to lifting the three fingers and placing them again on the trackpad. If you have kept the Option key pressed, you should now be able to rotate and scale the 3D object.

Context, if you are interested: Our issue was also occurring in Apple's own sample project relating to gestures, "Transforming RealityKit entities using gestures", at the link below. In Apple's article "Interacting with your app in the visionOS simulator" (also linked below), for two-handed gestures it states: "Press and hold the Option key to display touch points. Move the pointer while pressing the Option key to change the distance between the touch points. Move the pointer and hold the Shift and Option keys to reposition the touch points." This simply did not work anymore for rotation and scaling gestures. These gestures used to be a lot more responsive in Sonoma. Either the article should be updated to what I described above, or there is an issue. Our colleague who is using macOS Sonoma 14.6.1 with the latest release of Xcode is not having these issues.

Here is the list of configurations (troubleshooting we tried!) where it is difficult to achieve rotation and scaling gestures in the visionOS simulator:

- macOS Sequoia 16.1 Beta, Xcode 16.1 RC with visionOS 2.1
- macOS Sequoia 16.1 Beta, Xcode 16.1 RC with visionOS 2.0
- macOS Sequoia 16.1 Beta, Xcode 16.2 Beta 1 with visionOS 2.1
- macOS Sequoia 16.1 Beta, Xcode 16.2 Beta 1 with visionOS 2.0
- macOS Sequoia 16.1 Beta, removed all Xcodes and installed the build from the App Store (Xcode 16.1)
- macOS Sequoia 16.1 Beta, Xcode 16.0 with visionOS 2.0
- Completely wiped out and reset the entire development machine, re-installed the latest releases of Sequoia (15.1) and Xcode (15.1)

Throughout this troubleshooting I often:

- restarted both Xcode and the simulator
- erased all derived data
- erased all contents and settings from simulators
- performed fresh git clones

None of the above worked; only the workaround described above works at the moment. As you can maybe deduce, it was very time-consuming to find the workaround, and we also wasted some development effort thinking our gesture code was no good. Hopefully this will help other devs.

Article link: https://developer.apple.com/documentation/xcode/interacting-with-your-app-in-the-visionos-simulator
Gesture sample project link: https://developer.apple.com/documentation/realitykit/transforming-realitykit-entities-with-gestures
3
0
1.1k
Oct ’25