We are building an app that uses ARKit occasionally, but not always.
We would like to test the non-ARKit parts in the simulator, since it offers more debugging features (e.g. SwiftUI previews or the Thread Sanitizer).
However, we can't even build the app for the simulator, since the simulator SDK does not know about certain classes (e.g. "AnchorEntity"). This also means that none of the SwiftUI previews work, even if the views are not using ARKit.
What is the best approach to test such an app in the simulator, without using any ARKit features?
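For illustration, one option we are considering is conditional compilation along these lines (ARContainerView is just a made-up wrapper name; this assumes the ARKit/RealityKit types are the only thing the simulator SDK rejects):

import SwiftUI
#if !targetEnvironment(simulator)
import ARKit
import RealityKit

// Real AR view, only compiled for device builds.
struct ARContainerView: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView { ARView(frame: .zero) }
    func updateUIView(_ uiView: ARView, context: Context) {}
}
#else
// Simulator placeholder, so previews and simulator builds still compile.
struct ARContainerView: View {
    var body: some View { Text("AR is not available in the simulator") }
}
#endif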
I recently had a chat with a company in the manufacturing business. They were asking if Vision Pro could be used to guide maintenance workers through maintenance processes, a use-case that is already established on other platforms. I thought Vision Pro would be perfect for this as well, until I read in this article from Apple that object detection is not supported:
https://developer.apple.com/documentation/visionos/bringing-your-arkit-app-to-visionos#Update-your-interface-to-support-visionOS
To me, this sounds like sacrificing a lot of potential for business scenarios, just for the sake of data privacy. Is this really the case, i.e. is there no way to detect real-world objects and place content on top of them? Image recognition would not be enough in this use-case.
I'm developing a map-based app for visionOS. The app loads map data from a server using JSON. It works just fine, but I noticed the following effect: if I move the app's window around, it freezes, either on the first movement or on one of the subsequent ones. The map cannot be panned anymore, and all other UI elements lose their interactivity as well.
I noticed this issue before, when I was opening the map on app startup (and there it even happened without moving the window). After I added a short delay, that case was resolved. There was no log message in that case.
However, when I noticed that it also happens if I move the window around, I saw that Xcode logs an error:
+[UIView setAnimationsEnabled:] being called from a background thread. Performing any operation from a background thread on UIView or a subclass is not supported and may result in unexpected and insidious behavior. trace=(
0 UIKitCore 0x0000000185824a24 __42+[UIView(Animation) setAnimationsEnabled:]_block_invoke + 112
1 libdispatch.dylib 0x0000000102a327e4 _dispatch_client_callout + 16
2 libdispatch.dylib 0x0000000102a34284 _dispatch_once_callout + 84
3 UIKitCore 0x0000000185824ad8 +[UIView(Animation) performWithoutAnimation:] + 56
4 SwiftUI 0x00000001c68cf1e0 OUTLINED_FUNCTION_136 + 10376
5 SwiftUI 0x00000001c782bebc OUTLINED_FUNCTION_12 + 22864
6 SwiftUI 0x00000001c78285e8 OUTLINED_FUNCTION_12 + 8316
7 SwiftUI 0x00000001c787c288 OUTLINED_FUNCTION_20 + 39264
8 SwiftUI 0x00000001c787c2cc OUTLINED_FUNCTION_20 + 39332
9 UIKitCore 0x000000018582fc24 -[UIView(CALayerDelegate) layoutSublayersOfLayer:] + 1496
10 QuartzCore 0x000000018a05cf00 _ZN2CA5Layer16layout_if_neededEPNS_11TransactionE + 440
11 QuartzCore 0x000000018a068ad0 _ZN2CA5Layer28layout_and_display_if_neededEPNS_11TransactionE + 124
12 QuartzCore 0x0000000189f80498 _ZN2CA7Context18commit_transactionEPNS_11TransactionEdPd + 460
13 QuartzCore 0x0000000189fb00b0 _ZN2CA11Transaction6commitEv + 652
14 VectorKit 0x00000001938ee620 _ZN2md12HoverSupport18updateHoverProxiesERKNSt3__16vectorINS1_10shared_ptrINS_5LabelEEEN3geo12StdAllocatorIS5_N3mdm9AllocatorEEEEE + 2468
15 VectorKit 0x0000000193afd1cc _ZN2md15StandardLabeler16layoutForDisplayERKNS_13LayoutContextE + 156
16 VectorKit 0x0000000193cf133c _ZN2md16CompositeLabeler16layoutForDisplayERKNS_13LayoutContextE + 52
17 VectorKit 0x0000000193abf318 _ZN2md12LabelManager6layoutERKNS_13LayoutContextEPKNS_20CartographicRendererERKNSt3__113unordered_setINS7_10shared_ptrINS_12LabelMapTileEEENS7_4hashISB_EENS7_8equal_toISB_EEN3geo12StdAllocatorISB_N3mdm9AllocatorEEEEERNS_8PassListE + 2904
18 VectorKit 0x0000000193cad464 _ZN2md9realistic16LabelRenderLayer6layoutERKNS_13LayoutContextE + 464
19 VectorKit 0x0000000193658b54 _ZNSt3__110__function6__funcIZN2md9realistic20RealisticRenderLayer5frameERNS2_13LayoutContextEE3$_0NS_9allocatorIS7_EEFvvEEclEv + 180
20 VectorKit 0x00000001936584cc ___ZN3geo9TaskQueue14queueAsyncTaskENSt3__110shared_ptrINS_4TaskEEEPU28objcproto17OS_dispatch_group8NSObject_block_invoke + 80
21 libdispatch.dylib 0x0000000102a30f98 _dispatch_call_block_and_release + 24
22 libdispatch.dylib 0x0000000102a327e4 _dispatch_client_callout + 16
23 libdispatch.dylib 0x0000000102a3aa80 _dispatch_lane_serial_drain + 916
24 libdispatch.dylib 0x0000000102a3b7c4 _dispatch_lane_invoke + 420
25 libdispatch.dylib 0x0000000102a3c794 _dispatch_workloop_invoke + 864
26 libdispatch.dylib 0x0000000102a481a0 _dispatch_root_queue_drain_deferred_wlh + 324
27 libdispatch.dylib 0x0000000102a475fc _dispatch_workloop_worker_thread + 488
28 libsystem_pthread.dylib 0x0000000103b0f924 _pthread_wqthread + 284
29 libsystem_pthread.dylib 0x0000000103b0e6e4 start_wqthread + 8
I disabled all my withAnimation() statements, and the problem persists. I also thought it might be related to my own network fetches, but as far as I can tell, all of them apply their changes on the main thread. And when I turn on network logging for my own fetching logic, I do not see any data coming in at the time of the freeze, so I don't think my fetches are the cause.
How can I debug such a situation, so that I know which call actually triggered this message? I'd like to know whether it is my code or a bug in the SwiftUI Map itself.
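One idea I have for narrowing it down, besides enabling the Main Thread Checker and a symbolic breakpoint on +[UIView setAnimationsEnabled:], is to add main-queue assertions wherever my fetched data is applied to published state, so any off-main mutation of my own would trap with a clear stack. Sketch below; MapDataModel is just a stand-in for my real model:

import Combine
import Dispatch

// Stand-in for the real model: assert main-queue access wherever fetched
// JSON is applied to published state, so an off-main mutation traps with a
// stack trace pointing at the offending call site.
final class MapDataModel: ObservableObject {
    @Published var features: [String] = []

    func apply(_ newFeatures: [String]) {
        dispatchPrecondition(condition: .onQueue(.main))
        features = newFeatures
    }
}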
We need to debug a website running inside a WKWebView on visionOS. To do so, I want to connect desktop Safari to it. However, at least in the simulator, there is no option in visionOS' Safari settings to enable web debugging. Is this option missing, or can it be found elsewhere?
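In case it matters: since we control the web view, I assume we could also opt it in from code, roughly like this (isInspectable is an iOS 16.4+ API that should also apply here), but I'd still like to know where the Safari-side setting lives:

import WebKit

// Sketch, assuming we can touch the app code: mark the web view as
// inspectable so it shows up in desktop Safari's Develop menu.
func makeDebuggableWebView() -> WKWebView {
    let webView = WKWebView()
    #if DEBUG
    webView.isInspectable = true
    #endif
    return webView
}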
I set up an entity with a collision component on it, but it was hard to target the object with a tap gesture until I increased the radius quite a bit. Now I am unsure whether it is too large. Is there a way to visualize these components somehow, maybe even in a running scene?
Also, I find it pretty confusing that the size is given in cm. This made me wonder whether this cm value is affected by the entity's size at all. In Unity, it's just (local) "units".
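In the meantime, the only thing I can think of is a hand-rolled helper roughly like the one below (my own debug visualization, not an official one; whether the alpha actually renders as transparency may depend on the material):

import RealityKit
import UIKit

// Hand-rolled debug helper: give the entity a spherical collision shape and
// an input target for taps, plus a translucent child sphere of the same
// radius so the tap target can be eyeballed in the running scene.
func addTappableCollisionSphere(to entity: Entity, radius: Float) {
    entity.components.set(CollisionComponent(shapes: [.generateSphere(radius: radius)]))
    entity.components.set(InputTargetComponent())

    let debugSphere = ModelEntity(
        mesh: .generateSphere(radius: radius),
        materials: [SimpleMaterial(color: UIColor.green.withAlphaComponent(0.3), isMetallic: false)]
    )
    entity.addChild(debugSphere)
}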
I wanted to create a particle effect using particle images I copied over from a Unity project. These images are PNGs with an alpha channel. In Unity they look gorgeous, but on visionOS they look rather odd, since the alpha channel is not respected: all pixels that are not pitch black are rendered fully white. Is there a way to change this behavior?
On iOS, Sign in with Apple provides an e-mail address when the user logs in for the first time. On all subsequent logins, the e-mail address is missing. However, this can be reset by removing the app from your Apple ID: if you then log in again, the e-mail dialog pops up again, and the app receives the e-mail.
On visionOS, however, the latter does not happen. Even after I have removed the app from my Apple ID, the e-mail dialog won't show up again. The only way to resolve this is to reset the visionOS simulator (I haven't tried it on a real device).
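For reference, this is the plain-vanilla flow I am talking about (just a sketch; the print is for illustration only):

import SwiftUI
import AuthenticationServices

struct LoginView: View {
    var body: some View {
        SignInWithAppleButton(.signIn) { request in
            request.requestedScopes = [.email, .fullName]
        } onCompletion: { result in
            if case .success(let authorization) = result,
               let credential = authorization.credential as? ASAuthorizationAppleIDCredential {
                // Apple only delivers the e-mail on the first authorization
                // for this app / Apple ID combination.
                print("e-mail:", credential.email ?? "nil (subsequent login)")
            }
        }
    }
}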
I've created an app for visionOS that uses a custom package that includes RealityKitContent as well (as a sub-package). I now want to turn this app into a multi-platform app that also supports iOS.
When I try to compile the app for this platform, I get this error message:
Building for 'iphoneos', but realitytool only supports [xros, xrsimulator]
Thus, I want to exclude the RealityKitContent from my package for iOS, but I don't really know how. The Apple docs are pretty complicated, and ChatGPT only gave me solutions that did not work at all.
I also tried to post this on the Swift forum, but no one could help me there either, so I am trying my luck here.
Here is my Package.swift file:
// swift-tools-version: 5.10
import PackageDescription

let package = Package(
    name: "Overlays",
    platforms: [
        .iOS(.v17), .visionOS(.v1)
    ],
    products: [
        .library(
            name: "Overlays",
            targets: ["Overlays"]),
    ],
    dependencies: [
        .package(path: "../BackendServices"),
        .package(path: "../MeteorDDP"),
        .package(path: "Packages/OverlaysRealityKitContent"),
    ],
    targets: [
        .target(
            name: "Overlays",
            dependencies: ["BackendServices", "MeteorDDP", "OverlaysRealityKitContent"]
        ),
        .testTarget(
            name: "OverlaysTests",
            dependencies: ["Overlays"]),
    ]
)
Based on a recommendation in the Swift forum, I also tried this:
dependencies: [
    ...
    .package(
        name: "OverlaysRealityKitContent",
        path: "Packages/OverlaysRealityKitContent"
    ),
],
targets: [
    .target(
        name: "Overlays",
        dependencies: [
            "BackendServices", "MeteorDDP",
            .product(name: "OverlaysRealityKitContent", package: "OverlaysRealityKitContent", condition: .when(platforms: [.visionOS]))
        ]
    ),
    ...
]
but this won't work either.
The problem seems to be that the package is listed under dependencies, which makes realitytool kick in. Is there a way to avoid this? I definitely need the RealityKitContent package to be part of the Overlays package, since the latter depends on the content (on visionOS). And I would not want to split the package into two parts (one for iOS and one for visionOS), if possible.
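(For completeness: on the source side, I would then guard the import roughly like this, assuming the conditional product dependency above actually worked:)

// Only compiled where the product is actually available/linked:
#if canImport(OverlaysRealityKitContent)
import OverlaysRealityKitContent
#endif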
I created an app for visionOS, using Reality Composer Pro. Now I want to turn this app into a multi-platform app for iOS as well.
RCP files are not supported on iOS, however. So I tried to use the "old" Reality Composer instead, but that doesn't seem to work either: Xcode 15 does not include it anymore, and I read online that files created with Xcode 14's Reality Composer cannot be included in Xcode 15 projects. Also, Xcode 14 does not run on my M3 Mac with Sonoma.
That's a bummer. What is the recommended way to include 3D content in apps that support both visionOS AND iOS?!
(I also read that a solution might be using USDZ for both. But what would that workflow look like? Are there samples out there that support both platforms? Please note that I want to set up the anchors myself, in code. I just need the composing tool to create the 3D content that will be placed on these anchors.)
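To make it concrete, the kind of shared code I would hope for on the USDZ route is roughly this ("Machine" is a placeholder asset name; on iOS the returned anchor would go into ARView.scene.anchors, on visionOS into the RealityView content):

import RealityKit

// Shared loader sketch: pull a bundled .usdz by name and hang it off a world
// anchor whose transform is set in code (no Reality Composer scene involved).
@MainActor
func loadPinnedModel() async throws -> AnchorEntity {
    let model = try await Entity(named: "Machine", in: nil)   // .usdz in the main bundle
    let anchor = AnchorEntity(world: [0, 0, -1.5])            // 1.5 m in front of the world origin
    anchor.addChild(model)
    return anchor
}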
We tried out our Unity-based AR app for the very first time under iOS 18 and noticed an immediate, repeatable crash.
When run in Xcode 16, we get this error message:
Assert: /Library/Caches/com.apple.xbs/Sources/AppleCV3D/library/VIO/CAPI/src/SlamAnchor.cpp:37 : HasValidPose()
Assert: /Library/Caches/com.apple.xbs/Sources/AppleCV3D/library/VIO/CAPI/src/SlamAnchor.cpp:37 : HasValidPose()
That's a blocker for us.
We're using Unity 2022.3.27f1.
I recently saw a message in the Unity forums, from a Unity staff member, saying that Apple requires an Apple Silicon Mac (M1, M2) in order to build apps for the Vision Pro. This confused me, since the simulator works just fine on my Intel Mac. Is there any official statement from Apple on this? It would be weird to have to buy a new Mac just because of this.
Our iOS app relies heavily on the ability to place objects in arbitrary locations, and we would like to know if this is possible on visionOS as well.
It should work like this: the user faces in a certain direction. We place an object approx. 5 m in front of the user. The object then gets pinned to this position (in the air) and won't move anymore. It should not be anchored to a real-world item like a wall, the floor or a desk.
Placing the object should even work if the user looks down while placing it. The object should then appear 5 m in front of them once they look up.
On iOS, we implemented this using Unity and AR Foundation. For visionOS, we haven't decided yet whether to go native instead. So if this is only possible using native code, that's also fine.
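If native is the way to go, my current understanding is that the placement would look roughly like the sketch below on visionOS. It assumes an open ImmersiveSpace, a running WorldTrackingProvider, and that the user is not looking straight down; objectEntity is a placeholder, and the returned anchor would be added to the RealityView content:

import ARKit
import RealityKit
import QuartzCore
import simd

// Rough sketch: query the device pose once, compute a point ~5 m ahead on the
// horizontal plane, and pin the content there with a world anchor so it never
// moves again.
@MainActor
func makePinnedAnchor(worldTracking: WorldTrackingProvider, objectEntity: Entity) -> AnchorEntity? {
    guard let device = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
        return nil
    }
    let transform = Transform(matrix: device.originFromAnchorTransform)

    // The device looks down its -Z axis; dropping the Y component keeps the
    // object at eye height even if the user is looking down while placing it.
    let zAxis = device.originFromAnchorTransform.columns.2
    let forward = normalize(SIMD3<Float>(-zAxis.x, 0, -zAxis.z))

    let anchor = AnchorEntity(world: transform.translation + forward * 5)
    anchor.addChild(objectEntity)
    return anchor   // add this to the RealityView content; it stays fixed in world space
}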
In the WWDC23 sessions it was mentioned that the device won't support taking photos or recording videos through the cameras, which I think is a huge limitation. However, in another forum I read that it actually works, using AVFoundation. So I went back to the docs, and they say it is not possible.
Hence, I am pretty confused. Has anyone tried this out yet and can confirm whether camera access is blocked completely or not? For our app, it would be a bummer if it were.
Is it possible to render a Safari-based webview in full immersive space, so an app can show web pages there?
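(To be concrete: what I have in mind is wrapping a WKWebView and showing it as a RealityView attachment inside the ImmersiveSpace, roughly as sketched below. WebPanelView and ImmersiveWebSpace are made-up names, and I don't know whether this is the intended approach.)

import SwiftUI
import WebKit
import RealityKit

// UIKit-backed web view wrapped for SwiftUI.
struct WebPanelView: UIViewRepresentable {
    let url: URL
    func makeUIView(context: Context) -> WKWebView {
        let webView = WKWebView()
        webView.load(URLRequest(url: url))
        return webView
    }
    func updateUIView(_ uiView: WKWebView, context: Context) {}
}

// Shows the web panel as an attachment inside an immersive RealityView.
struct ImmersiveWebSpace: View {
    var body: some View {
        RealityView { content, attachments in
            if let panel = attachments.entity(for: "web") {
                panel.position = [0, 1.5, -2]   // ~2 m in front, roughly at eye height
                content.add(panel)
            }
        } update: { _, _ in
            // nothing to update in this sketch
        } attachments: {
            Attachment(id: "web") {
                WebPanelView(url: URL(string: "https://example.com")!)
                    .frame(width: 800, height: 600)
            }
        }
    }
}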
We're an AR startup in the US, but our founders live in Europe. We definitely want to order the Vision Pro once it becomes available in the States, but I just saw in my mail that Apple requires a US prescription if you wear glasses. This is a bummer for us. We can forward the Vision Pro to Europe, but we won't be able to travel to the States just to get such a prescription. Why can't Apple just accept any prescription from an optician?!