I recently had a chat with a company in the manufacturing business. They were asking whether the Vision Pro could be used to guide maintenance workers through maintenance procedures, a use case that is already established on other platforms. I thought the Vision Pro would be perfect for this as well, until I read in this article from Apple that object detection is not supported:
https://developer.apple.com/documentation/visionos/bringing-your-arkit-app-to-visionos#Update-your-interface-to-support-visionOS
To me, this sounds like sacrificing a lot of potential for business scenarios just for the sake of data privacy. Is this really the case, i.e. is there no way to detect real-world objects and place content on top of them? Image recognition would not be enough in this use case.
I'm developing a map-based app for visionOS. The app loads map data from a server, using JSON. It works just fine, but I noticed the following effect: if I move the app's window around, it freezes, either on the first movement or on one of the subsequent ones. The map cannot be panned anymore, and all other UI elements lose their interactivity as well.
I noticed this issue before, when I was opening the map on app startup (and there it even happened without moving the window). After I added a short delay, this was resolved. There was no log message in that case.
However, when the freeze happens after moving the window around, Xcode logs an error:
+[UIView setAnimationsEnabled:] being called from a background thread. Performing any operation from a background thread on UIView or a subclass is not supported and may result in unexpected and insidious behavior. trace=(
0 UIKitCore 0x0000000185824a24 __42+[UIView(Animation) setAnimationsEnabled:]_block_invoke + 112
1 libdispatch.dylib 0x0000000102a327e4 _dispatch_client_callout + 16
2 libdispatch.dylib 0x0000000102a34284 _dispatch_once_callout + 84
3 UIKitCore 0x0000000185824ad8 +[UIView(Animation) performWithoutAnimation:] + 56
4 SwiftUI 0x00000001c68cf1e0 OUTLINED_FUNCTION_136 + 10376
5 SwiftUI 0x00000001c782bebc OUTLINED_FUNCTION_12 + 22864
6 SwiftUI 0x00000001c78285e8 OUTLINED_FUNCTION_12 + 8316
7 SwiftUI 0x00000001c787c288 OUTLINED_FUNCTION_20 + 39264
8 SwiftUI 0x00000001c787c2cc OUTLINED_FUNCTION_20 + 39332
9 UIKitCore 0x000000018582fc24 -[UIView(CALayerDelegate) layoutSublayersOfLayer:] + 1496
10 QuartzCore 0x000000018a05cf00 _ZN2CA5Layer16layout_if_neededEPNS_11TransactionE + 440
11 QuartzCore 0x000000018a068ad0 _ZN2CA5Layer28layout_and_display_if_neededEPNS_11TransactionE + 124
12 QuartzCore 0x0000000189f80498 _ZN2CA7Context18commit_transactionEPNS_11TransactionEdPd + 460
13 QuartzCore 0x0000000189fb00b0 _ZN2CA11Transaction6commitEv + 652
14 VectorKit 0x00000001938ee620 _ZN2md12HoverSupport18updateHoverProxiesERKNSt3__16vectorINS1_10shared_ptrINS_5LabelEEEN3geo12StdAllocatorIS5_N3mdm9AllocatorEEEEE + 2468
15 VectorKit 0x0000000193afd1cc _ZN2md15StandardLabeler16layoutForDisplayERKNS_13LayoutContextE + 156
16 VectorKit 0x0000000193cf133c _ZN2md16CompositeLabeler16layoutForDisplayERKNS_13LayoutContextE + 52
17 VectorKit 0x0000000193abf318 _ZN2md12LabelManager6layoutERKNS_13LayoutContextEPKNS_20CartographicRendererERKNSt3__113unordered_setINS7_10shared_ptrINS_12LabelMapTileEEENS7_4hashISB_EENS7_8equal_toISB_EEN3geo12StdAllocatorISB_N3mdm9AllocatorEEEEERNS_8PassListE + 2904
18 VectorKit 0x0000000193cad464 _ZN2md9realistic16LabelRenderLayer6layoutERKNS_13LayoutContextE + 464
19 VectorKit 0x0000000193658b54 _ZNSt3__110__function6__funcIZN2md9realistic20RealisticRenderLayer5frameERNS2_13LayoutContextEE3$_0NS_9allocatorIS7_EEFvvEEclEv + 180
20 VectorKit 0x00000001936584cc ___ZN3geo9TaskQueue14queueAsyncTaskENSt3__110shared_ptrINS_4TaskEEEPU28objcproto17OS_dispatch_group8NSObject_block_invoke + 80
21 libdispatch.dylib 0x0000000102a30f98 _dispatch_call_block_and_release + 24
22 libdispatch.dylib 0x0000000102a327e4 _dispatch_client_callout + 16
23 libdispatch.dylib 0x0000000102a3aa80 _dispatch_lane_serial_drain + 916
24 libdispatch.dylib 0x0000000102a3b7c4 _dispatch_lane_invoke + 420
25 libdispatch.dylib 0x0000000102a3c794 _dispatch_workloop_invoke + 864
26 libdispatch.dylib 0x0000000102a481a0 _dispatch_root_queue_drain_deferred_wlh + 324
27 libdispatch.dylib 0x0000000102a475fc _dispatch_workloop_worker_thread + 488
28 libsystem_pthread.dylib 0x0000000103b0f924 _pthread_wqthread + 284
29 libsystem_pthread.dylib 0x0000000103b0e6e4 start_wqthread + 8
I disabled all my withAnimation() statements, and the problem persists. I also thought it might be related to my own network fetches, but I believe they all apply their changes on the main thread. And when I turn on network logging for my own fetching logic, I do not see any data coming in at that moment, nor do I see a reason why a fetch should be running then.
How can I debug such a situation so I know which call actually triggered this message? I'd like to know whether it is my code or a bug in the SwiftUI Map itself.
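One way I can think of to rule out my own fetch code, sketched below (the type and property names are placeholders, not my actual code): move the whole model onto the main actor, so an off-main UI update would be caught at compile time rather than at runtime. Combined with the Main Thread Checker and a symbolic breakpoint on +[UIView setAnimationsEnabled:], whatever remains should then point at the framework side.

```swift
import SwiftUI

// Minimal sketch, assuming the map data is applied through an observable model.
// All names here are illustrative. @MainActor forces every property mutation
// (and thus every SwiftUI update) onto the main thread.
@MainActor
final class MapViewModel: ObservableObject {
    @Published var annotations: [ServerAnnotation] = []

    func reload(from url: URL) async throws {
        // The URLSession work itself runs off the main thread;
        // only the final assignment hops back onto the main actor.
        let (data, _) = try await URLSession.shared.data(from: url)
        annotations = try JSONDecoder().decode([ServerAnnotation].self, from: data)
    }
}

struct ServerAnnotation: Decodable, Identifiable {
    let id: Int
    let latitude: Double
    let longitude: Double
}
```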
We need to debug a website running inside a WKWebView on visionOS. To debug it, I want to connect my desktop Safari to it. However, at least in the simulator there is no option in visionOS's Safari settings to enable web debugging. Is this missing, or can it be found elsewhere?
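One thing worth trying (a minimal sketch; I can't confirm this is the full story on visionOS): the switch may not be a Safari setting at all but an opt-in on the web view itself.

```swift
import WebKit

// Sketch: opt the web view in to remote inspection. WKWebView.isInspectable
// (introduced with iOS 16.4 and present on visionOS) must be set before
// desktop Safari will list the view in its Develop menu.
func makeDebuggableWebView() -> WKWebView {
    let webView = WKWebView()
    #if DEBUG
    webView.isInspectable = true
    #endif
    return webView
}
```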
I set up an entity with a collision component on it. But it was hard to target the object with a tap gesture until I increased the radius quite a bit. Now I am unsure if it is too large. Is there a way to visualize these components somehow, maybe even in a running scene?
Also, I find it pretty confusing that the size is given in cm. This made me wonder whether this cm setting is affected by the entity's size at all. In Unity, it's just (local) "units".
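In the meantime, I can imagine a do-it-yourself visualization along these lines (a sketch only; the function and names are made up, and it assumes a spherical collision shape of known radius):

```swift
import RealityKit
import UIKit

// Sketch: add a translucent child sphere with the same radius as the collision
// shape, so the hit area is visible in a running scene. Because it is a child,
// it inherits the entity's transform and stays in sync if the entity is scaled.
// Remove it for release builds. (Depending on the RealityKit version, you may
// need to set the material's blending explicitly for the alpha to take effect.)
func addCollisionDebugSphere(to entity: Entity, radius: Float) {
    let mesh = MeshResource.generateSphere(radius: radius)
    let material = UnlitMaterial(color: UIColor.green.withAlphaComponent(0.25))
    let debugSphere = ModelEntity(mesh: mesh, materials: [material])
    debugSphere.name = "collisionDebug"
    entity.addChild(debugSphere)
}
```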
I wanted to create a particle effect using particle images I copied from a Unity project. These images are PNGs with an alpha channel. In Unity, these look gorgeous, but on visionOS they look rather weird, since the alpha channel is not respected. All pixels that are not pitch black are rendered fully white. Is there a way to change this behavior?
On iOS, Sign in with Apple will provide an e-mail address if the user is logging in for the first time. On all subsequent logins, the e-mail address will be missing. However, this can be reset by removing the app from your Apple ID. If you then try to log in again, the e-mail dialog will pop up again, and the app will receive the e-mail address.
On visionOS, however, the latter does not happen. Even after I have removed the app from my Apple ID, the e-mail dialog won't show up again. The only way to resolve this is to reset the visionOS simulator (I haven't tried it on a real device).
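For reference, a trimmed-down sketch of the flow being described (names shortened, not my exact code); the point is that credential.email is only populated on the first authorization:

```swift
import AuthenticationServices

// Sketch: request the e-mail scope and read it from the credential.
func makeRequest() -> ASAuthorizationAppleIDRequest {
    let request = ASAuthorizationAppleIDProvider().createRequest()
    request.requestedScopes = [.email, .fullName]
    return request
}

func handle(credential: ASAuthorizationAppleIDCredential) {
    // Non-nil only on the first sign-in for this Apple ID / app combination;
    // afterwards it has to come from your own backend (or the identity token).
    print("email:", credential.email ?? "<not provided>")
}
```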
I've created an app for visionOS that uses a custom package that includes RealityKitContent as well (as a sub-package). I now want to turn this app into a multi-platform app that also supports iOS.
When I try to compile the app for this platform, I get this error message:
Building for 'iphoneos', but realitytool only supports [xros, xrsimulator]
Thus, I want to exclude the RealityKitContent package from my package when building for iOS, but I don't really know how. The Apple docs are pretty complicated, and ChatGPT only gave me solutions that did not work at all.
I also tried to post this on the Swift forum, but no one could help me there either, so I am trying my luck here.
Here is my Package.swift file:
// swift-tools-version: 5.10
import PackageDescription

let package = Package(
    name: "Overlays",
    platforms: [
        .iOS(.v17), .visionOS(.v1)
    ],
    products: [
        .library(
            name: "Overlays",
            targets: ["Overlays"]),
    ],
    dependencies: [
        .package(path: "../BackendServices"),
        .package(path: "../MeteorDDP"),
        .package(path: "Packages/OverlaysRealityKitContent"),
    ],
    targets: [
        .target(
            name: "Overlays",
            dependencies: ["BackendServices", "MeteorDDP", "OverlaysRealityKitContent"]
        ),
        .testTarget(
            name: "OverlaysTests",
            dependencies: ["Overlays"]),
    ]
)
Based on a recommendation in the Swift forum, I also tried this:
dependencies: [
    ...
    .package(
        name: "OverlaysRealityKitContent",
        path: "Packages/OverlaysRealityKitContent"
    ),
],
targets: [
    .target(
        name: "Overlays",
        dependencies: [
            "BackendServices", "MeteorDDP",
            .product(
                name: "OverlaysRealityKitContent",
                package: "OverlaysRealityKitContent",
                condition: .when(platforms: [.visionOS])
            )
        ]
    ),
    ...
]
but this won't work either.
The problem seems to be that the package is listed under dependencies at all, which makes realitytool kick in. Is there a way to avoid this? I definitely need the RealityKitContent package to be part of the Overlays package, since the latter depends on the content (on visionOS). And I would not want to split the package into two parts (one for iOS and one for visionOS), if possible.
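One idea I have not verified, so purely a sketch (and possibly fragile, since SwiftPM caches manifests and Xcode does not necessarily forward custom environment variables to manifest evaluation): make the manifest itself conditional, so the RealityKitContent sub-package never appears in the iOS package graph and realitytool is never invoked.

```swift
// swift-tools-version: 5.10
// Hypothetical workaround sketch, untested: include the RealityKitContent
// sub-package only when an environment variable is set (e.g. for visionOS
// command-line builds). If the dependency is absent from the graph,
// realitytool should have nothing to process on iOS.
import Foundation
import PackageDescription

let includeRealityContent =
    ProcessInfo.processInfo.environment["INCLUDE_REALITY_CONTENT"] == "1"

var packageDependencies: [Package.Dependency] = [
    .package(path: "../BackendServices"),
    .package(path: "../MeteorDDP"),
]
var overlaysDependencies: [Target.Dependency] = ["BackendServices", "MeteorDDP"]

if includeRealityContent {
    packageDependencies.append(.package(path: "Packages/OverlaysRealityKitContent"))
    overlaysDependencies.append("OverlaysRealityKitContent")
}

let package = Package(
    name: "Overlays",
    platforms: [.iOS(.v17), .visionOS(.v1)],
    products: [.library(name: "Overlays", targets: ["Overlays"])],
    dependencies: packageDependencies,
    targets: [
        .target(name: "Overlays", dependencies: overlaysDependencies),
        .testTarget(name: "OverlaysTests", dependencies: ["Overlays"]),
    ]
)
```

Any import of OverlaysRealityKitContent in the Overlays sources would then also need to be wrapped in #if canImport(OverlaysRealityKitContent).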
I created an app for visionOS, using Reality Composer Pro. Now I want to turn this app into a multi-platform app for iOS as well.
However, RCP files are not supported on iOS. So I tried to use the "old" Reality Composer instead, but that doesn't seem to work either: Xcode 15 does not include it anymore, and I read online that files created with Xcode 14's Reality Composer cannot be included in Xcode 15 projects. Also, Xcode 14 does not run on my M3 Mac with Sonoma.
That's a bummer. What is the recommended way to include 3D content in apps that support visionOS AND iOS?!
(I also read that a solution might be using USDZ for both. But what would that workflow look like? Are there samples out there that support both platforms? Please note that I want to set up the anchors myself, using code. I just need the composing tool to create the 3D content that will be placed on these anchors.)
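If USDZ is the way to go, I imagine the code side would look roughly like this (a sketch with placeholder names; the anchor is created in code, as described above):

```swift
import RealityKit

// Sketch with placeholder names: load a USDZ from the app bundle and hang it
// off an anchor created in code. Entity(named:) and AnchorEntity are available
// in RealityKit on both iOS and visionOS.
func makeAnchoredModel(named name: String) async throws -> AnchorEntity {
    let model = try await Entity(named: name)   // loads "<name>.usdz" from the main bundle
    let anchor = AnchorEntity(.plane(.horizontal,
                                     classification: .any,
                                     minimumBounds: [0.2, 0.2]))
    anchor.addChild(model)
    return anchor
}
```

On visionOS the returned anchor would go into a RealityView's content; on iOS, into an ARView's scene.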
We tried out our Unity-based AR app for the very first time under iOS 18 and noticed an immediate, repeatable crash.
When run in Xcode 16, we get this error message:
Assert: /Library/Caches/com.apple.xbs/Sources/AppleCV3D/library/VIO/CAPI/src/SlamAnchor.cpp:37 : HasValidPose()
Assert: /Library/Caches/com.apple.xbs/Sources/AppleCV3D/library/VIO/CAPI/src/SlamAnchor.cpp:37 : HasValidPose()
That's a blocker for us.
We're using Unity 2022.3.27f1.
Hi, if I run an app on the visionOS simulator, I get tons of "garbage" messages in the Xcode logs. Please find some samples below. Because of these messages, I can hardly see the really relevant logs. Is there any way to get rid of them?
[0x109015000] Decoding completed without errors
[0x1028c0000] Decoding: C0 0x01000100 0x00003048 0x22111100 0x00000000 11496
[0x1028c0000] Options: 1x-1 [FFFFFFFF,FFFFFFFF] 00054060
[0x1021f3200] Releasing session
[0x1031dfe00] Options: 1x-1 [FFFFFFFF,FFFFFFFF] 00054060
[0x1058eae00] Releasing session
[0x10609c200] Decoding: C0 0x01000100 0x00003048 0x22111100 0x00000000 10901
[0x1058bde00] Decoding: C0 0x01000100 0x0000304A 0x22111100 0x00000000 20910
[0x1028d5200] Releasing session
[0x1060b3600] Releasing session
[0x10881f400] Decoding completed without errors
[0x1058e2e00] Decoding: C0 0x01000100 0x0000304A 0x22111100 0x00000000 9124
[0x1028d1e00] Decoding: C0 0x01000100 0x0000304A 0x22111100 0x00000000 20778
[0x1031dfe00] Decoding completed without errors
[0x1031fe000] Decoding completed without errors
[0x1058e2e00] Options: 256x256 [FFFFFFFF,FFFFFFFF] 00025060
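A partial workaround I can think of, sketched below: route my own messages through os.Logger with a dedicated subsystem, then filter the Xcode console (or Console.app) by that subsystem so the decoder chatter drops out of view. The noise itself is still produced, though.

```swift
import os

// Sketch: log through a dedicated subsystem/category so the console filter
// can show only these messages. The identifiers are placeholders.
let logger = Logger(subsystem: "com.example.mapapp", category: "map")

func logMapDataLoaded(count: Int) {
    logger.debug("map data loaded, \(count) annotations")
}
```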
I love the new SwiftUI APIs for Apple Maps. However, I am missing (or haven't found) quite a number of features, particularly on visionOS.
Besides an easy way to zoom maps, the most important feature for me is marker clustering. If you have a lot of markers on a map, this is an absolute must.
Is there any way to accomplish this?
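The only route I can think of so far is dropping back to MKMapView for this particular screen. A rough sketch (the type names are mine, and I have not verified how well MKMapView behaves on visionOS), relying on MapKit's built-in clustering via clusteringIdentifier:

```swift
import SwiftUI
import MapKit

// Sketch of a possible fallback, not a SwiftUI Map feature: wrap MKMapView in
// UIViewRepresentable and let MapKit cluster markers that share a clusteringIdentifier.
struct ClusteredMapView: UIViewRepresentable {
    var annotations: [MKPointAnnotation]

    func makeUIView(context: Context) -> MKMapView {
        let mapView = MKMapView()
        mapView.register(MKMarkerAnnotationView.self,
                         forAnnotationViewWithReuseIdentifier: MKMapViewDefaultAnnotationViewReuseIdentifier)
        mapView.delegate = context.coordinator
        return mapView
    }

    func updateUIView(_ mapView: MKMapView, context: Context) {
        mapView.removeAnnotations(mapView.annotations)
        mapView.addAnnotations(annotations)
    }

    func makeCoordinator() -> Coordinator { Coordinator() }

    final class Coordinator: NSObject, MKMapViewDelegate {
        func mapView(_ mapView: MKMapView, viewFor annotation: MKAnnotation) -> MKAnnotationView? {
            // Let MapKit build its default view for the cluster bubbles themselves.
            guard !(annotation is MKClusterAnnotation) else { return nil }
            let view = mapView.dequeueReusableAnnotationView(
                withIdentifier: MKMapViewDefaultAnnotationViewReuseIdentifier, for: annotation)
            (view as? MKMarkerAnnotationView)?.clusteringIdentifier = "marker"
            return view
        }
    }
}
```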
Is it possible to show a map (or a WKWebView) in a fully immersive AR or VR view, so it surrounds the user like a panorama?
I am trying to build a visionOS app that uses a map as a central user interface.
This works fine at high zoom levels when there are only a couple of markers present. But as soon as I zoom out and the number of markers gets into the hundreds or even thousands, the performance gets really bad. It takes seconds for the map to render, and panning is laggy as well. What makes things worse is that the SwiftUI map does not support clustering yet.
Has anyone found a solution to this?
I found this example by Apple about how to implement clustering:
https://developer.apple.com/documentation/mapkit/mkannotationview/decluttering_a_map_with_mapkit_annotation_clustering
It works, but it uses UIKit and storyboards, and I could not get it transformed into SwiftUI-compatible code.
I also found this blog post that created a neat SwiftUI integration for a clusterable map:
https://www.linkedin.com/pulse/map-clustering-swiftui-dmitry-%D0%B2%D0%B5l%D0%BEv-j3x7f/
However, I wasn't able to adapt it so the map would update itself in a reactive way. I want to retrieve new data from our server when the user changes the visible region of the map by panning or zooming. I have no clue how to transfer my .onChange(of:) and .onMapCameraChange() modifiers to the UIKit world.
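For the reactive part, I imagine the bridge would have to go through MKMapViewDelegate instead of .onMapCameraChange(), roughly like this (a sketch with made-up names):

```swift
import SwiftUI
import MapKit

// Sketch: the UIKit wrapper reports region changes through a closure via
// MKMapViewDelegate, and the SwiftUI side reloads its data there.
struct RegionReportingMapView: UIViewRepresentable {
    var onRegionChange: (MKCoordinateRegion) -> Void

    func makeUIView(context: Context) -> MKMapView {
        let mapView = MKMapView()
        mapView.delegate = context.coordinator
        return mapView
    }

    func updateUIView(_ mapView: MKMapView, context: Context) {}

    func makeCoordinator() -> Coordinator { Coordinator(onRegionChange: onRegionChange) }

    final class Coordinator: NSObject, MKMapViewDelegate {
        let onRegionChange: (MKCoordinateRegion) -> Void
        init(onRegionChange: @escaping (MKCoordinateRegion) -> Void) {
            self.onRegionChange = onRegionChange
        }

        // Fired after every pan/zoom, similar in spirit to .onMapCameraChange().
        func mapView(_ mapView: MKMapView, regionDidChangeAnimated animated: Bool) {
            onRegionChange(mapView.region)
        }
    }
}

// Usage sketch (viewModel is hypothetical):
// RegionReportingMapView { region in
//     Task { await viewModel.loadMarkers(in: region) }
// }
```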
I'd like to map a SwiftUI view (in my case: a map) onto a 3D curved plane in an immersive view, so the user can literally immerse themselves in the map. The user should also be able to interact with the map, by panning it around and selecting markers.
Is this possible?
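The closest built-in mechanism I am aware of is a RealityView attachment, which places an interactive SwiftUI view on a flat plane inside an immersive space; actually bending the view onto a curved surface is not something attachments support, as far as I know, so that part would presumably require rendering the content to a texture instead. A minimal sketch (names, sizes, and positions are arbitrary):

```swift
import SwiftUI
import RealityKit
import MapKit

// Sketch: an immersive space whose content is a single SwiftUI Map,
// presented as a flat attachment panel about 1.5 m in front of the user.
struct ImmersiveMapView: View {
    var body: some View {
        RealityView { content, attachments in
            if let mapPanel = attachments.entity(for: "map") {
                mapPanel.position = [0, 1.4, -1.5]   // roughly eye height, 1.5 m away
                content.add(mapPanel)
            }
        } attachments: {
            Attachment(id: "map") {
                Map()                                // the SwiftUI map stays interactive
                    .frame(width: 800, height: 500)
            }
        }
    }
}
```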
I would like to add text to a Reality Composer Pro scene and set the actual text via code. How can I achieve this? I haven't seen any "Text" element in the editor.
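A possible code-only alternative, sketched below with placeholder names: generate a text mesh at runtime with MeshResource.generateText and attach it to an (empty) entity that is placed in the Reality Composer Pro scene.

```swift
import RealityKit
import UIKit

// Sketch: build a text entity in code, since Reality Composer Pro has no text
// primitive. Font size and extrusion depth are in meters.
func makeTextEntity(_ string: String) -> ModelEntity {
    let mesh = MeshResource.generateText(
        string,
        extrusionDepth: 0.005,
        font: .systemFont(ofSize: 0.1),
        containerFrame: .zero,
        alignment: .center,
        lineBreakMode: .byWordWrapping
    )
    let material = SimpleMaterial(color: .white, isMetallic: false)
    return ModelEntity(mesh: mesh, materials: [material])
}

// Usage sketch, assuming a placeholder entity named "TextAnchor" exists in the scene:
// scene.findEntity(named: "TextAnchor")?.addChild(makeTextEntity("Hello"))
```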