We tried out our Unity-based AR app for the very first time under iOS 18 and noticed an immediate, repeatable crash.
When run in Xcode 16, we get this error message:
Assert: /Library/Caches/com.apple.xbs/Sources/AppleCV3D/library/VIO/CAPI/src/SlamAnchor.cpp:37 : HasValidPose()
Assert: /Library/Caches/com.apple.xbs/Sources/AppleCV3D/library/VIO/CAPI/src/SlamAnchor.cpp:37 : HasValidPose()
That's a blocker for us.
We're using Unity 2022.3.27f1.
On my MacBook Pro (2019) with an Intel processor, I could run apps in the visionOS simulator without any problems while I was on macOS Ventura. But since I upgraded the Mac to Sonoma, the visionOS simulator seems to be broken.
Xcode gets stuck at "Loading visionOS 1.0", and the simulator page under "Devices and Simulators" says "No runtime".
This is independent of which Xcode version I am using. I started with Xcode 15 beta 2, but also tried more recent versions.
Could it be that developing on Intel Macs was dropped with macOS Sonoma without any notice? I can see that the Xcode 15.1 specs state you need an Apple Silicon Mac, but the Xcode 15 specs don't. And it worked for me, at least on Ventura. The "only" change I made since then was upgrading the OS to Sonoma.
I'm developing a map-based app for visionOS. It loads map data from a server as JSON. This works just fine, but I noticed the following effect: if I move the app's window around, it freezes, either on the first movement or on one of the subsequent ones. The map cannot be panned anymore, and all other UI elements lose their interactivity as well.
I had noticed this issue before, when I was opening the map right at app startup (there it even happened without moving the window). Adding a short delay resolved that; there was no log message in that case.
However, now that it also happens when I move the window around, I can see that Xcode logs an error:
+[UIView setAnimationsEnabled:] being called from a background thread. Performing any operation from a background thread on UIView or a subclass is not supported and may result in unexpected and insidious behavior. trace=(
0 UIKitCore 0x0000000185824a24 __42+[UIView(Animation) setAnimationsEnabled:]_block_invoke + 112
1 libdispatch.dylib 0x0000000102a327e4 _dispatch_client_callout + 16
2 libdispatch.dylib 0x0000000102a34284 _dispatch_once_callout + 84
3 UIKitCore 0x0000000185824ad8 +[UIView(Animation) performWithoutAnimation:] + 56
4 SwiftUI 0x00000001c68cf1e0 OUTLINED_FUNCTION_136 + 10376
5 SwiftUI 0x00000001c782bebc OUTLINED_FUNCTION_12 + 22864
6 SwiftUI 0x00000001c78285e8 OUTLINED_FUNCTION_12 + 8316
7 SwiftUI 0x00000001c787c288 OUTLINED_FUNCTION_20 + 39264
8 SwiftUI 0x00000001c787c2cc OUTLINED_FUNCTION_20 + 39332
9 UIKitCore 0x000000018582fc24 -[UIView(CALayerDelegate) layoutSublayersOfLayer:] + 1496
10 QuartzCore 0x000000018a05cf00 _ZN2CA5Layer16layout_if_neededEPNS_11TransactionE + 440
11 QuartzCore 0x000000018a068ad0 _ZN2CA5Layer28layout_and_display_if_neededEPNS_11TransactionE + 124
12 QuartzCore 0x0000000189f80498 _ZN2CA7Context18commit_transactionEPNS_11TransactionEdPd + 460
13 QuartzCore 0x0000000189fb00b0 _ZN2CA11Transaction6commitEv + 652
14 VectorKit 0x00000001938ee620 _ZN2md12HoverSupport18updateHoverProxiesERKNSt3__16vectorINS1_10shared_ptrINS_5LabelEEEN3geo12StdAllocatorIS5_N3mdm9AllocatorEEEEE + 2468
15 VectorKit 0x0000000193afd1cc _ZN2md15StandardLabeler16layoutForDisplayERKNS_13LayoutContextE + 156
16 VectorKit 0x0000000193cf133c _ZN2md16CompositeLabeler16layoutForDisplayERKNS_13LayoutContextE + 52
17 VectorKit 0x0000000193abf318 _ZN2md12LabelManager6layoutERKNS_13LayoutContextEPKNS_20CartographicRendererERKNSt3__113unordered_setINS7_10shared_ptrINS_12LabelMapTileEEENS7_4hashISB_EENS7_8equal_toISB_EEN3geo12StdAllocatorISB_N3mdm9AllocatorEEEEERNS_8PassListE + 2904
18 VectorKit 0x0000000193cad464 _ZN2md9realistic16LabelRenderLayer6layoutERKNS_13LayoutContextE + 464
19 VectorKit 0x0000000193658b54 _ZNSt3__110__function6__funcIZN2md9realistic20RealisticRenderLayer5frameERNS2_13LayoutContextEE3$_0NS_9allocatorIS7_EEFvvEEclEv + 180
20 VectorKit 0x00000001936584cc ___ZN3geo9TaskQueue14queueAsyncTaskENSt3__110shared_ptrINS_4TaskEEEPU28objcproto17OS_dispatch_group8NSObject_block_invoke + 80
21 libdispatch.dylib 0x0000000102a30f98 _dispatch_call_block_and_release + 24
22 libdispatch.dylib 0x0000000102a327e4 _dispatch_client_callout + 16
23 libdispatch.dylib 0x0000000102a3aa80 _dispatch_lane_serial_drain + 916
24 libdispatch.dylib 0x0000000102a3b7c4 _dispatch_lane_invoke + 420
25 libdispatch.dylib 0x0000000102a3c794 _dispatch_workloop_invoke + 864
26 libdispatch.dylib 0x0000000102a481a0 _dispatch_root_queue_drain_deferred_wlh + 324
27 libdispatch.dylib 0x0000000102a475fc _dispatch_workloop_worker_thread + 488
28 libsystem_pthread.dylib 0x0000000103b0f924 _pthread_wqthread + 284
29 libsystem_pthread.dylib 0x0000000103b0e6e4 start_wqthread + 8
I disabled all my withAnimation() statements, and the problem persists. I also thought it might be related to my own network fetches, but as far as I can tell, they all apply their changes on the main thread. And when I turn on network logging for my own fetching logic, I do not see any data coming in at the time of the freeze, so I don't think my networking code is even running at that point.
How can I debug such a situation, so that I know which call actually triggered this message? I'd like to know whether it is my code or a bug in the SwiftUI map itself.
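To rule out my own code, the best idea I've had so far is to sprinkle main-thread assertions into my own update paths, so that an offending call traps immediately with a usable stack trace. A minimal sketch (applyModelUpdate is a hypothetical helper of mine, not an existing API):

import Dispatch

// Hypothetical wrapper around a model mutation: traps right away, with a
// meaningful stack trace, if it is ever called off the main queue.
func applyModelUpdate(_ update: () -> Void) {
    dispatchPrecondition(condition: .onQueue(.main))
    update()
}

If none of my own call sites ever trap, that would at least point towards the map itself.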
Apple asked me today to add the compliance information for the Digital Services Act in the EU. I tried to do so, but ran into a major issue here.
When I created the developer account many years ago, it was a personal account used by me as a natural person / freelancer in Germany. When I later founded my US company, I converted the existing developer account into a business account for that company. While doing this, I obtained a DUNS number which is linked to the business address in the States (California).
However, it seems as if this US address never made it into App Store Connect. It still shows my personal address in Germany, which is not correct, and I cannot modify it either. The address page says that I have to update it at DUNS; however, in their system, everything is correct.
The problem seems to be related to the transfer of the address data between DUNS and App Store Connect. I opened a ticket in the DUNS system, but I need to publish a new version of our app soon. So I am wondering if there is a faster way to get this resolved?
I really love the way you can add SwiftUI views as attachments to a RealityView on visionOS. As I am now porting my app to iOS as well, I was wondering if something like this is possible with ARView, too? So far, I've only seen custom libraries trying to mimic UI elements.
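The closest thing I can come up with myself is a screen-space overlay rather than a true in-scene attachment: put the SwiftUI view into a UIHostingController, lay its view over the ARView, and reposition it every frame by projecting the entity's world position. A rough sketch of that repositioning step (all three parameters are assumed to exist elsewhere):

import RealityKit
import UIKit

// Sketch: keep an overlay view (e.g. a UIHostingController's view) glued to
// an entity by projecting its world position into the ARView's coordinates.
func updateOverlay(arView: ARView, entity: Entity, overlay: UIView) {
    if let screenPoint = arView.project(entity.position(relativeTo: nil)) {
        overlay.center = screenPoint
        overlay.isHidden = false
    } else {
        // project(_:) returns nil, e.g. when the point is behind the camera.
        overlay.isHidden = true
    }
}

Unlike RealityView attachments, such an overlay is not occluded by scene geometry, so it's only an approximation.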
With quite some excitement I read about visionOS 2's new feature to automatically turn regular 2D photos into spatial photos, using machine learning. It's briefly mentioned in this WWDC video:
https://developer.apple.com/wwdc24/10166
My question is: Can developers use this feature via an API, so we can turn any image into a spatial image, even if it is not in the device photo library?
We would like to download an image from our server, convert it on the Vision Pro on the fly, and display it as a spatial photo.
For a couple of days (or maybe even weeks) now, I cannot use App Store Connect in Chrome anymore. I get to the page where the apps should appear, but there are none. If I open the same page in Safari, it works. But I dislike Safari, since it asks me for my password every time I visit that site.
I recently had a chat with a company in the manufacturing business. They were asking if Vision Pro could be used to guide maintenance workers through maintenance processes, a use case that is already established on other platforms. I thought Vision Pro would be perfect for this as well, until I read in this article from Apple that object detection is not supported:
https://developer.apple.com/documentation/visionos/bringing-your-arkit-app-to-visionos#Update-your-interface-to-support-visionOS
To me, this sounds like sacrificing a lot of potential for business scenarios, just for the sake of data privacy. Is this really the case, i.e. is there no way to detect real-world objects and place content on top of them? Image recognition would not be enough in this use case.
I have implemented a custom view that shows a page in a WKWebView:
import SwiftUI
import WebKit

struct WebView: UIViewRepresentable {
    let urlString: String

    func makeUIView(context: Context) -> WKWebView {
        let webView = WKWebView()
        webView.navigationDelegate = context.coordinator
        return webView
    }

    func updateUIView(_ uiView: WKWebView, context: Context) {
        // Only start a load when the URL actually changed; otherwise every
        // SwiftUI update would restart the page load.
        if let url = URL(string: urlString), uiView.url != url {
            uiView.load(URLRequest(url: url))
        }
    }

    func makeCoordinator() -> Coordinator {
        Coordinator(self)
    }

    class Coordinator: NSObject, WKNavigationDelegate {
        var parent: WebView

        init(_ parent: WebView) {
            self.parent = parent
        }
    }
}
It works, but it shows a grey button in the upper left with no icon. If I click that button, nothing happens, but I can see this error message in the Xcode logs:
Trying to convert coordinates between views that are in different UIWindows, which isn't supported. Use convertPoint:fromCoordinateSpace: instead.
What is this button, and how can I get rid of it?
As a second question: I also tried to spawn Safari in a separate window, using this view:
import SafariServices
import SwiftUI

struct SafariView: UIViewControllerRepresentable {
    let url: URL

    func makeUIViewController(context: Context) -> SFSafariViewController {
        SFSafariViewController(url: url)
    }

    func updateUIViewController(_ uiViewController: SFSafariViewController, context: Context) {
        // No update logic needed for a simple web view
    }
}
This works, but Safari shows up behind the view that presents it. Instead, I would want Safari to show up in front, or even better, next to my main view (either left or right). Is there a way to do this?
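The only direction I've found so far is to give the browser its own window and open it explicitly. A minimal sketch, assuming a dedicated WindowGroup (the "browser" id and the app structure are made up):

import SwiftUI

@main
struct MyApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
        // A separate window scene that hosts the SafariView from above.
        WindowGroup(id: "browser") {
            SafariView(url: URL(string: "https://example.com")!)
        }
    }
}

struct ContentView: View {
    @Environment(\.openWindow) private var openWindow

    var body: some View {
        Button("Open browser") {
            openWindow(id: "browser") // opens as its own, separate window
        }
    }
}

As far as I know, though, an app cannot control where the new window is placed relative to the existing one, which is exactly the part I'm missing.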
I am trying to build a visionOS app that uses a map as a central user interface.
This works fine at high zoom levels when there are only a couple of markers present. But as soon as I zoom out and the number of markers reaches the hundreds or even thousands, performance gets extremely bad: it takes seconds for the map to render, and panning is laggy as well. What makes things worse is that the SwiftUI map does not support clustering yet.
Has anyone found a solution to this?
I found this example by Apple about how to implement clustering:
https://developer.apple.com/documentation/mapkit/mkannotationview/decluttering_a_map_with_mapkit_annotation_clustering
It works, but it's using UIKit and storyboards, and I could not translate it into SwiftUI-compatible code.
I also found this blog post that created a neat SwiftUI integration for a clusterable map:
https://www.linkedin.com/pulse/map-clustering-swiftui-dmitry-%D0%B2%D0%B5l%D0%BEv-j3x7f/
However, I wasn't able to adapt it so that the map updates itself in a reactive way. I want to retrieve new data from our server whenever the user changes the visible region of the map by panning or zooming. I have no clue how to transfer my .onChange(of:) and .onMapCameraChange() modifiers to the UIKit world.
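To make the question concrete, this is roughly the shape I imagine: a UIViewRepresentable that forwards region changes back to SwiftUI as a replacement for .onMapCameraChange(). The onRegionChange closure is my own invention, and all clustering setup is omitted:

import MapKit
import SwiftUI

// Sketch: an MKMapView wrapper that reports pan/zoom changes to SwiftUI.
struct ClusteredMapView: UIViewRepresentable {
    var onRegionChange: (MKCoordinateRegion) -> Void

    func makeUIView(context: Context) -> MKMapView {
        let mapView = MKMapView()
        mapView.delegate = context.coordinator
        return mapView
    }

    func updateUIView(_ uiView: MKMapView, context: Context) {}

    func makeCoordinator() -> Coordinator {
        Coordinator(self)
    }

    class Coordinator: NSObject, MKMapViewDelegate {
        var parent: ClusteredMapView

        init(_ parent: ClusteredMapView) {
            self.parent = parent
        }

        // Called whenever the user pans or zooms; the SwiftUI side could
        // fetch new markers for the visible region from here.
        func mapView(_ mapView: MKMapView, regionDidChangeAnimated animated: Bool) {
            parent.onRegionChange(mapView.region)
        }
    }
}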
I noticed a weird tab view display bug on visionOS when the tab labels are changed at runtime, e.g. to switch from one locale to another.
If the new longest tab label is shorter than the previous longest one, the tab ornament's width shrinks too far, and the texts and icons can become barely visible, even when the tab labels themselves are not being displayed.
If the longest tab label gets longer, however, additional padding is added.
It seems as if the width calculation for the tabs does not take dynamic changes into account.
Is there a workaround for this behavior?
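The only workaround I can think of is an untested sketch, with made-up names: changing the TabView's identity when the locale changes, so SwiftUI discards and rebuilds it and the ornament is measured again from scratch:

import SwiftUI

struct RootTabs: View {
    // Assumed to change when the user switches the app language.
    @AppStorage("appLocale") private var appLocale = "en"

    var body: some View {
        TabView {
            Text("First").tabItem { Label(title(for: "first"), systemImage: "1.circle") }
            Text("Second").tabItem { Label(title(for: "second"), systemImage: "2.circle") }
        }
        // Changing the id forces SwiftUI to recreate the whole TabView,
        // so the ornament width is recalculated for the new labels.
        .id(appLocale)
    }

    // Placeholder for our own runtime localization lookup.
    private func title(for key: String) -> String { key }
}

The obvious downside is that rebuilding the TabView also resets any internal state it holds.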
Our app needs to scan QR codes (or a similar mechanism) to populate it with content the user wants to see.
Is there any update on QR code scanning availability on this platform? I asked this before, but never got any feedback.
I know that there is no way to access the camera (which is an issue in itself), but at least the system could provide an API to scan codes.
(It would also be cool if we were able to use the same codes Vision Pro uses for detecting the Zeiss glasses, as long as we could create these via server-side JavaScript code.)
We are building an app that uses ARKit occasionally, but not always.
We would like to test the non-ARKit parts in the simulator, since it offers more debugging features (e.g. SwiftUI previews or the Thread Sanitizer).
However, we can't even build the app for the simulator, since the simulator SDK does not know about certain classes (e.g. AnchorEntity). This also means that none of the SwiftUI previews work, even for views that are not using ARKit.
What is the best approach to test such an app in the simulator, without using any ARKit features?
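One approach we are considering is to compile the ARKit-dependent code out of simulator builds behind a wrapper type. A minimal sketch (SessionManager is a made-up name):

#if !targetEnvironment(simulator)
import ARKit
import RealityKit
#endif

// Hypothetical wrapper: the rest of the app only talks to this type, so only
// this file needs to know whether ARKit is available at all.
final class SessionManager {
    #if !targetEnvironment(simulator)
    func startTracking() {
        // Real ARKit/RealityKit code (AnchorEntity etc.) lives here and is
        // never compiled for the simulator.
    }
    #else
    func startTracking() {
        // No-op stub so simulator builds and SwiftUI previews still compile.
    }
    #endif
}

But maybe there is a cleaner, supported way to do this.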
I just recently saw a message in the Unity forums, by a Unity staff member, saying that Apple requires an Apple Silicon based Mac (M1, M2) in order to build apps for the Vision Pro. This confused me, since the simulator works just fine on my Intel Mac. Is there any official statement from Apple on this? It would be weird to have to buy a new Mac just because of this.
Our iOS app relies heavily on the ability to place objects at arbitrary locations, and we would like to know if this is possible on visionOS as well.
It should work like this: the user faces in a certain direction. We place an object approx. 5 m in front of the user. The object then gets pinned to this position (in the air) and won't move any more. It should not be anchored to a real-world item like a wall, the floor, or a desk.
Placing the object should even work if the user looks down while placing it. The object should then appear 5 m in front of them once they look up.
On iOS, we implemented this using Unity and AR Foundation. For visionOS, we haven't decided yet whether to go native instead. So if this is only possible using native code, that's fine too.
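On the native side, the closest thing I'm aware of is a world-space anchor in RealityKit. A minimal sketch, assuming an ImmersiveSpace and made-up coordinates (1.5 m up, 5 m forward from the world origin, not from the current head pose):

import RealityKit
import SwiftUI

struct PinnedObjectView: View {
    var body: some View {
        RealityView { content in
            // Pin an entity at fixed world coordinates; it is not attached
            // to any detected surface and will not move afterwards.
            let anchor = AnchorEntity(world: [0, 1.5, -5])
            anchor.addChild(ModelEntity(mesh: .generateSphere(radius: 0.2)))
            content.add(anchor)
        }
    }
}

Placing it relative to the user's current head pose would additionally require querying the device transform, which I've left out here.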