Hi, when I run an app in the visionOS simulator, I get tons of "garbage" messages in the Xcode logs. Please find some samples below. These messages make it hard to spot the logs that are actually relevant. Is there any way to get rid of them?
```
[0x109015000] Decoding completed without errors
[0x1028c0000] Decoding: C0 0x01000100 0x00003048 0x22111100 0x00000000 11496
[0x1028c0000] Options: 1x-1 [FFFFFFFF,FFFFFFFF] 00054060
[0x1021f3200] Releasing session
[0x1031dfe00] Options: 1x-1 [FFFFFFFF,FFFFFFFF] 00054060
[0x1058eae00] Releasing session
[0x10609c200] Decoding: C0 0x01000100 0x00003048 0x22111100 0x00000000 10901
[0x1058bde00] Decoding: C0 0x01000100 0x0000304A 0x22111100 0x00000000 20910
[0x1028d5200] Releasing session
[0x1060b3600] Releasing session
[0x10881f400] Decoding completed without errors
[0x1058e2e00] Decoding: C0 0x01000100 0x0000304A 0x22111100 0x00000000 9124
[0x1028d1e00] Decoding: C0 0x01000100 0x0000304A 0x22111100 0x00000000 20778
[0x1031dfe00] Decoding completed without errors
[0x1031fe000] Decoding completed without errors
[0x1058e2e00] Options: 256x256 [FFFFFFFF,FFFFFFFF] 00025060
```
I am trying to create a Map with markers that can be tapped. It should also be possible to create markers by tapping a location on the map.
Adding a tap gesture to the map works. However, if I place an image as an annotation (marker) and add a tap gesture to it, that tap is not recognized. Instead, the tap gesture of the underlying map fires.
How can I
a) react to annotation/marker taps, and
b) prevent the underlying map from receiving the tap as well (i.e., how do I stop the event from bubbling through)?
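For reference, this is roughly the direction I'm experimenting with: a minimal sketch assuming the iOS 17-style MapKit-for-SwiftUI API, where the marker uses a high-priority gesture so that (in my understanding) the tap should not fall through to the map, and map taps go through a MapReader to get a coordinate. The Pin type and all names are just illustrative.

```swift
import SwiftUI
import MapKit
import CoreLocation

// Illustrative model for the tappable markers.
struct Pin: Identifiable {
    let id = UUID()
    let coordinate: CLLocationCoordinate2D
}

struct TappableMap: View {
    @State private var pins: [Pin] = []

    var body: some View {
        MapReader { proxy in
            Map {
                ForEach(pins) { pin in
                    Annotation("Marker", coordinate: pin.coordinate) {
                        Image(systemName: "mappin.circle.fill")
                            .font(.largeTitle)
                            // A plain .onTapGesture here seems to lose out to the map;
                            // a high-priority gesture should let the marker claim the tap
                            // so it does not also reach the map below.
                            .highPriorityGesture(TapGesture().onEnded {
                                print("Tapped marker at \(pin.coordinate)")
                            })
                    }
                }
            }
            // Taps that no marker claims create a new marker at that location.
            .onTapGesture { screenPoint in
                if let coordinate = proxy.convert(screenPoint, from: .local) {
                    pins.append(Pin(coordinate: coordinate))
                }
            }
        }
    }
}
```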
Just walked through the order process only to realize that I can't get them because my glasses have a prism. Come on, Apple, are you kidding???
I love the new SwiftUI APIs for Apple Maps. However, I am missing (or haven't found) quite a number of features, particularly on visionOS.
Besides an easy way to zoom maps, the most important feature for me is marker clustering. If you have a lot of markers on a map, this is an absolute must.
Is there any way to accomplish this?
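The only workaround I can think of is dropping back to MKMapView via UIViewRepresentable, which clusters markers automatically when their annotation views share a clusteringIdentifier. A rough sketch (I haven't verified how well MKMapView behaves on visionOS):

```swift
import SwiftUI
import MapKit

struct ClusteredMapView: UIViewRepresentable {
    let annotations: [MKPointAnnotation]

    func makeUIView(context: Context) -> MKMapView {
        let mapView = MKMapView()
        mapView.delegate = context.coordinator
        mapView.register(MKMarkerAnnotationView.self,
                         forAnnotationViewWithReuseIdentifier: MKMapViewDefaultAnnotationViewReuseIdentifier)
        return mapView
    }

    func updateUIView(_ mapView: MKMapView, context: Context) {
        mapView.removeAnnotations(mapView.annotations)
        mapView.addAnnotations(annotations)
    }

    func makeCoordinator() -> Coordinator { Coordinator() }

    final class Coordinator: NSObject, MKMapViewDelegate {
        func mapView(_ mapView: MKMapView, viewFor annotation: MKAnnotation) -> MKAnnotationView? {
            // Let MapKit supply its default view for the cluster bubbles themselves.
            guard !(annotation is MKClusterAnnotation) else { return nil }
            let view = mapView.dequeueReusableAnnotationView(
                withIdentifier: MKMapViewDefaultAnnotationViewReuseIdentifier,
                for: annotation) as? MKMarkerAnnotationView
            // Markers that share a clusteringIdentifier are grouped automatically.
            view?.clusteringIdentifier = "poi"
            return view
        }
    }
}
```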
Is it possible to show a map (or a WKWebView) in a fully immersive AR or VR view, so that it surrounds the user like a panorama?
I'd like to let the user immerse themselves in one of my views by projecting its content onto the inner side of a sphere surrounding them. Think of a video player app that surrounds the user with video previews they can select, like a 3D version of the Netflix home screen. The view should be fully interactive, not just read-only.
Is this possible?
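The non-interactive half of this seems doable with a sphere whose inside shows an equirectangular texture; here's a rough sketch ("Panorama" is an assumed image in the app bundle). The interactive part is exactly what I'm unsure about:

```swift
import RealityKit

// Build a sphere that is viewed from the inside, like a panorama backdrop.
func makePanoramaSphere() throws -> ModelEntity {
    let texture = try TextureResource.load(named: "Panorama")
    var material = UnlitMaterial()
    material.color = .init(texture: .init(texture))

    let sphere = ModelEntity(mesh: .generateSphere(radius: 100), materials: [material])
    // Flip one axis so the triangles face inward and the texture is visible
    // from inside the sphere.
    sphere.scale = SIMD3<Float>(-1, 1, 1)
    return sphere
}
```

The entity would then be added to the RealityView content of a full ImmersiveSpace.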
I'd like to map a SwiftUI view (in my case: a map) onto a 3D curved plane in an immersive view, so the user can literally immerse themselves in the map. The user should also be able to interact with the map by panning it around and selecting markers.
Is this possible?
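The closest thing I've found is a RealityView attachment inside an ImmersiveSpace, which places a flat, interactive SwiftUI map in the space (sketch below; positions and sizes are arbitrary). Curving the view itself is the part I have no API for:

```swift
import SwiftUI
import RealityKit
import MapKit

// Intended as the content view of an ImmersiveSpace.
struct ImmersiveMapView: View {
    var body: some View {
        RealityView { content, attachments in
            if let mapEntity = attachments.entity(for: "map") {
                mapEntity.position = [0, 1.4, -2]   // roughly eye height, 2 m in front
                content.add(mapEntity)
            }
        } attachments: {
            Attachment(id: "map") {
                Map()
                    .frame(width: 800, height: 500)
            }
        }
    }
}
```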
I noticed that the keyboard behaves pretty strangely in the visionOS simulator.
We tried to add a search bar to the top of our app (as an ornament), including a search field. As soon as the user starts typing, the keyboard disappears. This does not happen in Safari, so I'm wondering what's going wrong in our app.
On our login screen, if the user presses Tab on the keyboard to get to the next field, the keyboard opens and closes over and over, and I have to restart the simulator to be able to log in again. It only works if I click directly into the fields.
I am wondering whether we're doing something wrong here, or whether this is just a simulator bug that will be gone on a real device.
We are porting an iOS Unity AR app to native visionOS.
Ideally, we want to reuse our AR models in both applications. These AR models are rather simple, but converting them manually would still be time-consuming, especially when it comes to the shaders.
Is anyone aware of any attempts to write conversion tools for this? Maybe in other ecosystems like Godot or Unreal, where folks also want to convert the proprietary Unity format to something else?
I've seen there's an FBX converter, but it doesn't handle shaders or particles.
I am basically looking for something like the PolySpatial-internal conversion tools, but without dragging in the rest of Unity. Alternatively, is there a way to export a Unity project to visionOS and then just take the models out of the Xcode project?
I have an eye condition where my left eye does not really look straight ahead. I guess this is what makes my Vision Pro think that I am looking in a different direction (if I try typing on the keyboard, I often miss a key).
So I am wondering if there is a way to set it up to use only one eye as a reference? I am using only one eye anyway, because I do not have stereo vision either.
I am trying to get image tracking working on visionOS, but the documentation is pretty poor. It does not show what the SwiftUI setup should look like, nor how the reference images can be provided.
For the latter question: I tried to just add a folder to my Assets and use this as the reference image group, but ImageTracker did not find it.
I've seen that ImageTrackingProvider allows setting the tracked images in its initializer. But how can I add images afterwards? Our application loads the images dynamically at runtime.
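For reference, this is the setup I would expect to work, assuming the reference images live in an AR Resource Group (not a plain folder) named "TrackedImages" in the asset catalog:

```swift
import SwiftUI
import RealityKit
import ARKit

struct ImageTrackingView: View {
    @State private var session = ARKitSession()
    @State private var provider = ImageTrackingProvider(
        referenceImages: ReferenceImage.loadReferenceImages(inGroupNamed: "TrackedImages")
    )

    var body: some View {
        RealityView { content in
            // Entities that visualize the tracked images would be added here.
        }
        .task {
            guard ImageTrackingProvider.isSupported else { return }
            do {
                try await session.run([provider])
                for await update in provider.anchorUpdates {
                    print("Image anchor:", update.anchor.referenceImage.name ?? "unnamed", update.event)
                }
            } catch {
                print("Image tracking failed:", error)
            }
        }
    }
}
```

As far as I can tell, the provider's image set is fixed at init, so loading images at runtime would seem to require stopping the session and running it again with a freshly created provider. If there is a better way, I'd love to hear it.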
Our app needs the current user's location. I was able to grant access, and the authorization status is 4 (= when in use). Despite that, retrieving the location fails almost every time. It returns the error:
The operation couldn’t be completed. (kCLErrorDomain error 1.)
It happens both in the simulator and on the real device. In the simulator, I can sometimes trick the location into being detected by forcing a debug location in Xcode, but this does not work on the real device.
What might be the root cause of this behavior?
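For context, the setup is the standard Core Location delegate flow, roughly like this sketch (names are illustrative). If I read the headers correctly, error 1 in kCLErrorDomain is kCLErrorDenied, which makes the failure even more confusing given the when-in-use status:

```swift
import CoreLocation

final class LocationProvider: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()

    override init() {
        super.init()
        manager.delegate = self
        manager.desiredAccuracy = kCLLocationAccuracyHundredMeters
    }

    func start() {
        manager.requestWhenInUseAuthorization()
    }

    func locationManagerDidChangeAuthorization(_ manager: CLLocationManager) {
        // 4 == .authorizedWhenInUse, matching the status described above.
        if manager.authorizationStatus == .authorizedWhenInUse {
            manager.requestLocation()
        }
    }

    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        print("Got location:", locations.last ?? "none")
    }

    func locationManager(_ manager: CLLocationManager, didFailWithError error: Error) {
        print("Location failed:", error)   // e.g. kCLErrorDomain error 1
    }
}
```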
We want to use QR codes to open and activate certain features in our app.
We don't want these QR codes to be hard-coded into the app (i.e. via image tracking). Instead, we want to use them the way you typically would with a smartphone camera: just detect them when the user looks at them.
And if they encode a certain URL pointing to our app, start the app via a URL handler and hand over the link, as is possible on iOS.
I tried this in normal Shared Space and also with the camera app. But neither recognized a QR code.
Is this feasible with the Vision Pro?
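For completeness, the app-side URL handling we have in mind is the usual onOpenURL route, sketched below with an illustrative "ourapp" scheme; the open question is whether visionOS ever scans the code and hands such a URL over in the first place.

```swift
import SwiftUI

@main
struct OurApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
                .onOpenURL { url in
                    // e.g. ourapp://feature/scanner-unlock
                    guard url.scheme == "ourapp" else { return }
                    print("Activate feature:", url.host ?? "", url.path)
                }
        }
    }
}

struct ContentView: View {
    var body: some View { Text("Hello") }
}
```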
By default, RealityKit scenes are way too dark. The only solution I have found so far is to add an image-based light. But it looks pretty weird if I use a single central IBL: the shadows are way too strong, and if I increase the intensity, the highlights get way too bright. If I attach a light to each object, it kind of works, but it feels strange to do so. I could not find any option to just set up an ambient light. Isn't there a way to do this, or am I just too dumb?
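For reference, the closest I've come to an ambient light is a single image-based light on the root entity with receiver components on everything below it, roughly like this sketch ("SoftStudio" is an assumed EnvironmentResource in the app bundle):

```swift
import RealityKit

// Attach one image-based light to the root and register every descendant as a receiver,
// instead of attaching a light to each object individually.
func applySharedImageBasedLight(to root: Entity) async {
    guard let environment = try? await EnvironmentResource(named: "SoftStudio") else { return }

    var ibl = ImageBasedLightComponent(source: .single(environment))
    ibl.intensityExponent = 0.25   // modest boost; larger values are what blew out the highlights
    root.components.set(ibl)

    func addReceiver(_ entity: Entity) {
        entity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: root))
        entity.children.forEach(addReceiver)
    }
    addReceiver(root)
}
```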