I am trying to get image tracking working on visionOS, but the documentation is pretty poor. It does not show what the SwiftUI setup should look like, nor how the reference images can be provided.
For the latter question: I tried simply adding a folder to my asset catalog and using it as the reference image group, but the ImageTrackingProvider did not find it.
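For reference, here is roughly the setup I'm trying, assuming the images live in an AR Resource Group named "ReferenceImages" in the asset catalog (the group name is a placeholder):

```swift
import ARKit

// Inside an async context, e.g. a .task attached to the RealityView.
let session = ARKitSession()
let imageTracking = ImageTrackingProvider(
    // Assumes an AR Resource Group named "ReferenceImages" in the asset catalog.
    referenceImages: ReferenceImage.loadReferenceImages(inGroupNamed: "ReferenceImages")
)

do {
    try await session.run([imageTracking])
    for await update in imageTracking.anchorUpdates {
        print("Image anchor \(update.anchor.id), event: \(update.event)")
    }
} catch {
    print("Failed to start image tracking: \(error)")
}
```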
I've seen that the ImageTrackingProvider lets you set the tracked images in its initializer. But how can I add images afterwards? We have an application that loads the images dynamically at runtime.
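For context, the only workaround I can think of so far is stopping the session and running a fresh provider with the merged image set. I'm not sure this is actually supported, so treat the sketch below as a guess:

```swift
import ARKit

// Guess at a workaround: replace the provider whenever new images arrive at runtime.
func restartTracking(session: ARKitSession, with images: [ReferenceImage]) async throws {
    session.stop()  // stops all running data providers
    let provider = ImageTrackingProvider(referenceImages: images)
    try await session.run([provider])  // assumes the session can be run again after stop()
}
```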
Our app needs the current user's location. I was able to grant access, and the authorization status is 4 (= when in use). Despite that, retrieving the location fails almost every time. It returns the error:
The operation couldn’t be completed. (kCLErrorDomain error 1.)
It happens both in the simulator and on a real device. In the simulator, I can sometimes trick it into finding a location by forcing a debug location in Xcode, but this does not work on the real device.
What might be the root cause of this behavior?
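For reference, this is roughly how we request the location (simplified; the class name is ours):

```swift
import CoreLocation

final class LocationFetcher: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()

    override init() {
        super.init()
        manager.delegate = self
        manager.desiredAccuracy = kCLLocationAccuracyHundredMeters
    }

    func start() {
        manager.requestWhenInUseAuthorization()
        manager.requestLocation()  // one-shot request; this is where the error comes back
    }

    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        print("Got location: \(locations.last?.coordinate as Any)")
    }

    func locationManager(_ manager: CLLocationManager, didFailWithError error: Error) {
        // This fires with "kCLErrorDomain error 1." almost every time.
        print("Location failed: \(error)")
    }
}
```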
We want to use QR codes to open and activate certain features in our app.
We don't want these QR codes to be hard-coded into our app (i.e. via image tracking). Instead we want to use them like you typically would with your smartphone camera: just detect them whenever the user looks at them.
And if a code encodes a URL pointing to our app, the app should be launched via a URL handler and receive the link, as is possible on iOS (a rough sketch of that part is at the end of this post).
I tried this in the normal Shared Space and also with the camera app, but neither recognized a QR code.
Is this feasible with the Vision Pro?
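To illustrate the URL-handler part mentioned above, this is the kind of SwiftUI entry point we'd expect to receive the link with, assuming a custom scheme is registered (app name and scheme are placeholders):

```swift
import SwiftUI

@main
struct FeatureApp: App {  // placeholder app name
    var body: some Scene {
        WindowGroup {
            Text("Main content")
                .onOpenURL { url in
                    // Called when the system opens the app via a URL,
                    // e.g. "myapp://feature/xyz" encoded in the QR code.
                    print("Activate feature for: \(url)")
                }
        }
    }
}
```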
By default, RealityKit scenes are way too dark. The only solution I have found so far is to add an image based light. But it looks pretty odd if I use a single, central IBL: the shadows are far too strong, and if I increase the intensity, the highlights get far too bright. If I attach a light to each object it kind of works, but feels strange to do. I could not find any option to simply set up an ambient light. Isn't there a way to do this, or am I just too dumb?
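For reference, this is roughly the per-object image-based-light setup I ended up with (the resource name and intensity are just what I'm using, not recommendations):

```swift
import RealityKit

// Attach an image based light plus a receiver to a single entity.
func applyImageBasedLight(to entity: Entity) async {
    // "StudioLight" is a placeholder image in my RealityKitContent bundle.
    guard let environment = try? await EnvironmentResource(named: "StudioLight") else { return }
    entity.components.set(ImageBasedLightComponent(source: .single(environment), intensityExponent: 0.5))
    entity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: entity))
}
```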
I just tried out the app "Blue Moon" (a solitaire game) from the App Store. They managed to add a secondary SwiftUI tutorial view that sits to the left of the main window and is rotated towards the user. How can this be achieved? I tried ornaments, but couldn't find a tilt or rotation option.
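For reference, my ornament attempt looked roughly like this; it places the view next to the window, but I found no way to tilt it towards the user:

```swift
import SwiftUI

struct MainView: View {
    var body: some View {
        Text("Main content")
            .ornament(attachmentAnchor: .scene(.leading)) {
                // Tutorial panel to the left of the window. I could not find
                // a modifier that rotates an ornament towards the user.
                Text("Tutorial")
                    .padding()
                    .glassBackgroundEffect()
            }
    }
}
```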
I would like to create an immersive panorama, similar to the system environments, where the user can look around 360°, yet interactive, i.e. the user should be able to interact with entities placed on the panorama.
My current approach is to create a sphere around the user and invert its normals, so the texture faces inwards, towards the user (see the sketch at the end of this post). This works, but open SwiftUI windows show pretty odd behaviors, as described here:
https://developer.apple.com/forums/thread/749956
Windows no longer show their handles, and the glass effects do not pick up my sphere but instead show the world "outside" of it. This is not the case with Apple's environments.
Is there a better way to create a fully immersive sphere around the user?
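For completeness, this is roughly the sphere setup mentioned above (the texture name is a placeholder):

```swift
import RealityKit

func makePanoramaSphere() throws -> ModelEntity {
    // Large sphere surrounding the user; the 360° image is applied as an unlit texture.
    let texture = try TextureResource.load(named: "Panorama360")
    var material = UnlitMaterial()
    material.color = .init(texture: .init(texture))
    let sphere = ModelEntity(mesh: .generateSphere(radius: 1000), materials: [material])
    // Negative x scale flips the geometry inside out, so the texture faces the user.
    sphere.scale = SIMD3<Float>(-1, 1, 1)
    return sphere
}
```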
Is there a way to increase the font size of the user interface of Reality Composer Pro? My eyes are not the best and it's pretty hard to read these tiny fonts, especially in the property inspector.
We want to overlay a SwiftUI attachment on a RealityView, like it is done in the Diorama sample. By default, attachments seem to be centered on their position. For our use case, however, we need a different anchor point, so that the attachment is always aligned to one of the corners of the attachment view, e.g. its lower-left corner should sit at the attachment's position. Is this possible?
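In case there is no built-in option, one workaround I'm considering is to reposition the attachment entity by half of its visual bounds, so the chosen corner ends up at the target point. This is only a sketch of that idea; the id, target position, and view content are made up:

```swift
import SwiftUI
import RealityKit

struct AnnotatedView: View {
    // Hypothetical world position the attachment's lower-left corner should stick to.
    let targetPosition = SIMD3<Float>(0, 1, -1)

    var body: some View {
        RealityView { content, attachments in
            if let panel = attachments.entity(for: "infoPanel") {
                // Attachments are centered on their position by default, so shift
                // the entity by half its extents to align the lower-left corner.
                let extents = panel.visualBounds(relativeTo: nil).extents
                panel.position = targetPosition + SIMD3<Float>(extents.x / 2, extents.y / 2, 0)
                content.add(panel)
            }
        } attachments: {
            Attachment(id: "infoPanel") {
                Text("Info")
                    .padding()
                    .glassBackgroundEffect()
            }
        }
    }
}
```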
I created a native visionOS app which I am now trying to convert into a multi-platform app, so iOS is supported as well.
I also have Swift packages that differ per platform, to handle platform-specific code.
My SwiftUI previews work fine as long as visionOS is the only target. But as soon as I add iOS 17 (with a minimum deployment target of 17.0), they stop working.
If I try to display them in the canvas, compilation fails with errors saying that my packages require iOS 17 but the device only supports iOS 12, a version I never defined anywhere. This even happens if I set the preview to visionOS.
If I run the same setup on a real device or a simulator, everything works just fine. Only the previews are affected by this.
How does the preview device decide which minimum deployment version to use, and how can I change this?
Update: This only happens if the app depends on a Swift package that itself includes a RealityKitContent package as a sub-dependency. I configured this package to be included only in visionOS builds, and the packages themselves declare the platform as .visionOS(.v1). If I remove the package completely from "Frameworks, Libraries, and Embedded Content", the previews work again. Re-adding it brings back the odd behavior where the preview canvas thinks it is building for iOS 12.
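In case it matters, the platform declaration in the affected packages currently looks roughly like this. My working theory is that any platform not listed falls back to the oldest deployment target the tools support (which would explain the iOS 12 message), but I haven't confirmed that:

```swift
// swift-tools-version: 5.9
import PackageDescription

let package = Package(
    name: "MyFeaturePackage",  // placeholder name
    platforms: [
        .visionOS(.v1)  // iOS is not listed, so previews may assume a very old iOS target
        // .iOS(.v17)   // adding this is what I'm testing next
    ],
    targets: [
        .target(name: "MyFeaturePackage")
    ]
)
```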