I've created an app for visionOS that uses a custom package that includes RealityKitContent as well (as a sub-package). I now want to turn this app into a multi-platform app that also supports iOS.
When I try to compile the app for this platform, I get this error message:
Building for 'iphoneos', but realitytool only supports [xros, xrsimulator]
Thus, I want to exclude the RealityKitContent from my package for iOS, but I don't really know how. The Apple docs are pretty complicated, and ChatGPT only gave me solutions that did not work at all.
I also tried to post this on the Swift forum, but no one could help me there either, so I am trying my luck here.
Here is my Package.swift file:
```swift
// swift-tools-version: 5.10
import PackageDescription

let package = Package(
    name: "Overlays",
    platforms: [
        .iOS(.v17), .visionOS(.v1)
    ],
    products: [
        .library(
            name: "Overlays",
            targets: ["Overlays"]),
    ],
    dependencies: [
        .package(
            path: "../BackendServices"
        ),
        .package(
            path: "../MeteorDDP"
        ),
        .package(
            path: "Packages/OverlaysRealityKitContent"
        ),
    ],
    targets: [
        .target(
            name: "Overlays",
            dependencies: ["BackendServices", "MeteorDDP", "OverlaysRealityKitContent"]
        ),
        .testTarget(
            name: "OverlaysTests",
            dependencies: ["Overlays"]),
    ]
)
```
Based on a recommendation in the Swift forum, I also tried this:
```swift
dependencies: [
    ...
    .package(
        name: "OverlaysRealityKitContent",
        path: "Packages/OverlaysRealityKitContent"
    ),
],
targets: [
    .target(
        name: "Overlays",
        dependencies: [
            "BackendServices", "MeteorDDP",
            .product(name: "OverlaysRealityKitContent", package: "OverlaysRealityKitContent", condition: .when(platforms: [.visionOS]))
        ]
    ),
    ...
]
```
but this doesn't work either.
The problem seems to be that the package is listed under dependencies, which makes realitytool kick in. Is there a way to avoid this? I definitely need the RealityKitContent package to be part of the Overlays package, since the latter depends on the content (on visionOS). And I would not want to split the package into two parts (one for iOS and one for visionOS), if possible.
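For what it's worth, here is a minimal sketch of how a platform-conditional product dependency like the one above is usually paired with a guarded import in the consuming target. It assumes the sub-package exposes a bundle constant such as overlaysRealityKitContentBundle (the constant and the scene name "Scene" are illustrative, not confirmed names):

```swift
// Sources/Overlays/SceneLoading.swift
import RealityKit

// Hypothetical sketch: only import the module where the conditional dependency applies.
#if canImport(OverlaysRealityKitContent)
import OverlaysRealityKitContent
#endif

public func loadOverlayScene() async throws -> Entity? {
    #if canImport(OverlaysRealityKitContent)
    // visionOS: load the scene from the Reality Composer Pro bundle
    // ("Scene" and the bundle constant are illustrative names).
    return try await Entity(named: "Scene", in: overlaysRealityKitContentBundle)
    #else
    // iOS: no RealityKitContent available; fall back or return nothing.
    return nil
    #endif
}
```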
I created an app for visionOS, using Reality Composer Pro. Now I want to turn this app into a multi-platform app for iOS as well.
RCP files are not supported on iOS, however. So I tried to use the "old" Reality Composer instead, but that doesn't seem to work either: Xcode 15 does not include it anymore, and I read online that files created with Xcode 14's Reality Composer cannot be included in Xcode 15 projects. Also, Xcode 14 does not run on my M3 Mac with Sonoma.
That's a bummer. What is the recommended way to include 3D content in apps that support visionOS AND iOS?!
(I also read that a solution might be using USDZ for both. But what would that workflow look like? Are there samples out there that support both platforms? Please note that I want to set up the anchors myself, using code. I just need the composing tool to create the 3D content that will be placed on these anchors.)
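For reference, a minimal sketch of the USDZ route, assuming the asset is exported as MyModel.usdz and bundled with the app (the asset name and anchor target are illustrative):

```swift
import RealityKit

// Hypothetical sketch: load a bundled USDZ and hang it off an anchor created in code.
// The same code compiles for iOS and visionOS; only where you add the returned anchor
// differs (ARView.scene on iOS vs. RealityView content on visionOS).
func makeAnchoredModel() async throws -> AnchorEntity {
    let model = try await Entity(named: "MyModel") // loads MyModel.usdz from the main bundle
    let anchor = AnchorEntity(.plane(.horizontal, classification: .any, minimumBounds: [0.2, 0.2]))
    anchor.addChild(model)
    return anchor
}
```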
I'm implementing an AR app with Image Tracking capabilities. I noticed that it takes a very long time for the entities I want to overlay on a detected image to show up in the video feed.
When debugging using debugOptions.insert(.showAnchorOrigins), I realized that the image is actually detected very quickly; the anchor origins show up almost immediately. I can also see that my code reacts by adding new anchors for my ModelEntities there.
However, it takes ages for these ModelEntities to actually show up. Only if I move the camera a lot do they appear after a while.
What might be the reason for this behaviour?
I also noticed that for the first image target, a huge number of anchors is created. They start from the image and go all the way up towards the user. This does not happen with subsequent (other) image targets.
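For context, a minimal sketch of the kind of setup described above, assuming an iOS ARView whose session delegate reacts to newly added ARImageAnchors (the placeholder plane and type names are illustrative):

```swift
import ARKit
import RealityKit
import UIKit

// Hypothetical sketch: attach a placeholder model to every detected image anchor.
final class ImageOverlayCoordinator: NSObject, ARSessionDelegate {
    weak var arView: ARView?

    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for case let imageAnchor as ARImageAnchor in anchors {
            // Anchor a simple plane to the detected image; in the real app this would be the overlay entity.
            let anchorEntity = AnchorEntity(anchor: imageAnchor)
            let overlay = ModelEntity(
                mesh: .generatePlane(width: 0.1, depth: 0.1),
                materials: [SimpleMaterial(color: .red, isMetallic: false)]
            )
            anchorEntity.addChild(overlay)
            arView?.scene.addAnchor(anchorEntity)
        }
    }
}
```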
We are developing apps for visionOS and need the following capabilities for a consumer app:
access to the main camera, to let users shoot photos and videos
reading QR codes, to trigger the download of additional content
So I was really happy when I noticed that visionOS 2.0 has these features.
However, I was shocked when I also realized that these capabilities are restricted to enterprise customers only:
https://developer.apple.com/videos/play/wwdc2024/10139/
I think that Apple is shooting itself in the foot with these restrictions. I can understand that privacy is important, but these limitations drastically restrict the potential use cases for this platform, even in the consumer space.
IMHO Apple should decide if they want to target consumers in the first place, or if they want to go the HoloLens / Magic Leap route and mainly satisfy enterprise customers and their respective devs. With the current setup, Apple risks pushing devs away to other platforms where they have more freedom to create great apps.
I noticed that tracking moving images is super slow on visionOS.
Although the anchor update closure is called multiple times per second, the anchor's transform seems to be updated only once in a while. Another issue might be that SwiftUI isn't updating often enough.
On iOS, image tracking is pretty smooth.
Is there a way to speed this up somehow on visionOS, too?
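For context, a minimal sketch of the update loop in question, assuming the visionOS ARKit API (ARKitSession + ImageTrackingProvider) and a reference image group named "AR Resources"; writing the transform directly to the entity keeps it out of the SwiftUI update cycle (the group name and markerEntity are illustrative):

```swift
import ARKit
import RealityKit

// Hypothetical sketch: drive an entity directly from image-anchor updates.
func trackImage(with markerEntity: Entity) async throws {
    let session = ARKitSession()
    let provider = ImageTrackingProvider(
        referenceImages: ReferenceImage.loadReferenceImages(inGroupNamed: "AR Resources")
    )
    try await session.run([provider])

    for await update in provider.anchorUpdates {
        guard update.anchor.isTracked else { continue }
        // Apply the pose straight to the entity instead of routing it through SwiftUI state.
        markerEntity.setTransformMatrix(update.anchor.originFromAnchorTransform, relativeTo: nil)
    }
}
```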
Hi, if I run an app on the visionOS simulator, I get tons of "garbage" messages in the Xcode logs. Please find some samples below. Because of these messages, I can hardly see the really relevant logs. Is there any way to get rid of these?
```
[0x109015000] Decoding completed without errors
[0x1028c0000] Decoding: C0 0x01000100 0x00003048 0x22111100 0x00000000 11496
[0x1028c0000] Options: 1x-1 [FFFFFFFF,FFFFFFFF] 00054060
[0x1021f3200] Releasing session
[0x1031dfe00] Options: 1x-1 [FFFFFFFF,FFFFFFFF] 00054060
[0x1058eae00] Releasing session
[0x10609c200] Decoding: C0 0x01000100 0x00003048 0x22111100 0x00000000 10901
[0x1058bde00] Decoding: C0 0x01000100 0x0000304A 0x22111100 0x00000000 20910
[0x1028d5200] Releasing session
[0x1060b3600] Releasing session
[0x10881f400] Decoding completed without errors
[0x1058e2e00] Decoding: C0 0x01000100 0x0000304A 0x22111100 0x00000000 9124
[0x1028d1e00] Decoding: C0 0x01000100 0x0000304A 0x22111100 0x00000000 20778
[0x1031dfe00] Decoding completed without errors
[0x1031fe000] Decoding completed without errors
[0x1058e2e00] Options: 256x256 [FFFFFFFF,FFFFFFFF] 00025060
```
I am trying to create a Map with markers that can be tapped on. It should also be possible to create markers by tapping a location on the map.
Adding a tap gesture to the map works. However, if I place an image as an annotation (marker) and add a tap gesture to it, this tap is not recognized. Instead, the tap gesture of the underlying map fires.
How can I
a) react to annotation / marker taps
b) prevent the underlying map from receiving the tap as well (i.e. how can I prevent event bubbling)?
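For reference, a minimal sketch of the pattern involved, assuming the iOS 17 SwiftUI MapKit API; whether the annotation's gesture actually wins over the map's may still need tuning, but highPriorityGesture is the usual lever. The Place type and sample marker are illustrative:

```swift
import SwiftUI
import MapKit

// Hypothetical sketch: tappable annotations plus tap-to-add on the map itself.
struct Place: Identifiable {
    let id = UUID()
    var coordinate: CLLocationCoordinate2D
}

struct PlacesMap: View {
    @State private var places: [Place] = []

    var body: some View {
        MapReader { proxy in
            Map {
                ForEach(places) { place in
                    Annotation("Marker", coordinate: place.coordinate) {
                        Image(systemName: "mappin.circle.fill")
                            .font(.title)
                            // Give the marker's gesture priority over the map's tap handler.
                            .highPriorityGesture(TapGesture().onEnded {
                                print("Tapped marker \(place.id)")
                            })
                    }
                }
            }
            .onTapGesture { screenPoint in
                // Convert the tap location to a map coordinate and add a marker there.
                if let coordinate = proxy.convert(screenPoint, from: .local) {
                    places.append(Place(coordinate: coordinate))
                }
            }
        }
    }
}
```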
Just walked through the order process, only to realize that I can't get them because my glasses have a prism. Come on, Apple, are you kidding???
I love the new SwiftUI APIs for Apple Maps. However, I am missing (or haven't found) quite a number of features, particularly on visionOS.
Besides an easy way to zoom maps, the most important feature for me is marker clustering. If you have a lot of markers on a map, this is an absolute must.
Is there any way to accomplish this?
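As far as I know, SwiftUI's Map does not expose clustering, so the usual workaround is to drop down to MKMapView, which clusters annotations that share a clusteringIdentifier. A minimal sketch of that swapped-in approach, assuming MKMapView and UIViewRepresentable are available on your target platform:

```swift
import SwiftUI
import MapKit

// Hypothetical sketch: wrap MKMapView to get marker clustering.
struct ClusteredMapView: UIViewRepresentable {
    let annotations: [MKPointAnnotation]

    func makeUIView(context: Context) -> MKMapView {
        let mapView = MKMapView()
        mapView.delegate = context.coordinator
        mapView.register(MKMarkerAnnotationView.self,
                         forAnnotationViewWithReuseIdentifier: MKMapViewDefaultAnnotationViewReuseIdentifier)
        mapView.addAnnotations(annotations)
        return mapView
    }

    func updateUIView(_ mapView: MKMapView, context: Context) {}

    func makeCoordinator() -> Coordinator { Coordinator() }

    final class Coordinator: NSObject, MKMapViewDelegate {
        func mapView(_ mapView: MKMapView, viewFor annotation: MKAnnotation) -> MKAnnotationView? {
            // Let MapKit provide the default view for cluster annotations.
            guard !(annotation is MKClusterAnnotation) else { return nil }
            let view = mapView.dequeueReusableAnnotationView(
                withIdentifier: MKMapViewDefaultAnnotationViewReuseIdentifier, for: annotation)
            // Annotations sharing the same clusteringIdentifier are grouped automatically.
            (view as? MKMarkerAnnotationView)?.clusteringIdentifier = "marker"
            return view
        }
    }
}
```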
Is it possible to show a map (or a WKWebView) in a fully immersive AR or VR view, so it surrounds the user like a panorama?
I'd like to let the user immerse themselves in one of my views by projecting its content onto the inner side of a sphere surrounding the user. Think of a video player app that surrounds the user with video previews they can select, like a 3D version of the Netflix home screen. The view should be fully interactive, not just a read-only view.
Is this possible?
I'd like to map a SwiftUI view (in my case: a map) onto a curved 3D plane in an immersive view, so the user can literally immerse themselves in the map. The user should also be able to interact with the map, by panning it around and selecting markers.
Is this possible?
I noticed that the keyboard behaves pretty strangely in the visionOS simulator.
We tried to add a search bar to the top of our app (ornament), including a search field. As soon as the user starts typing, the keyboard disappears. This is not happening in Safari, so I am wondering what goes wrong in our app.
On our login screen, if the user presses Tab on the keyboard to get to the next field, the keyboard opens and closes again and again, so I have to restart the simulator to be able to log in again. Only if I click into the fields directly does it work fine.
I am wondering if we're doing something wrong here, or if this is just a bug in the simulator and will be gone on a real device?
We are porting an iOS Unity AR app to native visionOS.
Ideally, we want to re-use our AR models in both applications. These AR models are rather simple. But still, converting them manually would be time-consuming, especially when it comes to the shaders.
Is anyone aware of any attempts to write conversion tools for this? Maybe in other ecosystems like Godot or Unreal, where folks also want to convert the proprietary Unity format to something else?
I've seen there's an FBX converter, but this would not take care of shaders or particles.
I am basically looking for something like the PolySpatial-internal conversion tools, but without the heavy weight of all the rest of Unity. Alternatively, is there a way to export a Unity project to visionOS and then just take the models out of the Xcode project?
I have an eye condition where my left eye is not really looking straight ahead. I guess this is what makes my Vision Pro think that I am looking in a different direction (if I try typing on the keyboard, I often miss a key).
So I am wondering if there is a way to set it up to use only one eye as a reference? I am using only one eye anyway, because I do not have stereo vision either.