Post · Replies · Boosts · Views · Activity

How to exclude RealityKitContent from Swift package for iOS?
I've created an app for visionOS that uses a custom package which includes RealityKitContent as a sub-package. I now want to turn this app into a multi-platform app that also supports iOS. When I try to compile the app for this platform, I get this error message:

```
Building for 'iphoneos', but realitytool only supports [xros, xrsimulator]
```

Thus, I want to exclude the RealityKitContent from my package for iOS, but I don't really know how. The Apple docs are pretty complicated, and ChatGPT only gave me solutions that did not work at all. I also tried to post this on the Swift forum, but no one could help me there either, so I am trying my luck here.

Here is my Package.swift file:

```swift
// swift-tools-version: 5.10
import PackageDescription

let package = Package(
    name: "Overlays",
    platforms: [
        .iOS(.v17), .visionOS(.v1)
    ],
    products: [
        .library(
            name: "Overlays",
            targets: ["Overlays"]),
    ],
    dependencies: [
        .package(path: "../BackendServices"),
        .package(path: "../MeteorDDP"),
        .package(path: "Packages/OverlaysRealityKitContent"),
    ],
    targets: [
        .target(
            name: "Overlays",
            dependencies: ["BackendServices", "MeteorDDP", "OverlaysRealityKitContent"]
        ),
        .testTarget(
            name: "OverlaysTests",
            dependencies: ["Overlays"]),
    ]
)
```

Based on a recommendation in the Swift forum, I also tried this:

```swift
dependencies: [
    ...
    .package(
        name: "OverlaysRealityKitContent",
        path: "Packages/OverlaysRealityKitContent"
    ),
],
targets: [
    .target(
        name: "Overlays",
        dependencies: [
            "BackendServices", "MeteorDDP",
            .product(
                name: "OverlaysRealityKitContent",
                package: "OverlaysRealityKitContent",
                condition: .when(platforms: [.visionOS]))
        ]
    ),
    ...
]
```

but this won't work either. The problem seems to be that the package is listed under dependencies, which makes the realitytool kick in. Is there a way to avoid this? I definitely need the RealityKitContent package to be part of the Overlays package, since the latter depends on that content (on visionOS). And I would not want to split the package into two parts (one for iOS and one for visionOS), if possible.
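One direction that could be sketched here (untested, and the `OVERLAYS_INCLUDE_RKC` variable name is invented for this sketch): since a Package.swift manifest is ordinary Swift, the RealityKitContent dependency can be left out of the manifest entirely unless an environment variable opts it in at manifest-evaluation time, so realitytool never sees it on iOS builds:

```swift
// swift-tools-version: 5.10
import PackageDescription
import Foundation

// Hypothetical opt-in flag (name invented for this sketch): evaluate the
// manifest with OVERLAYS_INCLUDE_RKC=1 for visionOS builds only.
let includeRKC = ProcessInfo.processInfo.environment["OVERLAYS_INCLUDE_RKC"] == "1"

var packageDependencies: [Package.Dependency] = [
    .package(path: "../BackendServices"),
    .package(path: "../MeteorDDP"),
]
var overlayDependencies: [Target.Dependency] = ["BackendServices", "MeteorDDP"]

if includeRKC {
    // Only declared when opted in, so realitytool is never invoked on iOS.
    packageDependencies.append(.package(path: "Packages/OverlaysRealityKitContent"))
    overlayDependencies.append("OverlaysRealityKitContent")
}

let package = Package(
    name: "Overlays",
    platforms: [.iOS(.v17), .visionOS(.v1)],
    products: [
        .library(name: "Overlays", targets: ["Overlays"]),
    ],
    dependencies: packageDependencies,
    targets: [
        .target(name: "Overlays", dependencies: overlayDependencies),
        .testTarget(name: "OverlaysTests", dependencies: ["Overlays"]),
    ]
)
```

Two caveats: whether Xcode passes scheme environment variables through to manifest evaluation needs verifying (from a terminal, `OVERLAYS_INCLUDE_RKC=1 swift build` would), and any source that imports OverlaysRealityKitContent would need `#if canImport(OverlaysRealityKitContent)` guards so the iOS build still compiles.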
Replies: 1 · Boosts: 0 · Views: 1.1k · Jun ’24
Multi-platform app for visionOS and iOS: How to include 3D models for both?
I created an app for visionOS using Reality Composer Pro. Now I want to turn this app into a multi-platform app for iOS as well. RCP files are not supported on iOS, however. So I tried to use the "old" Reality Composer instead, but that doesn't seem to work either: Xcode 15 no longer includes it, and I read online that files created with Xcode 14's Reality Composer cannot be included in Xcode 15 projects. Also, Xcode 14 does not run on my M3 Mac with Sonoma. That's a bummer.

What is the recommended way to include 3D content in apps that support visionOS AND iOS?! (I also read that a solution might be using USDZ for both. But what would that workflow look like? Are there samples out there that support both platforms? Please note that I want to set up the anchors myself, using code. I just need the composing tool to create the 3D content that will be placed on these anchors.)
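One pattern that might fit the USDZ idea above (a sketch, not verified on both platforms; the asset name "robot" is an assumption): bundle plain USDZ files in the shared app target instead of an RCP package, and load them with RealityKit's `Entity(named:)`, which is available on both iOS and visionOS. Anchoring then stays in code, exactly as described:

```swift
import Foundation
#if canImport(RealityKit)
import RealityKit
#endif

// Name of the shared USDZ asset (assumed: "robot.usdz" in the app bundle).
func sharedModelName() -> String { "robot" }

#if canImport(RealityKit)
// Loads the same USDZ on iOS and visionOS. The returned entity can then
// be parented to an AnchorEntity created in code.
@MainActor
func loadSharedModel() async throws -> Entity {
    try await Entity(named: sharedModelName())
}
#endif
```

Authoring-wise, any DCC tool that exports USD (Blender, for instance) could produce the asset; Reality Composer Pro would still be usable purely as an editor, as long as the export is a plain `.usdz` rather than an `.rkassets` bundle.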
Replies: 1 · Boosts: 0 · Views: 979 · Jun ’24
RealityKit on iOS: New anchor entity takes ages to show up
I'm implementing an AR app with image tracking capabilities. I noticed that it takes a very long time for the entities I want to overlay on a detected image to show up in the video feed. When debugging using debugOptions.insert(.showAnchorOrigins), I realized that the image is actually detected very quickly: the anchor origins show up almost immediately, and I can also see that my code reacts by adding new anchors for my ModelEntities there. However, it takes ages for these ModelEntities to actually show up; only if I move the camera a lot do they appear after a while. What might be the reason for this behaviour?

I also noticed that for the first image target, a huge number of anchors is created. They start from the image and go all the way up towards the user. This does not happen with subsequent (other) image targets.
Replies: 1 · Boosts: 0 · Views: 561 · Jun ’24
Apple: please unlock "enterprise features" for all visionOS devs!
We are developing apps for visionOS and need the following capabilities for a consumer app:

- access to the main camera, to let users shoot photos and videos
- reading QR codes, to trigger the download of additional content

So I was really happy when I noticed that visionOS 2.0 has these features. However, I was shocked when I also realized that these capabilities are restricted to enterprise customers only: https://developer.apple.com/videos/play/wwdc2024/10139/

I think that Apple is shooting itself in the foot with these restrictions. I can understand that privacy is important, but these limitations drastically restrict the potential use cases for this platform, even in the consumer space. IMHO, Apple should decide whether it wants to target consumers in the first place, or whether it wants to go the HoloLens / Magic Leap route and mainly satisfy enterprise customers and their respective devs. With the current setup, Apple risks pushing devs away to other platforms where they have more freedom to create great apps.
Replies: 1 · Boosts: 3 · Views: 740 · Jun ’24
Tracking of moving images is super slow on visionOS
I noticed that tracking moving images is super slow on visionOS. Although the anchor update closure is called multiple times per second, the anchor's transform seems to be updated only once in a while. Another possibility is that the SwiftUI view simply isn't updating more often. On iOS, image tracking is pretty smooth. Is there a way to speed this up somehow on visionOS, too?
Replies: 1 · Boosts: 0 · Views: 707 · Jun ’24
Lots of "garbage" in the Xcode logs, like "Decoding completed without errors"
Hi, if I run an app on the visionOS simulator, I get tons of "garbage" messages in the Xcode logs. Please find some samples below. Because of these messages, I can hardly see the really relevant logs. Is there any way to get rid of these?

```
[0x109015000] Decoding completed without errors
[0x1028c0000] Decoding: C0 0x01000100 0x00003048 0x22111100 0x00000000 11496
[0x1028c0000] Options: 1x-1 [FFFFFFFF,FFFFFFFF] 00054060
[0x1021f3200] Releasing session
[0x1031dfe00] Options: 1x-1 [FFFFFFFF,FFFFFFFF] 00054060
[0x1058eae00] Releasing session
[0x10609c200] Decoding: C0 0x01000100 0x00003048 0x22111100 0x00000000 10901
[0x1058bde00] Decoding: C0 0x01000100 0x0000304A 0x22111100 0x00000000 20910
[0x1028d5200] Releasing session
[0x1060b3600] Releasing session
[0x10881f400] Decoding completed without errors
[0x1058e2e00] Decoding: C0 0x01000100 0x0000304A 0x22111100 0x00000000 9124
[0x1028d1e00] Decoding: C0 0x01000100 0x0000304A 0x22111100 0x00000000 20778
[0x1031dfe00] Decoding completed without errors
[0x1031fe000] Decoding completed without errors
[0x1058e2e00] Options: 256x256 [FFFFFFFF,FFFFFFFF] 00025060
```
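One mitigation worth trying (a suggestion, not confirmed to silence these particular messages): suppress OS-level logging for the run scheme via an environment variable. It comes with a trade-off noted in the comments:

```shell
# In Xcode: Product > Scheme > Edit Scheme… > Run > Arguments >
# Environment Variables, add the variable below.
#
# Caveat: this silences os_log output in the console, which hides system
# noise like the decoder messages above, but it also hides your own
# Logger/os_log messages; plain print() output is unaffected.
OS_ACTIVITY_MODE=disable
```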
Replies: 0 · Boosts: 1 · Views: 426 · Jan ’24
visionOS: How to tap annotations on a map?
I am trying to create a Map with markers that can be tapped. It should also be possible to create markers by tapping a location on the map. Adding a tap gesture to the map works. However, if I place an image as an annotation (marker) and add a tap gesture to it, this tap is not recognized; instead, the tap gesture of the underlying map fires. How can I

a) react to annotation / marker taps, and
b) prevent the underlying map from receiving the tap as well (i.e., how can I prevent event bubbling)?
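A sketch of one direction to try (assuming the SwiftUI MapKit API from iOS 17 / visionOS 1; whether a Button inside an Annotation also swallows the map's own tap is something to verify on device): make the marker itself an interactive control, and hit-test incoming map taps against marker positions before treating them as "create marker" taps. The hit-test helper and the 22-point radius are illustrative:

```swift
import Foundation
#if canImport(SwiftUI) && canImport(MapKit)
import SwiftUI
import MapKit
#endif

// Illustrative pure helper: ignore a map tap if it landed within a
// marker's hit radius (in screen points), to avoid the bubbling problem.
func tapHitsMarker(tap: (x: Double, y: Double),
                   marker: (x: Double, y: Double),
                   radius: Double = 22) -> Bool {
    let dx = tap.x - marker.x
    let dy = tap.y - marker.y
    return (dx * dx + dy * dy).squareRoot() <= radius
}

#if canImport(SwiftUI) && canImport(MapKit)
// A tappable marker: a Button inside an Annotation receives taps itself
// instead of relying on an .onTapGesture attached to a plain Image.
struct MarkerMap: View {
    let coordinate: CLLocationCoordinate2D
    let onMarkerTap: () -> Void

    var body: some View {
        Map {
            Annotation("Marker", coordinate: coordinate) {
                Button(action: onMarkerTap) {
                    Image(systemName: "mappin.circle.fill")
                }
            }
        }
    }
}
#endif
```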
Replies: 0 · Boosts: 0 · Views: 427 · Jan ’24
Weird behaviour of the keyboard in the visionOS simulator
I noticed that the keyboard behaves pretty strangely in the visionOS simulator.

We tried to add a search bar (including a search field) to the top ornament of our app. As soon as the user starts typing, the keyboard disappears. This does not happen in Safari, so I am wondering what goes wrong in our app.

On our login screen, if the user presses Tab on the keyboard to get to the next field, the keyboard opens and closes again and again, and I have to restart the simulator to be able to log in again. Only if I click into the fields directly does it work fine.

I am wondering if we're doing something wrong here, or if this is just a bug in the simulator that will be gone on a real device?
Replies: 0 · Boosts: 0 · Views: 382 · Mar ’24
Converting a Unity model / prefab to USDZ
We are porting an iOS Unity AR app to native visionOS. Ideally, we want to reuse our AR models in both applications. These AR models are rather simple, but converting them manually would still be time-consuming, especially when it comes to the shaders. Is anyone aware of any attempts to write conversion tools for this? Maybe in other ecosystems like Godot or Unreal, where folks also want to convert the proprietary Unity format to something else? I've seen that there's an FBX converter, but it would not take care of shaders or particles. I am basically looking for something like the PolySpatial-internal conversion tools, but without the heavy weight of all the rest of Unity. Alternatively, is there a way to export a Unity project to visionOS and then just take the models out of the Xcode project?
Replies: 0 · Boosts: 0 · Views: 827 · Mar ’24
Is it possible to tell Vision Pro to use just one eye for eye tracking?
I have an eye condition where my left eye does not really look straight ahead. I guess this is what makes my Vision Pro think that I am looking in a different direction (when I try typing on the virtual keyboard, I often miss a key). So I am wondering if there is a way to set it up to use only one eye as a reference? I use only one eye anyway, because I do not have stereo vision either.
Replies: 0 · Boosts: 0 · Views: 333 · Mar ’24