I just tried out the app "Blue Moon" (Solitaire Game) from the App Store. They managed to add a secondary SwiftUI tutorial view that resides to the left of the main window and is rotated towards the user. How can this be achieved? I tried to use ornaments, but couldn't find a tilting / rotating option.
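For reference, the closest I've come up with is a rough, untested sketch: render the tutorial as a RealityView attachment and rotate that attachment entity myself (all names here are my own placeholders, and I have no idea whether this is what Blue Moon actually does):

import SwiftUI
import RealityKit

struct GameWithTutorial: View {
    var body: some View {
        RealityView { content, attachments in
            // Hypothetical: place the tutorial panel to the left and tilt it towards the user.
            if let panel = attachments.entity(for: "tutorial") {
                panel.position = [-0.6, 0.0, 0.0]                  // 60 cm to the left
                panel.orientation = simd_quatf(angle: .pi / 6,     // ~30° yaw towards the user
                                               axis: [0, 1, 0])
                content.add(panel)
            }
        } attachments: {
            Attachment(id: "tutorial") {
                Text("Tutorial goes here")   // placeholder for the real tutorial view
                    .padding()
                    .glassBackgroundEffect()
            }
        }
    }
}

But that only works inside my own RealityView, not as a sibling of the main window, so I'd still like to know how they did it.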
I would like to create an immersive panorama, similar to Apple's environments, where the user can look around 360°, but interactive: the user should be able to interact with entities placed on the panorama.
My current approach is to create a sphere around the user and invert the normals, so the texture is placed inwards, towards the user. This works, but open SwiftUI windows show pretty weird behaviors, as described here:
https://developer.apple.com/forums/thread/749956
Windows don't show their handles anymore, and the glass effects do not recognize my sphere but show the world "outside" of it. This is not the case for Apple's environments.
Is there a better way to create a fully immersive sphere around the user?
Is there a way to increase the font size of the user interface of Reality Composer Pro? My eyes are not the best and it's pretty hard to read these tiny fonts, especially in the property inspector.
We want to overlay a SwiftUI attachment on a RealityView, like it is done in the Diorama sample. By default, attachments seem to be placed centered at their position. However, for our use case we need to set a different anchor point, so the attachment view is always aligned to one of its corners, e.g. the lower-left corner should coincide with the attachment's position. Is this possible?
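A workaround I'm considering (just a sketch; I'm not aware of a built-in anchor-point API, and I haven't verified that the visual bounds are already valid at this point) is to offset the attachment entity by half its visual bounds so its lower-left corner lands on the target position:

import SwiftUI
import RealityKit

struct AnchoredAttachmentView: View {
    // Hypothetical position where the lower-left corner should sit.
    let targetPosition = SIMD3<Float>(0.0, 1.0, -1.0)

    var body: some View {
        RealityView { content, attachments in
            if let label = attachments.entity(for: "label") {
                // Shift by half the visual extents so the lower-left corner,
                // not the center, is aligned with targetPosition.
                let extents = label.visualBounds(relativeTo: nil).extents
                label.position = targetPosition + SIMD3<Float>(extents.x / 2, extents.y / 2, 0)
                content.add(label)
            }
        } attachments: {
            Attachment(id: "label") {
                Text("Marker")
                    .padding()
                    .glassBackgroundEffect()
            }
        }
    }
}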
I created a native visionOS app which I am now trying to convert into a multi-platform app, so iOS is supported as well.
I also have Swift packages which differ from platform to platform, to handle platform-specific code.
My SwiftUI previews work fine if I just set up visionOS as the target. But as soon as I add iOS 17 (with a minimum deployment target of 17), they stop working.
If I try to display them in the canvas, compilation fails and I get errors saying that my packages require iOS 17 but the device supports iOS 12, a version I never defined anywhere. This even happens if I set the preview to visionOS.
If I run the same setup on a real device or a simulator, everything works just fine. Only the previews are affected by this.
How does the preview device decide which minimum deployment version it should use, and how can I change this?!
Update: This only happens if the app has a package dependency on a Swift package that itself includes a RealityKitContent package as a sub-dependency. I configured this package to be included only in visionOS builds, and the packages themselves also define the platform as .visionOS(.v1). If I remove this package completely from "Frameworks, Libraries, and Embedded Content", the previews work again. Re-adding the package results in this weird behavior where the preview canvas thinks it is building for iOS 12.
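For reference, the platform declaration in the affected package looks roughly like this (simplified, names are placeholders):

// swift-tools-version:5.9
import PackageDescription

let package = Package(
    name: "MyFeature",
    platforms: [
        .visionOS(.v1),
        .iOS(.v17)          // added when converting to a multi-platform app
    ],
    dependencies: [
        // the RealityKitContent sub-dependency lives here in the real project
    ],
    targets: [
        .target(name: "MyFeature")
    ]
)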
I just recently saw a message in the Unity forums, by a Unity staff member, that Apple requires an Apple Silicon-based Mac (M1, M2) in order to build apps for the Vision Pro. This confused me, since the simulator works just fine on my Intel Mac. Is there any official statement from Apple on this? It would be weird to buy a new Mac just because of this.
Our iOS app relies heavily on the ability to place objects in arbitrary locations, and we would like to know if this is possible on visionOS as well.
It should work like this: The user faces in a certain direction. We place an object approx. 5 m in front of the user. The object then gets pinned to this position (in mid-air) and won't move anymore. It should not be anchored to a real-world item like a wall, the floor, or a desk.
Placing the object should even work if the user looks down while placing it. The object should then appear 5 m in front of them once they look up.
On iOS, we implemented this using Unity and AR Foundation. For visionOS, we haven't decided yet whether to go native instead. So if this is only possible using native code, that's also fine.
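In case it helps to discuss concrete approaches: my rough understanding is that, going native, this could be sketched with ARKit's WorldTrackingProvider by querying the device pose once at placement time and deriving a pinned world position from it (untested sketch; assumes an ARKitSession is already running with the provider):

import ARKit
import QuartzCore
import simd

// Untested sketch. Assumes an ARKitSession is already running with this
// WorldTrackingProvider (e.g. try await session.run([worldTracking])).
// Computes a world-space position 5 m in front of the user's head, ignoring
// pitch, so the object ends up ahead of them even if they looked down while
// placing it. Assigning the result to an entity's position once, and never
// updating it again, would effectively pin the object in mid-air.
func placementPosition(using worldTracking: WorldTrackingProvider) -> SIMD3<Float>? {
    guard let device = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
        return nil
    }
    let transform = device.originFromAnchorTransform
    let headPosition = SIMD3<Float>(transform.columns.3.x, transform.columns.3.y, transform.columns.3.z)

    // -Z is "forward" in the device's local space; project onto the horizontal plane.
    var forward = -SIMD3<Float>(transform.columns.2.x, transform.columns.2.y, transform.columns.2.z)
    forward.y = 0
    forward = simd_normalize(forward)

    return headPosition + forward * 5.0
}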
In the WWDC23 sessions, it was mentioned that the device won't support taking photos or recording videos through its cameras, which I think is a huge limitation. However, in another forum I read that it actually works using AVFoundation. So I went back into the docs, and they say it is not possible.
Hence, I am pretty confused. Has anyone tried this out yet and can confirm whether camera access is blocked completely or not? For our app, it would be a bummer if it were.
Is it possible to render a Safari-based webview in full immersive space, so an app can show web pages there?
We're an AR startup in the US, but our founders live in Europe. We definitely want to order the VP once it becomes available in the States, but I just saw in my mail that Apple requires a US prescription if you wear glasses. This is a bummer for us. We can forward the VP to Europe, but we won't be able to travel to the States just to get such a prescription. Why can't Apple just accept any prescription from an optician?!
I'm experimenting with MapKit on visionOS and would like to try out different locations. However, I cannot find a way to simulate them. Neither setting a location in Xcode nor setting one in the Simulator works.
When I tap the "MapUserLocationButton", I get this error message:
CLLocationManager(<CLLocationManager: 0x600000008a10>) for <MKCoreLocationProvider: 0x60000302a400> did fail with error: Error Domain=kCLErrorDomain Code=1 "(null)"
Also, if I try to add the MapCompass and the MapScaleView to .mapControls, it has no effect, which is a pity, since map scaling does not work very well with a mouse in the simulator. How can I get these controls to work?
Last but not least, the MapUserLocationButton shows up in the very upper right corner and is cut off a bit, so I would love to pad it. But .padding does not have an effect either.
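For context, the map setup I'm testing is roughly this (trimmed down):

import SwiftUI
import MapKit

struct MapTestView: View {
    @State private var position: MapCameraPosition = .userLocation(fallback: .automatic)

    var body: some View {
        Map(position: $position)
            .mapControls {
                MapUserLocationButton()
                    .padding()      // seems to have no effect on visionOS
                MapCompass()        // never shows up
                MapScaleView()      // never shows up
            }
    }
}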
We need to debug a website running inside a WKWebView on visionOS. To debug it, I want to connect my desktop Safari to it. However, at least in the simulator, there is no option in visionOS' Safari settings to enable Web Debugging. Is this missing, or can it be found elsewhere?
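On the app side, the web view is already marked as inspectable, roughly like this (assuming the iOS 16.4+ API behaves the same on visionOS); it's the Safari/Settings side I can't find:

import WebKit

func makeDebuggableWebView() -> WKWebView {
    let webView = WKWebView()
    #if DEBUG
    webView.isInspectable = true   // allow attaching Safari's Web Inspector (API introduced in iOS 16.4)
    #endif
    return webView
}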
We're a US company, but we have a founder who's on a longer trip abroad (digital nomading) and isn't expected to come back to the States soon. So we wanted to order the Vision Pro and ship it to him.
However, Zeiss does not accept prescriptions from abroad. How can this be resolved? I've seen quite a number of folks from Germany already using the Vision Pro, so there must be a way to get around this limitation somehow.
In the HelloWorld sample, there is an immersive view with a globe in it. It spins, but the user cannot spin it themselves.
I have looked at the volumetric window, where the globe can be interacted with, but if I understand it correctly, this works because the whole RealityView is being rotated if the user performs a drag gesture.
How could the same be accomplished for an entity inside a RealityView, in this case the globe inside the immersive view? If I just apply the dragRotation modifier, it rotates the entire RealityView, which yields a strange result: the globe is not centered on the world origin here, so it spins around the user's head.
Is there a way to either translate the entire RealityView and then spin it, or just spin an entity inside it (the globe) on user interaction?
In Unity, I would just use another GameObject as a parent of the globe, translate it, and let the user spin it.
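To make it concrete, this is roughly what I'd like to end up with (untested sketch; the globe here is just a placeholder sphere instead of the real Earth model):

import SwiftUI
import RealityKit

struct SpinnableGlobe: View {
    @State private var baseYaw: Float = 0
    @State private var dragYaw: Float = 0

    var body: some View {
        RealityView { content in
            // Placeholder globe; the real app would load the Earth model instead.
            let globe = ModelEntity(mesh: .generateSphere(radius: 0.2),
                                    materials: [SimpleMaterial()])
            globe.position = [0, 1.5, -1.5]
            globe.components.set(InputTargetComponent())
            globe.generateCollisionShapes(recursive: true)
            content.add(globe)
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    // Map horizontal drag distance to a yaw angle on the entity itself,
                    // so only the globe rotates, not the whole RealityView.
                    dragYaw = Float(value.translation.width) * 0.005
                    value.entity.orientation = simd_quatf(angle: baseYaw + dragYaw, axis: [0, 1, 0])
                }
                .onEnded { _ in
                    baseYaw += dragYaw
                    dragYaw = 0
                }
        )
    }
}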
I want to build a panorama sphere around the user. The idea is that the user can interact with this panorama, i.e. pan it around and select markers placed on it, like on a map.
So I set up a sphere that works like a skybox and inverted its normals so the material faces inward, using this code I found online:
import Combine
import Foundation
import RealityKit
import SwiftUI

extension Entity {
    func addSkybox(for skybox: Skybox) {
        let subscription = TextureResource
            .loadAsync(named: skybox.imageName)
            .sink(receiveCompletion: { completion in
                switch completion {
                case .finished: break
                case let .failure(error): assertionFailure("\(error)")
                }
            }, receiveValue: { [weak self] texture in
                guard let self = self else { return }
                var material = UnlitMaterial()
                material.color = .init(texture: .init(texture))
                let sphere = ModelComponent(mesh: .generateSphere(radius: 5), materials: [material])
                self.components.set(sphere)
                // flip sphere inside out so the texture is inside
                self.scale *= .init(x: -1, y: 1, z: 1)
                self.transform.translation += SIMD3(0.0, 1.0, 0.0)
            })
        components.set(Entity.SubscriptionComponent(subscription: subscription))
    }

    struct SubscriptionComponent: Component {
        var subscription: AnyCancellable
    }
}
This works fine and is looking awesome.
However, I can't get a gesture to work on this.
If the sphere is "normally" oriented, i.e. the user drags it "from the outside", I can do it like this:
import RealityKit
import SwiftUI

struct ImmersiveMap: View {
    @State private var rotationAngle: Float = 0.0

    var body: some View {
        RealityView { content in
            let rootEntity = Entity()
            rootEntity.addSkybox(for: .worldmap)
            rootEntity.components.set(CollisionComponent(shapes: [.generateSphere(radius: 5)]))
            rootEntity.generateCollisionShapes(recursive: true)
            rootEntity.components.set(InputTargetComponent())
            content.add(rootEntity)
        }
        .gesture(DragGesture().targetedToAnyEntity().onChanged({ _ in
            log("drag gesture")
        }))
    }
}
But if the user drags it from the inside (i.e. the negative x scale is in place), I get no drag events.
Is there a way to achieve this?
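One idea I plan to try next (completely unverified): keep the mirrored sphere purely visual and put the collision and input components on a separate, un-mirrored entity of the same size, so the negative scale can't affect hit testing. The RealityView part would then look roughly like this:

        RealityView { content in
            // Visual sphere: mirrored so the texture faces inward (as before).
            let skybox = Entity()
            skybox.addSkybox(for: .worldmap)

            // Separate, un-mirrored entity that only exists for hit testing.
            let hitSphere = Entity()
            hitSphere.components.set(CollisionComponent(shapes: [.generateSphere(radius: 5)]))
            hitSphere.components.set(InputTargetComponent())

            let rootEntity = Entity()
            rootEntity.addChild(skybox)
            rootEntity.addChild(hitSphere)
            content.add(rootEntity)
        }
        .gesture(DragGesture().targetedToAnyEntity().onChanged({ _ in
            log("drag gesture")
        }))

I don't know yet whether a collision sphere registers drags that start from inside it at all, so any pointers are appreciated.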