In the WWDC23 sessions it was mentioned that the device won't support taking photos or recording videos through the cameras, which I think is a huge limitation. However, in another forum I read that it actually works using AVFoundation. So I went back into the docs, and they say it is not possible.
Hence, I am pretty confused. Has anyone tried this out yet and can confirm whether camera access is blocked completely? For our app, it would be a bummer if it were.
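For what it's worth, here is a minimal probe I'd run to check, assuming AVFoundation exposes capture devices on visionOS at all (the device type is just an example):

import AVFoundation

// List whatever video capture devices the platform exposes.
let discovery = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.builtInWideAngleCamera],
    mediaType: .video,
    position: .unspecified
)
print(discovery.devices) // an empty list would suggest camera capture is blocked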
Is it possible to render a Safari-based webview in full immersive space, so an app can show web pages there?
We're an AR startup in the US, but our founders live in Europe. We definitely want to order the VP once it becomes available in the States, but I just saw in my mail that Apple requires a US prescription if you wear glasses. This is a bummer for us. We can forward the VP to Europe, but we won't be able to travel to the States just to get such a prescription. Why can't Apple just accept any prescription from an optician?!
I'm experimenting with MapKit on visionOS and I would like to try out different locations. However, I cannot find a way to simulate them. Neither setting a location in Xcode nor setting it in the Simulator works.
When I tap the MapUserLocationButton, I get this error message:
CLLocationManager(<CLLocationManager: 0x600000008a10>) for <MKCoreLocationProvider: 0x60000302a400> did fail with error: Error Domain=kCLErrorDomain Code=1 "(null)"
Also, adding the MapCompass and the MapScaleView to .mapControls has no effect. This is a pity, since map scaling does not work very well with a mouse in the simulator. How can I get these controls to work?
Last but not least, the MapUserLocationButton shows up in the very upper right and is cut off a bit, so I would love to pad it. But .padding has no effect either.
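For reference, a minimal sketch of the setup I'm describing (the view name is mine):

import MapKit
import SwiftUI

struct VisionMapView: View {
    var body: some View {
        Map()
            .mapControls {
                MapUserLocationButton()
                    .padding() // has no visible effect for me
                MapCompass()   // does not show up
                MapScaleView() // does not show up
            }
    }
}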
We need to debug a website running inside a WKWebView on visionOS. To debug it, I want to connect my desktop Safari to it. However, at least in the simulator, there is no option in visionOS' Safari settings to enable Web Debugging. Is this missing, or can it be found elsewhere?
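One thing I plan to try is the isInspectable flag on the web view itself, which was introduced in iOS 16.4 (I have not verified it on visionOS):

import WebKit

let webView = WKWebView()
webView.isInspectable = true // must be set before desktop Safari can attach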
We're a US company but have a founder who's on a longer trip abroad (digital nomading) and isn't expected to come back to the States soon. So we wanted to order the Vision Pro and ship it to him.
However, Zeiss does not accept prescriptions from abroad. How can this be resolved? I've seen quite a number of folks from Germany already using the Vision Pro, so there must be a way around this limitation.
In the HelloWorld sample, there is an immersive view with a globe in it. It spins, but the user cannot spin it themselves.
I have looked at the volumetric window, where the globe can be interacted with, but if I understand it correctly, this works because the whole RealityView is being rotated if the user performs a drag gesture.
How could the same be accomplished for an entity inside a RealityView, in this case the globe inside the immersive view? If I just apply the dragRotation modifier, it rotates the entire RealityView, which yields a strange result: since the globe is not centered on the world origin here, it spins around the user's head.
Is there a way to either translate the entire RealityView and then spin it, or just spin an entity inside it (the globe) on user interaction?
In Unity, I would just use another GameObject as a parent of the globe, translate it, and let the user spin it.
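For illustration, here is a rough, untested sketch of that parent-pivot idea translated to RealityKit (the entity name "Globe" and the 0.005 sensitivity factor are my own assumptions):

import RealityKit
import SwiftUI

struct GlobeImmersiveView: View {
    @State private var accumulatedYaw: Float = 0

    var body: some View {
        RealityView { content in
            // The pivot plays the role of the Unity parent GameObject:
            // translate the pivot, rotate the child.
            let pivot = Entity()
            pivot.position = [0, 1.5, -2]
            if let globe = try? await Entity(named: "Globe") {
                globe.components.set(InputTargetComponent())
                globe.generateCollisionShapes(recursive: true)
                pivot.addChild(globe)
            }
            content.add(pivot)
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    // Rotate only the dragged entity, not the whole RealityView.
                    let yaw = accumulatedYaw + Float(value.translation.width) * 0.005
                    value.entity.transform.rotation = simd_quatf(angle: yaw, axis: [0, 1, 0])
                }
                .onEnded { value in
                    accumulatedYaw += Float(value.translation.width) * 0.005
                }
        )
    }
}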
I want to build a panorama sphere around the user. The idea is that users can interact with this panorama, i.e. pan it around and select markers placed on it, like on a map.
So I set up a sphere that works like a skybox and inverted its normals so that the material faces inward, using this code I found online:
import Combine
import Foundation
import RealityKit
import SwiftUI

extension Entity {
    // `Skybox` is a custom type providing the texture's image name.
    func addSkybox(for skybox: Skybox) {
        let subscription = TextureResource
            .loadAsync(named: skybox.imageName)
            .sink(receiveCompletion: { completion in
                switch completion {
                case .finished: break
                case let .failure(error): assertionFailure("\(error)")
                }
            }, receiveValue: { [weak self] texture in
                guard let self = self else { return }
                var material = UnlitMaterial()
                material.color = .init(texture: .init(texture))
                let sphere = ModelComponent(mesh: .generateSphere(radius: 5), materials: [material])
                self.components.set(sphere)
                // Flip the sphere inside out so the texture faces inward.
                self.scale *= .init(x: -1, y: 1, z: 1)
                self.transform.translation += SIMD3(0.0, 1.0, 0.0)
            })
        // Keep the Combine subscription alive for the entity's lifetime.
        components.set(Entity.SubscriptionComponent(subscription: subscription))
    }

    struct SubscriptionComponent: Component {
        var subscription: AnyCancellable
    }
}
This works fine and looks awesome.
However, I can't get a gesture to work on this.
If the sphere is "normally" oriented, i.e. the user drags it "from the outside", I can do it like this:
import RealityKit
import SwiftUI

struct ImmersiveMap: View {
    @State private var rotationAngle: Float = 0.0

    var body: some View {
        RealityView { content in
            let rootEntity = Entity()
            rootEntity.addSkybox(for: .worldmap)
            rootEntity.components.set(CollisionComponent(shapes: [.generateSphere(radius: 5)]))
            rootEntity.generateCollisionShapes(recursive: true)
            rootEntity.components.set(InputTargetComponent())
            content.add(rootEntity)
        }
        .gesture(DragGesture().targetedToAnyEntity().onChanged({ _ in
            log("drag gesture") // `log` is a logging helper in my project
        }))
    }
}
But if the user drags it from the inside (i.e. with the negative x scale in place), I get no drag events.
Is there a way to achieve this?
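One workaround I'm considering (an untested idea of my own): since the negative scale applied for the inside-out trick also affects the collision shape, host the collision shape on a sibling entity that keeps a positive scale:

// The visual sphere stays flipped; hit-testing happens on this un-flipped sibling.
let collisionEntity = Entity()
collisionEntity.components.set(CollisionComponent(shapes: [.generateSphere(radius: 5)]))
collisionEntity.components.set(InputTargetComponent())
rootEntity.addChild(collisionEntity)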
I'd like to place a search bar on top of the main window of my visionOS app. It should look similar to Safari's search bar and also show search results as the user types. How can this be accomplished?
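The obvious candidate seems to be SwiftUI's searchable modifier, which renders a search field at the top of the window; a minimal sketch (the view and placeholder data are mine), though I'm not sure it can fully match Safari's look:

import SwiftUI

struct SearchableMainView: View {
    @State private var query = ""
    private let allItems = ["Alpha", "Beta", "Gamma"] // placeholder data

    var body: some View {
        NavigationStack {
            List(results, id: \.self) { item in
                Text(item)
            }
            .searchable(text: $query, prompt: "Search") // updates as the user types
        }
    }

    private var results: [String] {
        query.isEmpty ? allItems : allItems.filter { $0.localizedCaseInsensitiveContains(query) }
    }
}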
I would like to add text to a Reality Composer Pro scene and set the actual text via code. How can I achieve this? I haven't seen any "Text" element in the editor.
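As a fallback I've considered generating the text mesh in code and attaching it to a placeholder entity authored in the scene (an untested sketch; the entity name "TextAnchor" is mine):

import RealityKit
import UIKit

// Build a text mesh at runtime and hang it under a placeholder entity
// from the Reality Composer Pro scene.
func setText(_ string: String, in scene: Entity) {
    let mesh = MeshResource.generateText(
        string,
        extrusionDepth: 0.005,
        font: .systemFont(ofSize: 0.1)
    )
    let material = SimpleMaterial(color: .white, isMetallic: false)
    let textEntity = ModelEntity(mesh: mesh, materials: [material])
    scene.findEntity(named: "TextAnchor")?.addChild(textEntity)
}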
We have an existing AR app built for Android and iOS, using Unity. We now want to add a visionOS version of this app. However, this version is built natively, using Xcode directly, no Unity involved.
I saw that I can add a new platform to my app in App Store Connect. But can I upload two different builds, and how will App Store Connect tell which uploaded bundle belongs to which platform?
I set up an entity with a collision component on it. But it was hard to target the object with a tap gesture until I increased the radius quite a bit. Now I am unsure if it is too large. Is there a way to visualize these components somehow, maybe even in a running scene?
Also, I find it pretty confusing that the size is given in cm. This made me wonder whether this cm setting is affected by the entity's scale at all. In Unity, it's just (local) "units".
I wanted to create a particle effect using particle images I copied from a Unity project. These images are PNGs with an alpha channel. In Unity, they look gorgeous, but on visionOS they look rather weird, since the alpha channel is not respected: all pixels that are not pitch black render as fully white. Is there a way to change this behavior?
I have a window with an ornament to the right. This works fine in the shared space; the ornament fades out nicely when being hidden. If I display the same window in an Immersive Space, however, a "cut-out" into the real world behind the Immersive Space appears once the fade starts. This looks pretty weird. Is there a way to work around this?
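For reference, the ornament is declared roughly like this (a trimmed sketch of my setup):

.ornament(attachmentAnchor: .scene(.trailing)) {
    // The fade-out of this content is what triggers the cut-out
    // artifact inside the Immersive Space.
    VStack {
        Button("Action") { }
    }
    .padding()
    .glassBackgroundEffect()
}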
On iOS, Sign in with Apple provides an e-mail address when the user logs in for the first time. On all subsequent logins, the e-mail address is missing. However, this can be reset by removing the app from your Apple ID: if you then log in again, the e-mail dialog pops up again, and the app receives the e-mail.
On visionOS, however, the latter does not happen. Even after I have removed the app from my Apple ID, the e-mail dialog won't show up again. The only way to resolve this is to reset the visionOS simulator (I haven't tried it on a real device).
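For completeness, this is the standard request I'm making; the .email scope is what should trigger the consent dialog on first authorization:

import AuthenticationServices

let provider = ASAuthorizationAppleIDProvider()
let request = provider.createRequest()
request.requestedScopes = [.fullName, .email] // e-mail is only delivered on first authorization

let controller = ASAuthorizationController(authorizationRequests: [request])
controller.performRequests() // the delegate receives the credential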