I am trying to create a Map with markers that can be tapped on. It also should be possible to create markers by tapping on a location in the map.
Adding a tap gesture to the map works. However, if I place an image as an annotation (marker) and add a tap gesture to it, that tap is not recognized; instead, the tap gesture of the underlying map fires.
How can I
a) react to annotation/marker taps, and
b) prevent the underlying map from also receiving the tap (i.e. prevent event bubbling)?
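For reference, here is a minimal sketch of the setup I have in mind (untested; the marker image is a placeholder, and MapReader/MapProxy are used to convert the tap location into a coordinate):

import MapKit
import SwiftUI

struct TappableMap: View {
    @State private var markers: [CLLocationCoordinate2D] = []

    var body: some View {
        MapReader { proxy in
            Map {
                ForEach(markers.indices, id: \.self) { index in
                    Annotation("Marker \(index)", coordinate: markers[index]) {
                        Image(systemName: "mappin.circle.fill")
                            .onTapGesture {
                                // this handler should fire instead of the map's
                                print("tapped marker \(index)")
                            }
                    }
                }
            }
            .onTapGesture { screenPoint in
                // convert the tap location to a map coordinate and add a marker
                if let coordinate = proxy.convert(screenPoint, from: .local) {
                    markers.append(coordinate)
                }
            }
        }
    }
}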
Just walked through the order process, only to realize that I can't get the Zeiss lens inserts because my glasses have a prism. Come on, Apple, are you kidding???
I'm experimenting with MapKit on visionOS and would like to try out different locations. However, I cannot find a way to simulate them: neither setting a location in Xcode nor setting it in the Simulator works.
When I tap the "MapUserLocationButton", I get an error message:
CLLocationManager(<CLLocationManager: 0x600000008a10>) for <MKCoreLocationProvider: 0x60000302a400> did fail with error: Error Domain=kCLErrorDomain Code=1 "(null)"
Also, adding the MapCompass and the MapScaleView to .mapControls has no effect, which is a pity, since map scaling does not work very well with a mouse in the simulator. How can I get these controls to work?
Last but not least, the MapUserLocationButton shows up in the very upper right and is cut off a bit, so I would love to pad it. But .padding has no effect either.
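For reference, this is roughly the setup I am describing (a minimal sketch; position is a placeholder camera binding):

import MapKit
import SwiftUI

struct SimulatorMap: View {
    @State private var position: MapCameraPosition = .automatic

    var body: some View {
        Map(position: $position)
            .mapControls {
                MapUserLocationButton()
                    .padding()       // seems to have no effect here
                MapCompass()         // does not show up in the simulator
                MapScaleView()       // does not show up either
            }
    }
}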
I have implemented a custom view that shows a page in a WKWebView:
import SwiftUI
import WebKit

struct WebView: UIViewRepresentable {
    let urlString: String

    func makeUIView(context: Context) -> WKWebView {
        let webView = WKWebView()
        webView.navigationDelegate = context.coordinator
        return webView
    }

    func updateUIView(_ uiView: WKWebView, context: Context) {
        if let url = URL(string: urlString) {
            let request = URLRequest(url: url)
            uiView.load(request)
        }
    }

    func makeCoordinator() -> Coordinator {
        Coordinator(self)
    }

    class Coordinator: NSObject, WKNavigationDelegate {
        var parent: WebView

        init(_ parent: WebView) {
            self.parent = parent
        }
    }
}
It works, but it shows a grey button in the upper left with no icon. If I click on that button, nothing happens. But I can see this error message in the Xcode logs:
Trying to convert coordinates between views that are in different UIWindows, which isn't supported. Use convertPoint:fromCoordinateSpace: instead.
What is this button, and how can I get rid of it?
As a second question: I also tried to spawn Safari in a separate window, using this view:
import SafariServices
import SwiftUI

struct SafariView: UIViewControllerRepresentable {
    let url: URL

    func makeUIViewController(context: Context) -> SFSafariViewController {
        return SFSafariViewController(url: url)
    }

    func updateUIViewController(_ uiViewController: SFSafariViewController, context: Context) {
        // No update logic needed for a simple web view
    }
}
This works, but Safari shows up behind the view that presents it. Instead, I want Safari to show up in front, or even better, next to my main view (either left or right). Is there a way to do this?
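One idea I was considering is to declare a second WindowGroup and open it explicitly instead of presenting the view in place. A sketch (the window ids and ContentView are placeholders, and as far as I understand, visionOS decides the actual placement of new windows, so "next to" may not be controllable):

import SwiftUI

@main
struct MyApp: App {
    var body: some Scene {
        WindowGroup(id: "main") {
            ContentView()   // placeholder main view
        }
        // second window that receives a URL and shows the Safari view
        WindowGroup(id: "browser", for: URL.self) { $url in
            if let url {
                SafariView(url: url)
            }
        }
    }
}

struct OpenBrowserButton: View {
    @Environment(\.openWindow) private var openWindow
    let url: URL

    var body: some View {
        Button("Open browser window") {
            openWindow(id: "browser", value: url)
        }
    }
}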
We're a US company, but one of our founders is on a longer trip abroad (digital nomading) and is not expected to come back to the States soon. So we wanted to order the Vision Pro and ship it to him.
However, Zeiss does not accept prescriptions from abroad. How can this be resolved? I've seen quite a number of folks from Germany already using the Vision Pro, so there must be a way to get around this limitation somehow.
In the HelloWorld sample, there is an immersive view with a globe in it. It spins, but the user cannot spin it themselves.
I have looked at the volumetric window, where the globe can be interacted with, but if I understand it correctly, this works because the whole RealityView is rotated when the user performs a drag gesture.
How could the same be accomplished for an entity inside a RealityView, in this case the globe inside the immersive view? If I just apply the dragRotation modifier, it rotates the entire RealityView, which yields a strange result: since the globe is not centered on the world origin here, it spins around the user's head.
Is there a way to either translate the entire RealityView and then spin it, or just spin an entity inside it (the globe) on user interaction?
In Unity, I would just use another GameObject as a parent of the globe, translate it, and let the user spin it.
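In RealityKit terms, I imagine it could look roughly like this (an untested sketch; "Globe" is a placeholder entity name, and the globe needs collision shapes and an InputTargetComponent to receive input):

import RealityKit
import SwiftUI

struct SpinnableGlobe: View {
    @State private var baseYaw: Float = 0

    var body: some View {
        RealityView { content in
            // pivot entity as "parent GameObject": translate the pivot,
            // spin the globe inside it
            let pivot = Entity()
            pivot.position = [0, 1.5, -2]   // place in front of the user
            if let globe = try? await Entity(named: "Globe") {
                globe.generateCollisionShapes(recursive: true)
                globe.components.set(InputTargetComponent())
                pivot.addChild(globe)
            }
            content.add(pivot)
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    // rotate only the dragged entity, not the whole RealityView
                    let yaw = baseYaw + Float(value.translation.width) * 0.01
                    value.entity.transform.rotation = simd_quatf(angle: yaw, axis: [0, 1, 0])
                }
                .onEnded { value in
                    baseYaw += Float(value.translation.width) * 0.01
                }
        )
    }
}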
I want to build a panorama sphere around the user. The idea is that the users can interact with this panorama, i.e. pan it around and select markers placed on it, like on a map.
So I set up a sphere that works like a skybox and inverted its normals so that the material faces inward, using this code I found online:
import Combine
import Foundation
import RealityKit
import SwiftUI

extension Entity {
    func addSkybox(for skybox: Skybox) {
        let subscription = TextureResource
            .loadAsync(named: skybox.imageName)
            .sink(receiveCompletion: { completion in
                switch completion {
                case .finished: break
                case let .failure(error): assertionFailure("\(error)")
                }
            }, receiveValue: { [weak self] texture in
                guard let self = self else { return }
                var material = UnlitMaterial()
                material.color = .init(texture: .init(texture))
                let sphere = ModelComponent(mesh: .generateSphere(radius: 5), materials: [material])
                self.components.set(sphere)
                /// flip sphere inside out so the texture is inside
                self.scale *= .init(x: -1, y: 1, z: 1)
                self.transform.translation += SIMD3(0.0, 1.0, 0.0)
            })
        components.set(Entity.SubscriptionComponent(subscription: subscription))
    }

    struct SubscriptionComponent: Component {
        var subscription: AnyCancellable
    }
}
This works fine and looks awesome.
However, I can't get a gesture to work on it.
If the sphere is "normally" oriented, i.e. the user drags it "from the outside", I can do it like this:
import RealityKit
import SwiftUI

struct ImmersiveMap: View {
    @State private var rotationAngle: Float = 0.0

    var body: some View {
        RealityView { content in
            let rootEntity = Entity()
            rootEntity.addSkybox(for: .worldmap)
            rootEntity.components.set(CollisionComponent(shapes: [.generateSphere(radius: 5)]))
            rootEntity.generateCollisionShapes(recursive: true)
            rootEntity.components.set(InputTargetComponent())
            content.add(rootEntity)
        }
        .gesture(DragGesture().targetedToAnyEntity().onChanged({ _ in
            log("drag gesture")
        }))
    }
}
But if the user drags it from the inside (i.e. with the negative x scale in place), I get no drag events.
Is there a way to achieve this?
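One idea I have not verified yet: maybe the negative scale is what breaks hit testing, so the collision and input components could live on a parent entity with a positive scale, while only a child entity carries the flipped skybox sphere:

RealityView { content in
    // parent receives input and keeps a positive scale
    let root = Entity()
    root.components.set(CollisionComponent(shapes: [.generateSphere(radius: 5)]))
    root.components.set(InputTargetComponent())

    // child carries the inside-out sphere (addSkybox applies the x-flip itself)
    let visual = Entity()
    visual.addSkybox(for: .worldmap)
    root.addChild(visual)

    content.add(root)
}
.gesture(DragGesture().targetedToAnyEntity().onChanged { _ in
    log("drag gesture")
})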
I'd like to place a search bar at the top of the main window of my visionOS app. It should look similar to Safari's search bar and also show search results as the user types. How can this be accomplished?
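The closest I have come so far is an ornament with a plain TextField, roughly like this (a sketch; ContentView and SearchResultsList are placeholders), but it does not really look or behave like Safari's search bar:

import SwiftUI

struct MainWindow: View {
    @State private var query = ""

    var body: some View {
        ContentView()   // placeholder main content
            .ornament(attachmentAnchor: .scene(.top)) {
                VStack {
                    TextField("Search", text: $query)
                        .textFieldStyle(.roundedBorder)
                        .frame(width: 400)
                    if !query.isEmpty {
                        SearchResultsList(query: query)   // placeholder results view
                    }
                }
                .padding()
                .glassBackgroundEffect()
            }
    }
}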
I noticed that the keyboard behaves pretty strangely in the visionOS simulator.
We tried to add a search bar (including a search field) to an ornament at the top of our app. As soon as the user starts typing, the keyboard disappears. This does not happen in Safari, so I am wondering what goes wrong in our app.
On our login screen, if the user presses Tab on the keyboard to get to the next field, the keyboard opens and closes again and again, and I have to restart the simulator to be able to log in again. Only if I click into the fields directly does it work fine.
I am wondering if we're doing something wrong here, or if this is just a bug in the simulator that will be gone on a real device?
We are porting an iOS Unity AR app to native visionOS.
Ideally, we want to re-use our AR models in both applications. These AR models are rather simple, but still, converting them manually would be time-consuming, especially when it comes to the shaders.
Is anyone aware of any attempts to write conversion tools for this? Maybe in other ecosystems like Godot or Unreal, where folks also want to convert the proprietary Unity format to something else?
I've seen there's an FBX converter, but this would not take care of shaders or particles.
I am basically looking for something like the PolySpatial-internal conversion tools, but without the heavy weight of all the rest of Unity. Alternatively, is there a way to export a Unity project to visionOS and then just take the models out of the Xcode project?
I have an eye condition where my left eye is not really looking straight ahead. I guess this is what makes my Vision Pro think that I am looking in a different direction (when I try typing on the keyboard, I often miss a key).
So I am wondering if there is a way to set it up to use only one eye as a reference? I use only one eye anyway, because I do not have stereo vision either.
I am trying to get image tracking working on visionOS, but the documentation is pretty poor. It shows neither what the SwiftUI setup should look like nor how the reference images can be provided.
For the latter question: I tried to just add a folder to my Assets and use this as the reference image group, but ImageTracker did not find it.
I've seen that the ImageTrackingProvider allows setting the tracked images in its init. But how can I add images afterwards? We have an application that loads images dynamically at runtime.
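For context, this is how far I got, based on my reading of the docs (a sketch; "ReferenceImages" would be an AR Resource Group in the asset catalog, which may be why my plain folder was not found):

import ARKit

func runImageTracking() async throws {
    // load reference images from an AR Resource Group in the asset catalog
    let images = ReferenceImage.loadReferenceImages(inGroupNamed: "ReferenceImages")
    let provider = ImageTrackingProvider(referenceImages: images)

    let session = ARKitSession()
    try await session.run([provider])

    for await update in provider.anchorUpdates {
        print("image anchor:", update.anchor.id, update.event)
    }
}

The only workaround I can think of for dynamic images is to stop the session and run a fresh ImageTrackingProvider with the updated set, but that seems wasteful. Is there a better way?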
Our app needs the current user's location. I was able to grant access, and the authorization status is 4 (= authorized when in use). Despite that, retrieving the location fails almost every time. It returns the error:
The operation couldn’t be completed. (kCLErrorDomain error 1.)
It happens both in the simulator and on the real device. In the simulator, I can sometimes trick the location into being detected by forcing a debug location in Xcode, but this does not work on the real device.
What might be the root cause of this behavior?
We have an existing AR app built for Android and iOS using Unity. We now want to add a visionOS version of this app. However, this version is built natively, using Xcode directly, with no Unity involved.
I saw that I can add a new platform to my app in App Store Connect. But can I upload two different builds, and how will App Store Connect tell which uploaded bundle belongs to which platform?