So, I've got a SwiftUI app with a triple-column split view.
When the app starts on an 11-inch iPad, the "primary" column is offscreen.
The primary column has a List full of NavigationLinks.
Like so:
List {
    ForEach(items, id: \.itemID) { item in
        NavigationLink(tag: item, selection: $selectedItem) {
            ItemDetailsView(item: item)
            ...
Now, the selection of the first column in the split view cascades through the rest of the app, so populating it is pretty important.
I've tried having selectedItem set from an EnvironmentObject. I've also tried setting it in onAppear.
Everything I try only causes the selection to "pop into place" when I expose the primary column of the sidebar.
Am I going about this the wrong way?
Is it because the sidebar is hidden by default?
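For reference, a trimmed-down sketch of the sidebar and the onAppear attempt looks roughly like this (Item, ItemDetailsView, and the property names are stand-ins for my real types):

import Foundation
import SwiftUI

// Trimmed sketch of the sidebar and the onAppear attempt described above;
// `Item` and `ItemDetailsView` are placeholders for my real types.
struct SidebarView: View {
    let items: [Item]
    @Binding var selectedItem: Item?

    var body: some View {
        List {
            ForEach(items, id: \.itemID) { item in
                NavigationLink(tag: item, selection: $selectedItem) {
                    ItemDetailsView(item: item)
                } label: {
                    Text(item.title)
                }
            }
        }
        .onAppear {
            // Pre-select the first item so the other columns populate at launch.
            // In practice this only takes effect once the primary column is shown.
            selectedItem = items.first
        }
    }
}

struct Item: Hashable {
    let itemID: UUID
    let title: String
}

struct ItemDetailsView: View {
    let item: Item
    var body: some View { Text(item.title) }
}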
Asking with the WWDC-10220 tag because Fruta is the sample code for this presentation.
When launching Fruta on a landscape 11-inch iPad Pro, the "State" of the application is completely empty until the user taps the back button. After tapping back, everything appears to pop into place.
Is this expected behavior in SwiftUI when using split screens?
NavigationLinks are finicky, and I'd expect the programmatic setting of the primary column's selection to be resolved at launch, not when the user taps back.
Hello,
I'm noticing a behavior when I try to send a package file as a resource to a peer.
A file like this is basically a folder with an extension, and despite receiving a progress object from the send call, I'm not seeing it rise past 0.0%. Attaching it to a ProgressView also does not show any progress.
The completion handler of the send is never called with an error, and the receiver will only get the "finished receiving" callback if the host cancels or disconnects.
I didn’t see anything in the sendResource documentation about not supporting bundle files, but it would not surprise me.
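For reference, the send path is essentially this (a minimal sketch; the session, URL, and peer are passed in because the real values live elsewhere in my project):

import MultipeerConnectivity

// Minimal sketch of the send described above; the parameters stand in for my real values.
func sendPackage(at packageURL: URL, over session: MCSession, to peer: MCPeerID) -> Progress? {
    let progress = session.sendResource(
        at: packageURL,                          // a package (directory-backed) document
        withName: packageURL.lastPathComponent,
        toPeer: peer
    ) { error in
        // In my testing this never fires with an error for package files.
        if let error {
            print("sendResource failed: \(error)")
        }
    }

    // Observed: for package files this Progress never advances past 0.0,
    // even when bound to a ProgressView.
    return progress
}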
Any thoughts?
I've noticed that the bounds of my ModelEntity are not affected when I transform one of the mesh's joints.
I've attached an image that demonstrates this. It appears this will cause issues with the collision bounding box as well.
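Here's a rough sketch of what I'm doing to check (hedged; skinnedEntity would be a ModelEntity loaded from a rigged asset, and joint 0 is just an example index):

import RealityKit

// Reproduction sketch: move one joint, then compare the visual bounds before and after.
func compareBoundsAfterJointMove(_ skinnedEntity: ModelEntity) {
    let before = skinnedEntity.visualBounds(relativeTo: nil)

    // Move one joint of the skeletal mesh.
    var joints = skinnedEntity.jointTransforms
    guard !joints.isEmpty else { return }
    joints[0].translation += SIMD3<Float>(0, 0.5, 0)
    skinnedEntity.jointTransforms = joints

    let after = skinnedEntity.visualBounds(relativeTo: nil)

    // Observed: the two bounding boxes come back identical, and the collision
    // bounds appear unchanged as well.
    print(before, after)
}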
Is this a bug or user error? Not many of this framework's methods are documented.
World anchor from a SpatialTapGesture?
At 19:56 in the video, it's mentioned that we can use a SpatialTapGesture to "identify a position in the world" to make a world anchor.
Which API calls are utilized to make this happen?
World anchors are created with 4x4 matrices, and a SpatialTapGesture doesn't seem to generate one of those.
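The closest I've come up with is building the matrix myself from the tap location, roughly like this (a hedged sketch that assumes the tap position alone is enough, i.e. identity rotation; worldTracking is a placeholder for an already-running WorldTrackingProvider):

import ARKit
import RealityKit
import SwiftUI

// Hedged sketch: convert the tap into scene space, wrap it in a 4x4 matrix,
// and hand that to a WorldAnchor. `worldTracking` is assumed to be running.
struct AnchorPlacementView: View {
    let worldTracking: WorldTrackingProvider

    var body: some View {
        RealityView { content in
            // Immersive content with collision/input-target components goes here.
        }
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    // Convert the tap location into RealityKit scene space.
                    let position = value.convert(value.location3D, from: .local, to: .scene)

                    // Build a 4x4 matrix that only carries that translation.
                    var matrix = matrix_identity_float4x4
                    matrix.columns.3 = SIMD4<Float>(position.x, position.y, position.z, 1)

                    let anchor = WorldAnchor(originFromAnchorTransform: matrix)
                    Task { try await worldTracking.addAnchor(anchor) }
                }
        )
    }
}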
Any ideas?
Hello,
With the advent of widget interactivity, in order to support state management, I'd like to differentiate one widget from another, even if they share the same configuration.
Is this possible? Many of my search results are turning up iOS 15 era information, and I am not sure if that's still valid.
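The closest workaround I've sketched out is adding an identifying parameter to the configuration intent, something like this (hedged; instanceName is a hypothetical parameter, not a confirmed mechanism):

import AppIntents
import WidgetKit

// Hedged workaround sketch: give the configuration intent an identifying parameter
// so two otherwise identical widgets can be told apart. `instanceName` is hypothetical.
struct CounterConfigurationIntent: WidgetConfigurationIntent {
    static var title: LocalizedStringResource = "Counter Configuration"

    @Parameter(title: "Instance Name", default: "Default")
    var instanceName: String
}

The timeline provider could then key its state off configuration.instanceName, but that effectively forces the configurations to differ, which is exactly what I'd like to avoid.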
Thank you
On Xcode 15.1.0b2, when raycasting to a collision surface, the collision results appear to be inconsistent.
Here are my results. Green cylinders are hits, and red cylinders are raycasts that returned no collision results.
NOTE: This raycast is triggered by a tap gesture recognizer registering on the cube... so it's odd to me that the tap registers, but the raycast doesn't collide with anything.
Is this something that just performs poorly in the simulator?
My raycasting code is:
guard let pose = self.arSessionController.worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
    print("FAILED TO GET POSITION")
    return
}
// Cast from the device's current position toward the tapped point.
let transform = Transform(matrix: pose.originFromAnchorTransform)
let locationOfDevice = transform.translation
let raycastResult = scene.raycast(from: locationOfDevice, to: destination, relativeTo: nil)
where destination is retrieved in a tap gesture handler via:
let worldPosition: SIMD3<Float> = value.convert(value.location3D, from: .local, to: .scene)
Any findings would be appreciated.
Is it possible to edit a SwiftData document in an immersive scene? If so... how?
At the moment I see that the modelContext is available in the content view of a DocumentGroup, but can the document's data be made available to an immersive scene's content?
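The pattern I'm considering is stashing the document's modelContext in a shared observable that both scenes can see, roughly like this (a hedged sketch, not a confirmed approach; Item, the views, and the UTType identifier are placeholders):

import RealityKit
import SwiftData
import SwiftUI
import UniformTypeIdentifiers

@Model
final class Item {
    var name: String
    init(name: String) { self.name = name }
}

extension UTType {
    static let itemDocument = UTType(exportedAs: "com.example.item-document")   // placeholder identifier
}

// Shared observable that the document window populates and the immersive scene reads.
@Observable
final class SessionModel {
    var activeContext: ModelContext?
}

@main
struct ItemEditorApp: App {
    @State private var session = SessionModel()

    var body: some Scene {
        DocumentGroup(editing: Item.self, contentType: .itemDocument) {
            ItemDocumentView()
                .environment(session)   // the document view stashes its modelContext here
        }

        ImmersiveSpace(id: "ItemEditor") {
            ImmersiveEditorView()
                .environment(session)   // edits go through session.activeContext, if set
        }
    }
}

struct ItemDocumentView: View {
    @Environment(\.modelContext) private var modelContext
    @Environment(SessionModel.self) private var session

    var body: some View {
        Text("Document window")
            .onAppear { session.activeContext = modelContext }   // hand the context off
    }
}

struct ImmersiveEditorView: View {
    @Environment(SessionModel.self) private var session

    var body: some View {
        RealityView { _ in
            // Immersive content goes here.
        }
        .task {
            // Example edit made from the immersive scene through the shared context.
            session.activeContext?.insert(Item(name: "Added from immersive space"))
        }
    }
}

I don't know whether that's sound, or whether there's a first-class way to hand document data to the immersive scene.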
Hello,
I’ve got a few questions about drag gestures on VisionOS in Immersive scenes.
Once a user initiates a drag gesture, are their eyes involved in the gesture anymore?
If not, and the user is dragging something farther away, how far can they move it using indirect gestures? I assume the user's range of motion is limited because their hands are in their lap, so could they move something multiple meters along a distant wall?
How can the user cancel the gesture if they don't like the anticipated / telegraphed result?
I’m trying to craft a good experience and it’s difficult without some of these details. I have still not heard back on my devkit application.
Thank you for any help.
I want to place a ModelEntity at an AnchorEntity's location, but not as a child of the AnchorEntity. (I want to be able to raycast to it and have collisions work.)
I've placed an AnchorEntity in my scene like so:
AnchorEntity(.plane(.vertical, classification: .wall, minimumBounds: [2.0, 2.0]), trackingMode: .continuous)
In my RealityView update closure, I print out this entity's position relative to "nil" like so:
wallAnchor.position(relativeTo: nil)
Unfortunately, this position doesn't make sense. It's very close to zero, even though it appears several meters away.
I believe this is because AnchorEntities have their own self-contained coordinate spaces, independent of the scene's coordinate space, and the entity is reporting its position relative to its own space.
How can I bridge the gap between these two?
WorldAnchor has an originFromAnchorTransform property that helps with this, but I'm not seeing something similar for AnchorEntity.
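For comparison, the WorldAnchor route would look roughly like this (hedged sketch; the parameters stand in for my real anchor, entity, and RealityView content), and it's this kind of scene-space transform I'm hoping to get out of the AnchorEntity:

import ARKit
import RealityKit

// With an ARKit WorldAnchor, the anchor's transform is already expressed in scene
// space, so a sibling entity can be positioned from it directly.
func place(_ modelEntity: ModelEntity, at worldAnchor: WorldAnchor, in content: RealityViewContent) {
    modelEntity.transform = Transform(matrix: worldAnchor.originFromAnchorTransform)
    content.add(modelEntity)   // direct child of the scene, so raycasts and collisions behave
}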
Thank you
How persistent is the storage of the WorldTrackingProvider and its underlying world map reconstruction?
The documentation mentions town-to-town anchor recovery and recovery between sessions, but does that include device restarts and app quits? There are no clues about how persistent it all is.
Hello,
I've noticed that when I have my ARKitSession run the scene reconstruction provider and the world tracking provider at the same time, I receive no scene reconstruction mesh updates. My catch closure doesn't receive any errors; nothing is ever delivered to the async sequence.
If I run just the scene reconstruction provider by itself, then I do get mesh updates.
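Here's a condensed sketch of the setup (provider options and error handling trimmed to the essentials):

import ARKit

// Condensed sketch of the two-provider setup described above.
func runProviders() async {
    let session = ARKitSession()
    let worldTracking = WorldTrackingProvider()
    let sceneReconstruction = SceneReconstructionProvider()

    do {
        // Running both providers together is when the mesh updates stop arriving.
        try await session.run([sceneReconstruction, worldTracking])
    } catch {
        // This catch never receives anything.
        print("ARKitSession failed to run: \(error)")
        return
    }

    for await update in sceneReconstruction.anchorUpdates {
        // With both providers running, this loop never receives an update; running
        // the scene reconstruction provider alone, updates arrive as expected.
        print("Mesh anchor update:", update.event)
    }
}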
Is this a bug? Is it expected that it's not possible to do this?
Thank you
The Photos app on visionOS does not apply a blurry navigation bar background to the top of the photo views. Instead it has a transparent navigation bar with some stylized floating buttons.
How can I mimic this in my own SwiftUI visionOS app?
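The closest I've gotten is hiding the bar background and supplying my own toolbar items, roughly like this (a hedged guess; PhotoGridView is a placeholder, and I haven't confirmed this is what Photos itself does):

import SwiftUI

struct GalleryView: View {
    var body: some View {
        NavigationStack {
            PhotoGridView()
                .toolbarBackground(.hidden, for: .navigationBar)   // drop the bar's glass background
                .toolbar {
                    ToolbarItem(placement: .topBarTrailing) {
                        Button("Share", systemImage: "square.and.arrow.up") { }
                    }
                }
        }
    }
}

// Placeholder for the actual photo grid.
struct PhotoGridView: View {
    var body: some View {
        ScrollView { Text("Photos") }
    }
}

I'm not sure this matches the floating, stylized button treatment Photos uses, though.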
As a user, when viewing a photo or image, I want to be able to tell Siri, “add this to …”, similar to the example from the WWDC presentation where a photo is added to a note in the Notes app.
Is this... possible with app domains as they are documented?
I see domains like open-file and open-photo, but I don't know whether those are appropriate for this kind of functionality.