Asking with the WWDC-10220 tag because Fruta is the sample code for this presentation.
When launching Fruta on an iPad Pro 11" in landscape, the "State" of the application is completely empty until the user taps the back button. After tapping back, everything appears to pop into place.
Is this expected behavior in SwiftUI when using split screens?
NavigationLinks are finicky, and I'd expect the programmatic setting of the primary column's "selection" to be resolved at launch, not only after the user taps back.
Anything new this year to support reordering outline group items or items across sections in a multi-section list?
I really want to build my sidebar in SwiftUI, but user-driven ordering is a must for me.
I've noticed that the bounds of my ModelEntity are not affected when I transform one of the mesh's joints.
I've attached an image that demonstrates this. It looks like it will cause issues with the collision bounding box as well.
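For reference, this is roughly how I'm reproducing it (the asset name and joint index are illustrative):

import RealityKit

// Rotate a joint far enough that the mesh should extend past its old bounds.
let entity = try ModelEntity.loadModel(named: "Character")
let before = entity.visualBounds(relativeTo: nil)

var transforms = entity.jointTransforms
transforms[0].rotation = simd_quatf(angle: .pi / 2, axis: [0, 0, 1])
entity.jointTransforms = transforms

let after = entity.visualBounds(relativeTo: nil)
print(before.extents, after.extents)   // the extents come back identical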
Is this a bug or user error? Not many of this framework's methods are documented.
World anchor from a SpatialTapGesture?
At 19:56 in the video, it's mentioned that we can use a SpatialTapGesture to "identify a position in the world" to make a world anchor.
Which API calls are utilized to make this happen?
World anchors are created with 4x4 matrices, and a SpatialTapGesture doesn't seem to produce one of those.
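Here's a sketch of what I'm attempting, assuming the tap is targeted to entities in a RealityView and that a plain translation matrix is acceptable for the anchor transform (names are illustrative):

import SwiftUI
import RealityKit
import ARKit

struct TapToAnchorView: View {
    var body: some View {
        RealityView { content in
            // Add tappable entities here (they need CollisionComponent + InputTargetComponent).
        }
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    // Convert the tap location into scene (world) space.
                    let position = value.convert(value.location3D, from: .local, to: .scene)

                    // Build a 4x4 matrix from just the translation.
                    var transform = matrix_identity_float4x4
                    transform.columns.3 = SIMD4<Float>(position, 1)
                    let anchor = WorldAnchor(originFromAnchorTransform: transform)

                    // ...then presumably add `anchor` through a WorldTrackingProvider on an ARKitSession?
                    _ = anchor
                }
        )
    }
}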
Any ideas?
I want to place a ModelEntity at an AnchorEntity's location, but not as a child of the AnchorEntity. (I want to be able to raycast to it and have collisions work.)
I've placed an AnchorEntity in my scene like so:
AnchorEntity(.plane(.vertical, classification: .wall, minimumBounds: [2.0, 2.0]), trackingMode: .continuous)
In my RealityView update closure, I print out this entity's position relative to "nil" like so:
wallAnchor.position(relativeTo: nil)
Unfortunately, this position doesn't make sense. It's very close to zero, even though it appears several meters away.
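For completeness, this is roughly what I do with that position in the update closure (the model setup is illustrative):

// Place the model as a sibling of the anchor, at the anchor's reported world position,
// so raycasts and collisions work on it directly.
let model = ModelEntity(mesh: .generateBox(size: 0.3))
model.generateCollisionShapes(recursive: true)
model.setPosition(wallAnchor.position(relativeTo: nil), relativeTo: nil)
content.add(model)   // the model ends up near the origin, not on the wall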
I believe this is because AnchorEntities have their own self-contained coordinate spaces that are independent of the scene's coordinate space, and the entity is reporting its position relative to its own space.
How can I bridge the gap between these two?
WorldAnchor has an originFromAnchorTransform property that helps with this, but I'm not seeing something similar for AnchorEntity.
Thank you
I've got the new SwiftUI WebView in a sheet. When I pull down on the content from the top of the page, I expect it to dismiss the sheet, but it does not.
Is there a workaround or modifier I’m missing?
So, I've got a SwiftUI app with a triple-column split view.
When the app starts on an 11-inch iPad, the "primary" column is offscreen.
The primary column has a List full of NavigationLinks.
Like so:
List {
    ForEach(items, id: \.itemID) { item in
        NavigationLink(tag: item, selection: $selectedItem) {
            ItemDetailsView(item: item)
            ...
Now, the selection of the first column in the split view cascades through the rest of the app, so populating it is pretty important.
I've tried having selectedItem be set from an EnvironmentObject. I've also tried setting it in onAppear.
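Concretely, the onAppear attempt is just this on the List (picking the first item is illustrative):

.onAppear {
    // This runs at launch, but the link doesn't resolve until the primary column is shown.
    selectedItem = items.first
}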
Everything I try only causes the selection to "pop into place" once I expose the primary column of the sidebar.
Am I going about this the wrong way?
Is it because the sidebar is hidden by default?
I want to create a feature where users can stick images from my app onto their walls. I want to persist their placements between launches and use pinch and pan gestures to manipulate the images.
I see lots of articles going back a few years that show how to do this in ARKit, but going through WWDC videos I’m seeing a trend toward RealityKit, and am starting to think that’s the “right” thing to learn.
Is RealityKit the most up-to-date secret sauce? Is there a sample project like this one, but using RealityKit?
https://developer.apple.com/documentation/arkit/environmental_analysis/placing_objects_and_handling_3d_interaction
Hello, I've noticed that when I set the image of a picture frame asset in Reality Composer, it will change its size and aspect ratio to match the image. That's pretty nice!
I would like to let a user dynamically modify that picture while running the app. Is this possible? Or are the model's properties you set in the composer locked in when you export?
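To be concrete, this is the kind of runtime change I'm hoping to make once the composer scene is loaded in RealityKit (the single-material assumption is mine; I don't know how the exported picture frame is actually structured):

import RealityKit

func setPicture(on frame: ModelEntity, imageNamed name: String) throws {
    // Load the user's image as a texture and swap it into the frame's material.
    let texture = try TextureResource.load(named: name)
    var material = UnlitMaterial()
    material.color = .init(tint: .white, texture: .init(texture))
    frame.model?.materials = [material]   // replaces whatever Reality Composer assigned
}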
Background
So, I've got an anchor that I add to my session after performing a raycast from a user's tap.
This anchor is named "PictureAnchor".
This anchor is not getting saved in my scene's world map, and I'm not sure why.
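For context, the anchor is created roughly like this after the user's tap (the tap handling itself is omitted):

// Raycast from the tap location and add a named anchor at the hit.
let results = sceneView.raycast(from: tapLocation, allowing: .estimatedPlane, alignment: .any)
if let result = results.first {
    let pictureAnchor = ARAnchor(name: "PictureAnchor", transform: result.worldTransform)
    sceneView.session.add(anchor: pictureAnchor)
}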
Information Gathering
I keep an eye on my session by outputting some information in
func session(_ session: ARSession, didUpdate frame: ARFrame)
As the ARFrames are processed, I look at the scene's anchors via sceneView.scene.anchors.filter({ $0.name == "PictureAnchor" }), and I see that my anchor is present in the scene's anchors.
However, when I run the same filter on frame.anchors to check the anchors of the ARFrame itself, my PictureAnchor is never present.
Furthermore, if I "save" the world map, an anchor named "PictureAnchor" is not present in it.
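This is how I'm checking the map at save time (archiving it to disk is elided):

sceneView.session.getCurrentWorldMap { worldMap, error in
    guard let worldMap else { return }
    let hasPicture = worldMap.anchors.contains { $0.name == "PictureAnchor" }
    print("PictureAnchor in world map:", hasPicture)   // always false for me
}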
Note: I could be totally wrong on how to read the data inside a saved world map, but I'm taking the anchors array at face value.
Other Information
I've noticed that the AR Persistence sample project actually checks for the anchor to be present in the ARFrame's anchors before permitting a save, but that condition is never met for me.
I also noticed that my scene can have over 100 anchors, and the frame can have over 40, but only around 8 or 16 anchors are saved to the world map.
Main Question Restated
So, my main question is, why is my user-added "PictureAnchor" not present in the ARWorldMap, when I save my scene's map?
I see that it's present in the scene's anchors, but not present in the ARFrame's anchors.
A model entity is visible in the scene after being attached to this anchor as well.
Hello,
I'm noticing some odd behavior when I try to send a package file as a resource to a peer.
A file like this is basically a folder with an extension, and despite receiving a Progress object from the send call, I never see it rise past 0.0%. Attaching it to a ProgressView also doesn't show any progress.
The send's completion handler is never called with an error, and the receiver only gets the "finished receiving" callback if the host cancels or disconnects.
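For reference, the call is essentially this (names are illustrative; packageURL points at the folder-based package file):

let progress = session.sendResource(at: packageURL,
                                    withName: packageURL.lastPathComponent,
                                    toPeer: peer) { error in
    // Never called with an error in my testing.
    if let error { print("sendResource failed:", error) }
}
// Observed: progress?.fractionCompleted stays at 0.0.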
I didn’t see anything in the sendResource documentation about not supporting bundle files, but it would not surprise me.
Any thoughts?
Hello,
I'm noticing that during a collaborative session, anchors created on the host device appear in a different location on client devices, and it's making it challenging to test other collaboration logic.
For example, when the client places a textured plane mesh at an anchor placed by the host, the placement can sometimes end up behind a surface, and it gets clipped by RealityKit's mesh occlusion.
I'd prefer to see it floating in space while testing, so I can tell that something is happening.
I'm drawing a blank on whether there are any debugging options to help me out; nothing in the render or debug options jumped out at me.
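For reference, this is about as far as I've gotten with ARView's debug settings (the occlusion toggle at the end is a guess on my part):

// Debug visualizations enabled while testing the collaborative session.
arView.debugOptions = [.showAnchorOrigins, .showWorldOrigin, .showFeaturePoints]

// Guessing that dropping occlusion would at least keep the misplaced plane visible,
// though it wouldn't explain the offset itself.
arView.environment.sceneUnderstanding.options.remove(.occlusion)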
Thoughts?
When using showAnchorGeometry, I see lots of green surface anchors in my scene, and it has been really helpful for debugging the placement of objects when I tap the screen of my device.
But I also get some blue shapes, and I'm not quite sure what those mean... Is there a document that explains what showAnchorGeometry is actually... showing?
Googling "showAnchorGeometry blue" wasn't helpful! (I promise I tried)
Hello, I'm trying to make a grid of container-relative shapes where the outside gutters match the gutters between the items.
The stickiest part of this problem is that calling .inset on a ContainerRelativeShape doubles the gutter between the items.
I've tried a LazyVGrid and an HStack of VStacks, and they all have this double gutter in between.
I think I could move forward with some gnarly frame math, but I was curious if I'm missing some SwiftUI layout feature that could make this easier and more maintainable.
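Here's a minimal sketch of the kind of layout that shows the double gutter (sizes and colors are illustrative):

import SwiftUI

struct GutterGrid: View {
    let gutter: CGFloat = 8

    var body: some View {
        LazyVGrid(columns: [GridItem(.flexible(), spacing: 0),
                            GridItem(.flexible(), spacing: 0)],
                  spacing: 0) {
            ForEach(0..<4) { _ in
                ContainerRelativeShape()
                    .inset(by: gutter)   // gives `gutter` at the outside edges...
                    .fill(.blue)
                    .aspectRatio(1, contentMode: .fit)
                // ...but adjacent cells each inset, so interior gaps become 2 x gutter.
            }
        }
    }
}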
Hello,
I'm curious if anyone has some useful debug tools for out-of-bounds issues with Volumes.
I am opening a volume with a size of 1m, 1m, 10cm.
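For context, the volume is declared roughly like this in the App body (identifiers are illustrative):

WindowGroup(id: "ModelVolume") {
    ModelVolumeContent()   // the RealityView described below
}
.windowStyle(.volumetric)
.defaultSize(width: 1.0, height: 1.0, depth: 0.1, in: .meters)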
I am adding a RealityView with a ModelEntity that is 0.5m tall, and I am seeing the model clip at the top and bottom. I find this odd, because I feel like it should fit within the size of the Volume.
I was curious what size SwiftUI thinks the Volume is, so I tried using a GeometryReader3D to tell me:
GeometryReader3D { proxy in
    VStack {
        Text("\(proxy.size.width)")
        Text("\(proxy.size.height)")
        Text("\(proxy.size.depth)")
    }
    .padding()
    .glassBackgroundEffect()
}
Unfortunately, I get 680, 1360, and 68. I'm guessing these values are in points, but that's not very helpful. The documentation says to use real-world units for Volumes, but none of the SwiftUI frame setters and getters appear to support different units.
Is there a way to convert between the two? I'm not clear if this is a bug or a feature suggestion.