Post · Replies · Boosts · Views · Activity

How does an indirect drag gesture work?
Hello, I’ve got a few questions about drag gestures in visionOS immersive scenes. Once a user initiates a drag gesture, are their eyes still involved in the gesture? If not, and the user is dragging something farther away, how far can they move it using indirect gestures? I assume the user’s range of motion is limited because their hands are in their lap, so could they move something multiple meters along a distant wall? How can the user cancel the gesture if they don’t like the anticipated / telegraphed result? I’m trying to craft a good experience and it’s difficult without some of these details. I have still not heard back on my devkit application. Thank you for any help.
2 replies · 0 boosts · 729 views · Dec ’23
View count of open SecurityScoped Resources?
Hello, I'm trying to determine if my application is not releasing all of its security-scoped resources, and I'm curious if there's a way to view the count of all currently accessed URLs. I am balancing all startAccessingSecurityScopedResource calls that return true with a stopAccessingSecurityScopedResource, but sometimes my application is unresponsive when my Mac wakes from sleep. Console logs indicate some sandboxing issues. The unresponsiveness is resolved by a force-quit and restart of the application. I'd like to observe what's going on with the number of security-scoped resources to get to the bottom of this. Is it possible?
2 replies · 0 boosts · 658 views · May ’24
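I'm not aware of a public API that reports the sandbox's open-resource count directly, but one option is to keep your own ledger by routing every start/stop through a wrapper. This is a hypothetical helper, not an Apple API; the injected closures stand in for url.startAccessingSecurityScopedResource() / url.stopAccessingSecurityScopedResource() so the counting logic is self-contained:

```swift
// Hypothetical bookkeeping wrapper (not an Apple API) that counts
// balanced start/stop calls per resource path.
final class ScopedAccessTracker {
    private var counts: [String: Int] = [:]
    private let start: (String) -> Bool
    private let stop: (String) -> Void

    // Inject the real startAccessingSecurityScopedResource /
    // stopAccessingSecurityScopedResource calls here in production,
    // or stubs in tests.
    init(start: @escaping (String) -> Bool, stop: @escaping (String) -> Void) {
        self.start = start
        self.stop = stop
    }

    func beginAccess(_ path: String) -> Bool {
        guard start(path) else { return false }
        counts[path, default: 0] += 1
        return true
    }

    func endAccess(_ path: String) {
        stop(path)
        if let n = counts[path], n > 1 {
            counts[path] = n - 1
        } else {
            counts.removeValue(forKey: path)
        }
    }

    /// Number of paths with at least one outstanding access.
    var openResourceCount: Int { counts.count }
}
```

Logging openResourceCount when the Mac wakes from sleep would show whether accesses are piling up across sleep cycles.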
[18.2b2] How do I test an OpenIntent?
So, I've declared an AppIntent that indicates my app can "Open files" that conform to UTType.Image. I've got an @AssistantEntity(schema: .files.file) and an @AssistantIntent(schema: .files.openFile) declared. So I navigate to the Files app, Quick Look an image, and open Type to Siri. I tell Siri "open this in " and all it does is act like "open ". No breakpoint is hit in my intent's perform method. Am I doing something wrong? How can I test these cross-app behaviors? Are they... not actually possible? Does an "OpenIntent" only work on my app's own URLs and not on file URLs from other apps?
2 replies · 1 boost · 504 views · Nov ’24
[Getting Started] Selecting default navLink on startup.
So, I've got a SwiftUI app with a triple-column splitView. When the app starts on an 11-inch iPad, the "primary" column is offscreen. The primary column has a List full of NavigationLinks, like so:

List {
    ForEach(items, id: \.itemID) { item in
        NavigationLink(tag: item, selection: $selectedItem) {
            ItemDetailsView(item: item)
            ...

Now, the selection of the first column in the split view cascades through the rest of the app, so populating it is pretty important. I've tried having the selectedItem be set from an EnvironmentObject. I've also tried having it set in onAppear. Everything I try only causes the selection to "pop into place" whenever I expose the primary column of the sidebar. Am I going about this the wrong way? Is it because the sidebar is hidden by default?
1 reply · 0 boosts · 824 views · Jul ’21
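One pattern that avoids the late "pop into place" is to seed the selection as the @State property's initial value, so the very first layout pass already has it, rather than assigning it in onAppear after the view is on screen. A minimal sketch under that assumption, with stand-in Item and ItemDetailsView types (the tag:/selection: initializer matches the SwiftUI of that era):

```swift
import SwiftUI

// Stand-in types for the post's model and detail view (placeholders).
struct Item: Hashable, Identifiable {
    let itemID: Int
    let title: String
    var id: Int { itemID }
}

struct ItemDetailsView: View {
    let item: Item
    var body: some View { Text(item.title) }
}

struct SidebarView: View {
    let items: [Item]
    @State private var selectedItem: Item?

    init(items: [Item]) {
        self.items = items
        // Seed the selection before the first layout pass, instead of
        // assigning it in onAppear once the view is already visible.
        _selectedItem = State(initialValue: items.first)
    }

    var body: some View {
        List {
            ForEach(items, id: \.itemID) { item in
                NavigationLink(tag: item, selection: $selectedItem) {
                    ItemDetailsView(item: item)
                } label: {
                    Text(item.title)
                }
            }
        }
    }
}
```

The hidden sidebar is a separate issue: on an 11-inch iPad the split view starts with the primary column collapsed, so the detail columns rely entirely on the seeded selection to render.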
RealityKit or ARKit? Which is right for me?
I want to create a feature where a user can stick images from my app onto their walls. I want to persist their placements between launches and use pinching and panning gestures to manipulate the images. I see lots of articles going back a few years that show how to do this in ARKit, but going through WWDC videos I’m seeing a trend toward RealityKit, and am starting to think that’s the “right” thing to learn. Is RealityKit the most up-to-date secret sauce? Is there a sample project like this one but using RealityKit? https://developer.apple.com/documentation/arkit/environmental_analysis/placing_objects_and_handling_3d_interaction
1 reply · 0 boosts · 2.1k views · Jan ’22
Modify Reality Composer Asset in code?
Hello, I’ve noticed that when I set the image of a picture frame asset in Reality Composer it will change its size and aspect ratio to match the image. That’s pretty nice! I would like to let a user dynamically modify that picture while running the app. Is this possible? Or are the model's properties you set in the composer locked in when you export?
1 reply · 0 boosts · 992 views · Jan ’22
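The composer values are just the entity's starting state; the material can be replaced at runtime. A sketch, assuming an entity named "PictureFrame" and a bundled image called "NewPhoto" (both names are placeholders):

```swift
import RealityKit

// Sketch: swap the texture on a Reality Composer entity at runtime.
// "PictureFrame" and "NewPhoto" are placeholder names.
func swapPicture(in scene: Entity) throws {
    guard let frame = scene.findEntity(named: "PictureFrame") as? ModelEntity else {
        return
    }
    let texture = try TextureResource.load(named: "NewPhoto")
    var material = UnlitMaterial()
    material.color = .init(tint: .white, texture: .init(texture))
    frame.model?.materials = [material]
}
```

One caveat: the automatic resize Reality Composer performs at design time won't happen here, so you'd rescale the frame entity to the new image's aspect ratio yourself.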
WorldMap doesn't contain all Anchors?
Background

So, I've got an anchor that I add to my session after performing a raycast from a user's tap. This anchor is named "PictureAnchor". This anchor is not getting saved in my scene's world map, and I'm not sure why.

Information Gathering

I keep an eye on my session by outputting some information in func session(_ session: ARSession, didUpdate frame: ARFrame). As the ARFrames are processed I look at the scene's anchors via sceneView.scene.anchors.filter({ $0.name == "PictureAnchor" }) and I see that my anchor is present in the scene anchors. However, when I use frame.anchors.filter to check the anchors of the ARFrame itself, my PictureAnchor is never present. Furthermore, if I "save" the world map, an anchor named PictureAnchor is not present. Note: I could be totally wrong on how to read the data inside a saved world map, but I'm taking the anchors array at face value.

Other Information

I've noticed that the AR persistence sample project actually checks for the anchor to be present in the ARFrame's anchors before permitting a save, but this condition never holds for me. I also noticed that my scene can have over 100 anchors, and the frame can have over 40, but only around 8 or 16 anchors are saved to the world map.

Main Question Restated

So, my main question is: why is my user-added "PictureAnchor" not present in the ARWorldMap when I save my scene's map? I see that it's present in the scene's anchors, but not in the ARFrame's anchors. A model entity attached to this anchor is visible in the scene as well.
1 reply · 0 boosts · 909 views · Feb ’22
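One thing worth verifying: an ARWorldMap serializes the session's ARAnchors, not RealityKit AnchorEntities, so an anchor that exists only in sceneView.scene will never reach frame.anchors or the saved map; the ARAnchor itself has to be added with session.add(anchor:). A sketch of the sample project's guard, assuming that setup and using the "PictureAnchor" name from the post:

```swift
import ARKit

// Sketch mirroring the guard in Apple's world-map persistence sample:
// only request a save once the named ARAnchor has propagated into the
// current ARFrame, since getCurrentWorldMap serializes the session's
// ARAnchors rather than the RealityKit scene hierarchy.
func saveWorldMapIfReady(session: ARSession, frame: ARFrame,
                         completion: @escaping (ARWorldMap?) -> Void) {
    guard frame.anchors.contains(where: { $0.name == "PictureAnchor" }) else {
        completion(nil) // anchor hasn't reached the session's frame yet
        return
    }
    session.getCurrentWorldMap { worldMap, _ in
        // archive worldMap with NSKeyedArchiver and write it to disk…
        completion(worldMap)
    }
}
```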
File type limitations of sendResource?
Hello, I'm noticing a behavior when I try to send a package file as a resource to a peer. A file like this is basically a folder with an extension, and despite receiving a progress object from the send call, I’m not seeing it rise past 0.0%. Attaching it to a ProgressView also does not show any progress. The completion handler of the send is never called with an error, and the receiver will only get the “finished receiving” callback if the host cancels or disconnects. I didn’t see anything in the sendResource documentation about not supporting bundle files, but it would not surprise me. Any thoughts?
1 reply · 0 boosts · 899 views · Feb ’22
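If the stall is caused by the package being a directory, a common workaround is to coordinate a read with the .forUploading option, which yields a flat, zipped copy that sendResource can stream. A sketch under that assumption (the receiver would unzip on arrival):

```swift
import Foundation
import MultipeerConnectivity

// Sketch, assuming the stall comes from the package being a directory:
// .forUploading hands the accessor a flat zipped copy of the item,
// suitable for sendResource(at:withName:toPeer:withCompletionHandler:).
func sendPackage(at packageURL: URL, withName name: String,
                 to peer: MCPeerID, over session: MCSession) {
    var coordinatorError: NSError?
    NSFileCoordinator().coordinate(readingItemAt: packageURL,
                                   options: .forUploading,
                                   error: &coordinatorError) { zippedURL in
        // zippedURL is temporary; copy it somewhere that outlives the block.
        let dest = FileManager.default.temporaryDirectory
            .appendingPathComponent(zippedURL.lastPathComponent)
        try? FileManager.default.removeItem(at: dest)
        try? FileManager.default.copyItem(at: zippedURL, to: dest)
        session.sendResource(at: dest, withName: name, toPeer: peer) { error in
            if let error { print("send failed: \(error)") }
        }
    }
    if let coordinatorError { print("coordination failed: \(coordinatorError)") }
}
```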
Disable occlusion for collaboration debugging?
Hello, I’m noticing that during a collaborative session, anchors created on the host device appear in a different location on client devices, and it’s making it challenging to test other collaboration logic. For example, when the client places a textured plane mesh at an anchor placed by the host, this placement can sometimes be considered behind a surface, and it gets clipped by RealityKit’s mesh occlusion. I’d prefer to see it floating in space when testing, so I can tell that something is happening. I’m drawing a blank on whether there are any debugging options to help me out. Nothing in the render or debug options jumped out at me. Thoughts?
1 reply · 0 boosts · 1.2k views · Feb ’22
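If the clipping comes from scene understanding rather than the render or debug options, occlusion is an ARView environment option and can be switched off while debugging. A sketch (iOS RealityKit):

```swift
import RealityKit

// Sketch: scene-understanding occlusion lives in the ARView's
// environment options, not its renderOptions or debugOptions, so it
// can be toggled off while debugging collaborative placement.
func disableOcclusion(for arView: ARView) {
    arView.environment.sceneUnderstanding.options.remove(.occlusion)
}
```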
ARView "showAnchorGeometry" interpretation help?
When using showAnchorGeometry I see lots of green surface anchors in my scene, and it has been really helpful for debugging the placement of objects when I tap the screen of my device. But I also get some blue shapes, and I'm not quite sure what those mean... Is there a document that explains what showAnchorGeometry is actually... showing? Googling "showAnchorGeometry blue" wasn't helpful! (I promise I tried)
1 reply · 0 boosts · 1.4k views · Dec ’22
Consistent spacing on a grid of ContainerRelativeShapes?
Hello, I'm trying to make a grid of container-relative shapes where the outside gutters match the gutters in between the items. The stickiest part of this problem is that calling .inset on a ContainerRelativeShape doubles the gutter in between the items. I've tried LazyVGrid and an HStack of VStacks, and they all have this double gutter in between. I think I could move forward with some gnarly frame math, but I was curious if I'm missing some SwiftUI layout feature that could make this easier and more maintainable.
1 reply · 0 boosts · 1k views · Apr ’22
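The doubled gutter is just arithmetic: insetting every cell by i puts 2·i between neighbors but only i at the outer edge, so a uniform gutter g needs a per-item inset of g/2 plus container padding of g/2. A sketch of that bookkeeping as plain functions (illustrative, not a SwiftUI API):

```swift
// Plain arithmetic, not a SwiftUI API: with every cell's shape inset
// by i, two neighbors are separated by 2*i, while the outer edge only
// gets i plus whatever padding p the container adds. For a uniform
// gutter g, solve 2*i == g and i + p == g.
func insets(forGutter g: Double) -> (perItemInset: Double, containerPadding: Double) {
    (g / 2, g / 2)
}

func innerGutter(perItemInset i: Double) -> Double {
    2 * i
}

func outerGutter(perItemInset i: Double, containerPadding p: Double) -> Double {
    i + p
}
```

So insetting each shape by g/2 and padding the whole grid by g/2 should equalize the gutters without any frame math.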
Debugging Max Volume size - GeometryReader3D units?
Hello, I'm curious if anyone has some useful debug tools for out-of-bounds issues with volumes. I am opening a volume with a size of 1m × 1m × 10cm. I am adding a RealityView with a ModelEntity that is 0.5m tall, and I am seeing the model clip at the top and bottom. I find this odd, because I feel like it should be within the size of the volume... I was curious what size SwiftUI says the volume is, so I tried using a GeometryReader3D to tell me:

GeometryReader3D { proxy in
    VStack {
        Text("\(proxy.size.width)")
        Text("\(proxy.size.height)")
        Text("\(proxy.size.depth)")
    }
    .padding().glassBackgroundEffect()
}

Unfortunately I get 680, 1360, and 68. I'm guessing these units are in points, but that's not very helpful. The documentation says to use real-world units for volumes, but none of the SwiftUI frame setters and getters appear to support different units. Is there a way to convert between the two? I'm not clear if this is a bug or a feature suggestion.
1 reply · 0 boosts · 883 views · Oct ’23
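visionOS SwiftUI does ship a converter for exactly this: read @Environment(\.physicalMetrics) and call physicalMetrics.convert(proxy.size, to: .meters). The conversion itself is a scale factor; with the numbers above, the 0.1 m depth reporting 68 pt suggests roughly 680 points per meter, shown here as plain arithmetic (the helper below is illustrative, not an API):

```swift
// On visionOS the real conversion is:
//   @Environment(\.physicalMetrics) private var physicalMetrics
//   let size = physicalMetrics.convert(proxy.size, to: .meters)
// Underneath it's a scale factor. With the numbers in the post, the
// 0.1 m depth reporting 68 pt implies about 680 points per meter:
func pointsToMeters(_ points: Double, pointsPerMeter: Double) -> Double {
    points / pointsPerMeter
}
```

By that scale the 1360 pt height would be about 2 m, which might explain why a 0.5 m model can still clip against the actual 1 m volume.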
Detecting Tap Location on detected "PlaneAnchor"? Replacement for Raycast?
While PlaneAnchors are still not generated by the PlaneDetectionProvider in the simulator, I am brainstorming how to detect a tap on one of the planes. In an iOS ARKit application I could use a raycastQuery on existingPlaneGeometry to make an anchor with the raycast result's world transform. I've not yet found the visionOS replacement for this. A possible hunch is that I need to install my own mesh-less PlaneModelEntities for each PlaneAnchor that's returned by the PlaneDetectionProvider. From there I could use a TapGesture targeted to those models, and then build a WorldAnchor from the tap location on those entities. Anyone have any ideas?
1 reply · 0 boosts · 1.1k views · Aug ’23
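The hunch in the post is roughly the visionOS pattern: give each PlaneAnchor an invisible entity with CollisionComponent and InputTargetComponent, target a SpatialTapGesture at it, and convert the tap location into scene space to build a WorldAnchor. A sketch with placeholder structure:

```swift
import SwiftUI
import RealityKit

// Sketch of the hunch in the post, with placeholder structure: tap an
// invisible per-plane collider entity and convert the hit into scene
// space, which is what WorldAnchor's transform wants.
struct PlaneTapView: View {
    var body: some View {
        RealityView { content in
            // For each PlaneAnchor update from the PlaneDetectionProvider,
            // add an Entity carrying CollisionComponent (shaped to the
            // plane) and InputTargetComponent, but no ModelComponent,
            // so it is tappable yet invisible.
        }
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    // Convert the tap from gesture-local coordinates
                    // into RealityKit scene space.
                    let scenePoint = value.convert(value.location3D,
                                                   from: .local, to: .scene)
                    // Build a transform at scenePoint and create a
                    // WorldAnchor(originFromAnchorTransform:) from it.
                    _ = scenePoint
                }
        )
    }
}
```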