
Black Screen After Dismissing Modal Controller
Hey all,

I've been digging around the internet for this one, and while Stack Overflow has some relevant solutions, none are working for me.

My view hierarchy is the following:

View
---> UISplitViewController.view (set as a child view controller)
--------> rootViewController.view (set as the main view controller of the split view)
--------> detailViewController.view (set as the detail view controller of the split view)

Via the iPhone 6 simulator (where the split view is always collapsed) I present a modal view controller with the following code:

```objc
UINavigationController *navigationController = [[UINavigationController alloc] initWithRootViewController:viewController];
[navigationController.navigationBar setBarStyle:UIBarStyleBlack];
[navigationController setModalPresentationStyle:UIModalPresentationPopover];
navigationController.popoverPresentationController.sourceView = view;
navigationController.popoverPresentationController.barButtonItem = barButtonItem;
navigationController.popoverPresentationController.delegate = self;
[self presentViewController:navigationController animated:YES completion:nil];
```

I dismiss the presented controller from that view controller by calling:

```objc
[self dismissViewControllerAnimated:YES completion:nil];
```

If I set animated to NO I don't have any problems, but it looks bad and doesn't make sense. I've seen some posts about this involving custom presentation methods, but I'm not using anything custom here. Any help is appreciated!

EDIT: On iPhone the modal presentation style should default to UIModalPresentationOverFullScreen, so I tried setting the presentation style directly to that, and it worked! If I set the presentation style to UIModalPresentationFullScreen I get the same behavior, a black screen after dismissing.
Topic: UI Frameworks SubTopic: UIKit Tags:
Replies: 2 · Boosts: 0 · Views: 3.8k · Apr ’21
RealityKit or ARKit? Which is right for me?
I want to create a feature where a user can stick images from my app onto their walls. I want to persist their placements between launches and use pinch and pan gestures to manipulate the images. I see lots of articles going back a few years that show how to do this in ARKit, but going through WWDC videos I'm seeing a trend toward RealityKit, and am starting to think that's the "right" thing to learn. Is RealityKit the most up-to-date secret sauce? Is there a sample project like this one, but using RealityKit? https://developer.apple.com/documentation/arkit/environmental_analysis/placing_objects_and_handling_3d_interaction
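For context, this is roughly the ARKit-style flow those older articles walk through, which is what I'd be committing to if I don't switch. All names here are my own placeholders, and the snippet is only a sketch of the approach, not code from the linked sample:

```swift
import ARKit
import UIKit

final class WallPicturePlacer {
    // Place a named anchor where the user tapped a detected vertical plane.
    func placeImageAnchor(at screenPoint: CGPoint, in sceneView: ARSCNView) {
        // Ray-cast against existing plane geometry so the image sticks to a real wall.
        guard let query = sceneView.raycastQuery(from: screenPoint,
                                                 allowing: .existingPlaneGeometry,
                                                 alignment: .vertical),
              let result = sceneView.session.raycast(query).first else { return }

        // A named ARAnchor can be matched back to content when the world map is restored.
        let anchor = ARAnchor(name: "PictureAnchor", transform: result.worldTransform)
        sceneView.session.add(anchor: anchor)
    }

    // Persistence means archiving the session's ARWorldMap between launches.
    func saveWorldMap(from session: ARSession, to url: URL) {
        session.getCurrentWorldMap { map, _ in
            guard let map = map,
                  let data = try? NSKeyedArchiver.archivedData(withRootObject: map,
                                                               requiringSecureCoding: true) else { return }
            try? data.write(to: url)
        }
    }
}
```

If RealityKit has a more current equivalent of the raycast-plus-world-map part, that's really what I'm asking about.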
Replies: 1 · Boosts: 0 · Views: 2.1k · Jan ’22
Modify Reality Composer Asset in code?
Hello, I’ve noticed that when I set the image of a picture frame asset in Reality Composer, it will change its size and aspect ratio to match the image. That’s pretty nice! I would like to let a user dynamically modify that picture while running the app. Is this possible? Or are the model’s properties you set in the composer locked in when you export?
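To make it concrete, this is the sort of runtime change I'm hoping is possible. The entity and image names are made up, and I don't know whether replacing the material like this throws away whatever Reality Composer set up:

```swift
import RealityKit
import UIKit

// Hypothetical: swap the picture shown on a frame entity that was authored in Reality Composer.
func setPicture(named imageName: String, on frame: ModelEntity) {
    guard let texture = try? TextureResource.load(named: imageName) else { return }

    // Replace the frame's material with one that uses the new image as its base color.
    var material = SimpleMaterial()
    material.color = .init(tint: .white, texture: .init(texture))
    frame.model?.materials = [material]
}
```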
Replies: 1 · Boosts: 0 · Views: 992 · Jan ’22
WorldMap doesn't contain all Anchors?
Background

So, I've got an anchor that I add to my session after performing a raycast from a user's tap. This anchor is named "PictureAnchor". This anchor is not getting saved in my scene's world map, and I'm not sure why.

Information Gathering

I keep an eye on my session by outputting some information in func session(_ session: ARSession, didUpdate frame: ARFrame). As the ARFrames are processed I look at the scene's anchors via sceneView.scene.anchors.filter({ $0.name == "PictureAnchor" }) and I see that my anchor is present in the scene's anchors. However, when I filter the anchors of the ARFrame itself via frame.anchors.filter, my PictureAnchor is never present. Furthermore, if I "save" the world map, an anchor named PictureAnchor is not present. Note: I could be totally wrong on how to read the data inside a saved world map, but I'm taking the anchors array at face value.

Other Information

I've noticed that the AR persistence sample project actually checks for the anchor to be present in the ARFrame's anchors before permitting a save, but this condition never becomes true for me. I've also noticed that my scene can have over 100 anchors, and the frame can have over 40, but only around 8 or 16 anchors are saved to the world map.

Main Question Restated

So, my main question is: why is my user-added "PictureAnchor" not present in the ARWorldMap when I save my scene's map? I see that it's present in the scene's anchors, but not in the ARFrame's anchors. A model entity attached to this anchor is visible in the scene as well.
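For reference, here's a trimmed-down sketch of how I'm watching the session. The delegate class is simplified, but the two filters are the ones described above:

```swift
import ARKit
import RealityKit

final class SessionWatcher: NSObject, ARSessionDelegate {
    // Assumed property; in my project this is the ARView whose scene the anchor's entity lives in.
    weak var sceneView: ARView?

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // The anchor is present here on every frame...
        let inScene = sceneView?.scene.anchors.filter { $0.name == "PictureAnchor" } ?? []
        // ...but never here, and the persistence sample gates saving on frame.anchors.
        let inFrame = frame.anchors.filter { $0.name == "PictureAnchor" }
        print("scene:", inScene.count, "frame:", inFrame.count)
    }
}
```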
Replies: 1 · Boosts: 0 · Views: 909 · Feb ’22
Disable occlusion for collaboration debugging?
Hello, I’m noticing that during a collaborative session, anchors created on the host device appear in a different location on client devices, and it’s making it challenging to test other collaboration logic. For example, when the client places a textured plane mesh at an anchor placed by the host, this placement can sometimes be considered behind a surface and it gets clipped by RealityKit’s mesh occlusion. I’d prefer to see it floating in space when testing so I can tell that something is happening. I’m drawing a blank on whether there are any debugging options to help me out; nothing in the render or debug options jumped out at me. Thoughts?
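For what it's worth, here's roughly where I've been poking. The debug options are just for visualization; the last line is only my guess at where an occlusion switch might live, and I haven't confirmed it's the intended approach:

```swift
import RealityKit
import ARKit

// What I've been toggling while debugging collaborative placement (no luck so far).
func configureDebugging(for arView: ARView) {
    // Visualize anchors and the reconstructed mesh so I can at least see where things land.
    arView.debugOptions.insert(.showAnchorOrigins)
    arView.debugOptions.insert(.showSceneUnderstanding)

    // Guess: occlusion seems to come from scene understanding, so maybe removing it here
    // would stop the clipping, but I'm not sure this is the intended switch.
    arView.environment.sceneUnderstanding.options.remove(.occlusion)
}
```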
Replies: 1 · Boosts: 0 · Views: 1.2k · Feb ’22
ARView "showAnchorGeometry" interpretation help?
When using showAnchorGeometry I see lots of green surface anchors in my scene, and it has been really helpful for debugging the placement of objects when I tap the screen of my device. But I also get some blue shapes, and I'm not quite sure what those mean... Is there a document that explains what showAnchorGeometry is actually... showing? Googling "showAnchorGeometry blue" wasn't helpful! (I promise I tried.)
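For reference, the setup is nothing more than turning the debug option on:

```swift
import RealityKit

// Enable the anchor-geometry visualization I'm asking about.
func enableAnchorDebugging(on arView: ARView) {
    arView.debugOptions.insert(.showAnchorGeometry)
    arView.debugOptions.insert(.showAnchorOrigins) // origins too, for good measure
}
```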
Replies: 1 · Boosts: 0 · Views: 1.4k · Dec ’22
Consistent spacing on a grid of ContainerRelativeShapes?
Hello, I'm trying to make a grid of container-relative shapes where the outside gutters match the gutters between the items. The stickiest part of this problem is that calling .inset on a ContainerRelativeShape doubles the gutter between the items. I've tried LazyVGrid and an HStack of VStacks, and they all have this doubled gutter in between. I think I could move forward with some gnarly frame math, but I was curious whether I'm missing some SwiftUI layout feature that could make this easier and more maintainable.
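Here's a trimmed-down sketch of my current attempt (the inset value and item count are arbitrary), which shows the doubled interior gutter I'm describing:

```swift
import SwiftUI

struct ShapeGrid: View {
    // Zero spacing on the grid itself, so all gutters come from each cell's inset.
    private let columns = [GridItem(.flexible(), spacing: 0), GridItem(.flexible(), spacing: 0)]

    var body: some View {
        LazyVGrid(columns: columns, spacing: 0) {
            ForEach(0..<6) { _ in
                ContainerRelativeShape()
                    // Neighboring cells end up 16pt apart (8pt + 8pt),
                    // while the outer edge only gets a single 8pt inset.
                    .inset(by: 8)
                    .fill(Color.blue)
                    .frame(height: 80)
            }
        }
    }
}
```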
Replies: 1 · Boosts: 0 · Views: 1k · Apr ’22
AppIntent with Input from share sheet or previous step?
I have the following parameter:

```swift
@Parameter(title: "Image",
           description: "Image to copy",
           supportedTypeIdentifiers: ["com.image"],
           inputConnectionBehavior: .connectToPreviousIntentResult)
var imageFile: IntentFile?
```

When I drop my AppIntent into a shortcut, though, I am unable to connect this parameter to the output of the previous step. Given the documentation, I have no idea how to achieve this, if the above is not the correct way to do so.
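In case the surrounding context matters, here's the whole intent stripped to essentials; the type name and the empty perform body are just stand-ins, only the parameter is taken verbatim from my code:

```swift
import AppIntents

struct CopyImageIntent: AppIntent {
    static var title: LocalizedStringResource = "Copy Image"

    @Parameter(title: "Image",
               description: "Image to copy",
               supportedTypeIdentifiers: ["com.image"],
               inputConnectionBehavior: .connectToPreviousIntentResult)
    var imageFile: IntentFile?

    func perform() async throws -> some IntentResult {
        // Actual copy work elided; the question is only about wiring imageFile to the previous step.
        return .result()
    }
}
```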
Replies: 2 · Boosts: 0 · Views: 1.8k · Aug ’22
Opening a SwiftUI app window to a particular place via intent?
It is possible via the new AppIntents framework to open your app from a Shortcuts intent, but I am currently very confused about how to ensure that a particular window is opened in a SwiftUI-lifecycle app. If the user says "Open View A" via Shortcuts or Siri, I'd like to make sure it opens the window for "View A", though a duplicate window could be acceptable too. The WWDC22 presentation has the following:

```swift
@MainActor
func perform() async throws -> some IntentResult {
    Navigator.shared.openShelf(.currentlyReading)
    return .result()
}
```

Here, from the perform method of the intent structure, they tell an arbitrary Navigator (code not provided) to just open a view of the app. (How convenient!) But for a multi-window SwiftUI app, I'm not sure how to make this work. @Environment variables are not available within the intent struct, and even if I did have a "Navigator singleton", I'm not sure how it could get the @Environment for openWindow, since that's a View environment. AppIntents exist outside the View environment tree AFAIK. Any ideas? I'd be a little shocked if this is a UIKit-only sort of thing, but at the same time... ya never know.
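The only shape I've come up with so far is routing through a shared object that something in the view tree observes, so openWindow can be called where the environment actually exists. All the names below are mine, and I have no idea whether this is the intended pattern:

```swift
import SwiftUI
import AppIntents

// Shared object the intent can reach and the SwiftUI scene can observe.
final class Navigator: ObservableObject {
    static let shared = Navigator()
    @Published var requestedWindowID: String?
}

struct OpenViewAIntent: AppIntent {
    static var title: LocalizedStringResource = "Open View A"
    static var openAppWhenRun: Bool = true

    @MainActor
    func perform() async throws -> some IntentResult {
        Navigator.shared.requestedWindowID = "viewA"
        return .result()
    }
}

// Somewhere inside the view tree, where the environment is available.
struct WindowOpenerView: View {
    @ObservedObject var navigator = Navigator.shared
    @Environment(\.openWindow) private var openWindow

    var body: some View {
        Color.clear
            .onChange(of: navigator.requestedWindowID) { id in
                guard let id else { return }
                openWindow(id: id)
                navigator.requestedWindowID = nil
            }
    }
}
```

It works on paper, but the extra indirection feels clunky, which is why I'm asking what the intended route is.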
Replies: 2 · Boosts: 0 · Views: 2.6k · Aug ’22
Debugging Max Volume size - GeometryReader3D units?
Hello, I'm curious if anyone has some useful debug tools for out-of-bounds issues with volumes. I am opening a volume with a size of 1m x 1m x 10cm. I am adding a RealityView with a ModelEntity that is 0.5m tall, and I am seeing the model clip at the top and bottom. I find this odd, because I feel like it should be within the size of the volume... I was curious what size SwiftUI thinks the volume is, so I tried using a GeometryReader3D to tell me:

```swift
GeometryReader3D { proxy in
    VStack {
        Text("\(proxy.size.width)")
        Text("\(proxy.size.height)")
        Text("\(proxy.size.depth)")
    }
    .padding()
    .glassBackgroundEffect()
}
```

Unfortunately I get 680, 1360, and 68. I'm guessing these units are in points, but that's not very helpful. The documentation says to use real-world units for volumes, but none of the SwiftUI frame setters and getters appear to support different units. Is there a way to convert between the two? I'm not clear if this is a bug or a feature suggestion.
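If a points-to-meters conversion is the intended answer, my best guess is that it involves the physicalMetrics environment value, something like the sketch below, but I'm not at all sure I have the API right:

```swift
import SwiftUI

struct VolumeSizeReadout: View {
    // Guess: the physicalMetrics environment value should convert SwiftUI points into real-world units.
    @Environment(\.physicalMetrics) private var physicalMetrics

    var body: some View {
        GeometryReader3D { proxy in
            let meters = physicalMetrics.convert(proxy.size, to: .meters)
            VStack {
                Text("\(meters.width) m")
                Text("\(meters.height) m")
                Text("\(meters.depth) m")
            }
            .padding()
            .glassBackgroundEffect()
        }
    }
}
```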
Replies: 1 · Boosts: 0 · Views: 883 · Oct ’23
[Newbie] Why does my ShaderGraphMaterial appear distorted?
Disclaimer: I am new to all things 3D. There could be a variety of things wrong with what I'm doing that are not unique to RealityKit. Any domain info would be appreciated.

So, I'm following what I think are the recommended steps to import a shader-graph material from Reality Composer Pro and apply it to another ModelEntity. I do the following:

```swift
guard let entity = try? Entity.load(named: "Materials", in: RealityKitContent.realityKitContentBundle) else { return model }
let materialEntity = entity.findEntity(named: "materialModel") as? ModelEntity
guard let materialEntity else { return model }
```

I then configure a property on it like so:

```swift
guard var material = materialEntity.model?.materials[0] as? ShaderGraphMaterial else { return model }
try material.setParameter(name: "BaseColor", value: .color(matModel.matCoreUIColor))
```

I then apply it. This is what my texture looks like in Reality Composer Pro: I notice that my rendered object has distortions in the actual RealityView. Note the diagonal lines that appear "stretched". What could be doing this? I thought node shaders were supposed to be more resilient to distortions like this? I'm not sure if I've got a bug or if I'm using it wrong. FWIW, this is a shader based on Apple's felt material shader. My graph looks like this:

Thanks
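The apply step is just writing the materials array back; here model is assumed to be the ModelEntity I end up returning:

```swift
// Write the configured shader-graph material back onto the model being rendered.
model.model?.materials = [material]
```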
Replies: 2 · Boosts: 0 · Views: 1.1k · Aug ’23
Detecting Tap Location on detected "PlaneAnchor"? Replacement for Raycast?
While PlaneAnchors are still not generated by the PlaneDetectionProvider in the simulator, I am still brainstorming how to detect a tap on one of the planes. In an iOS ARKit application I could use a raycast query against existingPlaneGeometry and make an anchor from the raycast result's world transform. I've not yet found the visionOS replacement for this. A possible hunch is that I need to install my own mesh-less plane ModelEntities for each PlaneAnchor that's returned by the PlaneDetectionProvider. From there I could use a TapGesture targeted to those models, and then build a WorldAnchor from the tap location on those entities. Anyone have any ideas?
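Sketching that hunch out, it would look something like the following. This is unverified, the entity name is mine, and the collision-box orientation relative to the plane is one of the things I'd need to check:

```swift
import SwiftUI
import RealityKit
import ARKit

// One invisible, tappable entity per plane anchor from the PlaneDetectionProvider.
func makeTapTarget(for plane: PlaneAnchor) -> Entity {
    let target = Entity()
    target.name = "PlaneTapTarget"
    target.transform = Transform(matrix: plane.originFromAnchorTransform)
    target.components.set(CollisionComponent(shapes: [
        ShapeResource.generateBox(width: plane.geometry.extent.width,
                                  height: plane.geometry.extent.height,
                                  depth: 0.01)
    ]))
    target.components.set(InputTargetComponent())
    return target
}

// A gesture targeted at those entities would give me a point to turn into a WorldAnchor.
func planeTapGesture() -> some Gesture {
    SpatialTapGesture()
        .targetedToAnyEntity()
        .onEnded { value in
            guard value.entity.name == "PlaneTapTarget" else { return }
            let worldPosition: SIMD3<Float> = value.convert(value.location3D, from: .local, to: .scene)
            print("Tapped plane at", worldPosition)
            // Next step (unverified): build a WorldAnchor from this position and persist it.
        }
}
```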
Replies: 1 · Boosts: 0 · Views: 1.1k · Aug ’23
Why does this entity appear behind spatial tap collision location?
I am trying to make a world anchor where a user taps a detected plane. How am I trying this? First, I add an entity to a RealityView like so:

```swift
let anchor = AnchorEntity(.plane(.vertical, classification: .wall, minimumBounds: [2.0, 2.0]), trackingMode: .continuous)
anchor.transform.rotation *= simd_quatf(angle: -.pi / 2, axis: SIMD3<Float>(1, 0, 0))
let interactionEntity = Entity()
interactionEntity.name = "PLANE"
let collisionComponent = CollisionComponent(shapes: [ShapeResource.generateBox(width: 2.0, height: 2.0, depth: 0.02)])
interactionEntity.components.set(collisionComponent)
interactionEntity.components.set(InputTargetComponent())
anchor.addChild(interactionEntity)
content.add(anchor)
```

This:

- Declares an anchor that requires a wall 2 meters by 2 meters to appear in the scene, with continuous tracking
- Makes an empty entity and gives it a 2m by 2m by 2cm collision box
- Attaches the collision entity to the anchor
- Finally, adds the anchor to the scene

It appears in the scene like this: Great! It appears to sit right on the wall. I then add a tap gesture recognizer like this:

```swift
SpatialTapGesture()
    .targetedToAnyEntity()
    .onEnded { value in
        guard value.entity.name == "PLANE" else { return }
        let worldPosition: SIMD3<Float> = value.convert(value.location3D, from: .local, to: .scene)
        let pose = Pose3D(position: worldPosition, rotation: value.entity.transform.rotation)
        let worldAnchor = WorldAnchor(transform: simd_float4x4(pose))
        let model = ModelEntity(mesh: .generateBox(size: 0.1, cornerRadius: 0.03), materials: [SimpleMaterial(color: .blue, isMetallic: true)])
        model.transform = Transform(matrix: worldAnchor.transform)
        realityViewContent?.add(model)
    }
```

I ASSUME this:

- Makes a world position from where the tap connects with the collision entity
- Integrates that position and the collision plane's rotation to create a Pose3D
- Makes a world anchor from that pose (so it can be persisted in a world tracking provider)

Then I make a basic cube entity and give it that transform.

Weird stuff: it doesn't appear on the plane... it appears behind it... Why? What have I done wrong? The X and Y of the tap location appear spot on, but something is "off" about the Z position. Also, is there a recommended way to debug this with the available tools? I'm guessing I'll have to file a DTS about this, because feedback on the forum has been pretty low since labs started.
Replies: 2 · Boosts: 0 · Views: 1.5k · Oct ’23