Reply to StaticConfiguration.init(kind:provider:placeholder:content:) deprecated warning
Is there any documentation available to indicate the difference between snapshot and placeholder? While I do see that @pdm noted that snapshot is asynchronous, I was under the impression that snapshot's goal is to provide a quick representation of the widget, as it will be previewed in the widget gallery. I was also under the impression that placeholder is relevant in cases where the widget will be rendered on the home screen before data is available. In effect:

Snapshot - Should provide real data, asynchronously, but should return this data as quickly as possible for rendering in the widget gallery.

Placeholder - Should provide data as quickly as possible, synchronously, which will be (in a future beta) automatically rendered as redacted where relevant to provide a rendered UI.

Timeline - The standard timeline entries can be provided asynchronously and, as they do not necessarily need to be provided quickly, can gather the relevant data from network resources or the app for the optimal experience.

Do I have that right?
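To make sure I'm framing that correctly, here is a rough sketch of how I picture those three roles mapping onto a TimelineProvider (SimpleEntry and its fields are stand-in names of my own, and the exact method signatures may differ between betas):

import WidgetKit
import SwiftUI

// Hypothetical entry type, just for illustration.
struct SimpleEntry: TimelineEntry {
    let date: Date
    let value: String
}

struct Provider: TimelineProvider {
    // Placeholder: synchronous and immediate, so the system can draw a (redacted) UI right away.
    func placeholder(in context: Context) -> SimpleEntry {
        SimpleEntry(date: Date(), value: "--")
    }

    // Snapshot: asynchronous, but should complete quickly with representative data for the widget gallery.
    func getSnapshot(in context: Context, completion: @escaping (SimpleEntry) -> Void) {
        completion(SimpleEntry(date: Date(), value: "Sample"))
    }

    // Timeline: asynchronous and free to gather real data from the network or the app.
    func getTimeline(in context: Context, completion: @escaping (Timeline<SimpleEntry>) -> Void) {
        let entry = SimpleEntry(date: Date(), value: "Real data")
        completion(Timeline(entries: [entry], policy: .atEnd))
    }
}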
Topic: App & System Services SubTopic: General Tags:
Jul ’20
Reply to ARView, RealityKit in SwiftUI? Target: ARKit 4
Yes, in essence, you'll be adding your ARView to SwiftUI using either a UIViewControllerRepresentable or a UIViewRepresentable, depending on your needs. It is worth noting that if you create a new AR project in Xcode 12, you can opt to use RealityKit as the content technology and SwiftUI as your interface. This will generate your main ContentView and ARView, embedding the ARView in a UIViewRepresentable automatically for you.
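For reference, the wrapper the template generates looks roughly like this (a minimal sketch from memory, so names may differ slightly from what Xcode 12 produces):

import SwiftUI
import RealityKit

struct ARViewContainer: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        // Create the ARView that RealityKit renders into.
        let arView = ARView(frame: .zero)
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {
        // Respond to SwiftUI state changes here, if needed.
    }
}

struct ContentView: View {
    var body: some View {
        ARViewContainer().edgesIgnoringSafeArea(.all)
    }
}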
Topic: UI Frameworks SubTopic: SwiftUI Tags:
Aug ’20
Reply to Video capture in AR QuickLook mode
Are you using AR QuickLook outside of your app (meaning the preview of an AR object in Messages, Mail, Files, etc.) or within your app? If within your app, you could consider adding your AR content to the "world" yourself and then writing the video frames coming from the AR camera to a file. This would be a much more complex process, requiring you to set up an ARWorldTrackingConfiguration (or comparable, depending on your needs), an ARSessionDelegate (to receive the video frames as pixel buffers), and an AVAssetWriter to write the video frames to a file. While much more involved, this would give you far more control over the video file you create, which could be ideal if you need a higher-resolution recording.
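To make those moving pieces a bit more concrete, here is a rough sketch of the delegate/writer wiring I'm describing (the class and method names are my own, and error handling, session teardown, and finishing the file are omitted):

import ARKit
import AVFoundation

final class FrameRecorder: NSObject, ARSessionDelegate {
    private var writer: AVAssetWriter?
    private var input: AVAssetWriterInput?
    private var adaptor: AVAssetWriterInputPixelBufferAdaptor?
    private var startTime: CMTime?

    func startRecording(to url: URL, width: Int, height: Int) throws {
        let writer = try AVAssetWriter(outputURL: url, fileType: .mp4)
        let settings: [String: Any] = [
            AVVideoCodecKey: AVVideoCodecType.h264,
            AVVideoWidthKey: width,
            AVVideoHeightKey: height
        ]
        let input = AVAssetWriterInput(mediaType: .video, outputSettings: settings)
        input.expectsMediaDataInRealTime = true
        let adaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: input,
                                                           sourcePixelBufferAttributes: nil)
        writer.add(input)
        guard writer.startWriting() else { throw writer.error ?? NSError(domain: "FrameRecorder", code: -1) }
        self.writer = writer
        self.input = input
        self.adaptor = adaptor
    }

    // ARSessionDelegate: each ARFrame carries the camera image as a CVPixelBuffer.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard let writer = writer, let input = input, let adaptor = adaptor else { return }
        let time = CMTime(seconds: frame.timestamp, preferredTimescale: 600)
        if startTime == nil {
            startTime = time
            writer.startSession(atSourceTime: time)
        }
        if input.isReadyForMoreMediaData {
            _ = adaptor.append(frame.capturedImage, withPresentationTime: time)
        }
    }
}

You would then set an instance of this as the AR session's delegate and, when recording ends, mark the input as finished and call finishWriting on the writer.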
Topic: Spatial Computing SubTopic: ARKit Tags:
Aug ’20
Reply to Add a UIImage to a Plane in RealityKit
A UIImage is effectively image data that has been loaded into a UIKit resource. Based on this, the biggest question is where your image data is coming from, and how you are turning it into a UIImage. Your best bet is to convert that UIImage into a local file, which you can then load as a TextureResource and apply to your model. For example, let's say you already have a UIImage prepared in your app, dynamically generated, as you indicated in your question (my example assumes your UIImage is named myImage):

let documentsDirectory = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]

if let data = myImage.pngData() {
    let filePath = documentsDirectory.appendingPathComponent("sky.png")
    try? data.write(to: filePath)
    DispatchQueue.main.async {
        self.texture = try? TextureResource.load(contentsOf: filePath)
    }
}

In this example, I am converting my UIImage to Data, saving it locally, then loading it as a TextureResource that could be applied to a model. If, instead, your UIImage is an image being downloaded from the web, you could take an approach using Combine:

var loadTexture: Cancellable?
let url = URL(string: "theimageurl.com/image.png")

loadTexture = URLSession.shared.dataTaskPublisher(for: url!)
    .receive(on: RunLoop.main)
    .map { UIImage(data: $0.data) }
    .sink(receiveCompletion: { (completion) in
        loadTexture?.cancel()
    }, receiveValue: { (image) in
        let documentsDirectory = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
        let filePath = documentsDirectory.appendingPathComponent("sky.png")
        if let data = image?.pngData() {
            try? data.write(to: filePath)
            self.texture = try? TextureResource.load(contentsOf: filePath)
        }
    })

It is worth noting that these examples do not take into account proper error handling or performance considerations (you would likely want to save the UIImage data somewhere other than the documents directory if it is data that can be purged or won't be reused again). Alongside that, if your UIImage is a relatively large file, you may want to consider using TextureResource.loadAsync(...) rather than TextureResource.load(...), as it may provide your users a more responsive app experience. You could also consider saving the data as a JPEG representation rather than PNG, but this is all dependent on your use case.
Topic: Spatial Computing SubTopic: ARKit Tags:
Aug ’20
Reply to Face tracking pre-recorded video with ARKit
No, ARKit is not capable of using pre-recorded video for face tracking. ARKit requires leveraging data from the built-in front-facing camera and is (at least as of writing this) only capable of doing so in real time. It cannot be fed video files to perform such functionality. You should file a Feedback report with Apple if that is a feature that would interest you.
Topic: Spatial Computing SubTopic: ARKit Tags:
Aug ’20
Reply to Exporting Point Cloud as 3D PLY Model
Thank you for your reply, @gchiste. That's a fair question, and frankly, I'm embarrassed to say my response is, "I don't know." Having looked at both the Visualizing and Interacting with a Reconstructed Scene - https://developer.apple.com/documentation/arkit/world_tracking/visualizing_and_interacting_with_a_reconstructed_scene and Visualizing a Point Cloud Using Scene Depth - https://developer.apple.com/documentation/arkit/visualizing_a_point_cloud_using_scene_depth sample projects, I find myself intrigued by the possibility of using the LiDAR camera to recreate 3D environments.

While I am not wholly expecting a photorealistic 3D representation of the world around me, samples of both the reconstructed scene and point cloud projects shared on YouTube/Twitter show some amazing potential for bringing the gathered representations into 3D modeling programs for a variety of use cases. The point cloud sample project creates a sort of whimsical representation of the environment, and I would love to be able to take what I see on screen when running that sample project and, effectively, export it as a 3D model.

I see the question come up a bit around the Apple Developer Forums and other technical resources, though I think a large disconnect is knowing which technologies one would need to learn to take those sample projects and create a 3D model. The ARMeshGeometry route seems a bit more straightforward, but knowing which Apple frameworks one would need to connect the points/particles shown in the Visualizing a Point Cloud Using Scene Depth sample project to some sort of model output would, I think, intrigue many.
Topic: Graphics & Games SubTopic: Metal Tags:
Aug ’20
Reply to Exporting Point Cloud as 3D PLY Model
Hi @gchiste, Thank you so much for your detailed reply. This really helped to break down the steps necessary to gather the point cloud data coordinates and color values, and how to connect this data to a SceneKit scene that could be rendered out as a 3D model. I am thrilled with this and am working my way through the Visualizing a Point Cloud Using Scene Depth sample project to adapt your logic, both for understanding and testing.

If I may ask, as a follow-up to your detailed guidance and looking at the Visualizing a Point Cloud Using Scene Depth sample project, regarding this step:

Use Metal (as demonstrated in the point cloud sample project) to unproject points from the depth texture into world space.

I notice that this function exists in the project's Shaders.metal file (as the unprojectVertex call). Is it possible to use the results of this call and save them to a MTLBuffer, or does the unprojectVertex call need to be adapted to run on the CPU as each frame is rendered, with those results then saved to a MTLBuffer? Perhaps I'm getting away from the root of the question, but I am unsure whether what exists in Shaders.metal can yield the position/color data I need, or whether I need to develop my own function to do that outside of Shaders.metal.
Topic: Graphics & Games SubTopic: Metal Tags:
Aug ’20
Reply to Exporting Point Cloud as 3D PLY Model
Thank you for your replies, @gchiste. While working in Metal and SceneKit is a learning experience, this sample project, and your guidance, certainly make a world of difference in building a strong foundation in how this technology works and how to best apply these learnings to our own apps. Your follow-up regarding the particlesBuffer position makes sense, and I have been able to successfully print the position of specific points by hard-coding a particlesBuffer index, per your example.

While you've been immensely helpful and much of this is on me to learn, I'm wondering if you could note where one would add the

commandBuffer.addCompletedHandler { [self] _ in
    let position = particlesBuffer[9].position
}

call (I presume in the renderer's accumulatePoints() function). What I'm struggling to wrap my head around is how to actually read the positions from particlesBuffer. Normally, I would expect to iterate over the elements and copy the position (or color) to a new array, though I find that particlesBuffer is not able to be iterated through. Determined to figure this out!
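For context, what I'm hoping to end up with is something along these lines (assuming the sample project's MetalBuffer subscript and its currentPointCount are the right way to reach each particle):

commandBuffer.addCompletedHandler { [self] _ in
    // Once the GPU has finished, pull each particle's position out of the buffer.
    var positions = [SIMD3<Float>]()
    positions.reserveCapacity(currentPointCount)
    for index in 0..<currentPointCount {
        positions.append(particlesBuffer[index].position)
    }
    // positions could then be handed off (e.g. written out to a PLY file) on a background queue.
}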
Topic: Graphics & Games SubTopic: Metal Tags:
Aug ’20
Reply to Are there any tutorials or guides for AR apps?
There are many resources and tutorials available for developing AR apps. Depending on your experience with iOS, you may find some of the terminology and thought processes easy to digest, whereas in other cases, since you're dealing with 3D objects, there can often be a different way of looking at things, given the use of planes, anchors, and coordinates. One resource that might be ideal for getting started is to create a sample AR app in Xcode. In Xcode 12, when you choose to start a new project, an Augmented Reality App is one of the template options. Once you select that, you can choose a "Content Technology" and an "Interface." I've found that choosing RealityKit as the Content Technology and SwiftUI as the Interface allows Xcode to create a great deal of the boilerplate code, so you can focus more on the AR itself. Additionally, Reality Composer is a great tool for getting started with AR. It is available as a standalone app for iPhone/iPad, as well as on the Mac through Xcode (Xcode -> Open Developer Tool -> Reality Composer). At the core, I think the question you'd have to ask is: am I looking to quickly get started creating 3D/AR content and seeing it come to life, or does it interest me more to know the inner workings and have more nuanced control of how the AR experience works? Apple's Tracking and Visualizing Planes - https://developer.apple.com/documentation/arkit/world_tracking/tracking_and_visualizing_planes is a great starting-point project, as are all of the ARKit sessions from past WWDCs (available in Apple's Developer app).
Topic: Spatial Computing SubTopic: ARKit Tags:
Aug ’20
Reply to Loading Entities from a File
On iOS/iPadOS 14 (currently still in beta), your project launched and compiled without issue for me. I was able to detect a surface, and the AR box appeared as expected. While I recognize that may not be an overly helpful explanation of the issue you are facing, I certainly encourage you to try cleaning your build folder (within Xcode, Product -> Clean Build Folder) and testing again, as well as ensuring your iPhone/iPad software (and Xcode) are fully up to date. Just offering some confidence that your code does work, and did run successfully, on my end.
Topic: Programming Languages SubTopic: Swift Tags:
Aug ’20
Reply to Are there any tutorials or guides for AR apps?
Hi vlttnv,

You are absolutely correct that referencing Apple's ARKit Developer Documentation - https://developer.apple.com/documentation/arkit will be your best resource as you dive into the world of ARKit and Augmented Reality. I find myself referencing that documentation multiple times per day, and it certainly has become my strongest resource. I can recall having many similar questions when I began exploring building AR apps, and I hope that a few thoughts may be helpful as you continue your journey. With that said, please do refer to Apple's Developer Documentation and sample projects first and foremost.

ARKit

ARKit is the underlying framework that handles the "heavy lifting" of Augmented Reality experiences. ARKit configures the camera, gathers the relevant sensor data, and is responsible for detecting and locating the "anchors" that will tether your 3D content to the real world, as seen through the camera. In a sense, Augmented Reality is all about displaying 3D content in the real world, tethering your 3D content to anchors that are tracked and followed, making the 3D content appear as though it truly is in front of your user. As a whole, ARKit does the work to find those anchors and track them, and handles the computations and augmentations to keep your 3D content tethered to those anchors, making the experience seem realistic.

Anchors can come in a variety of forms. Anchors are most commonly planes (a horizontal plane, like a floor, table top, or the ground, or a vertical plane, like a wall, window, or door), but can also be faces (a human face), an image (where you provide your app an image, and when the camera detects that image, it becomes the "anchor" for your 3D content), an object (where you provide your app a 3D object, and when the camera detects that object in the real world, it becomes the "anchor" for your 3D content), a body (for the purposes of tracking the movement of joints and applying that movement to a 3D character), a location (using ARGeoAnchors, which anchor your 3D content to a specific set of longitude/latitude/altitude coordinates, as a CLLocation from the CoreLocation framework, if in a supported location), or a mesh (if your device has a LiDAR scanner, ARKit becomes capable of detecting more nuanced planes, such as recognizing a floor plane vs. a table-top plane, or a door plane vs. a wall plane). In all, your 3D content has to be anchored to something in the real world, and ARKit handles finding these anchors and providing them to you for your use.

Content Technology

Whereas ARKit handles the heavy lifting of configuring the camera, finding anchors, and tracking those anchors, you have a choice of which Content Technology you plan to use to actually render/show your 3D content. The Content Technology is the framework doing the heavy lifting of either loading your 3D model (which you probably created elsewhere, such as in a 3D modeling program or in Reality Composer) or creating 3D content programmatically. There are four main choices for Content Technology:

RealityKit - RealityKit was announced at WWDC 2019 and is the newest of the 3D graphics technologies available in iOS. Much like other 3D technologies available in iOS, RealityKit offers you the ability to load 3D models you may have created in other 3D modeling programs, create 3D content (such as boxes, spheres, text, etc.), as well as create 3D lights, cameras, and more. As described in the RealityKit Documentation - https://developer.apple.com/documentation/realitykit, RealityKit allows you to "Simulate and render 3D content for use in your augmented reality apps." To your comment, RealityKit complements ARKit; ARKit gathers the information from the camera and sensors, and RealityKit renders the 3D content. Sample Project: Creating Screen Annotations for Objects in an AR Experience - https://developer.apple.com/documentation/arkit/creating_screen_annotations_for_objects_in_an_ar_experience

SceneKit - SceneKit is another popular choice for working with ARKit. SceneKit is wildly popular in iOS development for generating 3D content. Similar to RealityKit, SceneKit offers the ability to load and create 3D models; handle lighting, reflections, shadows, etc.; and works hand-in-hand with ARKit. SceneKit is also popular in game development, and given that many developers have experience with SceneKit from developing 3D games, it is a great way to bring that understanding to the world of Augmented Reality, as many of the same principles from 3D game development can be applied to AR. Sample Project: Tracking and Visualizing Faces - https://developer.apple.com/documentation/arkit/tracking_and_visualizing_faces

SpriteKit - SpriteKit is another popular choice for game development, and its principles, when brought into the world of AR, still apply. SpriteKit is a highly performant framework that traditionally deals in 2D content. Again, this is a hugely popular framework for iOS game development, and its ability to work hand-in-hand with ARKit allows developers with existing knowledge to implement AR experiences. Documentation: Providing 2D Virtual Content with SpriteKit - https://developer.apple.com/documentation/arkit/arskview/providing_2d_virtual_content_with_spritekit

Metal - Metal is a low-level graphics framework that is hugely powerful. In its simplest form, Metal allows you to take control of the entire graphics pipeline, offering you the ability to develop experiences from the ground up while maintaining exceptional performance. Metal talks directly to your device's GPU and can give you more nuanced control over everything from the camera to how your 3D content appears. All of the aforementioned frameworks are built on top of Metal, and all are built to offer the same incredible performance and security that Metal provides. If you find yourself needing to work more directly with the GPU, Metal is your best choice. Sample Project: Effecting People Occlusion in Custom Renderers - https://developer.apple.com/documentation/arkit/effecting_people_occlusion_in_custom_renderers

It is worth saying that you will find Apple's sample projects for ARKit leverage different content technologies at different times. I encourage you to review the sample projects relevant to the app you are building and see which may ideally fit your use case.

(Adding a second reply with follow-up thoughts.)
Topic: Spatial Computing SubTopic: ARKit Tags:
Aug ’20
Reply to Are there any tutorials or guides for AR apps?
Lifecycle

Regardless of which Content Technology you choose, there are certain principles that apply across the board when creating an ARKit experience. Namely:

1. Define the type of configuration relevant for your AR experience (world tracking, face tracking, body tracking, image tracking, object tracking, geolocation tracking), as well as any relevant parameters (if you're doing world tracking, perhaps you want to specify that you are only looking for vertical planes to tether 3D content to). It is worth noting that you can somewhat mix-and-match different use cases (for example, as noted in the Combining User Face-Tracking and World Tracking - https://developer.apple.com/documentation/arkit/combining_user_face-tracking_and_world_tracking sample project, you can set up a world tracking configuration to find horizontal and vertical planes while still receiving face tracking information - that sample demonstrates generating facial expressions on a 3D character in the "real world," while using your face to drive those expressions).

2. Configure any relevant configuration parameters, if supported (some examples would be specifying horizontal vs. vertical plane detection, detecting meshes with their classifications if your device has a LiDAR scanner, or configuring the environment texturing parameters). Most configurations have a sensible default set of parameters, so even if you do not configure anything manually, you can have a great experience.

3. Configure any parameters relevant to the view that will display the AR content (if you are working with RealityKit, that comes in the form of an ARView; if you are working with SceneKit, an ARSCNView; if you are working with Metal, an MTKView). This could include enabling physics or object occlusion if using an ARView from RealityKit, or debug options to evaluate performance when working in SceneKit. Each Content Technology has its own set of view parameters that can be configured.

4. Configure a delegate for your AR session so you can receive relevant updates from ARKit (such as receiving the camera frame; receiving callbacks when new anchors are added, updated, and removed; and being notified of interruptions and performance concerns that could impact the user experience).

5. Run the session with the configuration you set up. This begins an AR session leveraging the configuration you have requested. Based on the type of anchor your configuration is looking for, each time a new anchor is added, you receive a callback in your delegate method. From there, you can use that anchor to add your 3D content.

Here's a sample of a very simple setup of an ARWorldTrackingConfiguration using RealityKit. This exists in my ViewController's viewDidLoad() method:

arView.session.delegate = self

let configuration = ARWorldTrackingConfiguration()
configuration.environmentTexturing = .automatic
arView.session.run(configuration)

While that is as simple as it gets, ARKit is handling the work of setting up the camera, configuring itself to look for any horizontal or vertical plane (the default) in the real world, applying automatic environment texturing for a more realistic appearance of content in the environment, and running the session. Once an anchor is found, you'll receive the didAdd anchors: [ARAnchor] callback in your ARSession delegate, at which point you can create or load your 3D content and add it to your Content Technology's view hierarchy.
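And here is a minimal sketch of the kind of delegate callback described above, assuming RealityKit and a view controller that owns the arView from the previous snippet (the box is just stand-in content):

import ARKit
import RealityKit

// Assumes ViewController is the UIViewController that owns the arView shown above.
extension ViewController: ARSessionDelegate {
    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for anchor in anchors {
            // Wrap the ARAnchor in a RealityKit AnchorEntity and attach some 3D content to it.
            let anchorEntity = AnchorEntity(anchor: anchor)
            let box = ModelEntity(mesh: .generateBox(size: 0.1),
                                  materials: [SimpleMaterial(color: .blue, isMetallic: false)])
            anchorEntity.addChild(box)
            arView.scene.addAnchor(anchorEntity)
        }
    }
}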
Closing

There is a great deal to learn about ARKit, and many different ways to build experiences. Your vision for your app may help inform which underlying Content Technology you choose. Do review the ARKit Developer Documentation - https://developer.apple.com/documentation/arkit and sample projects as a starting point. This community, too, has been very helpful to me, and hopefully it will be able to assist as you move further into your development.
Topic: Spatial Computing SubTopic: ARKit Tags:
Aug ’20
Reply to IPad4 pro Lidar feature points
Perhaps someone from Apple will be able to comment on the technical specifications, but I just ran a test on an iPhone 11 Pro Max and an iPad Pro (2nd Generation), effectively sitting in the same position and panning each device in the same direction. The documentation for rawFeaturePoints - https://developer.apple.com/documentation/arkit/arframe/2887449-rawfeaturepoints does indicate that:

ARKit does not guarantee that the number and arrangement of raw feature points will remain stable between software releases, or even between subsequent frames in the same session.

That said, my experience on both devices was extremely similar in terms of gathered rawFeaturePoints on a per-frame basis. My assumption may be wrong, but I do not believe the LiDAR scanner contributes to the rawFeaturePoints. Rather, the LiDAR scanner generates ARMeshAnchors, effectively reconstructing the environment seen by the LiDAR scanner as a series of meshes. While the rawFeaturePoints loosely use the contours of real-world objects to understand the environment and to create points for anchoring 3D content, the ARMeshAnchors provide a faster and more robust understanding of the environment, creating more realistic 3D experiences. Depending on your use case, that information may or may not be helpful, but I hope a quick test does yield some thoughts.
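If it's useful, a quick way to eyeball this yourself is to count the points per frame in your ARSessionDelegate, roughly like this (a minimal sketch):

func session(_ session: ARSession, didUpdate frame: ARFrame) {
    // rawFeaturePoints is optional, and its contents can change from frame to frame.
    let count = frame.rawFeaturePoints?.points.count ?? 0
    print("Raw feature points this frame: \(count)")
}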
Topic: App & System Services SubTopic: Core OS Tags:
Aug ’20
Reply to Overlaying different swiftUI views depending on the image detected with ARkit app.
Based on your desired goal, there are probably two things I'd change in this implementation to make it easier and more extensible to work with multiple ARReferenceImages. In func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor), I would likely retrieve the name of the image and set up a statement that allows me to call a SwiftUI view while passing the name of the image. Subsequently, based on your sample code, there really isn't a need to have two SwiftUI views; you could have one SwiftUI view that takes a property (such as a name, an index, etc.) and shows the relevant text on the screen. For example, I might modify your func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) function to appear like so:

func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    guard let imageAnchor = anchor as? ARImageAnchor else { return nil }

    let plane = SCNPlane(width: imageAnchor.referenceImage.physicalSize.width,
                         height: imageAnchor.referenceImage.physicalSize.height)
    let planeNode = SCNNode(geometry: plane)
    planeNode.eulerAngles.x = -.pi / 2

    if let imageName = imageAnchor.referenceImage.name {
        imageController(for: planeNode, imageName: imageName)
    }

    let node = SCNNode()
    node.addChildNode(planeNode)
    return node
}

Then, you could modify your func imageOneController(for node: SCNNode) function, like so:

func imageController(for node: SCNNode, imageName: String) {
    let imageView = UIHostingController(rootView: ImageView(imageName: imageName))
    DispatchQueue.main.async {
        imageView.willMove(toParent: self)
        self.addChild(imageView)
        imageView.view.frame = CGRect(x: 0, y: 0, width: 500, height: 500)
        self.view.addSubview(imageView.view)
        self.showImageView(hostingVC: imageView, on: node)
    }
}

At that point, you could then show your image view in func showImageOne(hostingVC: UIHostingController<imageOne>, on node: SCNNode), as you are doing now, but doing it in a general way so that you do not need to create new views for each image.
As such, I would modify func showImageOne(hostingVC: UIHostingController<imageOne>, on node: SCNNode) to be something like:

func showImageView(hostingVC: UIHostingController<ImageView>, on node: SCNNode) {
    let material = SCNMaterial()
    hostingVC.view.isOpaque = false
    material.diffuse.contents = hostingVC.view
    node.geometry?.materials = [material]
    hostingVC.view.backgroundColor = UIColor.clear
}

Lastly, you would want to modify your imageOne SwiftUI struct to be something more generic, like:

import SwiftUI

struct ImageView: View {
    var imageName: String

    var body: some View {
        Text("hello \(imageName)")
    }
}

In general, I am making some assumptions about what you are trying to do, but if you were to ever add more ARReferenceImages to your app, based on the methodology you have in place here, you'd have to keep creating new SwiftUI views and new functions for each one. Using some sort of an identifier (like the image's name) will allow you to more generally create your views and display them accordingly.
Topic: UI Frameworks SubTopic: SwiftUI Tags:
Aug ’20