Are there any tutorials or guides to follow for developing AR apps? From what I see, the documentation is mostly a reference.
As someone new to developing AR apps for iOS, I was wondering if there is documentation that gives an overview of the general approach and structure of AR apps.
Thanks,
Val
You are absolutely correct that Apple's ARKit Developer Documentation will be your best resource as you dive into the world of ARKit and Augmented Reality. I find myself referencing that documentation multiple times per day, and it has certainly become my strongest resource. I can recall having many similar questions when I began building AR apps, and I hope that a few thoughts may be helpful as you continue your journey. With that said, please do refer to Apple's Developer Documentation and sample projects first and foremost.
ARKit
ARKit is the underlying framework that handles the "heavy lifting" of Augmented Reality experiences. ARKit configures the camera, gathers the relevant sensor data, and is responsible for detecting and tracking the "anchors" that tether your 3D content to the real world as seen through the camera. In a sense, Augmented Reality is all about displaying 3D content in the real world: your content is tethered to anchors that are tracked and followed, so it appears as though it truly is in front of your user. As a whole, ARKit does the work to find those anchors, track them, and handle the computations that keep your 3D content tethered to them, making the experience seem realistic.
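To make that concrete, here is a minimal sketch (my own illustration, not from Apple's samples) of starting an ARKit session with a world-tracking configuration and receiving anchors through the session delegate. The class name and structure are placeholders; only the ARKit types and calls are real API:

```swift
import ARKit

// A minimal sketch: own an ARSession, run a world-tracking configuration,
// and receive anchors as ARKit discovers them.
class ARSessionController: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal, .vertical] // ask ARKit to find plane anchors
        session.delegate = self
        session.run(configuration)
    }

    // ARKit hands you anchors as it detects them in the real world.
    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for anchor in anchors where anchor is ARPlaneAnchor {
            print("Found a plane anchor at \(anchor.transform)")
        }
    }
}
```

In practice you rarely create a bare ARSession yourself; views like ARView (RealityKit) or ARSCNView (SceneKit) own a session for you, and you run your configuration on that.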
Anchors can come in a variety of forms:

- A plane, the most common case: a horizontal plane such as a floor, table top, or the ground, or a vertical plane such as a wall, window, or door.
- A face (a human face).
- An image: you provide your app an image, and when the camera detects that image, it becomes the "anchor" for your 3D content.
- An object: you provide your app a 3D object, and when the camera detects that object in the real world, it becomes the "anchor" for your 3D content.
- A body, for the purposes of tracking the movement of joints and applying that movement to a 3D character.
- A location, using ARGeoAnchors, which anchor your 3D content to a specific set of latitude/longitude/altitude coordinates (as a CLLocation from the Core Location framework), if you are in a supported location.
- A mesh, if your device has a LiDAR scanner; ARKit then becomes capable of detecting more nuanced surfaces, such as distinguishing a floor plane from a table-top plane, or a door plane from a wall plane.

In all, your 3D content has to be anchored to something in the real world, and ARKit handles finding these anchors and providing them to you for your use. The sketch below shows how these anchor types surface in code.
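As a rough illustration of that variety, a session delegate could distinguish the anchor types ARKit delivers like this. Which types you actually receive depends on the configuration you run; this is just a sketch, not a complete app:

```swift
import ARKit

// Describe the different anchor types ARKit can deliver to your session delegate.
func describe(_ anchor: ARAnchor) -> String {
    switch anchor {
    case let plane as ARPlaneAnchor:
        return "Plane anchor, alignment: \(plane.alignment)"
    case is ARImageAnchor:
        return "Image anchor (a reference image was detected)"
    case is ARObjectAnchor:
        return "Object anchor (a scanned reference object was detected)"
    case is ARFaceAnchor:
        return "Face anchor (front-camera face tracking)"
    case is ARBodyAnchor:
        return "Body anchor (joint tracking for motion capture)"
    case is ARGeoAnchor:
        return "Geo anchor (latitude/longitude/altitude, in supported locations)"
    case is ARMeshAnchor:
        return "Mesh anchor (LiDAR scene-reconstruction geometry)"
    default:
        return "Plain ARAnchor"
    }
}
```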
Content Technology
Whereas ARKit handles the heavy lifting of configuring the camera, finding anchors, and tracking those anchors, you have a choice of which Content Technology you use to actually render your 3D content. The Content Technology is the framework that does the heavy lifting of either loading your 3D model (which you probably created elsewhere, such as in a 3D modeling program or in Reality Composer) or creating 3D content programmatically. There are four main choices for Content Technology:
RealityKit - RealityKit was announced at WWDC 2019 and is the newest of the 3D graphics technologies available in iOS. Much like the other 3D technologies on iOS, RealityKit offers the ability to load 3D models you may have created in other 3D modeling programs, to create 3D content programmatically (such as boxes, spheres, text, etc.), and to create 3D lights, cameras, and more. As described in the RealityKit Documentation, RealityKit lets you "simulate and render 3D content for use in your augmented reality apps." To your question, RealityKit complements ARKit: ARKit gathers the information from the camera and sensors, and RealityKit renders the 3D content.
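For example, here is a minimal RealityKit sketch (my own illustration, assuming a UIKit app that already has an ARView on screen) that creates a simple box programmatically and anchors it to the first horizontal plane ARKit finds:

```swift
import ARKit
import RealityKit

// A minimal sketch of rendering content with RealityKit once ARKit has done
// the tracking work (assumes an ARView already exists in your view hierarchy).
func addBox(to arView: ARView) {
    // Create a simple 3D box programmatically.
    let box = ModelEntity(
        mesh: .generateBox(size: 0.1),
        materials: [SimpleMaterial(color: .blue, isMetallic: false)]
    )

    // Anchor the box to the first horizontal plane ARKit detects.
    let anchor = AnchorEntity(plane: .horizontal)
    anchor.addChild(box)
    arView.scene.addAnchor(anchor)
}
```

Loading a model you built elsewhere is similarly brief; for a .usdz or .reality file bundled with your app, something like `try Entity.load(named: "MyModel")` (the file name here is a placeholder) gives you an entity you can add to an anchor in the same way.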
Sample Project: Creating Screen Annotations for Objects in an AR Experience
Sample Project: Tracking and Visualizing Faces
Documentation: Providing 2D Virtual Content with SpriteKit
Sample Project: Effecting People Occlusion in Custom Renderers
(Adding a second reply with follow-up thoughts).