Thanks for this reminder. Quick question on that: in the past I tried adding ARAnchors to coincide with the user's position at various points in time as they moved around an environment. I found that the transforms of these anchors were never updated during the tracking session (or were moved only by a minuscule amount). Is there a way to use ARAnchors that increases the likelihood they will be moved during pose graph updates?

From what I know about graph SLAM, it seems at a minimum you'd want to be able to communicate to ARKit that a particular ARAnchor represents the pose of the user at some point in time. In that case, if graph SLAM ever shifts this position during its optimization, the ARAnchor could move accordingly. The issue I see is that there doesn't seem to be any way to tell ARKit that an ARAnchor should represent the pose of the phone at some point in time (rather than just some arbitrary content at a particular location in space).
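For context, here's roughly what I was doing (a minimal sketch, not my exact code): dropping an ARAnchor at the camera's current pose, then watching for transform adjustments through the session delegate. In my tests the deltas reported here were essentially zero.

```swift
import ARKit

// Sketch: record the camera pose as an ARAnchor and watch for updates.
class PoseAnchorRecorder: NSObject, ARSessionDelegate {
    // Last-seen transform for each anchor we created.
    private var poseAnchors: [UUID: simd_float4x4] = [:]

    // Called periodically (e.g. from a Timer) to drop an anchor at the
    // camera's current pose.
    func recordCurrentPose(in session: ARSession) {
        guard let frame = session.currentFrame else { return }
        let anchor = ARAnchor(transform: frame.camera.transform)
        poseAnchors[anchor.identifier] = anchor.transform
        session.add(anchor: anchor)
    }

    // ARKit calls this when it adjusts anchor transforms (relocalization,
    // map refinement). This is where I expected pose-graph corrections to
    // show up, but the translation deltas were always near zero.
    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for anchor in anchors {
            guard let old = poseAnchors[anchor.identifier] else { continue }
            let delta = simd_distance(old.columns.3, anchor.transform.columns.3)
            print("Anchor \(anchor.identifier) moved \(delta) m")
            poseAnchors[anchor.identifier] = anchor.transform
        }
    }
}
```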
Is there a way to use ARAnchors that gives access to the results of graph SLAM, so that one can recover the pose of the phone at times other than the current frame (i.e., from some point in the past)?

Similarly, if I have some content that I'd like to place on a planar surface, what's the best practice there? Should I create my own ARPlaneAnchor? Or should I just store the coordinates of my object in the ARPlaneAnchor's coordinate system?
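For the plane case, what I've been considering (not sure it's best practice, hence the question) is storing the content's pose in the plane anchor's local frame and recomputing the world pose whenever the anchor updates, so the content follows any refinement ARKit applies to the anchor. A sketch, with `AnchoredContent` being my own hypothetical type:

```swift
import ARKit

// Sketch: keep content in a plane anchor's local coordinate system so it
// tracks the anchor if ARKit refines the anchor's transform.
struct AnchoredContent {
    let planeAnchorID: UUID
    // Pose of the content expressed in the plane anchor's local frame.
    let localTransform: simd_float4x4

    // Convert a world-space pose into the anchor's local frame at creation.
    init(worldTransform: simd_float4x4, planeAnchor: ARPlaneAnchor) {
        self.planeAnchorID = planeAnchor.identifier
        self.localTransform = simd_mul(planeAnchor.transform.inverse,
                                       worldTransform)
    }

    // World pose = anchor's (possibly updated) transform * local offset.
    func worldTransform(given planeAnchor: ARPlaneAnchor) -> simd_float4x4 {
        return simd_mul(planeAnchor.transform, localTransform)
    }
}
```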
Thanks in advance for any information. I have always struggled to access the results of the ARKit SLAM algorithm at the level of granularity that I'd like.