Thanks for your response. That doesn't quite answer my question though.
Before ARGeoTrackingConfiguration was available in ARKit 4, we used ARWorldTrackingConfiguration together with a custom method that periodically calibrated/synced the location between the AR view and our 3D-map scene view, so that our scene wouldn't drift too much relative to the AR camera feed as the device moved around by 10+ meters.
With the addition of ARGeoTrackingConfiguration, when street imagery/geo anchors are available on main streets, we now find the accuracy of the ARFrame camera really good, resulting in a smooth experience with much less calibration needed. 👍
However, when street imagery isn't available (say we move from a main street into a neighborhood and the geo-tracking accuracy drops from medium to low, or even undetermined), we need to fall back to something.
The question is: should we fall back to a new session with ARWorldTrackingConfiguration and use our old periodic calibration approach, or should we keep using ARGeoTrackingConfiguration and add custom calibration logic to compensate for the low/undetermined accuracy?
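For concreteness, the first option would look roughly like this. This is only a sketch (it assumes we keep our own anchors, and it obviously only runs on-device): we re-run the existing ARSession with a world-tracking configuration, deliberately passing no reset options so the current world origin and anchors are preserved and the scene doesn't jump.

```swift
import ARKit

// Option A (sketch): re-run the existing session with world tracking once
// geo-tracking accuracy drops. Omitting .resetTracking and
// .removeExistingAnchors keeps the current world origin and anchors,
// so our scene shouldn't jump at the switch.
func fallBackToWorldTracking(on session: ARSession) {
    let config = ARWorldTrackingConfiguration()
    session.run(config, options: [])
    // From here we would re-enable our old periodic calibration loop.
}
```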
It largely depends on whether the camera transform delivered via session(_:didUpdate:) under a geo-tracking configuration is always as accurate as, or better than, what world tracking provides. In other words, is geo tracking a superset of world tracking in this regard?
From our own ad-hoc testing, we cannot tell for sure. 🫤 It almost seems like when street imagery isn't available, geo tracking's accuracy is sometimes worse than world tracking's, though only by a small margin. We also found that while geo tracking is trying to re-localize using street imagery, the ARFrame camera can be unstable for a short period.
To sum up, we want to smoothly handle the case where geo tracking starts to become unstable, or street-imagery-based geo tracking is completely unavailable, by using custom logic to keep our scene aligned with the AR view. Is it possible to achieve this solely with a session configured with ARGeoTrackingConfiguration?
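To clarify what we mean by custom calibration logic: in the app we would switch on the accuracy from session(_:didChange geoTrackingStatus:) (or frame.geoTrackingStatus.accuracy) and gate our old periodic sync on it. Here is the decision rule we have in mind, written as plain Swift with ARGeoTrackingStatus.Accuracy mirrored by a local enum so it reads off-device; shouldRunCustomCalibration is our own hypothetical helper name.

```swift
// Local stand-in for ARKit's ARGeoTrackingStatus.Accuracy, so the gating
// logic can be shown (and unit-tested) without an ARSession.
enum GeoAccuracy { case undetermined, low, medium, high }

// Returns true when our own periodic calibration should take over,
// i.e. whenever street-imagery-based localization is degraded or absent.
func shouldRunCustomCalibration(for accuracy: GeoAccuracy) -> Bool {
    switch accuracy {
    case .medium, .high:
        return false  // trust the geo-tracked camera pose as-is
    case .undetermined, .low:
        return true   // fall back to our old world-tracking-style sync
    }
}
```

The open question is whether this gating, inside a single geo-tracking session, is enough, or whether the unstable re-localization periods we observed make a full switch to world tracking the safer route.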