I don't know if Apple will answer this, but from my experience using a number of devices I suspect Apple uses a combination of
Standard wide lens
Ultra wide lens
LiDAR sensor
and merges all this data to understand the world around it and the device's position within that world. The more of these sensors Apple can leverage on a given device, the more robust the tracking is, and the better the experience.
A device with a single standard wide lens will do (but you do need to move the device side-to-side to help it triangulate depth). Wide + Ultra wide lens is better. Wide + Ultra wide + LiDAR is pretty sweet.
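To make that concrete, here's a minimal sketch (assuming a regular iOS ARKit session, not the author's exact setup) of how you'd opt in to the richer, LiDAR-backed data only on devices that have it, while still getting camera-based tracking everywhere else. ARKit does the sensor fusion internally; your job is just to enable what the hardware supports.

```swift
import ARKit

// Build a world-tracking configuration that scales with the device's sensors.
func makeWorldTrackingConfiguration() -> ARWorldTrackingConfiguration {
    let configuration = ARWorldTrackingConfiguration()

    // On LiDAR-equipped devices, ask ARKit for a reconstructed mesh of the scene.
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
        configuration.sceneReconstruction = .mesh
    }

    // Per-frame depth maps are also LiDAR-backed; enable them only if supported.
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
        configuration.frameSemantics.insert(.sceneDepth)
    }

    // Plane detection works on any supported device using the camera(s) alone.
    configuration.planeDetection = [.horizontal, .vertical]

    return configuration
}

// Usage: run it on your session, e.g. arView.session.run(makeWorldTrackingConfiguration())
```

On a wide-lens-only device you still get world tracking and plane detection; the scene mesh and depth maps just never switch on.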