iPad Pro (4th generation) LiDAR feature points

Hi,
I want to use the feature points calculated by ARKit in my application. I am already accessing that information through rawFeaturePoints.

So I want to know whether the rawFeaturePoints are calculated more robustly using the LiDAR information available on the iPad Pro (4th generation) with iOS 14.

If I add the code below before running the view's session, are the rawFeaturePoints calculated using 3D information?
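For context, the kind of setup I mean is roughly the following (a minimal sketch of enabling LiDAR scene reconstruction; details may differ from my actual code):

import ARKit

// Sketch: enable LiDAR scene reconstruction before running the view's session.
func runSession(on sceneView: ARSCNView) {
    let configuration = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
        // .mesh is only supported on LiDAR-equipped devices such as the 2020 iPad Pro
        configuration.sceneReconstruction = .mesh
    }
    sceneView.session.run(configuration)
}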

Thanks :)

Answered by brandonK212 in 629887022
I am also very curious about this question.

I am using ARWorldMap, and it is crucial for me to know whether employing LiDAR helps with map creation and with relocalizing against the map later.
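For concreteness, the save / re-apply flow I have in mind is roughly this (a sketch; the helper names are just illustrative):

import ARKit
import Foundation

// Save the current world map to disk.
func saveWorldMap(from session: ARSession, to url: URL) {
    session.getCurrentWorldMap { worldMap, _ in
        guard let map = worldMap,
              let data = try? NSKeyedArchiver.archivedData(withRootObject: map,
                                                           requiringSecureCoding: true) else { return }
        try? data.write(to: url)
    }
}

// Re-apply a saved world map so ARKit can relocalize against it.
func restoreWorldMap(from url: URL, into session: ARSession) {
    guard let data = try? Data(contentsOf: url),
          let map = try? NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self, from: data)
    else { return }
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = map
    session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}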

Can anyone comment on this?
Accepted Answer
Perhaps someone from Apple will be able to comment on the technical specifications, but I just ran a test on an iPhone 11 Pro Max and an iPad Pro (2nd generation), effectively sitting in the same position and panning each device in the same direction. The documentation for rawFeaturePoints does indicate that:

"ARKit does not guarantee that the number and arrangement of raw feature points will remain stable between software releases, or even between subsequent frames in the same session."

Still, I would say my experience with both devices was extremely similar in terms of the rawFeaturePoints gathered on a per-frame basis. My assumption may be wrong, but I do not believe the LiDAR scanner contributes to rawFeaturePoints. Rather, the LiDAR scanner generates ARMeshAnchors, effectively reconstructing the environment it sees as a series of meshes. While rawFeaturePoints loosely follow the contours of real-world objects to build an understanding of the environment and to create points for anchoring 3D content, ARMeshAnchors provide a faster and more robust understanding of the environment, enabling more realistic 3D experiences.
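For reference, the per-frame comparison boiled down to something like this (a rough sketch, not my exact test code):

import ARKit

// Logs, per frame, how many raw feature points and mesh anchors ARKit reports.
class FrameStats: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // Sparse, camera-derived feature points (nil on frames where none are available)
        let featurePointCount = frame.rawFeaturePoints?.points.count ?? 0
        // LiDAR-driven reconstruction surfaces as ARMeshAnchors instead
        let meshAnchorCount = frame.anchors.compactMap { $0 as? ARMeshAnchor }.count
        print("feature points: \(featurePointCount), mesh anchors: \(meshAnchorCount)")
    }
}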

Depending on your use case, that information may or may not be helpful, but I hope a quick test does yield some useful insight.