I am trying to use the new LiDAR scanner on the iPhone 12 Pro to gather point clouds, which will later be used as input data for neural networks.
Since I am relatively new to the field of computer vision and augmented reality, I started by looking at the official code examples (e.g., Visualizing a Point Cloud Using Scene Depth - https://developer.apple.com/documentation/arkit/environmental_analysis/visualizing_a_point_cloud_using_scene_depth) and the documentation for ARKit, SceneKit, Metal, and so on. However, I still do not understand how to get the LiDAR data.
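From reading the documentation, my current understanding is that the LiDAR-derived depth arrives per frame as scene depth once the corresponding frame semantics are enabled. This is the minimal setup I have so far (please correct me if this is not the right way to access the depth data):

```swift
import ARKit

final class DepthCaptureController: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        // Scene depth requires a LiDAR-equipped device (e.g. iPhone 12 Pro).
        guard ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) else { return }
        let configuration = ARWorldTrackingConfiguration()
        configuration.frameSemantics = .sceneDepth   // or .smoothedSceneDepth
        session.delegate = self
        session.run(configuration)
    }

    // Called for every ARFrame; the depth map is a Float32 buffer (typically 256x192).
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard let sceneDepth = frame.sceneDepth else { return }
        let depthMap: CVPixelBuffer = sceneDepth.depthMap             // depth in meters
        let confidenceMap: CVPixelBuffer? = sceneDepth.confidenceMap  // ARConfidenceLevel per pixel
        // ... copy / process the buffers here ...
        _ = (depthMap, confidenceMap)
    }
}
```

Is `frame.sceneDepth` actually the LiDAR measurement, or already a processed/fused result?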
I also found another thread in this forum (Exporting Point Cloud as 3D PLY Model - https://developer.apple.com/forums/thread/658109), and the given solution works so far. However, I do not fully understand that code, so I am not sure whether it really gives me the raw LiDAR data or whether some (internal) fusion with other (camera) data is happening, since I could not figure out exactly where the data comes from in the example.
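If I understand that example correctly, it takes the depth map from each frame and unprojects it into world space using the camera intrinsics and transform (in a Metal compute shader). Below is my simplified CPU-side attempt at the same idea, just to check my understanding; the intrinsics scaling and sign conventions are my own assumptions, so it may well be wrong:

```swift
import ARKit
import simd

/// Unproject every depth pixel of a frame into a world-space point.
func makePointCloud(from frame: ARFrame) -> [simd_float3] {
    guard let depthMap = frame.sceneDepth?.depthMap else { return [] }

    CVPixelBufferLockBaseAddress(depthMap, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(depthMap, .readOnly) }

    let width    = CVPixelBufferGetWidth(depthMap)     // typically 256
    let height   = CVPixelBufferGetHeight(depthMap)    // typically 192
    let rowBytes = CVPixelBufferGetBytesPerRow(depthMap)
    let base     = CVPixelBufferGetBaseAddress(depthMap)!

    // The intrinsics are expressed for the full-resolution camera image,
    // so I scale them down to the depth-map resolution (assumption).
    let camera = frame.camera
    let scaleX = Float(width)  / Float(camera.imageResolution.width)
    let scaleY = Float(height) / Float(camera.imageResolution.height)
    let fx = camera.intrinsics[0][0] * scaleX
    let fy = camera.intrinsics[1][1] * scaleY
    let cx = camera.intrinsics[2][0] * scaleX
    let cy = camera.intrinsics[2][1] * scaleY

    var points: [simd_float3] = []
    for y in 0..<height {
        let row = base.advanced(by: y * rowBytes).assumingMemoryBound(to: Float32.self)
        for x in 0..<width {
            let depth = row[x]                        // meters in front of the camera
            guard depth.isFinite, depth > 0 else { continue }
            // Back-project the pixel into camera space (ARKit cameras look down -Z).
            let local = simd_float3(
                (Float(x) - cx) * depth / fx,
                -(Float(y) - cy) * depth / fy,
                -depth)
            // Transform into world coordinates.
            let world = camera.transform * simd_float4(local, 1)
            points.append(simd_float3(world.x, world.y, world.z))
        }
    }
    return points
}
```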
Could you please give me some tips or code examples on how to work with/access the LiDAR data? It would be very much appreciated!