Using the camera's intrinsics and extrinsics parameters, I convert image coordinates to camera-space 3D coordinates:
float u = imagePoint.x;
float v = imagePoint.y;
simd_float3 cameraPoint;
cameraPoint.x = (u - intrinsics.columns[2].x) * depth / intrinsics.columns[0].x;
cameraPoint.y = (v - intrinsics.columns[2].y) * depth / intrinsics.columns[1].y;
cameraPoint.z = depth;
As this post says:
the extrinsics do not define the transformation from the device anchor to the camera, but from the camera to the device anchor. (The extrinsics are actually a constant value!)
So I then transform the 3D camera point into device-anchor coordinates:
simd_float4 cameraPoint4D = simd_make_float4(cameraPoint.x, cameraPoint.y, cameraPoint.z, 1.0);
simd_float4x4 extrinsicsInverse = simd_inverse(extrinsics);
simd_float4 devicePoint = simd_mul(extrinsicsInverse, cameraPoint4D);
Then I use the device anchor's transform to convert to a world position:
simd_float4 worldPoint = simd_mul(weakSelf.deviceTransform, devicePoint);
Everything seems correct, but it doesn't work, and I can't figure out why. Please help.
Topic:
Spatial Computing
SubTopic:
ARKit