Thank you for your reply.
My first question is: does the pixel size of the capturedImage from the front-facing camera increase with the resolution of the video format selected on ARFaceTrackingConfiguration?
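For context, here is a minimal sketch of how I am enumerating the formats, assuming that each format's imageResolution is what determines the capturedImage size (which is the point I would like to confirm):

import ARKit

for format in ARFaceTrackingConfiguration.supportedVideoFormats {
    // imageResolution should be the pixel size of frame.capturedImage for this format
    print("\(format.imageResolution) @ \(format.framesPerSecond) fps")
}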
My second question is: displayTransform seems to be the prevalent way of converting capturedImage coordinates to the onscreen camera-image coordinates in ARKit. However, the method works in normalized image coordinates from (0, 0) to (1, 1), which means shrinking the captured image drastically and impacting the resolution negatively:
let normalizeTransform: CGAffineTransform = CGAffineTransform(scaleX: 1.0 / imageSize.width, y: 1.0 / imageSize.height)
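For reference, this is the full chain I am composing; frame, interfaceOrientation, imageSize, and viewportSize here stand in for my current ARFrame, the current interface orientation, the capturedImage pixel dimensions, and my view's bounds.size:

let normalizeTransform = CGAffineTransform(scaleX: 1.0 / imageSize.width,
                                           y: 1.0 / imageSize.height)
// displayTransform(for:viewportSize:) maps normalized image coordinates
// to normalized viewport coordinates
let displayTransform = frame.displayTransform(for: interfaceOrientation,
                                              viewportSize: viewportSize)
// scale back up from normalized coordinates to view points
let toViewportTransform = CGAffineTransform(scaleX: viewportSize.width,
                                            y: viewportSize.height)
let fullTransform = normalizeTransform
    .concatenating(displayTransform)
    .concatenating(toViewportTransform)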
Do you have any recommendation on how to achieve the coordinate conversion without such a drastic measure? My main objective is to convert the coordinates, orientation, and size of the capturedImage to those of the image on screen.
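In other words, I would like the equivalent of something like the sketch below, where the capturedImage ends up matching the onscreen image in position, orientation, and size (fullTransform is from the sketch above):

import CoreImage

// apply the composed transform to the captured pixel buffer
let ciImage = CIImage(cvPixelBuffer: frame.capturedImage)
    .transformed(by: fullTransform)
// ciImage.extent should now match the viewport in points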