Hello everyone!
My team and I are working on a shared AR experience involving the users' faces.
Upon launch, we want all users to capture their faces on their respective devices, i.e. generate a texture that, when applied to ARSCNFaceGeometry, looks similar to them.
We will then broadcast the textures before starting the session and use them to create digital replicas of everyone.
Can anyone recommend a specific technique to obtain these textures? This step needn't be incredibly efficient since it only happens once. It should, though, produce a high-quality result without blank areas on the face.
My first intuition was to somehow distort a snapshot of the ARView using the spatial information provided by ARSCNFaceGeometry. If I understand correctly, textureCoordinates can be used to map vertices to their corresponding 2D coordinates in the texture bitmap. How would I approach the transforms concretely, though?
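My best guess so far is something along these lines (untested sketch; `frame`, `faceAnchor`, `imageSize` and the orientation are assumed to come from our running face-tracking session, and the function name is just a placeholder):

```swift
import ARKit
import UIKit
import simd

// Untested sketch: for each vertex of the face mesh, find the 2D point it projects to
// in a captured camera image of size `imageSize`. Together with `textureCoordinates`
// (which share the ordering of `vertices`), these points could drive the warp from the
// snapshot into the face texture.
func projectedImagePoints(for faceAnchor: ARFaceAnchor,
                          in frame: ARFrame,
                          imageSize: CGSize,
                          orientation: UIInterfaceOrientation) -> [CGPoint] {
    let modelMatrix = faceAnchor.transform      // face local space -> world space

    return faceAnchor.geometry.vertices.map { vertex in
        // Lift the vertex into world space ...
        let world = modelMatrix * SIMD4<Float>(vertex, 1)
        // ... then let ARKit project it into the image we intend to sample.
        // Orientation and size must describe that image, not the device screen.
        return frame.camera.projectPoint(SIMD3<Float>(world.x, world.y, world.z),
                                         orientation: orientation,
                                         viewportSize: imageSize)
    }
}
```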
Writing this down has already helped a lot. We would nevertheless appreciate any input. Thanks!
~ Alex
(Note: None of us has prior experience with shaders, but we're eager to learn if necessary.)
Hello,
I think you will probably find the "Video Texture" mode of this sample code project to be useful: https://developer.apple.com/documentation/arkit/content_anchors/tracking_and_visualizing_faces
Take a look at how that mode is implemented; I think it's probably pretty similar to what you are trying to do!
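In case it helps as a starting point, my understanding of the gist of that mode is: project each face vertex with the camera's matrices and use the result as the texture coordinate into the live camera image, so the video feed gets "pinned" to the mesh. A very rough sketch of that idea (the names and setup below are my own, not taken from the sample):

```swift
import ARKit
import SceneKit

// Rough sketch (not the sample's code): a geometry shader modifier that projects each
// face vertex into normalized image coordinates and uses the result as its UV, so the
// camera image lands on the corresponding part of the face mesh.
let videoTextureShader = """
#pragma arguments
float4x4 displayTransform   // from ARFrame.displayTransform(for:viewportSize:)

#pragma body
// Vertex position in clip space via SceneKit's built-in matrices.
float4 clip = scn_node.modelViewProjectionTransform * _geometry.position;
clip /= clip.w;

// Clip space (-1...1) -> normalized image coordinates (0...1), flipped vertically.
float4 imagePoint = float4(clip.xy * 0.5 + 0.5, 0.0, 1.0);
imagePoint.y = 1.0 - imagePoint.y;

// Account for device orientation and front-camera mirroring, then use as UV.
_geometry.texcoords[0] = (displayTransform * imagePoint).xy;
"""

func applyVideoTexture(to faceGeometry: ARSCNFaceGeometry, in sceneView: ARSCNView) {
    guard let material = faceGeometry.firstMaterial else { return }
    // Use the live camera feed (the AR scene's background) as the diffuse texture.
    material.diffuse.contents = sceneView.scene.background.contents
    material.shaderModifiers = [.geometry: videoTextureShader]
    // Each frame you would also convert frame.displayTransform(for:viewportSize:) into a
    // 4x4 matrix and pass it to the modifier via setValue(_:forKey: "displayTransform").
}
```

For your use case (a one-time texture capture rather than a live video texture), you could use the same projection idea but bake the sampled colors into a bitmap laid out by textureCoordinates, then broadcast that bitmap to the other devices.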