Hello,
I am an individual developer working on a macOS application using SwiftUI and RealityKit.
I would like to understand how feasible face tracking is on macOS when using an external USB camera, compared to iOS/iPadOS.
Specifically:
• Does macOS provide an ARKit Face Tracking–equivalent API (e.g., real-time facial expression coefficients, gaze direction, depth)? A sketch of the iOS-side API I am comparing against is included after this list.
• If not, is it common to rely on Vision / AVFoundation as alternatives for the following (see the Vision sketch below)?
  • Facial expression coefficients
  • Gaze estimation
  • Depth approximation
• In an environment without a dedicated sensor such as a TrueDepth camera, is it correct to assume that accurate depth data and high-fidelity blend shape extraction are difficult to achieve in practice?
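To make the comparison concrete, this is the iOS/iPadOS behaviour I am referring to (a minimal sketch assuming a TrueDepth-equipped device; I understand ARKit face tracking itself is not available on macOS, which is exactly what I would like confirmed):

```swift
import ARKit

// iOS/iPadOS reference point: with a TrueDepth camera, ARKit delivers
// per-frame expression coefficients and gaze-related vectors via ARFaceAnchor.
final class FaceTracker: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        guard ARFaceTrackingConfiguration.isSupported else { return }
        session.delegate = self
        session.run(ARFaceTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        guard let face = anchors.compactMap({ $0 as? ARFaceAnchor }).first else { return }
        let jawOpen = face.blendShapes[.jawOpen]?.floatValue ?? 0  // expression coefficient (0...1)
        let lookAt = face.lookAtPoint                              // gaze target in face space
        print("jawOpen: \(jawOpen), lookAt: \(lookAt)")
    }
}
```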
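And this is roughly the Vision / AVFoundation approach I have been prototyping on macOS (a minimal sketch; the class name and queue label are my own, and I am not sure this is the recommended path):

```swift
import AVFoundation
import Vision

// macOS sketch: pull frames from an external USB camera with AVFoundation,
// then run Vision face-landmark detection on each frame.
final class CameraFaceAnalyzer: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let session = AVCaptureSession()
    private let outputQueue = DispatchQueue(label: "camera.face.analysis")

    func start() throws {
        // .external is the device type for USB cameras on macOS 14+;
        // older systems expose them via .externalUnknown instead.
        let discovery = AVCaptureDevice.DiscoverySession(
            deviceTypes: [.external, .builtInWideAngleCamera],
            mediaType: .video,
            position: .unspecified)
        guard let device = discovery.devices.first else { return }

        let input = try AVCaptureDeviceInput(device: device)
        if session.canAddInput(input) { session.addInput(input) }

        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: outputQueue)
        if session.canAddOutput(output) { session.addOutput(output) }

        session.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

        let request = VNDetectFaceLandmarksRequest { request, _ in
            guard let face = (request.results as? [VNFaceObservation])?.first else { return }
            // 2D landmark points (normalized); no ARKit-style blend shape coefficients.
            let pupilPoints = face.landmarks?.leftPupil?.pointCount ?? 0
            // Coarse head pose only: roll/yaw (pitch on newer OS versions).
            print("pupil points: \(pupilPoints), yaw: \(face.yaw ?? 0), roll: \(face.roll ?? 0)")
        }
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .up)
        try? handler.perform([request])
    }
}
```

With this I can get 2D landmarks and a coarse head pose, but nothing comparable to ARKit's blendShapes, lookAtPoint, or depth, which is why I am asking whether that gap is an official platform limitation or whether there is a better-supported alternative.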
Any clarification on official limitations, recommended alternatives, or relevant documentation would be greatly appreciated.
Thank you.
Topic: App & System Services
SubTopic: Hardware