As a follow-up to my earlier question about shared world anchors, I should clarify that my app does not need to pin datasets to fixed points in space. Each time a dataset is opened, I create a new world anchor directly in front of the user who opened it.
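For reference, here is roughly how I create that anchor today. This is a simplified sketch using the visionOS ARKit API; the helper name and the 0.75 m offset are just my own choices:

```swift
import ARKit
import QuartzCore
import simd

/// Simplified sketch of how I place the anchor today. Assumes the
/// WorldTrackingProvider is already running via session.run(...).
func placeAnchorInFrontOfUser(worldTracking: WorldTrackingProvider) async throws -> WorldAnchor? {
    // Query the current head pose; nil if tracking hasn't started yet.
    guard let device = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
        return nil
    }
    var transform = device.originFromAnchorTransform
    // Offset 0.75 m along the device's forward axis (-Z in its local frame).
    let forward = -simd_make_float3(transform.columns.2)
    transform.columns.3 += simd_make_float4(forward * 0.75, 0)

    let anchor = WorldAnchor(originFromAnchorTransform: transform)
    try await worldTracking.addAnchor(anchor)
    return anchor
}
```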
The main requirement is this:
Remote participants (joining via FaceTime as spatial personas) should see the personas and the dataset aligned to the same anchor as local participants do. For example, if a participant points at part of the dataset, their persona should appear to point at the same data in every other participant’s view.
This is exactly what Apple’s demo shows: when a remote participant grabs the plane model and translates it, the motion stays synchronized for local participants.
So my questions are:
Do SharedWorldAnchors work with remote spatial personas, or is Apple’s demo using a different synchronization mechanism?
If anchors don’t sync across remote participants, what’s the recommended approach to keep spatial personas aligned to the same content?
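For context, here is the kind of manual fallback I imagine writing if anchors don’t replicate: broadcasting the dataset’s transform over the SharePlay session myself. A rough sketch only; DatasetPoseMessage and MyActivity are my own placeholder types, not Apple API:

```swift
import Foundation
import GroupActivities
import simd

// Placeholder types, not Apple API: a Codable message carrying the
// dataset's pose, and a minimal GroupActivity for the session.
struct DatasetPoseMessage: Codable {
    var datasetID: UUID
    var columnMajorTransform: [Float] // flattened 4x4 matrix
}

struct MyActivity: GroupActivity {
    var metadata: GroupActivityMetadata {
        var meta = GroupActivityMetadata()
        meta.title = "Shared Dataset"
        meta.type = .generic
        return meta
    }
}

// Sender: broadcast the dataset's transform whenever it changes.
func broadcastPose(of datasetID: UUID,
                   transform: simd_float4x4,
                   over session: GroupSession<MyActivity>) async throws {
    let messenger = GroupSessionMessenger(session: session)
    let flat = (0..<4).flatMap { col in (0..<4).map { row in transform[col][row] } }
    try await messenger.send(DatasetPoseMessage(datasetID: datasetID,
                                                columnMajorTransform: flat))
}

// Receiver: rebuild the matrix and apply it to the local copy of the
// dataset (entity lookup by datasetID omitted).
func observePoses(over session: GroupSession<MyActivity>) async {
    let messenger = GroupSessionMessenger(session: session)
    for await (message, _) in messenger.messages(of: DatasetPoseMessage.self) {
        let m = message.columnMajorTransform
        let transform = simd_float4x4([
            SIMD4<Float>(m[0],  m[1],  m[2],  m[3]),
            SIMD4<Float>(m[4],  m[5],  m[6],  m[7]),
            SIMD4<Float>(m[8],  m[9],  m[10], m[11]),
            SIMD4<Float>(m[12], m[13], m[14], m[15]),
        ])
        _ = transform // ... set the matching entity's transform here
    }
}
```

If this kind of manual sync is indeed the expected approach, I’d just like confirmation before building on it.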
Thanks in advance for any insights!
— Jens