Thanks a lot for your response.
Sorry, I did not describe my goal in enough detail.
I work in an HCI research lab, so I will probably try to do things that are not conventional.
My full idea is to stream another XR scene on top of the current RealityKit scene. To do that, I am trying to:
- get a realtime stream of the other scene, rendered side-by-side, and apply it to a head-anchored plane with a material that renders the correct half for each eye
- send inputs (head pose, hands, etc.) to the other scene to keep its rendering in sync (a sketch follows right after this list)
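For the input side, here is a minimal sketch of how I imagine capturing head pose and hand anchors with the visionOS ARKit APIs and forwarding them. The `sendToRemoteScene` functions are hypothetical placeholders for whatever transport the remote renderer uses:

```swift
import ARKit
import simd
import Foundation

/// Minimal sketch: capture head pose and hand anchors and forward them to the
/// remote renderer. Assumes an ImmersiveSpace is open so ARKit data is available.
final class InputStreamer {
    let session = ARKitSession()
    let worldTracking = WorldTrackingProvider()
    let handTracking = HandTrackingProvider()

    func start() async throws {
        try await session.run([worldTracking, handTracking])

        // Forward hand anchors as they update.
        Task {
            for await update in self.handTracking.anchorUpdates {
                self.sendToRemoteScene(handAnchor: update.anchor)   // hypothetical transport
            }
        }
    }

    /// Call once per frame (e.g. from a RealityKit System or a render loop)
    /// to forward the current head pose.
    func sendHeadPose(at time: TimeInterval) {
        guard let device = worldTracking.queryDeviceAnchor(atTimestamp: time) else { return }
        sendToRemoteScene(headTransform: device.originFromAnchorTransform)   // hypothetical transport
    }

    // Placeholders for the actual network transport.
    func sendToRemoteScene(handAnchor: HandAnchor) { /* ... */ }
    func sendToRemoteScene(headTransform: simd_float4x4) { /* ... */ }
}
```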
This way, even though I anchor the texture to the head, it will be updated depending on the head pose like a normal XR scene, and will simply act as a "scene layer".
Ideally, I would also put depth into the streamed texture so that the remote content renders at the right depth, turning the scene layer into a kind of scene fusion. In a way, I am trying to reproduce, at a higher level, a compositor between multiple immersive scenes.
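For the "scene layer" plane itself, this is roughly what I have in mind in RealityKit. It is only a sketch under several assumptions: a Reality Composer Pro shader-graph material `/Root/StereoUnproject` in a hypothetical `Scene.usda` that samples the left or right half of the side-by-side frame per eye (e.g. with a Camera Index Switch node), a hypothetical `Placeholder` image asset used only to create the texture that the drawable queue then replaces, and the frame decoding into the drawable queue handled elsewhere:

```swift
import RealityKit
import Metal

/// Sketch of the head-anchored "scene layer" plane fed by a streamed texture.
@MainActor
func makeRemoteSceneLayer() async throws -> (AnchorEntity, TextureResource.DrawableQueue) {
    // Dynamic texture that the streamed frames are written into.
    let descriptor = TextureResource.DrawableQueue.Descriptor(
        pixelFormat: .bgra8Unorm,
        width: 3840, height: 1080,            // side-by-side stereo frame (assumed size)
        usage: [.shaderRead, .renderTarget],
        mipmapsMode: .none)
    let drawableQueue = try TextureResource.DrawableQueue(descriptor)

    // Placeholder texture whose contents are replaced by the drawable queue.
    let streamedTexture = try await TextureResource(named: "Placeholder")
    streamedTexture.replace(withDrawables: drawableQueue)

    // Shader-graph material that picks the correct half of the frame per eye.
    var material = try await ShaderGraphMaterial(named: "/Root/StereoUnproject",
                                                 from: "Scene.usda",
                                                 in: Bundle.main)
    try material.setParameter(name: "RemoteFrame",
                              value: .textureResource(streamedTexture))

    // Plane anchored to the head, pushed forward so it covers the field of view.
    let plane = ModelEntity(mesh: .generatePlane(width: 2, height: 1),
                            materials: [material])
    plane.position = [0, 0, -1]

    let headAnchor = AnchorEntity(.head)
    headAnchor.addChild(plane)
    return (headAnchor, drawableQueue)
}
```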
I currently think I can only do that with Compositor Services, since otherwise I do not have access to the camera's near plane.
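This is the part Compositor Services does expose, as far as I understand it from Apple's Metal rendering sample: each drawable view carries the field-of-view tangents, and the drawable carries the depth range, which together give the full per-eye projection including the near plane. A rough sketch of the frame loop (Metal encoding and presentation omitted):

```swift
import CompositorServices
import Spatial

/// Sketch: read per-view projection parameters from a Compositor Services drawable.
/// Assumes a CompositorLayer inside an ImmersiveSpace provides the LayerRenderer.
func renderFrame(layerRenderer: LayerRenderer) {
    guard let frame = layerRenderer.queryNextFrame() else { return }
    frame.startUpdate()
    // ... update app state / scene simulation here ...
    frame.endUpdate()

    guard let drawable = frame.queryDrawable() else { return }
    frame.startSubmission()

    for (viewIndex, view) in drawable.views.enumerated() {
        // FOV tangents + depth range give the projection, including the near plane.
        let projection = ProjectiveTransform3D(
            leftTangent: Double(view.tangents[0]),
            rightTangent: Double(view.tangents[1]),
            topTangent: Double(view.tangents[2]),
            bottomTangent: Double(view.tangents[3]),
            nearZ: Double(drawable.depthRange.y),
            farZ: Double(drawable.depthRange.x),
            reverseZ: true)
        _ = (viewIndex, projection)   // feed into the Metal render pass for this view
    }

    // Here you would encode the Metal work and call drawable.encodePresent(...).
    frame.endSubmission()
}
```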
One other thing I can probably do inside RealityKit is to render the remote scene like a volumetric scene (by always orienting the plane toward the viewer and updating the rendering depending on the head pose; more like a portal).
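For that portal-like variant, something like the following sketch is what I have in mind: a world-placed plane kept facing the viewer with BillboardComponent (visionOS 2+; on visionOS 1 I would rotate it manually each frame from the device anchor), while the streamed head pose drives the remote camera as in the earlier sketch. The material parameter is the same hypothetical stereo shader-graph material as above:

```swift
import RealityKit

/// Sketch of the "portal" variant: a world-anchored plane that always faces the viewer.
@MainActor
func makeRemotePortal(with material: ShaderGraphMaterial) -> ModelEntity {
    let portal = ModelEntity(mesh: .generatePlane(width: 1, height: 1),
                             materials: [material])
    portal.position = [0, 1.5, -2]                 // somewhere in the room
    portal.components.set(BillboardComponent())    // keep the plane oriented toward the user
    return portal
}
```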
Thanks for pointing me to the ModelSortGroupComponent, always useful!
Thanks again for your time.