What I discovered is that you can indeed use RealityKit's head anchor to virtually attach things to the user's head (by attaching entities as children to the head anchor).
However, the head anchor's transform is not exposed; it always reads as identity. Child entities will correctly move with the head, but if you query their global position (or orientation) using position(relativeTo: nil), you just get back their local transform.
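Here is a minimal sketch of the behavior described above (visionOS; the entity setup is illustrative). The sphere visibly follows the head, but the world-space query only returns its local offset:

```swift
import RealityKit
import SwiftUI

struct HeadAnchoredView: View {
    var body: some View {
        RealityView { content in
            // Anchor an entity to the user's head.
            let headAnchor = AnchorEntity(.head)
            let sphere = ModelEntity(mesh: .generateSphere(radius: 0.05))
            sphere.position = [0, 0, -0.5]  // half a meter in front of the face
            headAnchor.addChild(sphere)
            content.add(headAnchor)

            // The sphere visibly tracks the head, but because the head
            // anchor's transform stays at identity, this does NOT return a
            // world-space position -- it just echoes the local offset
            // [0, 0, -0.5] back.
            let worldPosition = sphere.position(relativeTo: nil)
            print(worldPosition)
        }
    }
}
```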
This means it currently seems impossible to write a RealityKit system that reacts to the user's position (for example, a character looking at the user, like the dragon in Apple's own demo) without fetching the pose from ARKit and injecting it into the RealityView via the update closure.
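For completeness, the ARKit workaround looks roughly like this (visionOS; type and method names are the real ARKit API, the wrapper class is my own):

```swift
import ARKit
import QuartzCore

@MainActor
final class HeadPoseProvider {
    private let session = ARKitSession()
    private let worldTracking = WorldTrackingProvider()

    // Run once, e.g. from a .task modifier on the RealityView.
    func start() async throws {
        try await session.run([worldTracking])
    }

    /// The current head (device) transform in world space, or nil if
    /// tracking isn't available yet.
    func currentHeadTransform() -> simd_float4x4? {
        guard let deviceAnchor = worldTracking.queryDeviceAnchor(
            atTimestamp: CACurrentMediaTime()
        ) else { return nil }
        return deviceAnchor.originFromAnchorTransform
    }
}
```

You can then poll this each frame and hand the transform to your entities, e.g. via Entity.look(at:from:relativeTo:) for a "character looks at the user" effect. It works, but it is exactly the kind of plumbing that a readable head anchor transform would make unnecessary.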
I don't know whether this is a bug or a conscious design decision. My guess is that it was an early design decision that should have been revised later but never was: initially the thinking may have been that the user's head transform should be hidden for privacy reasons, but engineers later realized that some applications (such as fully custom Metal renderers) absolutely need the head pose, so it was exposed after all via ARKit.
It is probably worth filing feedback on this, because I can't see how it makes sense to hide the head anchor's transform when the same information is already accessible in a less performant and less convenient way.