I think I understand. Are you referring to the case in which several different applications are running in the same environment, i.e. the Shared Space?
Correct me if I'm wrong, but isn't there also an AR passthrough mode in which just one application runs at a time -- just as there's a fully immersive mode that uses CompositorServices?
If so, that might be a starting place. Allowing Metal to run with passthrough enabled, restricted to the case where the application is the only thing running, would make sense: you'd have more control over the visual style when you're not potentially conflicting with the visual styles of other applications, and from a security standpoint your application is isolated from others.
The challenge would then be less about security and more about how to support custom raytracing / lighting and occlusion with the camera passthrough enabled. I suspect this is an easier problem to solve, since it amounts to more composition and perhaps some extensions to Metal (speculating).
Yes, I understand the need for more specific use cases. I felt I needed more context on why the current system behaves the way it does.
For the sake of putting it in writing here for others who might read this too, while it's fresh in mind (before the visionOS feedback tag appears): a lot of the use cases I have are more general and philosophical.
Higher-level general needs:
Many people have existing renderers in Metal that they want to reuse.
Metal allows for the use of and experimentation with new and different graphics algorithms that can help optimize for specific needs. There is a large community of graphics programmers who prefer to "do the work themselves," not only for optimization, but also for control over 1) the visual style and expression of the rendering, and 2) the data structures for the renderer and the program that feeds data to it. There isn't a one-size-fits-all. For example, I often need to create a lot of procedurally generated meshes per frame. RealityKit seems to prefer static meshes. The suggestion I've received is to create a new MeshResource per frame (a quick sketch of this pattern follows below). This does not scale well.
Apple probably shouldn't be burdened with implementing new algorithms every time something new comes out; it doesn't scale to put everything on Apple's renderer. One should be able to choose the RealityKit renderer, or build their own if RealityKit doesn't fit their needs.
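To make the MeshResource point concrete, here's a minimal sketch of what the "new MeshResource per frame" suggestion looks like in practice. This isn't authoritative; makeVertices is a hypothetical stand-in for whatever procedural source is driving the geometry.

```swift
import Foundation
import RealityKit
import simd

/// Rebuilds a ModelEntity's mesh every frame from freshly generated vertex data.
/// `makeVertices(time:)` is a hypothetical generator standing in for any procedural source.
func updateProceduralMesh(on entity: ModelEntity, time: Float) throws {
    let (positions, indices) = makeVertices(time: time)

    var descriptor = MeshDescriptor(name: "procedural")
    descriptor.positions = MeshBuffers.Positions(positions)
    descriptor.primitives = .triangles(indices)

    // A brand-new MeshResource is generated and swapped in each frame,
    // rather than rewriting an existing GPU buffer in place as a Metal renderer would.
    let mesh = try MeshResource.generate(from: [descriptor])
    entity.model?.mesh = mesh
}

/// Hypothetical per-frame vertex generator (here, just an animated triangle).
func makeVertices(time: Float) -> (positions: [SIMD3<Float>], indices: [UInt32]) {
    let positions: [SIMD3<Float>] = [
        SIMD3(-0.5, sinf(time) * 0.25, 0),
        SIMD3( 0.5, cosf(time) * 0.25, 0),
        SIMD3( 0.0, 0.5, 0),
    ]
    return (positions, [0, 1, 2])
}
```

As far as I can tell, each call builds and uploads an entirely new resource, which is where the scaling problem shows up once vertex counts and frame rates go up.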
My use cases:
Procedural generation of meshes, textures, and assets; use of bindless rendering; use of render-to-texture. All of these are cumbersome in RealityKit at the moment despite the existence of things like DrawableQueue: extra work is needed to synchronize and generate assets, and there seems to be a lot of copying around. Overall, there's a lot of friction involved.
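For reference, this is roughly what the DrawableQueue route looks like today, as I understand the API; the descriptor values are illustrative and encodeMyRenderPass is a placeholder for one's own Metal pass.

```swift
import RealityKit
import Metal

/// A rough sketch of render-to-texture via TextureResource.DrawableQueue.
/// `textureResource` is an existing texture already bound to a material.
final class StreamedTexture {
    let queue: TextureResource.DrawableQueue
    let commandQueue: MTLCommandQueue

    init(textureResource: TextureResource, device: MTLDevice) throws {
        let descriptor = TextureResource.DrawableQueue.Descriptor(
            pixelFormat: .bgra8Unorm,
            width: 1024,
            height: 1024,
            usage: [.renderTarget, .shaderRead],
            mipmapsMode: .none
        )
        let newQueue = try TextureResource.DrawableQueue(descriptor)
        queue = newQueue
        commandQueue = device.makeCommandQueue()!
        // Redirect the existing RealityKit texture to be fed from this queue.
        textureResource.replace(withDrawables: newQueue)
    }

    /// Call once per frame: acquire a drawable, render into its Metal texture, present.
    func update() throws {
        let drawable = try queue.nextDrawable()
        guard let commandBuffer = commandQueue.makeCommandBuffer() else { return }
        encodeMyRenderPass(into: drawable.texture, commandBuffer: commandBuffer)
        commandBuffer.commit()
        drawable.present()
    }
}

/// Hypothetical placeholder for your own Metal pass: set up an MTLRenderPassDescriptor
/// with `texture` as the color attachment and encode draw calls into `commandBuffer`.
func encodeMyRenderPass(into texture: MTLTexture, commandBuffer: MTLCommandBuffer) {
    // ... your rendering here ...
}
```

It works, but it's an extra layer of indirection on top of what would be an ordinary render-to-texture pass in Metal, which is the friction I mean.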
I want to be able to do vertex transformations in the shader in arbitrary ways, which is currently only possible in either Metal or RealityKit's CustomMaterial.
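For example, on platforms where CustomMaterial is available, the Swift-side hookup looks roughly like this sketch; "myGeometryModifier" and "mySurfaceShader" are assumed to be [[visible]] functions in the app's default Metal library, not anything that exists in a real project.

```swift
import RealityKit
import Metal

/// A minimal sketch of using CustomMaterial for shader-side vertex transforms.
/// Assumes "myGeometryModifier" and "mySurfaceShader" exist in the default Metal library.
func makeVertexAnimatedMaterial(device: MTLDevice) throws -> CustomMaterial {
    guard let library = device.makeDefaultLibrary() else {
        fatalError("No default Metal library in the app bundle")
    }
    let geometryModifier = CustomMaterial.GeometryModifier(named: "myGeometryModifier",
                                                           in: library)
    let surfaceShader = CustomMaterial.SurfaceShader(named: "mySurfaceShader",
                                                     in: library)
    // The geometry modifier runs per vertex and can offset positions arbitrarily.
    return try CustomMaterial(surfaceShader: surfaceShader,
                              geometryModifier: geometryModifier,
                              lightingModel: .lit)
}
```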
I want to use traditional render-to-texture, masking, scissoring, etc., but RealityKit makes this hard and unintuitive.
RealityKit has its own entity-component / object system. My projects already have their own entity systems. Mapping to RealityKit's way of storing and managing data is a lot of redundant work and potentially suboptimal for performance. There's a lot of cognitive overhead to forcing my project to conform to this specific renderer when my own renderer and engine are optimized for what I want to make. (In other words, it would be useful to decouple a lot of RealityKit's higher-level functionality.)
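To illustrate the kind of duplication I mean, a mirroring layer between my own entity system and RealityKit ends up looking something like this; MyEntity is a hypothetical stand-in for my engine's own entity type.

```swift
import RealityKit
import simd

/// Hypothetical type standing in for an existing engine's own entity representation.
struct MyEntity {
    var id: Int
    var position: SIMD3<Float>
    var orientation: simd_quatf
}

/// Mirrors the app's own entities into RealityKit entities and keeps them in sync.
/// Every object ends up stored twice, and transforms are copied across each frame.
final class RealityKitMirror {
    private var proxies: [Int: Entity] = [:]
    let root = Entity()

    func sync(_ entities: [MyEntity]) {
        for e in entities {
            let proxy: Entity
            if let existing = proxies[e.id] {
                proxy = existing
            } else {
                proxy = Entity()   // a second copy of state RealityKit needs to render
                proxies[e.id] = proxy
                root.addChild(proxy)
            }
            proxy.transform = Transform(scale: .one,
                                        rotation: e.orientation,
                                        translation: e.position)
        }
    }
}
```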
For spatial computing / VR, I want the ability to move around the environment, which, in Vision Pro's case, is only possible in AR mode. This is a complementary issue: if a future version of VR/immersive mode were to permit walking around, that would be great.
As in the general cases above, I'm interested in creating stylistically nuanced graphics that don't necessarily map to what RealityKit's lighting models are doing. I want creative freedom.
Overall, I do like the idea of having something like RealityKit available, but if Metal or a slightly modified version of Metal with extensions or restrictions were made available, that would make it easier to create unique and optimized applications on the platform.
Anyway, thanks again for your time.