Right, it’s good that DrawableQueue exists. I’d also like to generate meshes dynamically to simulate vertex animations, but I’ve not seen a DrawableQueue equivalent for same-frame generation of buffer data for updating a RealityKit mesh. Is it possible? I’m not sure the bridge really exists for that. If it did, that would be an improvement to RealityKit for sure.
Actually it would be really helpful to see an example of how to do this sort of synchronization.
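For the texture side, here’s roughly what that synchronization looks like today with DrawableQueue — a minimal sketch, assuming a 1024×1024 BGRA target; the class name and `renderNextFrame` hook are my own stand-ins, not RealityKit API:

```swift
import Metal
import RealityKit

// Sketch: stream custom Metal output into a RealityKit texture each frame
// via TextureResource.DrawableQueue.
final class MetalTextureStreamer {
    let device = MTLCreateSystemDefaultDevice()!
    lazy var commandQueue = device.makeCommandQueue()!
    let drawableQueue: TextureResource.DrawableQueue

    init() throws {
        let descriptor = TextureResource.DrawableQueue.Descriptor(
            pixelFormat: .bgra8Unorm,
            width: 1024, height: 1024,
            usage: [.renderTarget, .shaderRead],
            mipmapsMode: .none)
        drawableQueue = try TextureResource.DrawableQueue(descriptor)
    }

    // Point an existing TextureResource (used by some material) at the queue.
    func attach(to texture: TextureResource) {
        texture.replace(withDrawables: drawableQueue)
    }

    // Call once per frame, e.g. from a SceneEvents.Update subscription.
    func renderNextFrame() {
        guard let drawable = try? drawableQueue.nextDrawable(),
              let commandBuffer = commandQueue.makeCommandBuffer() else { return }
        // … encode custom Metal work targeting drawable.texture here …
        commandBuffer.commit()
        drawable.present()  // hands the finished frame to RealityKit
    }
}
```

The open question in this thread is exactly that there’s no analogous queue for mesh buffers.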
The reason you need to update meshes, and not just textures, is that (as far as I can tell) this is the only way to compose correctly with the passthrough video and occlusion. Otherwise it’s just going to render flat images. A direct link between, say, an MTLBuffer region holding vertices and indices and a RealityKit mesh would make this workable.
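Absent that direct link, the workaround I know of is rebuilding the MeshResource from CPU-side arrays every time the geometry changes — a sketch, with every name other than the RealityKit types being my own; note this re-uploads the buffers on each call, which is exactly the copying a direct MTLBuffer bridge would avoid:

```swift
import RealityKit

// Copy-based workaround: regenerate the mesh from CPU-side vertex data.
func updateMesh(of entity: ModelEntity,
                positions: [SIMD3<Float>],
                normals: [SIMD3<Float>],
                indices: [UInt32]) throws {
    var descriptor = MeshDescriptor(name: "dynamic")
    descriptor.positions = MeshBuffer(positions)
    descriptor.normals = MeshBuffer(normals)
    descriptor.primitives = .triangles(indices)
    // Synchronous generation; fine for small meshes, costly per-frame.
    entity.model?.mesh = try MeshResource.generate(from: [descriptor])
}
```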
It makes me think a much simpler, bare-bones API could be made for streaming custom Metal results into a secure fixed-function pipeline of sorts.
Either way, I don’t think this is a long-term solution, but it’s something. I’ll still file feedback when the visionOS tag appears, as you suggested.
EDIT: Couldn’t I just use completion handlers on Metal and RealityKit? I’m not sure whether RealityKit exposes a completion handler, or lets you block it the way you’d normally block Metal with semaphores. If I could just tell RealityKit to render on command, that would achieve what I’m looking for — though the extra copying between buffers still wouldn’t be as good as a direct link between Metal buffers and RealityKit meshes.
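To be concrete about the Metal half of that idea: completion handlers and semaphores do give you a full CPU/GPU handshake on the Metal side. A sketch, assuming triple buffering; the missing piece is an equivalent completion hook on RealityKit’s render loop (SceneEvents.Update fires per frame, but it isn’t a render-completed signal):

```swift
import Metal

let device = MTLCreateSystemDefaultDevice()!
let commandQueue = device.makeCommandQueue()!
// Allow up to three frames in flight before the CPU must wait.
let inFlightSemaphore = DispatchSemaphore(value: 3)

func encodeFrame() {
    inFlightSemaphore.wait()  // don't outrun the GPU
    let commandBuffer = commandQueue.makeCommandBuffer()!
    // … encode compute work that writes vertex data into a shared MTLBuffer …
    commandBuffer.addCompletedHandler { _ in
        inFlightSemaphore.signal()  // this slot's buffers are safe to reuse
    }
    commandBuffer.commit()
}
```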
Topic:
Graphics & Games
SubTopic:
General