@Vision Pro Engineer I’m not sure if you’ll see this, but one more follow-up question: is the remote immersive mode purely one-way, i.e. the Mac renders content for display on the Vision Pro without user interaction, or can the Vision Pro side of the app receive user input and have the Mac side react to it bidirectionally? It’s unclear what information the Mac side receives. In the worst case, I suppose I could use ordinary networking code to send Vision-Pro-side user input back to the Mac, assuming networking works as normal, but where would I write the Vision Pro input-handling code in remote immersive space mode? Can I more or less follow existing examples for regular immersive spaces, or is there some kind of limitation?
If I missed this functionality, then whoops. Clarity would be appreciated though.
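For context, the networking fallback I have in mind would look roughly like the sketch below: the Vision Pro side opens a plain TCP connection back to the Mac and forwards encoded input events. The host, port, event type, and wire format here are all placeholders of my own invention, not anything from the remote immersive API.

```swift
import Foundation
import Network

// Hypothetical message type for forwarding a gesture to the Mac side.
struct InputEvent: Codable {
    let kind: String            // e.g. "tap", "drag" (my own naming)
    let position: SIMD3<Float>  // gesture location in scene space
}

// Minimal forwarder: one TCP connection, JSON-encoded events.
final class InputForwarder {
    private let connection: NWConnection

    init(host: String, port: UInt16) {
        connection = NWConnection(
            host: NWEndpoint.Host(host),
            port: NWEndpoint.Port(rawValue: port)!,
            using: .tcp
        )
        connection.start(queue: .main)
    }

    // Called from a gesture handler on the Vision Pro side;
    // encodes the event and sends it to the Mac.
    func send(_ event: InputEvent) {
        guard let data = try? JSONEncoder().encode(event) else { return }
        connection.send(content: data, completion: .contentProcessed { _ in })
    }
}
```

That would obviously work for plain data, but it sidesteps my actual question of whether the remote immersive space itself surfaces any of this input to the Mac process.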
Topic:
Spatial Computing
SubTopic:
General