I have an immersive space with a RealityKit view running an ARKitSession to access main camera frames.
Each frame is processed with custom computer vision algorithms (and deep learning models).
There is a 3D entity in the RealityKit view that I'm trying to place in the world, but I want to debug my (2D) algorithms in an "attached" view (displaying images in windows).
How do I send/share data or variables between the views (and spaces)?
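In case it helps frame the question, here is a minimal sketch of the pattern I'm considering: one shared @Observable model injected into both scenes through the SwiftUI environment. All the names here (DebugModel, processedImage, etc.) are placeholders of my own, not framework API:

```swift
import SwiftUI
import RealityKit
import Observation

// Hypothetical shared model; both scenes observe the same instance.
@Observable
final class DebugModel {
    // Latest debug image produced by the CV pipeline.
    var processedImage: CGImage?
}

@main
struct CVDebugApp: App {
    // One instance shared by every scene.
    @State private var model = DebugModel()

    var body: some Scene {
        // 2D "attached" debug window.
        WindowGroup(id: "debug") {
            DebugView().environment(model)
        }
        // Immersive space hosting the RealityKit content.
        ImmersiveSpace(id: "immersive") {
            ImmersiveView().environment(model)
        }
    }
}

struct ImmersiveView: View {
    @Environment(DebugModel.self) private var model

    var body: some View {
        RealityView { content in
            // Build and place the 3D entity here (omitted).
        }
        .task {
            // Run the ARKitSession / camera-frame loop here and publish
            // results onto the shared model, e.g.:
            //   model.processedImage = processed(frame)
        }
    }
}

struct DebugView: View {
    @Environment(DebugModel.self) private var model

    var body: some View {
        // Shows the most recent processed frame from the shared model.
        if let image = model.processedImage {
            Image(decorative: image, scale: 1.0)
        } else {
            Text("Waiting for frames…")
        }
    }
}
```

Presumably the frame loop runs off the main thread, so I'd hop to the main actor before mutating the model. Is this shared-model approach the intended way, or is there a better mechanism for passing data between an immersive space and a window?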
How to find main (left) camera transform from world anchor? (Enterprise API)
From CameraFrameProvider() I can get a frame sample which has an "extrinsics" parameter. How is it defined? Relative to what point/anchor?
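For reference, this is roughly what I'm doing now. The sketch assumes the Enterprise main-camera-access entitlement, and it assumes `extrinsics` maps camera space into device-anchor space (which is exactly the part I'd like confirmed), composing it with the device pose from WorldTrackingProvider:

```swift
import ARKit
import QuartzCore
import simd

// Rough sketch; requires the Enterprise main-camera-access entitlement.
// Assumption: `extrinsics` is deviceFromCamera. If the convention is the
// opposite, it would need to be inverted before composing.
func trackCameraInWorldSpace() async throws {
    let session = ARKitSession()
    let worldTracking = WorldTrackingProvider()
    let cameraProvider = CameraFrameProvider()
    try await session.run([worldTracking, cameraProvider])

    // Pick a format for the left main camera.
    guard let format = CameraVideoFormat
            .supportedVideoFormats(for: .main, cameraPositions: [.left])
            .first,
          let updates = cameraProvider.cameraFrameUpdates(for: format)
    else { return }

    for await frame in updates {
        guard let sample = frame.sample(for: .left),
              let device = worldTracking.queryDeviceAnchor(
                  atTimestamp: CACurrentMediaTime())
        else { continue }

        let originFromDevice = device.originFromAnchorTransform
        let deviceFromCamera = sample.parameters.extrinsics

        // Camera pose expressed in the world (origin) coordinate space,
        // under the deviceFromCamera assumption above.
        let originFromCamera = originFromDevice * deviceFromCamera

        // Relative to a specific WorldAnchor, invert its origin transform:
        //   anchorFromCamera =
        //       worldAnchor.originFromAnchorTransform.inverse * originFromCamera
        _ = originFromCamera
    }
}
```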
Does anyone know if ExecuTorch is officially supported or has been successfully used on visionOS? If so, are there any specific build instructions, example projects, or potential issues (like sandboxing or memory limitations) to be aware of when integrating it into an Xcode project for the Vision Pro?
While ExecuTorch has support for iOS, I can't find any official documentation or community examples specifically mentioning visionOS.
Thanks.