Hi everyone,
I'm working on a SwiftUI app and need help building a view that integrates the device's camera and uses a pre-trained Core ML model for real-time object recognition. Here's what I want to achieve:
Open the device's camera from a SwiftUI view.
Capture frames from the camera feed and analyze them with a Core ML model trained in Create ML.
If a specific figure/object is recognized, automatically close the camera view and navigate to another screen in my app.
I'm looking for guidance on:
Setting up live camera capture in SwiftUI (my rough attempt is the first sketch below).
Using the Vision and Core ML frameworks for real-time object recognition on the captured frames (second sketch).
Navigating to another screen when the recognition condition is met (third sketch).
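For the camera piece, here's how far I've got: an AVCaptureSession owned by a controller class, with the preview layer bridged into SwiftUI via UIViewRepresentable. The names (`CameraController`, `CameraPreview`, `PreviewView`) are just my placeholders, and I'm assuming `NSCameraUsageDescription` is set in Info.plist. Is this the right general shape?

```swift
import SwiftUI
import AVFoundation

// Owns the capture session and wires up a sample-buffer delegate
// so frames can be handed off for analysis.
final class CameraController: ObservableObject {
    let session = AVCaptureSession()
    private let videoOutput = AVCaptureVideoDataOutput()
    private let sessionQueue = DispatchQueue(label: "camera.session")

    func start(delegate: AVCaptureVideoDataOutputSampleBufferDelegate) {
        sessionQueue.async {
            self.session.beginConfiguration()
            guard let device = AVCaptureDevice.default(for: .video),
                  let input = try? AVCaptureDeviceInput(device: device),
                  self.session.canAddInput(input)
            else { self.session.commitConfiguration(); return }
            self.session.addInput(input)

            self.videoOutput.setSampleBufferDelegate(
                delegate, queue: DispatchQueue(label: "camera.frames"))
            if self.session.canAddOutput(self.videoOutput) {
                self.session.addOutput(self.videoOutput)
            }
            self.session.commitConfiguration()
            self.session.startRunning()  // start off the main thread
        }
    }

    func stop() {
        sessionQueue.async { self.session.stopRunning() }
    }
}

// Backing the view with AVCaptureVideoPreviewLayer via layerClass
// keeps the layer sized with the view automatically.
final class PreviewView: UIView {
    override class var layerClass: AnyClass { AVCaptureVideoPreviewLayer.self }
    var previewLayer: AVCaptureVideoPreviewLayer { layer as! AVCaptureVideoPreviewLayer }
}

struct CameraPreview: UIViewRepresentable {
    let session: AVCaptureSession

    func makeUIView(context: Context) -> PreviewView {
        let view = PreviewView()
        view.previewLayer.session = session
        view.previewLayer.videoGravity = .resizeAspectFill
        return view
    }

    func updateUIView(_ uiView: PreviewView, context: Context) {}
}
```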
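For the recognition step, my understanding is that Vision can wrap the Core ML model and run it against each frame's pixel buffer. In this sketch, `MyObjectClassifier` stands in for the class Xcode generates from the .mlmodel file, and `"targetFigure"` plus the 0.9 confidence threshold are made-up values for my use case. Reusing one VNCoreMLRequest across frames seems safe here since the delegate callbacks arrive serially on the frame queue, but please correct me if not:

```swift
import Vision
import CoreML
import AVFoundation

// Receives frames from AVCaptureVideoDataOutput, classifies each one,
// and fires a callback when the target label appears with high confidence.
final class FrameAnalyzer: NSObject, ObservableObject,
                           AVCaptureVideoDataOutputSampleBufferDelegate {
    var onMatch: (() -> Void)?
    private let request: VNCoreMLRequest

    override init() {
        // MyObjectClassifier: placeholder for the Xcode-generated model class.
        // Force-try kept for brevity; handle errors properly in real code.
        let coreMLModel = try! MyObjectClassifier(
            configuration: MLModelConfiguration()).model
        let visionModel = try! VNCoreMLModel(for: coreMLModel)
        request = VNCoreMLRequest(model: visionModel)
        super.init()
        request.imageCropAndScaleOption = .centerCrop
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

        // .right maps portrait-orientation frames from the back camera.
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer,
                                            orientation: .right)
        try? handler.perform([request])  // synchronous on the frame queue

        // For an image classifier, results are VNClassificationObservation.
        if let top = (request.results as? [VNClassificationObservation])?.first,
           top.identifier == "targetFigure",
           top.confidence > 0.9 {
            DispatchQueue.main.async { self.onMatch?() }
        }
    }
}
```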
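And for navigation, my plan is to flip a @State flag from the match callback and let navigationDestination(isPresented:) push the next screen, stopping the session so frames stop arriving. This assumes iOS 16+ for NavigationStack; `ScannerScreen` and `ResultScreen` are placeholder names tying the two sketches above together:

```swift
import SwiftUI

// Ties the pieces together: camera preview + frame analyzer + navigation.
struct ScannerScreen: View {
    @StateObject private var camera = CameraController()
    @StateObject private var analyzer = FrameAnalyzer()
    @State private var didRecognize = false

    var body: some View {
        NavigationStack {
            CameraPreview(session: camera.session)
                .ignoresSafeArea()
                .onAppear {
                    analyzer.onMatch = {
                        camera.stop()         // stop frames once matched
                        didRecognize = true   // triggers the push below
                    }
                    camera.start(delegate: analyzer)
                }
                .navigationDestination(isPresented: $didRecognize) {
                    ResultScreen()
                }
        }
    }
}

struct ResultScreen: View {
    var body: some View {
        Text("Object recognized!")
    }
}
```

Does this flow make sense, or is there a cleaner way to dismiss the camera view when the condition fires?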
Any advice, corrections to the sketches above, or fuller examples would be greatly appreciated!
Thanks in advance!