Ability to add initial camera pose to PhotogrammetrySample (Feature Request)
The new Object Capture API is really quite remarkable, and I can already think of several possible use cases and pipelines for it. It is great that, even though it can work with just 2D images, the API can use metadata such as depth maps, lens data, gravity direction, and GPS location to build a better understanding of the scene and improve the reconstruction.

In situations where the rough camera pose of each captured image is known (for example, an ARKit-tracked frame, or a static array of cameras), I would love the ability to add an initial camera transform matrix to each PhotogrammetrySample. Without assuming too much about how the underlying system works, I assume the camera extrinsics and intrinsics are continuously refined as the model is reconstructed, so I would not expect the input camera poses to be taken as absolute values. But being able to supply those initial transform matrices would have several benefits:

- Give an even better hint for the camera positions than the gravity vector alone.
- Define object scale from the camera extrinsics, even when no depth data is available.
- Predefine a coordinate space, origin, and orientation for the capture.
- Keep a common, consistent origin and orientation between scans made with an identical (or similar) camera setup.

I have not tried a drone capture yet, so perhaps there is a way to do this with GPS data, but that feels like an unnecessary workaround and one likely prone to conversion errors.
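Since PhotogrammetrySample has no pose property today, here is a hypothetical sketch of how such an input could be packaged. Everything here (`PosedSample`, `initialCameraTransform`, `makePose`) is made up for illustration and is not part of the RealityKit API; only the simd math is real.

```swift
import Foundation
import simd

// Hypothetical wrapper pairing a capture with the rough initial camera
// pose this feature request describes. Nothing here exists in RealityKit.
struct PosedSample {
    let imageURL: URL                          // the captured photo
    let initialCameraTransform: simd_float4x4  // rough camera-to-world pose
}

// Build a camera-to-world transform from a rotation and a translation,
// e.g. as reported by an ARKit-tracked frame or a fixed camera rig.
func makePose(rotation: simd_quatf, translation: SIMD3<Float>) -> simd_float4x4 {
    var m = simd_float4x4(rotation)
    m.columns.3 = SIMD4<Float>(translation, 1)
    return m
}

// Example: a camera 1.5 m in front of the origin, looking back at it
// (rotated 180° around the Y axis).
let pose = makePose(
    rotation: simd_quatf(angle: .pi, axis: SIMD3<Float>(0, 1, 0)),
    translation: SIMD3<Float>(0, 0, 1.5)
)
```

A static rig could precompute one such transform per camera, which is what would give every scan the shared origin and orientation described above.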
Replies
1
Boosts
0
Views
921
Activity
Jun ’21
Sample App not available - Object Capture for iOS
The sample application and code does not seem to be available in the WWDC app or in the documentation along with other Object Capture samples. Where/when will this be released?
Replies
2
Boosts
3
Views
1k
Activity
Jun ’23
ObjectCapture API not working in Ventura beta 4
This error is reported as soon as a PhotogrammetrySession starts processing:

libc++abi: terminating with uncaught exception of type std::runtime_error: Failed to access model resource path

This happens both with a custom application and with the provided example command line demo.
Replies
8
Boosts
1
Views
1.4k
Activity
Aug ’22
Required device capabilities for Lidar
For apps built specifically for the new Lidar sensor that have little to no use on devices without it, is there an appropriate Required device capabilities string?
Replies
2
Boosts
0
Views
1.4k
Activity
Feb ’22
ObjectCapture API has an arbitrary limit of 1000 PhotogrammetrySamples
Any images or PhotogrammetrySamples after the first 1000 are rejected and ignored, regardless of image resolution, bit depth, or format. This restriction is still present in macOS 12.0.1. Please remove it.
Replies
1
Boosts
0
Views
655
Activity
Oct ’21
What is the proper way to pass custom metadata to PhotogrammetrySample?
Which of these would work, and which is best practice?

sample.metadata = ["FocalLengthIn35mmFilm": "28mm"]
sample.metadata = ["FocalLengthIn35mmFilm": 28.0]
sample.metadata = ["kCGImagePropertyExifFocalLenIn35mmFilm": "28MM"]
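For what it's worth, one sketch would be to use the ImageIO EXIF key constants rather than hand-typed strings, and a numeric value rather than "28mm". Whether PhotogrammetrySession actually reads this key, and whether it expects the flat or the nested layout, are assumptions here, not something the question or documentation confirms.

```swift
import ImageIO

// The ImageIO constant, bridged to String, gives the exact EXIF key name,
// avoiding typos in hand-written keys.
let focalKey = kCGImagePropertyExifFocalLenIn35mmFilm as String  // "FocalLenIn35mmFilm"
let metadata: [String: Any] = [focalKey: 28]

// If the session expects full image-source metadata, the EXIF keys may
// instead need to be nested under the EXIF dictionary key (assumption):
let nested: [String: Any] = [
    kCGImagePropertyExifDictionary as String: [focalKey: 28]
]
```

Note that the first two options in the question hand-type a key ("FocalLengthIn35mmFilm") that does not match the ImageIO constant's value ("FocalLenIn35mmFilm"), which is exactly the kind of silent mismatch the constants avoid.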
Replies
0
Boosts
0
Views
622
Activity
Jun ’21