
PhotogrammetrySession - new model?
Hi Apple Team,

We noticed the following exciting entry in the changelog for the latest macOS 26 beta:

"A new algorithm significantly improves PhotogrammetrySession reconstruction quality of low-texture objects not captured with the ObjectCaptureSession front end. It will be downloaded and cached once in the background when the PhotogrammetrySession is used at runtime. If network isn’t available at that time, the old low quality model will be used until the new one can be downloaded. There is no code change needed to get this improved model. (145220451)"

However, after trying this on the latest beta and running some tests, we do not see any difference on low-texture objects such as single-coloured surfaces. Is there anything we are missing? The machine is definitely connected to the internet, but is there any way to tell from the logs whether the new model is actually being used?

Thanks
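For reference, this is the plain reconstruction flow we are testing with. Per the changelog there is nothing model-specific to opt into, so the sketch below is just standard PhotogrammetrySession usage; the input folder and output paths are placeholders for our own.

```swift
import RealityKit

// Placeholder paths -- substitute your own capture folder and output file.
let imagesFolder = URL(fileURLWithPath: "/path/to/images", isDirectory: true)
let outputURL = URL(fileURLWithPath: "/path/to/model.usdz")

// Default configuration; the changelog implies the improved reconstruction
// model is picked up automatically once downloaded and cached in the background.
let session = try PhotogrammetrySession(input: imagesFolder,
                                        configuration: PhotogrammetrySession.Configuration())

// Watch the output stream, since this is the only feedback we get about
// whether the request succeeded.
Task {
    for try await output in session.outputs {
        switch output {
        case .requestProgress(_, let fractionComplete):
            print("Progress: \(fractionComplete)")
        case .requestComplete(_, let result):
            print("Request finished: \(result)")
        case .requestError(_, let error):
            print("Request failed: \(error)")
        case .processingComplete:
            print("All requests processed")
        default:
            break
        }
    }
}

try session.process(requests: [.modelFile(url: outputURL, detail: .medium)])
```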
Replies: 5 · Boosts: 1 · Views: 521 · Activity: 1w
Does anyone actually notice any improvements using the new ObjectCaptureSession with PhotogrammetrySession?
We have implemented all the recent additions Apple made on the iOS side for guided capture using LiDAR and image data via ObjectCaptureSession. After the capture finishes, we send our images to PhotogrammetrySession on macOS to reconstruct models at a higher quality (Medium) than the Preview quality currently supported on iOS.

We have now done a few side-by-side captures comparing the new ObjectCaptureSession against traditional capture via the AVFoundation framework, but have not seen the improvements that were claimed in the session Apple hosted at WWDC. In fact, we feel the results are actually worse, because the images obtained through the new ObjectCaptureSession aren't as high quality as the images we get from AVFoundation.

Are we missing something here? Is PhotogrammetrySession on macOS not using this new additional LiDAR data, or have the improvements been overstated? From the documentation it is not at all clear how the new LiDAR data gets stored and how that data transfers.

We are using iOS 17 beta 4 and macOS Sonoma beta 4 in our testing. Both codebases were compiled with Xcode 15 beta 5.
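In case it helps anyone comparing notes: our working assumption is that the extra capture data travels through the iOS 17 checkpointDirectory option on both sessions, alongside the HEIC images themselves, so both folders need to reach the Mac. A sketch of that hand-off; captureFolder and transferredFolder are our own placeholder URLs:

```swift
import RealityKit

// iOS side: give the capture session a checkpoint directory so the extra
// session data (which we assume includes the LiDAR-derived data) is persisted.
var captureConfig = ObjectCaptureSession.Configuration()
captureConfig.checkpointDirectory = captureFolder.appendingPathComponent("Checkpoints")
objectCaptureSession.start(imagesDirectory: captureFolder.appendingPathComponent("Images"),
                           configuration: captureConfig)

// macOS side: after copying both folders over, point PhotogrammetrySession
// at the same checkpoint directory alongside the images.
var reconstructionConfig = PhotogrammetrySession.Configuration()
reconstructionConfig.checkpointDirectory = transferredFolder.appendingPathComponent("Checkpoints")
let session = try PhotogrammetrySession(
    input: transferredFolder.appendingPathComponent("Images"),
    configuration: reconstructionConfig)
```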
Replies: 1 · Boosts: 0 · Views: 855 · Activity: Jul ’23
SceneKit - error Thread 1: "*** -[NSPathStore2 stringByAppendingPathExtension:]: nil argument"
We are trying to save a scene as a usdz file using the scene?.write method, which worked as expected until iOS 17. On iOS 17 we get the error

Thread 1: "*** -[NSPathStore2 stringByAppendingPathExtension:]: nil argument"

which seems to be a SceneKit issue. Attaching a stack trace screenshot for reference. We call scene?.write(to: url, delegate: nil), where url has been generated using the newer .appending(path: String) method.
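Since the crash is in stringByAppendingPathExtension:, one thing worth trying (a guess on our part, not a confirmed fix) is building the destination URL so the usdz path extension is attached explicitly rather than embedded in the string passed to .appending(path:):

```swift
import SceneKit

func exportScene(_ scene: SCNScene, named name: String) -> Bool {
    let documents = FileManager.default.urls(for: .documentDirectory,
                                             in: .userDomainMask)[0]
    // Attach the extension via appendingPathExtension instead of
    // .appending(path: "\(name).usdz"), so the URL's path extension
    // is unambiguous when SceneKit derives paths from it.
    let exportURL = documents
        .appendingPathComponent(name, isDirectory: false)
        .appendingPathExtension("usdz")
    return scene.write(to: exportURL, options: nil, delegate: nil, progressHandler: nil)
}
```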
Replies: 2 · Boosts: 3 · Views: 1.1k · Activity: Jun ’23
usdzconvert real-world scale is off
Hi,

I've been trying to use the command-line usdzconvert script to convert straight from OBJ + textures to usdz, but the scaling is completely off. Even at 1000% scale-up in AR view, the model is still much smaller than its actual size. The same thing happens when converting from a glTF file instead of OBJ. It doesn't happen, however, when using the Reality Converter app.

Has anyone else run into this? Can anyone from Apple reproduce it?

Cheers,
Markus
Replies: 2 · Boosts: 0 · Views: 1.7k · Activity: Jun ’20