Hi Apple Team,
We noticed the following exciting changelog in the latest macOS 26 beta:
A new algorithm significantly improves PhotogrammetrySession reconstruction quality of low-texture objects not captured with the ObjectCaptureSession front end. It will be downloaded and cached once in the background when the PhotogrammetrySession is used at runtime. If network isn’t available at that time, the old low quality model will be used until the new one can be downloaded. There is no code change needed to get this improved model. (145220451)
However, after trying this on the latest beta and running some tests, we do not see any difference on low-texture objects such as single-coloured surfaces. Is there anything we are missing? The machine is definitely connected to the internet, but we have no way of telling from the logs whether the new model is being used.
Thanks
Has anybody noticed a pivot issue in models reconstructed through Object Capture?
Ideally the pivot of the object should be at the centre of the bounding box, but with the new macOS changes the pivot is now at 0,0,0 (below the bounding box).
Here is a quick comparison:
Old vs. new
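A possible workaround, assuming the reconstructed model is loaded as an SCNNode on our side, is to recentre the pivot from the node's bounding box, as in the sketch below (untested against the old behaviour):

```swift
import SceneKit

// Hypothetical workaround: move a node's pivot to the centre of its bounding box.
func recenterPivot(of node: SCNNode) {
    let (minBound, maxBound) = node.boundingBox
    // SceneKit applies the inverse of the pivot transform to the node's content,
    // so translating the pivot to the bounding-box centre places the origin there.
    node.pivot = SCNMatrix4MakeTranslation(
        (minBound.x + maxBound.x) / 2,
        (minBound.y + maxBound.y) / 2,
        (minBound.z + maxBound.z) / 2
    )
}
```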
We have implemented all of the recent additions Apple made on the iOS side for guided capture using LiDAR and image data via ObjectCaptureSession.
After the capture finishes, we send our images to PhotogrammetrySession on macOS to reconstruct models at a higher quality (Medium) than the Preview quality currently supported on iOS.
We have now done a few side-by-side comparisons of the new ObjectCaptureSession against traditional capture via the AVFoundation framework, but have not seen the improvements claimed in the session Apple hosted at WWDC.
As a matter of fact, we feel the results are actually worse, because the images obtained through the new ObjectCaptureSession aren't as high quality as the images we get from AVFoundation.
Are we missing something here? Is PhotogrammetrySession on macOS not using the new additional LiDAR data, or have the improvements been overstated? From the documentation it is not at all clear how the new LiDAR data is stored and how it gets transferred.
We are using iOS 17 beta 4 and macOS Sonoma beta 4 in our testing. Both codebases have been compiled with Xcode 15 beta 5.
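For reference, here is a minimal sketch of the macOS-side reconstruction call described above (paths are placeholders, and whether the session picks up any LiDAR-derived data from the capture folder is exactly the part we are unsure about):

```swift
import Foundation
import RealityKit

// Sketch of the macOS-side reconstruction at Medium detail; paths are placeholders.
func reconstructMediumModel(from captureFolder: URL, to outputURL: URL) async throws {
    var configuration = PhotogrammetrySession.Configuration()
    configuration.featureSensitivity = .normal

    // The input folder contains the images copied over from the iOS capture.
    let session = try PhotogrammetrySession(input: captureFolder,
                                            configuration: configuration)

    // Request a Medium-detail model, the quality level mentioned above.
    try session.process(requests: [.modelFile(url: outputURL, detail: .medium)])

    for try await output in session.outputs {
        switch output {
        case .processingComplete:
            print("Reconstruction finished: \(outputURL.path)")
        case .requestError(_, let error):
            print("Request failed: \(error)")
        default:
            break
        }
    }
}
```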
We are trying to save a scene to USDZ using the scene?.write method, which worked as expected until iOS 17.
On iOS 17 we are getting the error Thread 1: "*** -[NSPathStore2 stringByAppendingPathExtension:]: nil argument", which seems to be a SceneKit issue. Attaching a stack trace screenshot for reference.
We are calling scene?.write(to: url, delegate: nil), where url has been generated using the .appending(path:) method.
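A minimal sketch of a possible workaround, assuming the crash comes from the URL having an empty path extension: append an explicit "usdz" extension before calling write (the file name below is a placeholder):

```swift
import SceneKit

// Sketch of the export call above, with an explicit "usdz" extension on the URL.
let exportURL = FileManager.default.temporaryDirectory
    .appending(path: "capture")         // file name without extension (placeholder)
    .appendingPathExtension("usdz")     // ensure pathExtension is non-empty

let scene = SCNScene()
let success = scene.write(to: exportURL, options: nil, delegate: nil, progressHandler: nil)
print("Export succeeded: \(success)")
```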
Is there any developer documentation on how we can make use of this new 2x mode on the iPhone 14 Pro?
Are both TIFFs and DNG (Apple ProRAW format) currently not supported?
Will you make the GUI sample app that was used during the session available as well?
Thanks!
USD added Draco compression support in 19.11.
Is this something that Apple is also considering adopting?
Hi,
We've been trying to use the command-line usdzconvert script to convert straight from OBJ + textures to USDZ, but the scaling is completely off. Even at 1000% scale-up in AR view the model is still much smaller than its actual size.
This also happens when we do the conversion from the glTF file instead of the OBJ.
It does not happen, however, when using the Reality Converter app.
Has anyone else run into this? Can anyone from Apple reproduce it?
Cheers,
Markus
Is there a way to lock down the scale of a model in AR Quick Look?
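One approach that may achieve this, assuming the model is presented in-app through QLPreviewController, is to wrap the file in an ARQuickLookPreviewItem and disable allowsContentScaling (sketch below; the file name is a placeholder):

```swift
import UIKit
import QuickLook
import ARKit

final class ModelPreviewController: UIViewController, QLPreviewControllerDataSource {
    // Placeholder local .usdz file to preview.
    private let modelURL = Bundle.main.url(forResource: "toy", withExtension: "usdz")!

    func presentPreview() {
        let preview = QLPreviewController()
        preview.dataSource = self
        present(preview, animated: true)
    }

    func numberOfPreviewItems(in controller: QLPreviewController) -> Int { 1 }

    func previewController(_ controller: QLPreviewController,
                           previewItemAt index: Int) -> QLPreviewItem {
        let item = ARQuickLookPreviewItem(fileAt: modelURL)
        // Keeps the model at its authored real-world size: pinch-to-scale is disabled.
        item.allowsContentScaling = false
        return item
    }
}
```

allowsContentScaling defaults to true, so each preview item has to opt out explicitly.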