Hello, since updating to beta 3 the sculpting sample app doesn't work; it crashes at launch.
It seems to be something in AnchorEntity or AccessoryAnchoringSource:
Referenced from: <00B81486-1A74-30A0-B75B-4B39E3AF57DF> /private/var/containers/Bundle/Application/3D2EBF59-19F0-4BF4-8567-6962AA36A2C6/delete.app/delete.debug.dylib
Expected in: <BAA9B221-78A1-3B99-AA2F-B8DFCD179FC7> /System/Library/Frameworks/RealityFoundation.framework/RealityFoundation
Hello
RemoteDeviceIdentifier returns nil, which then crashes the HoverEffect sample project.
I have visionOS 26 beta 2 on both devices.
What is the correct way to run this code sample?
I'm trying to implement a 3D mixer and update an AVAudioEnvironmentNode's position based on a 3D model,
but it doesn't seem to do anything on the device; it does seem to work in the simulator though.
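For context, a minimal sketch of the setup that usually has to be in place before positions have any audible effect on device; the engine wiring and names here are assumptions, not the poster's code, and the key point is that spatialization only applies to sources connected with a mono format:

```swift
import AVFoundation

// Minimal sketch (assumed setup, placeholder names): spatializing one player
// node through AVAudioEnvironmentNode. On device, positions only matter when
// the source is connected with a *mono* format and a spatial rendering
// algorithm is selected.
let engine = AVAudioEngine()
let environment = AVAudioEnvironmentNode()
let player = AVAudioPlayerNode()

engine.attach(environment)
engine.attach(player)

// Mono connection into the environment node; stereo sources bypass spatialization.
let monoFormat = AVAudioFormat(standardFormatWithSampleRate: 44_100, channels: 1)
engine.connect(player, to: environment, format: monoFormat)
engine.connect(environment, to: engine.mainMixerNode, format: nil)

player.renderingAlgorithm = .HRTFHQ                        // spatial algorithm for this source
player.position = AVAudio3DPoint(x: 2, y: 0, z: -1)        // source position driven by the 3D model
environment.listenerPosition = AVAudio3DPoint(x: 0, y: 0, z: 0)

try engine.start()
// player.scheduleFile(yourAudioFile, at: nil)   // schedule a mono AVAudioFile, then player.play()
```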
Surface screen position
Does it return the model's vertex XYZ positions, normalized?
The node graph needs more tutorials and explanations.
I've made zero progress.
I see the documentation says visionOS is supported,
but when I run the code, nextDrawable() returns nil.
The documentation is very sparse.
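For reference, a minimal sketch of the CAMetalLayer setup checks that commonly decide whether nextDrawable() returns nil; this assumes windowed (non-immersive) Metal content and placeholder sizes, not the sample's actual code:

```swift
import QuartzCore
import Metal

// Minimal sketch (assumptions, placeholder values): nextDrawable() returns nil
// when the layer has no Metal device, a zero drawableSize, or all drawables
// are in flight and the request times out.
let layer = CAMetalLayer()
layer.device = MTLCreateSystemDefaultDevice()
layer.pixelFormat = .bgra8Unorm
layer.drawableSize = CGSize(width: 1024, height: 1024)   // must be non-zero
layer.allowsNextDrawableTimeout = false                   // block instead of returning nil on timeout

if let drawable = layer.nextDrawable() {
    // encode and present using drawable.texture ...
} else {
    // Still nil: on visionOS, fully immersive rendering goes through
    // CompositorServices' LayerRenderer rather than CAMetalLayer.
}
```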
Hello,
I'm trying to add ray-traced shadows to my deferred rendering engine and I've been stuck for days now.
I'm using MPSRayIntersector and following Apple's example at
https://developer.apple.com/documentation/metalperformanceshaders/animating_and_denoising_a_raytraced_scene
Here's my scene with two models.
Please help, and thank you!
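Not the poster's code, but for comparison, a minimal sketch of how shadow rays are typically encoded with MPSRayIntersector; the buffers (`vertexBuffer`, `shadowRayBuffer`, `intersectionBuffer`) and counts are placeholders assumed to exist from the deferred G-buffer pass:

```swift
import MetalPerformanceShaders

// Minimal sketch (placeholder buffers): shadow rays only need a hit/miss
// answer, so use intersection type .any and the smallest intersection data type.
let intersector = MPSRayIntersector(device: device)
intersector.rayDataType = .originMinDistanceDirectionMaxDistance
intersector.rayStride = MemoryLayout<MPSRayOriginMinDistanceDirectionMaxDistance>.stride
intersector.intersectionDataType = .distance

// One acceleration structure over the merged geometry of both models.
let accelerationStructure = MPSTriangleAccelerationStructure(device: device)
accelerationStructure.vertexBuffer = vertexBuffer
accelerationStructure.triangleCount = triangleCount
accelerationStructure.rebuild()

// One shadow ray per G-buffer pixel, generated in a compute pass from the
// position/normal targets, offset slightly along the normal to avoid self-shadowing.
intersector.encodeIntersection(commandBuffer: commandBuffer,
                               intersectionType: .any,
                               rayBuffer: shadowRayBuffer, rayBufferOffset: 0,
                               intersectionBuffer: intersectionBuffer, intersectionBufferOffset: 0,
                               rayCount: width * height,
                               accelerationStructure: accelerationStructure)
```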
The idea of using AVFoundation is that it should give higher-resolution, more accurate depth maps than ARKit, but when I try it on an iPhone 12 Pro and a 13 Pro that isn't the case.
The filtered depth map jumps around too much between depth values, and the unfiltered depth map has a lot of holes and sometimes goes completely black.
Am I missing something? How should I be testing it?
I can easily get the distance between two points using ARKit, but with AVFoundation
the data doesn't make sense; I get wrong values.
Any help is appreciated,
thank you!
I'm trying to build computer vision using the LiDAR.
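For context, a minimal sketch of the AVFoundation LiDAR depth setup being described, with the filtering toggle that trades holes for temporal smoothing; the delegate and queue names are placeholders:

```swift
import AVFoundation

// Minimal sketch (assumed wiring, placeholder names): stream depth from the
// LiDAR camera via AVFoundation. isFilteringEnabled trades artifacts:
// false gives the raw map with holes, true gives a smoothed map whose
// values can jump between frames.
func makeLiDARDepthSession(delegate: AVCaptureDepthDataOutputDelegate) -> AVCaptureSession? {
    let session = AVCaptureSession()
    guard let device = AVCaptureDevice.default(.builtInLiDARDepthCamera,
                                               for: .video, position: .back),
          let input = try? AVCaptureDeviceInput(device: device),
          session.canAddInput(input) else { return nil }
    session.addInput(input)

    let depthOutput = AVCaptureDepthDataOutput()
    depthOutput.isFilteringEnabled = false          // toggle to compare filtered vs. raw
    guard session.canAddOutput(depthOutput) else { return nil }
    session.addOutput(depthOutput)
    depthOutput.setDelegate(delegate, callbackQueue: DispatchQueue(label: "depth"))

    session.startRunning()
    return session
}
```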
I don't see the sample project that's shown in the mesh shaders WWDC video.
I want to use the RoomPlan API alongside the scene reconstruction API at the same time, but it doesn't seem to work.
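One approach worth trying (an assumption on my part, not a confirmed fix): from iOS 17, RoomCaptureSession can be created on top of your own ARSession, which may let RoomPlan and scene reconstruction share a single session. A hedged sketch:

```swift
import ARKit
import RoomPlan

// Sketch of one approach to try (assumptions, not a confirmed fix):
// run your own ARSession with scene reconstruction enabled, then hand
// that session to RoomPlan so both features share it.
let arSession = ARSession()

let worldConfig = ARWorldTrackingConfiguration()
if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
    worldConfig.sceneReconstruction = .mesh   // ARMeshAnchors arrive via arSession's delegate
}
arSession.run(worldConfig)

let roomSession = RoomCaptureSession(arSession: arSession)   // custom-session initializer, iOS 17+
roomSession.run(configuration: RoomCaptureSession.Configuration())
```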
I want to capture a high-resolution photo, but with the builtInUltraWideCamera. Is it possible to configure that using configurableCaptureDeviceForPrimaryCamera?
thank you
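For reference, a hedged sketch of high-resolution still capture through ARKit, assuming a world-tracking session and a placeholder `session`; note that configurableCaptureDeviceForPrimaryCamera exposes the primary (wide) camera device ARKit is already using, so it is unclear whether the ultra-wide camera can be selected through it:

```swift
import ARKit

// Minimal sketch (assumed session, placeholder names): high-resolution frame
// capture from the primary camera in a world-tracking session.
let configuration = ARWorldTrackingConfiguration()
if let format = ARWorldTrackingConfiguration.recommendedVideoFormatForHighResolutionFrameCapturing {
    configuration.videoFormat = format
}
session.run(configuration)

session.captureHighResolutionFrame { frame, error in
    guard let frame else { return }
    // frame.capturedImage is the high-resolution pixel buffer from the primary (wide) camera.
}
```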
Most of my GPU development time goes to dealing with bad XML/JSON font files instead of doing actual GPU work.
There are many ways of rendering text, and all of them have their caveats.
Hello,
After each PhotogrammetrySession completes successfully, some memory is not being released.
After a couple of PhotogrammetrySessions it fills all of my RAM.
How do I completely release all memory before the next PhotogrammetrySession?
Thank you
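One workaround worth trying (an assumption, not a confirmed fix): scope each session inside its own function so the session and its outputs sequence can be deallocated before the next run. A minimal sketch with placeholder URLs:

```swift
import RealityKit

// Minimal sketch (scoping workaround to try, not a confirmed fix): create each
// PhotogrammetrySession in its own scope so it is released before the next run.
func reconstruct(input: URL, modelURL: URL) async throws {
    let session = try PhotogrammetrySession(input: input)
    try session.process(requests: [.modelFile(url: modelURL, detail: .medium)])

    for try await output in session.outputs {
        switch output {
        case .processingComplete:
            return                               // session deallocates when this scope ends
        case .requestError(_, let error):
            throw error
        default:
            continue                             // progress and other informational messages
        }
    }
}
```

If memory still grows across repeated calls after scoping like this, capturing a memory graph and filing a Feedback report is probably the next step.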