I have been able to take a static audio file and implement it as an AudioFileResource attached to an entity in my RealityKit scene. But I am currently using AVAudioEngine to change pitch based on relative position, and ideally I would be able to spatialize the sounds I'm generating relative to 3D points. I have experimented with AVAudioPlayerNode's sourceMode, but as far as I can tell there is no way to tie an AVAudioPlayerNode to a 3D point in my AR scene unless I can attach my audio to a RealityKit AudioResource. Is that possible? Is there some other way to do it?
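For reference, this is roughly how I'm attaching the static file today (the file name and entity setup are just placeholders):

```swift
import RealityKit

// Roughly what I have working now: a static file loaded as an
// AudioFileResource and played from an entity.
func attachAudio(to entity: Entity) {
    do {
        let resource = try AudioFileResource.load(named: "engine.caf",
                                                  inputMode: .spatial,
                                                  shouldLoop: true)
        let controller = entity.prepareAudio(resource)
        controller.play()
    } catch {
        print("Failed to load audio resource: \(error)")
    }
}
```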
I have a navigation controller with screens leading up to a RealityKit scene in an ARView. Once the view loads and my model is in the scene, I call arView.installGestures(_:for:) to add translation and rotation gestures to my model, roughly as sketched below. This works just fine until I pop my view controller back to a previous screen and then return to my ARView. The model loads, and the gestures appear to be attached to the ARView exactly as in the first scenario, but now the gestures no longer work, or are possibly shifted out of position relative to the model. Sometimes I can partially reactivate the gestures by touching the area around the model, after which touching the model works too, but it is very inconsistent.
I will also file a bug report, but I'm wondering if there is simply some cleanup I need to do before exiting the view controller to ensure the gestures work every time.
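Here is roughly how I install the gestures once the model is loaded ("modelEntity" stands in for my actual model):

```swift
import RealityKit

// Collision shapes are generated first so the entity satisfies HasCollision,
// then translation and rotation gestures are installed on the ARView.
func installModelGestures(on arView: ARView, for modelEntity: ModelEntity) {
    modelEntity.generateCollisionShapes(recursive: true)
    arView.installGestures([.translation, .rotation], for: modelEntity)
}
```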
It looks like it is possible to add a PerspectiveCamera to a RealityKit scene in nonAR configurations; is there a way to do a kind of picture-in-picture layout, with a live camera feed showing one perspective while a PerspectiveCamera simultaneously shows a second angle on the same virtual content? Short of that, is it feasible to record my AR data and play it back in a nonAR configuration using the PerspectiveCamera?
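For context, this is the kind of nonAR setup I mean (frame and camera position are arbitrary placeholders):

```swift
import RealityKit

// An ARView in .nonAR camera mode whose viewpoint comes from a PerspectiveCamera.
let secondaryView = ARView(frame: .zero,
                           cameraMode: .nonAR,
                           automaticallyConfigureSession: false)
let cameraAnchor = AnchorEntity(world: .zero)
let camera = PerspectiveCamera()
camera.look(at: .zero, from: [0, 1, 2], relativeTo: nil)
cameraAnchor.addChild(camera)
secondaryView.scene.addAnchor(cameraAnchor)
```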
In my application, we want to add an optional RealityKit experience for customers who have upgraded their OS, without forcing everyone else to do so. I was going through my code adding @available attributes to all of my classes that access iOS 13-specific features, but I discovered in the process that many of the errors were in the generated RealityKit class files, which I cannot modify. Is there any way to produce a build that includes RealityKit when the deployment target is below iOS 13?
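This is the pattern I'm applying to my own classes; the generated RealityKit files can't be annotated this way, which is where I'm stuck (the class name below is made up for illustration):

```swift
import UIKit

// My own RealityKit-dependent code is gated behind availability checks.
@available(iOS 13.0, *)
final class RealityExperienceViewController: UIViewController {
    // RealityKit-specific code lives here.
}

func presentRealityExperienceIfAvailable(from presenter: UIViewController) {
    if #available(iOS 13.0, *) {
        presenter.present(RealityExperienceViewController(), animated: true)
    } else {
        // Older OS versions simply skip the optional RealityKit experience.
    }
}
```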
I am trying to do a hit test of sorts between a person in my ARFrame and a RealityKit Entity. So far I have been able to take my entity's position and project it to a CGPoint, which I can match up with the ARFrame's segmentationBuffer to determine whether a person intersects with that entity. Now I want to find out whether that person is at the same depth as the entity. How do I relate the entity's SIMD3 position value, which I believe is in meters, to the estimatedDepthData values?
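Here is roughly what I have for the projection step, plus a naive attempt at sampling estimatedDepthData that I'm not sure is the right way to relate the two (the view-to-buffer coordinate mapping below ignores the display transform):

```swift
import ARKit
import RealityKit

// Project the entity's world position into the view, then sample
// estimatedDepthData at the corresponding pixel.
func estimatedDepth(of entity: Entity, in frame: ARFrame, arView: ARView) -> Float? {
    let worldPosition = entity.position(relativeTo: nil)   // meters, world space
    guard let screenPoint = arView.project(worldPosition),
          let depthBuffer = frame.estimatedDepthData else { return nil }

    CVPixelBufferLockBaseAddress(depthBuffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(depthBuffer, .readOnly) }

    // Naively scale the view point into depth-buffer pixel coordinates.
    let width = CVPixelBufferGetWidth(depthBuffer)
    let height = CVPixelBufferGetHeight(depthBuffer)
    let x = Int(screenPoint.x / arView.bounds.width * CGFloat(width))
    let y = Int(screenPoint.y / arView.bounds.height * CGFloat(height))
    guard (0..<width).contains(x), (0..<height).contains(y),
          let base = CVPixelBufferGetBaseAddress(depthBuffer) else { return nil }

    // As far as I can tell, estimatedDepthData is 32-bit float depth in meters.
    let rowBytes = CVPixelBufferGetBytesPerRow(depthBuffer)
    let pointer = base.advanced(by: y * rowBytes + x * MemoryLayout<Float32>.stride)
    return pointer.assumingMemoryBound(to: Float32.self).pointee
}
```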
In ARKit 3, the person segmentation with depth and body detection frame semantics seem to be mutually exclusive: combining them crashes with the error "This set of frame semantics is not supported on this configuration." Is this still the case in ARKit 4?
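This is roughly the combination I'm asking about (I'm showing a world tracking configuration here, which is just my assumption of a typical setup):

```swift
import ARKit

// Combining person segmentation with depth and body detection.
let configuration = ARWorldTrackingConfiguration()
let semantics: ARConfiguration.FrameSemantics = [.personSegmentationWithDepth, .bodyDetection]

if ARWorldTrackingConfiguration.supportsFrameSemantics(semantics) {
    configuration.frameSemantics = semantics
} else {
    // Setting the combined semantics anyway is what produces
    // "This set of frame semantics is not supported on this configuration."
}
```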