Can ARKit/RealityKit track multiple bodies for animation simultaneously?
Reviewing Apple's CapturingBodyMotionIn3D sample code (from the WWDC 2019 session), there is no explicit linkage between the ARBodyAnchor and the loaded BodyTrackedEntity (e.g., the AnchorEntity used for the BodyTrackedEntity is not associated with the ARBodyAnchor). There seems to be some hidden linkage between ARKit's ARBodyAnchor and RealityKit's BodyTrackedEntity.
Likewise, the ARFrame has a property for only a single ARBody2D (detectedBody).
My interpretation is that only a single person can be tracked at a time. Is this correct?
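For reference, here is a minimal sketch of how I would try to observe multiple bodies if they were supported, by counting ARBodyAnchor instances per frame in the session delegate (the delegate class name is mine):

import ARKit

class BodyCountDelegate: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // ARFrame exposes at most one 2D body
        let has2DBody = frame.detectedBody != nil
        // Count 3D body anchors to see whether more than one is ever reported
        let bodyAnchorCount = frame.anchors.filter { $0 is ARBodyAnchor }.count
        print("2D body: \(has2DBody), 3D body anchors: \(bodyAnchorCount)")
    }
}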
This is probably a minor point because it wouldn't affect distributed binaries, but I thought I'd mention it in case the behavior is unexpected.
After watching the WWDC 2020 session Explore logging in Swift, I tried some simple examples in a Mac command-line app. I was surprised to see the strings were all printed just fine; there was no redaction, at least when running the program from Xcode.
(Even using the old os_log() approach showed the strings without needing to add %{public}@.)
However, if I run the program from a Terminal shell, the string arguments are properly redacted.
I actually like this behavior (showing more while running in Xcode), but I thought I'd just raise the issue. Sample code and screenshot from Console are shown below.
import Foundation
import os

let logger = Logger(subsystem: "com.example.logging_test", category: "hello")

let greeting = "Hello"
let personName = "World"

// Default privacy: I expected the dynamic strings to be redacted
logger.log("\(greeting), \(personName)")
// Explicitly private first argument
logger.log("\(greeting, privacy: .private), \(personName)")
// Explicitly public first argument
logger.log("\(greeting, privacy: .public), \(personName)")
// Legacy API: %@ should also be private by default
os_log("%@, %@", greeting, personName)
I have an ARView in nonAR cameraMode and a PerspectiveCamera. When I rotate my iPhone from portrait to landscape mode, the size of the content shrinks.
For example, the attached image shows the same scene with the phone in portrait and landscape modes. The blue cube is noticeably smaller in landscape. The size of the cube relative to the vertical space (i.e., the height of the view) is consistent in each situation.
Is there a way to keep the scene (e.g., the cube) the same size whether I am in portrait or landscape mode?
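For context, here is roughly how I set up the camera (a sketch; names and values are mine). My guess is that fieldOfViewInDegrees is treated as the vertical field of view, which would explain why the cube's size tracks the view's height:

import RealityKit

func makeNonARView() -> ARView {
    // ARView rendering only virtual content, no camera passthrough
    let arView = ARView(frame: .zero, cameraMode: .nonAR, automaticallyConfigureSession: false)

    // A fixed vertical FOV keeps content at a constant fraction of the view's
    // height, so the cube looks smaller when the height shrinks in landscape
    let camera = PerspectiveCamera()
    camera.camera.fieldOfViewInDegrees = 60
    camera.position = [0, 0, 1]

    let cameraAnchor = AnchorEntity(world: .zero)
    cameraAnchor.addChild(camera)
    arView.scene.addAnchor(cameraAnchor)
    return arView
}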
I just downloaded the latest Xcode beta, Version 15.0 (15A240d) and ran into some issues:
On startup, I was not given an option to download the visionOS simulator.
I cannot create a project targeting visionOS.
I cannot build/run a hello world app for visionOS.
In my previous Xcode beta (Version 15.0 beta 8 (15A5229m)), there was an option to download the visionOS simulator, and I could create projects for visionOS and run the code in the simulator.
The Xcode file downloaded was named "Xcode" instead of "Xcode-beta". I didn't want to get rid of the existing Xcode, so I selected Keep Both. Now I have 3 Xcodes in the Applications folder:
Xcode
Xcode copy
Xcode-beta
That is the only thing I see that might have been different about my install.
Hardware: Mac Studio 2022 with M1 Max
macOS Ventura 13.5.2
Any idea what I did wrong?
When defining a volumetric WindowGroup, I can set the defaultSize().
Is it possible to set a different volume size when opening a window with openWindow()?
In my use case, I want to display potentially different models that are of different sizes inside the volumetric window, and I want to preserve each model's size. I would like to create a volumetric window that is optimally sized for each model.
Alternatively, I could create a volumetric window that is large enough to fit the largest model and then reposition smaller models inside the volume to be at the front and bottom, but I haven't figured out how to do that either (see my Post on that question).
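For reference, here is how I am defining the volume today (a minimal sketch; the window id and placeholder content view are mine):

import SwiftUI
import RealityKit

@main
struct ModelViewerApp: App {
    var body: some Scene {
        WindowGroup(id: "ModelVolume") {
            // Placeholder content view
            RealityView { _ in }
        }
        .windowStyle(.volumetric)
        // One size for every window opened from this group; I have not found
        // a way to vary it per openWindow(id: "ModelVolume") call
        .defaultSize(width: 1.0, height: 1.0, depth: 1.0, in: .meters)
    }
}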
In a progressive ImmersiveSpace, I created an object (a cylinder) and applied an OcclusionMaterial to it. It does hide my virtual content behind it, but does not show the content of my room. The cylinder just appears black.
In a progressive (or full?) ImmersiveSpace, is it possible to apply an OcclusionMaterial (or something else) so I can see the room behind the virtual content?
Basically, I want to punch a hole through the virtual content and see the room behind it.
As a practical example, imagine being in a progressive ImmersiveSpace but having a plane with an OcclusionMaterial above your Apple Magic Keyboard so you can see your keyboard.
Is this possible?
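For reference, here is essentially how I set up the cylinder (a sketch; the dimensions are arbitrary, and I believe MeshResource.generateCylinder requires the newer RealityKit releases — any mesh shows the same behavior):

import RealityKit

// A cylinder with OcclusionMaterial; in my progressive ImmersiveSpace it
// hides virtual content behind it but renders black instead of passthrough
func makeOcclusionCylinder() -> Entity {
    let mesh = MeshResource.generateCylinder(height: 0.5, radius: 0.1)
    let entity = Entity()
    entity.components.set(ModelComponent(mesh: mesh, materials: [OcclusionMaterial()]))
    return entity
}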
I have a simple visionOS app that uses a RealityView to map floors and ceilings using PlaneDetectionProvider and PlaneAnchors.
I can look at a location on the floor or ceiling, tap, and place an object at that location (I am currently placing a small cube with X-Y-Z axes sticking out at the location).
The tap locations are consistently about 0.35m off along the horizontal plane (it is never off vertically) from where I was looking.
If I move to different locations, the offset is the same in real space, so it doesn't appear to be associated with the orientation of the Apple Vision Pro (e.g., it isn't consistently a little to the left of where I was looking).
Has anyone else run into the issue of a spatial tap gesture resulting in a location offset from where they are looking?
Attached is an image showing this. I focused on the corner of the carpet (yellow circle), tapped my fingers to trigger a tap gesture in RealityView, extracted the location, and placed a purple cube at that location.
I stood in 4 different locations (where the orange squares are), looked at the corner of the rug (yellow circle), and tapped. All 4 purple cubes are placed at about the same location, ~0.35m away from where I was looking.
Here is how I captured the tap gesture and extracted the 3D location:
var myTapGesture: some Gesture {
    SpatialTapGesture()
        .targetedToAnyEntity()
        .onEnded { event in
            // Convert the tap location from global space to scene space
            let location3D = event.convert(event.location3D, from: .global, to: .scene)
            let entity = event.entity
            model.handleTap(location: location3D, entity: entity)
        }
}
Here is how I set the position of the purple cube:
func handleTap(location: SIMD3<Float>, entity: Entity) {
    let positionEntity = Entity()
    // Place the marker in world space (relativeTo: nil)
    positionEntity.setPosition(location, relativeTo: nil)
    ...
}
I want to extend an existing macOS app distributed through the Mac App Store with the capability to track the Wi-Fi's noise and signal strength along with the SSID it is connected to over time.
Using CWWiFiClient.shared().interface(), I can get noiseMeasurement() and rssiValue() fine, but ssid() always returns nil.
I am assuming this is a privacy restriction of some kind.
Are there specific entitlements I can request or ways to prompt the user to grant the app privilege to access the SSID values?
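Here is a trimmed sketch of the relevant calls:

import CoreWLAN

func logWiFiStats() {
    guard let interface = CWWiFiClient.shared().interface() else { return }
    let rssi = interface.rssiValue()          // returns a value as expected
    let noise = interface.noiseMeasurement()  // returns a value as expected
    let ssid = interface.ssid()               // always nil in my Mac App Store build
    print("SSID: \(ssid ?? "<nil>"), RSSI: \(rssi) dBm, noise: \(noise) dBm")
}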
I am testing RealityView on a Mac, and I am having trouble controlling the lighting.
I initially add a red cube, and everything is fine. (see figure 1)
I then activate a skybox with a star field, the star field appears, and then the red cube is only lit by the star field.
Then I deactivate the skybox expecting the original lighting to return, but the cube continues to be lit by the skybox. The background is no longer showing the skybox, but the cube is never lit like it originally was.
Is there a way to return the lighting of the model to the original lighting I had before adding the skybox?
I seem to recall ARView's environment property had both a lighting.resource and a background, but I don't see both of those properties in RealityViewCameraContent's environment.
Sample code for macOS 15.1 beta (24B5024e), Xcode 16.0 beta (16A5171c):
struct MyRealityView: View {
    @Binding var isSwitchOn: Bool
    @State private var blueNebulaSkyboxResource: EnvironmentResource?

    var body: some View {
        RealityView { content in
            // Create a red cube 10cm on a side
            let mesh = MeshResource.generateBox(size: 0.1)
            let simpleMaterial = SimpleMaterial(color: .red, isMetallic: false)
            let model = ModelComponent(
                mesh: mesh,
                materials: [simpleMaterial]
            )
            let redBoxEntity = Entity()
            redBoxEntity.components.set(model)
            content.add(redBoxEntity)

            // Load the skybox resource
            let blueNeb2Name = "BlueNeb2"
            blueNebulaSkyboxResource = try? await EnvironmentResource(named: blueNeb2Name)
        } update: { content in
            // Toggle between the skybox environment and the default environment
            if let skybox = blueNebulaSkyboxResource, isSwitchOn {
                content.environment = .skybox(skybox)
            } else {
                content.environment = .default
            }
        }
        .realityViewCameraControls(CameraControls.orbit)
    }
}
Figure 1 (default lighting before adding the skybox):
Figure 2 (after activating skybox with star field; cube is lit by / reflects skybox):
Figure 3 (removing skybox by setting content.environment to .default, cube still reflects skybox; it is hard to see):
In visionOS 2 beta, I have a character loaded from a Reality Composer Pro scene standing on the floor, but he isn't casting a shadow on the floor.
I added a GroundingShadowComponent in RealityView, and he does cast shadows on himself (e.g., his hands cast shadows on his shoes), but I don't see any shadow on the floor.
Do I need to enable something to have my character cast a shadow on the real-world floor?
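For reference, here is how I am adding the component (applied recursively, since the character is a hierarchy of entities; the function name is mine):

import RealityKit

// Enable grounding shadows on the character and all of its descendants
func enableGroundingShadows(on entity: Entity) {
    entity.components.set(GroundingShadowComponent(castsShadow: true))
    for child in entity.children {
        enableGroundingShadows(on: child)
    }
}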
I am using RealityView for an iOS program.
Is it possible to turn off the camera passthrough so only my virtual content is showing? I am looking to create a VR experience.
I have a workaround where I turn off occlusion and then create a sphere around me (e.g., with a black texture), but in the pre-RealityView days, I think I used something like this:
arView.environment.background = .color(.black)
Is there something similar in RealityView for iOS?
Here are some snippets of my current workaround inside RealityView.
First create the sphere to surround the user:
// Create a large sphere to surround the user
let blackMaterial = UnlitMaterial(color: .black)
let sphereMesh = MeshResource.generateSphere(radius: 100)
let sphereModelComponent = ModelComponent(mesh: sphereMesh, materials: [blackMaterial])
let sphereEntity = Entity()
sphereEntity.components.set(sphereModelComponent)
// Invert the sphere so the material renders on the inside
sphereEntity.scale *= .init(x: -1, y: 1, z: 1)
content.add(sphereEntity)
Then turn off occlusion:
// Turn off occlusion
let configuration = SpatialTrackingSession.Configuration(
    tracking: [],
    sceneUnderstanding: [],
    camera: .back)
let session = SpatialTrackingSession()
await session.run(configuration)
When testing In-App Purchases in Xcode with a .storekit file, I can delete past purchase transactions, so I can re-test the purchase experience.
I've switched to using a Sandbox tester and made purchases. However, I cannot find a way to delete previous purchase transactions made in the sandbox so that I can re-run the tests.
Is this possible?
I am finding some unexpected behavior with lights I've been adding to a RealityKit scene.
For example, I created 14 PointLights, but only 8 appeared to be used to illuminate the scene.
In another example, I created 7 PointLights and 7 SpotLights, and the frame rate dropped quite a bit.
Are lights computationally expensive, causing some adaptive behavior by RealityKit?
Should I be judicious in my use of lights for a scene?
(Note: I set arView.environment.lighting.resource to a skybox with a black image; my goal was to completely control the lighting. I don't know if that adds to the computational load.)
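For reference, here is roughly how I am creating the lights (a sketch; positions and intensities are placeholders):

import RealityKit

// Create several point lights; beyond about 8, additional lights did not
// appear to contribute any illumination in my scene
func makePointLights(count: Int) -> [PointLight] {
    (0..<count).map { index in
        let light = PointLight()
        light.light.intensity = 10_000               // placeholder
        light.light.attenuationRadius = 2.0          // placeholder
        light.position = [Float(index) * 0.5, 1, 0]  // placeholder layout
        return light
    }
}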
When setting ARView's environment camera feed exposure to a negative value to make the camera feed dimmer, for example
arView.environment.background = .cameraFeed(exposureCompensation: -3)
can this negatively affect ARKit's localization and mapping capability?
That is, is the device's use of the camera for SLAM purposes independent of the exposureCompensation value?
I am having trouble placing a model inside a volumetric window.
I have a model (just a simple cube created in Reality Composer Pro that is 0.2m on a side and centered at the origin), and I want to display it in a volumetric window that is 1.0m on a side while preserving the cube's original 0.2m size.
The small cube seems to be flush against the back and top of the larger volumetric window.
Is it possible to initially position the model inside the volume?
For example, can the model be placed flush against the bottom and front of the volumetric window?
(Note: the actual use case is placing 3D terrain, which tends to be mostly flat like a pizza box, flush against the bottom of the volumetric window.)
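Here is the kind of thing I have been trying, using GeometryReader3D to get the volume's bounds and then offsetting the model (treat this as a sketch; my use of the convert API may be off, and "Cube" is my Reality Composer Pro model name):

import SwiftUI
import RealityKit

struct ModelVolumeView: View {
    var body: some View {
        GeometryReader3D { proxy in
            RealityView { content in
                // Loading from the app bundle here; my real code loads
                // the model from the Reality Composer Pro package
                guard let cube = try? await Entity(named: "Cube") else { return }

                // Convert the volume's bounds from SwiftUI points to scene meters
                let bounds = content.convert(proxy.frame(in: .local),
                                             from: .local, to: content)

                // Try to sit the cube flush with the bottom and front of the
                // volume, offset by half the cube's 0.2m size
                cube.position = [0, bounds.min.y + 0.1, bounds.max.z - 0.1]
                content.add(cube)
            }
        }
    }
}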