I am using AccessoryTrackingProvider from ARKit to get the transform of the PSVR2 controller via originFromAnchorTransform of the AccessoryAnchor. I am also trying to attach an AnchorEntity to the controller using RealityKit.
However, none of the three options for Accessory.LocationName, which is used to define the AnchorEntity target, seems to match the position on the controller that ARKit is reporting.
The attached picture shows two transforms:
RealityKit - using .gripSurface to define the AnchoringComponent.Target.accessory location.
ARKit - using originFromAnchorTransform for AccessoryTrackingProvider.
They are not aligned at the same point.
As for the other Accessory.LocationName options: .aim is located at the tip of the controller, and .grip is at the same position as .gripSurface but with a different orientation.
I am wondering why there is no option for Accessory.LocationName that actually matches the transform captured by ARKit.
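For reference, this is roughly how I am visualizing the two transforms side by side. It is a simplified sketch: the marker entities, the function shape, and the anchorUpdates loop are just how I happen to wire things up, not anything prescribed by the API.

import ARKit
import RealityKit

/// Simplified sketch of how I compare the two transforms.
/// `gripAnchorEntity` is the AnchorEntity created with the .gripSurface target,
/// and `arkitMarker` is a plain ModelEntity added directly to the RealityView content.
func visualizeTransforms(accessoryTracking: AccessoryTrackingProvider,
                         gripAnchorEntity: AnchorEntity,
                         arkitMarker: ModelEntity) async {
    let sphere = MeshResource.generateSphere(radius: 0.01)

    // Green marker rides on the RealityKit anchor.
    let anchorMarker = ModelEntity(mesh: sphere,
                                   materials: [SimpleMaterial(color: .green, isMetallic: false)])
    gripAnchorEntity.addChild(anchorMarker)

    // Red marker is driven directly from the ARKit transform.
    arkitMarker.model = ModelComponent(mesh: sphere,
                                       materials: [SimpleMaterial(color: .red, isMetallic: false)])
    for await update in accessoryTracking.anchorUpdates {
        arkitMarker.setTransformMatrix(update.anchor.originFromAnchorTransform, relativeTo: nil)
    }
}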
I exported some USD assets from IsaacSim, but they are not showing up correctly on my Apple Vision Pro.
Even though the mesh appears to be the correct color in Finder and the Diffuse Color value looks correct, the object still renders as plain gray. It should be green!
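One sanity check I have been using (a sketch, assuming the asset loads at all) is to force a known-good material onto every mesh in the imported entity, to confirm the geometry is fine and the problem is the exported material. The "robot" asset name below is just a placeholder for my exported file.

import RealityKit

// Debug check: replace every ModelComponent's materials with a known-good one,
// so I can tell whether the mesh renders correctly once the IsaacSim material
// is taken out of the picture.
func overrideMaterials(on entity: Entity, with material: any Material) {
    if var model = entity.components[ModelComponent.self] {
        model.materials = Array(repeating: material, count: max(model.materials.count, 1))
        entity.components.set(model)
    }
    for child in entity.children {
        overrideMaterials(on: child, with: material)
    }
}

// Usage, e.g. inside a RealityView make closure ("robot" is a placeholder asset name):
// let entity = try await Entity(named: "robot")
// overrideMaterials(on: entity, with: SimpleMaterial(color: .green, isMetallic: false))
// content.add(entity)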
I have a visionOS app where I instantiate ARKitSession and various providers (HandTrackingProvider and WorldTrackingProvider) in my appModel. That way, I can pass these providers to a Task which runs a gRPC server for sending the data from these providers to a client. When the user enters the immersive space of the app, the ARKitSession will run the providers if they are not running already.
I am now trying to implement the AccessoryTrackingProvider with the PSVR sense controllers but it does not fit with my current framework because the controllers may not be connected when the ARKitSession.run function is called. So I need to find a new place to start the session.
My question is, if I already have a session which is running the hand and world tracking providers, can I start another session to run the accessory tracking? Should they all be running on the same session?
Is there a way to stop the session and restart it when the controllers are connected? When I tried this, I got an error that says "It is not possible to re-run a stopped data provider (<ar_hand_tracking_provider_t: ", but if I instantiate a new HandTrackingProvider, the one that was passed to the gRPC task would no longer be the one running in the new session.
Any advice on how best to manage the various providers and ARKit sessions would be greatly appreciated.
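To make the question concrete, this is the shape I am currently leaning toward: keep the existing session for hand/world tracking and spin up a second session just for accessories once the controllers connect. This is only a rough sketch, and I am assuming AccessoryTrackingProvider is initialized from the connected Accessory objects.

import ARKit

@MainActor
final class TrackingModel {
    // Long-lived session for providers that are always available.
    let mainSession = ARKitSession()
    let handTracking = HandTrackingProvider()
    let worldTracking = WorldTrackingProvider()

    // Separate session that is only started once the controllers are connected.
    private var accessorySession: ARKitSession?

    func startMainTracking() async throws {
        try await mainSession.run([handTracking, worldTracking])
    }

    func startAccessoryTracking(with accessories: [Accessory]) async throws {
        // Assumption on my part: the provider is created from the connected accessories.
        let session = ARKitSession()
        let provider = AccessoryTrackingProvider(accessories: accessories)
        try await session.run([provider])
        accessorySession = session
    }
}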
I have my immersive space set up like:
ImmersiveSpace(id: "Theater") {
    ImmersiveTeleopView()
        .environment(appModel)
        .onAppear {
            appModel.immersiveSpaceState = .open
        }
        .onDisappear {
            appModel.immersiveSpaceState = .closed
        }
}
.immersionStyle(selection: .constant(appModel.immersionStyle.style), in: .mixed, .full)
This allows me to set the immersion style while in the space (from a Picker in a SwiftUI window). The scene responds correctly, but a lot of the functionality of my immersive space is gone after the style change: I am no longer able to enable/disable entities (which I also have toggles for in the SwiftUI window). I have to exit and re-enter the immersive space to regain the ability to change the enabled state of my entities.
My appModel.immersionStyle is inspired by the Compositor-Services demo (although I am using a RealityView) listed in https://developer.apple.com/documentation/CompositorServices/interacting-with-virtual-content-blended-with-passthrough and looks like this:
public enum IStyle: String, CaseIterable, Identifiable {
    case mixedStyle, fullStyle

    public var id: Self { self }

    var style: ImmersionStyle {
        switch self {
        case .mixedStyle:
            return .mixed
        case .fullStyle:
            return .full
        }
    }
}

/// Maintains app-wide state
@MainActor
@Observable
class AppModel {
    // Immersion Style
    public var immersionStyle: IStyle = .mixedStyle
    // ...
}
I would like to visualize a point cloud taken from a lidar. Assuming I can get the XYZ values of every point (of which there may be hundreds or thousands), what is the most efficient way for me to create a point cloud using this information?
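For context, the only approach I have come up with so far is to batch all the points into a single MeshResource (one small triangle per point) rather than creating one entity per point; a rough sketch of what I mean is below. The point size and material are arbitrary, and I assume something like LowLevelMesh or a custom material would be needed to billboard the points for larger clouds, but this at least keeps everything in one draw call.

import RealityKit
import simd

// Builds a single mesh containing one small triangle per point.
// `size` is roughly the triangle size in meters.
func makePointCloudEntity(points: [SIMD3<Float>], size: Float = 0.005) throws -> ModelEntity {
    var positions: [SIMD3<Float>] = []
    var indices: [UInt32] = []
    positions.reserveCapacity(points.count * 3)
    indices.reserveCapacity(points.count * 3)

    for point in points {
        let base = UInt32(positions.count)
        // Three vertices forming a tiny triangle around the point.
        positions.append(point + SIMD3<Float>(-size, -size, 0))
        positions.append(point + SIMD3<Float>( size, -size, 0))
        positions.append(point + SIMD3<Float>(    0,  size, 0))
        indices.append(contentsOf: [base, base + 1, base + 2])
    }

    var descriptor = MeshDescriptor(name: "pointCloud")
    descriptor.positions = MeshBuffer(positions)
    descriptor.primitives = .triangles(indices)

    let mesh = try MeshResource.generate(from: [descriptor])
    return ModelEntity(mesh: mesh, materials: [UnlitMaterial(color: .white)])
}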
I have a simple visionOS app that creates an Entity, writes it to the device, and then attempts to load it. However, when the entity file gets overwritten, the app can no longer load it correctly.
Here is my code for saving the entity.
import SwiftUI
import RealityKit
import UniformTypeIdentifiers

struct ContentView: View {
    var body: some View {
        VStack {
            ToggleImmersiveSpaceButton()
            Button("Save Entity") {
                Task {
                    // if let entity = await buildEntityHierarchy(from: urdfPath) {
                    let type = UTType.realityFile
                    let filename = "testing.\(type.preferredFilenameExtension ?? "bin")"
                    let documentsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
                    let fileURL = documentsURL.appendingPathComponent(filename)
                    do {
                        let mesh = MeshResource.generateBox(size: 1, cornerRadius: 0.05)
                        let material = SimpleMaterial(color: .blue, isMetallic: true)
                        let modelComponent = ModelComponent(mesh: mesh, materials: [material])
                        let entity = Entity()
                        entity.components.set(modelComponent)
                        print("Writing \(fileURL)")
                        try await entity.write(to: fileURL)
                    } catch {
                        print("Failed writing")
                    }
                }
            }
        }
        .padding()
    }
}
Every time I press "Save Entity", I see a warning similar to:
Writing file:///var/mobile/Containers/Data/Application/1140E7D6-D365-48A4-8BED-17BEA34E3F1E/Documents/testing.reality
Failed to set dependencies on asset 1941054755064863441 because NetworkAssetManager does not have an asset entity for that id.
When I open the immersive space, I attempt to load the same file:
import SwiftUI
import RealityKit
import UniformTypeIdentifiers

struct ImmersiveView: View {
    @Environment(AppModel.self) private var appModel

    var body: some View {
        RealityView { content in
            guard let type = UTType.realityFile.preferredFilenameExtension else {
                return
            }
            let documentsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
            let fileURL = documentsURL.appendingPathComponent("testing.\(type)")

            guard FileManager.default.fileExists(atPath: fileURL.path) else {
                print("❌ File does not exist at path: \(fileURL.path)")
                return
            }

            if let entity = try? await Entity(contentsOf: fileURL) {
                content.add(entity)
            }
        }
    }
}
I also get errors after I overwrite the entity (by pressing "Save Entity" after I have successfully loaded it once). The warnings that appear when the immersive space attempts to load the new entity are:
Asset 13277375032756336327 Mesh (RealityFileAsset)URL/file:///var/mobile/Containers/Data/Application/1140E7D6-D365-48A4-8BED-17BEA34E3F1E/Documents/testing.reality/Mesh_0.compiledmesh failure: Asset provider load failed: type 'RealityFileAsset' -- RERealityArchive: Failed to open load stream for entry 'assets/Mesh_0.compiledmesh'.
Asset 8308977590385781534 Scene (RealityFileAsset)URL/file:///var/mobile/Containers/Data/Application/1140E7D6-D365-48A4-8BED-17BEA34E3F1E/Documents/testing.reality/Scene_0.compiledscene failure: Asset provider load failed: type 'RealityFileAsset' -- RERealityArchive: Failed to read archive entry.
AssetLoadRequest failed because asset failed to load '13277375032756336327 Mesh (RealityFileAsset)URL/file:///var/mobile/Containers/Data/Application/1140E7D6-D365-48A4-8BED-17BEA34E3F1E/Documents/testing.reality/Mesh_0.compiledmesh' (Asset provider load failed: type 'RealityFileAsset' -- RERealityArchive: Failed to open load stream for entry 'assets/Mesh_0.compiledmesh'.)
The order of operations to make this happen:
1. Launch the app.
2. Press "Save Entity" to save the entity.
3. "Open Immersive Space" to view the entity.
4. Press "Save Entity" to overwrite the entity.
5. "Open Immersive Space" to view the entity: the asset load request fails.
Also:
1. Launch the app (the entity should still be saved from the last time the app ran).
2. "Open Immersive Space" to view the entity.
3. Press "Save Entity" to overwrite the entity.
4. "Open Immersive Space" to view the entity: the asset load request fails.
NOTE: It appears I can get it to work slightly better by pressing the "Save Entity" button twice before attempting to view it again in the immersive space.
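The workaround I am currently trying is to write to a temporary URL first and then atomically swap it into place, so the loader never sees a partially rewritten archive. This is just a sketch of the save path, and I am not sure it is the sanctioned approach:

import Foundation
import RealityKit

// Sketch of the workaround: write the entity to a temporary file, then replace
// the old file in a single step so stale archive data is never left behind.
func saveEntityReplacingOldFile(_ entity: Entity, at fileURL: URL) async throws {
    let tempURL = FileManager.default.temporaryDirectory
        .appendingPathComponent(UUID().uuidString)
        .appendingPathExtension(fileURL.pathExtension)

    try await entity.write(to: tempURL)

    if FileManager.default.fileExists(atPath: fileURL.path) {
        // replaceItemAt moves the new file over the old one atomically.
        _ = try FileManager.default.replaceItemAt(fileURL, withItemAt: tempURL)
    } else {
        try FileManager.default.moveItem(at: tempURL, to: fileURL)
    }
}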
I have created a simple app that enters and exits an immersive space. I have not changed the basic code that gets created when you start a new visionOS project.
I have connected a Magic Mouse to the AVP over Bluetooth.
I have added a simple call to print(GCMouse.mice()) and I have also tried print(GCController.controllers()) when the RealityView is launched, inside the body closure. If I do not have the GCMouse/GCController call, everything works fine.
However, with this perfect storm (Magic Mouse connected, enter immersive space, call GCMouse/GCController, exit immersive space) the onDisappear closure is never called, so I cannot reset my ToggleImmersiveSpace button out of disabled state due to transition.
Once I power off and disconnect the mouse, the onDisappear closure is finally called.
I have attempted to profile the issue, and right before onDisappear is finally called after the controllers are disconnected (which is long after dismissImmersiveSpace was called), I see two mentions of some deallocation/destruction in GameController:
-[GCMouse.cxx_destruct]
-[GCPhysicalInputProfile(Pooling) release]
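Separately, I am considering moving the GCMouse query out of the view body and into connect/disconnect notification handlers instead, in case polling inside the RealityView body is what keeps the objects alive. This is just the shape I have in mind; the observer type and names are mine.

import Foundation
import GameController

// Observe mouse connect/disconnect notifications from a small helper object
// instead of calling GCMouse.mice() inside the RealityView body.
final class MouseObserver {
    private var tokens: [NSObjectProtocol] = []

    init() {
        let center = NotificationCenter.default
        tokens.append(center.addObserver(forName: .GCMouseDidConnect,
                                         object: nil, queue: .main) { note in
            if let mouse = note.object as? GCMouse {
                print("Mouse connected: \(mouse)")
            }
        })
        tokens.append(center.addObserver(forName: .GCMouseDidDisconnect,
                                         object: nil, queue: .main) { note in
            print("Mouse disconnected: \(String(describing: note.object))")
        })
    }

    deinit {
        tokens.forEach { NotificationCenter.default.removeObserver($0) }
    }
}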
I would like to take the 3D content of a three.js-based web app and display it as a 3D model in a volumetric window. Is it possible to do this in a manner similar to loading a web page in a WKWebView?
I see no way to scale an entity with a hover effect.
The closest I can find is by using HoverEffectComponent with a shader hover effect. Maybe I can change the scale with a ShaderGraph, but I cannot figure out how.
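For reference, this is the setup I am referring to. It is a minimal sketch: I am assuming the .shader(.default) form of the component, and as I understand it the entity also needs collision and input-target components for any hover effect to apply. Even with this, I only see a way to drive the surface shader on hover, not to change the entity's scale.

import RealityKit

// Minimal hover-effect setup (assumed form). The collision shape is arbitrary.
func configureHoverEffect(on entity: ModelEntity) {
    entity.components.set(InputTargetComponent())
    entity.components.set(CollisionComponent(shapes: [.generateBox(size: [0.2, 0.2, 0.2])]))
    entity.components.set(HoverEffectComponent(.shader(.default)))
}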
I have an attachment anchored to the user's head, and the attachment contains a WKWebView. When I try to interact with the web view, the app crashes with the following errors:
*** Assertion failure in -[UIGestureGraphEdge initWithLabel:sourceNode:targetNode:directed:], UIGestureGraphEdge.m:28
*** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Invalid parameter not satisfying: targetNode'
*** First throw call stack:
(0x18e529340 0x185845e80 0x192c2283c 0x2433874d4 0x243382ebc 0x2433969a8 0x24339635c 0x243396088 0x243907760 0x2438e4c94 0x24397b488 0x24397e28c 0x243976a20 0x242d7fdc0 0x2437e6e88 0x2437e6254 0x18e4922ec 0x18e492230 0x18e49196c 0x18e48bf3c 0x18e48b798 0x1d3156090 0x2438c8530 0x2438cd240 0x19fde0d58 0x19fde0a64 0x19fa5890c 0x10503b0bc 0x10503b230 0x2572247b8)
libc++abi: terminating due to uncaught exception of type NSException
*** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Invalid parameter not satisfying: targetNode'
*** First throw call stack:
(0x18e529340 0x185845e80 0x192c2283c 0x2433874d4 0x243382ebc 0x2433969a8 0x24339635c 0x243396088 0x243907760 0x2438e4c94 0x24397b488 0x24397e28c 0x243976a20 0x242d7fdc0 0x2437e6e88 0x2437e6254 0x18e4922ec 0x18e492230 0x18e49196c 0x18e48bf3c 0x18e48b798 0x1d3156090 0x2438c8530 0x2438cd240 0x19fde0d58 0x19fde0a64 0x19fa5890c 0x10503b0bc 0x10503b230 0x2572247b8)
terminating due to uncaught exception of type NSException
Message from debugger: killed
This is the code for the RealityView:
struct ImmersiveView: View {
    @Environment(AppModel.self) private var appModel

    var body: some View {
        RealityView { content, attachments in
            let anchor = AnchorEntity(AnchoringComponent.Target.head)
            if let sceneAttachment = attachments.entity(for: "test") {
                sceneAttachment.position = SIMD3<Float>(0, 0, -3.5)
                anchor.addChild(sceneAttachment)
            }
            content.add(anchor)
        } attachments: {
            Attachment(id: "test") {
                WebViewWrapper(webView: appModel.webViewModel.webView)
            }
        }
    }
}
This is the appModel:
import SwiftUI
import WebKit

/// Maintains app-wide state
@MainActor
@Observable
class AppModel {
    let immersiveSpaceID = "ImmersiveSpace"
    enum ImmersiveSpaceState {
        case closed
        case inTransition
        case open
    }
    var immersiveSpaceState = ImmersiveSpaceState.closed

    public let webViewModel = WebViewModel()
}

@MainActor
final class WebViewModel {
    let webView = WKWebView()

    func loadViz(_ addressStr: String) {
        guard let url = URL(string: addressStr) else { return }
        webView.load(URLRequest(url: url))
    }
}

struct WebViewWrapper: UIViewRepresentable {
    let webView: WKWebView

    func makeUIView(context: Context) -> WKWebView {
        webView
    }

    func updateUIView(_ uiView: WKWebView, context: Context) {
    }
}
and finally the ContentView where I added a button to load the webpage:
struct ContentView: View {
    @Environment(AppModel.self) private var appModel

    var body: some View {
        VStack {
            ToggleImmersiveSpaceButton()
            Button("Go") {
                appModel.webViewModel.loadViz("http://apple.com")
            }
        }
        .padding()
    }
}
I have been using ARKit to get hand tracking data in a continuous loop by iterating the AnchorUpdateSequence.
I want to try out .predicted hand tracking, but it seems that ARKitSession and HandTrackingProvider do not allow me to enable this feature?
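To clarify what I mean by .predicted: the only place I have seen it is on the RealityKit anchoring side rather than on the ARKit provider, and this sketch reflects my assumption about where it lives. I may well be looking at the wrong API.

import RealityKit

// Sketch of what I believe the RealityKit-side equivalent looks like:
// a hand AnchorEntity whose anchoring uses the .predicted tracking mode.
// (Assumption on my part that .predicted lives here, not on HandTrackingProvider.)
func makePredictedHandAnchor() -> AnchorEntity {
    AnchorEntity(.hand(.right, location: .palm), trackingMode: .predicted)
}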
I have an app that uses GameController to read the inputs of a connected JoyCon. However, the controller also interacts with the OS.
For example, when I press the Home button on the controller, it takes me to the Vision Pro's home view.
Is it possible to disable this interaction while still being able to read the controller inputs inside my app?
I have a gRPC server running inside a Task. When the user takes the headset off and then puts it back on, the gRPC server no longer works.
I would like to detect this so that I can cancel the task (which will effectively close the gRPC server).
I am also using a visual indicator to let the user know whether the server is running, but it does not accurately reflect the state of the server after removing and putting the headset back on.
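What I am currently experimenting with is watching the SwiftUI scene phase, on the assumption that removing the headset moves the scene out of .active. A sketch (the view and state names are just placeholders):

import SwiftUI

// Cancel the gRPC server task when the scene leaves the active phase,
// assuming that taking the headset off backgrounds the scene.
struct ServerStatusView: View {
    @Environment(\.scenePhase) private var scenePhase
    @State private var serverTask: Task<Void, Never>?
    @State private var serverRunning = false

    var body: some View {
        Label(serverRunning ? "Server running" : "Server stopped",
              systemImage: serverRunning ? "circle.fill" : "circle")
            .onChange(of: scenePhase) { _, newPhase in
                if newPhase != .active {
                    serverTask?.cancel()   // effectively closes the gRPC server
                    serverTask = nil
                    serverRunning = false
                }
            }
    }
}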
I have been playing around with the idea of drawing directly onto the pixels of the Vision Pro, as I am working on a telepresence app that streams a live stereoscopic feed from an articulated robot neck to the wearer.
I was playing around in the Compositor Services demo and modified it to show the following.
I created a grid pattern using normalized device coordinates (-1 to 1) and it looks great when it shows up in the simulator as shown below.
I wanted to see the effects of lens distortion on the image, so I ran this modified app on the actual Vision Pro; it seems that each eye sees only a portion of the screen. I have included a capture of a screen recording taken inside the Vision Pro while running the modified app.
The lines appear straight, which says to me that there must be some automatic pre-distortion correction applied (similar to the image shown below taken from an AVP teardown that I cannot link here).
However, I am wondering why the grid appears cropped, and what defines the bounds of the frame?
When I first install and run the app, it requests authorization for hand tracking data. But if I then go to Settings and disable hand tracking for the app, it never asks again; the output of the requestAuthorization(for:) method just says [handTracking: denied].
Any idea why the permission prompt only shows up once and then never again?
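For completeness, this is roughly how I am checking the status. My assumption is that once it comes back denied, the only option is to send the user to Settings rather than expecting the prompt to appear again.

import ARKit

// Roughly how I check hand-tracking authorization. My understanding is that the
// system prompt is only shown while the status is undetermined; after a denial
// (or disabling it in Settings) the status simply comes back as .denied.
func checkHandTrackingAuthorization(session: ARKitSession) async {
    let status = await session.queryAuthorization(for: [.handTracking])
    switch status[.handTracking] {
    case .allowed?:
        print("Hand tracking allowed")
    case .denied?:
        // At this point the user has to re-enable it in Settings; no new prompt appears.
        print("Hand tracking denied; prompt will not be shown again")
    default:
        // Not determined yet: requesting should show the system prompt.
        let result = await session.requestAuthorization(for: [.handTracking])
        print("Requested authorization: \(result)")
    }
}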