I see no way to scale an entity with a hover effect.
The closest I can find is HoverEffectComponent with a shader hover effect. Maybe I can change the scale through a ShaderGraph material, but I cannot figure out how.
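For reference, this is roughly how I am attaching the shader hover effect (a minimal sketch assuming the visionOS 2 HoverEffectComponent API; the sphere collision shape is just a placeholder):
import RealityKit

// Minimal sketch: attach a shader-based hover effect so a Reality Composer Pro
// ShaderGraph material can read the hover state.
func configureHover(on entity: Entity) {
    // Hover requires the entity to be an input target with collision shapes.
    entity.components.set(InputTargetComponent())
    entity.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.1)]))
    // .shader exposes the hover state to the material; it does not scale the entity.
    entity.components.set(HoverEffectComponent(.shader(.default)))
}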
I have an attachment anchored to the user's head, with a WKWebView as the attachment content. When I try to interact with the web view, the app crashes with the following errors:
*** Assertion failure in -[UIGestureGraphEdge initWithLabel:sourceNode:targetNode:directed:], UIGestureGraphEdge.m:28
*** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Invalid parameter not satisfying: targetNode'
*** First throw call stack:
(0x18e529340 0x185845e80 0x192c2283c 0x2433874d4 0x243382ebc 0x2433969a8 0x24339635c 0x243396088 0x243907760 0x2438e4c94 0x24397b488 0x24397e28c 0x243976a20 0x242d7fdc0 0x2437e6e88 0x2437e6254 0x18e4922ec 0x18e492230 0x18e49196c 0x18e48bf3c 0x18e48b798 0x1d3156090 0x2438c8530 0x2438cd240 0x19fde0d58 0x19fde0a64 0x19fa5890c 0x10503b0bc 0x10503b230 0x2572247b8)
libc++abi: terminating due to uncaught exception of type NSException
*** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Invalid parameter not satisfying: targetNode'
*** First throw call stack:
(0x18e529340 0x185845e80 0x192c2283c 0x2433874d4 0x243382ebc 0x2433969a8 0x24339635c 0x243396088 0x243907760 0x2438e4c94 0x24397b488 0x24397e28c 0x243976a20 0x242d7fdc0 0x2437e6e88 0x2437e6254 0x18e4922ec 0x18e492230 0x18e49196c 0x18e48bf3c 0x18e48b798 0x1d3156090 0x2438c8530 0x2438cd240 0x19fde0d58 0x19fde0a64 0x19fa5890c 0x10503b0bc 0x10503b230 0x2572247b8)
terminating due to uncaught exception of type NSException
Message from debugger: killed
This is the code for the RealityView:
import SwiftUI
import RealityKit

struct ImmersiveView: View {
    @Environment(AppModel.self) private var appModel

    var body: some View {
        RealityView { content, attachments in
            // Anchor the attachment to the user's head.
            let anchor = AnchorEntity(AnchoringComponent.Target.head)
            if let sceneAttachment = attachments.entity(for: "test") {
                sceneAttachment.position = SIMD3<Float>(0, 0, -3.5)
                anchor.addChild(sceneAttachment)
            }
            content.add(anchor)
        } attachments: {
            Attachment(id: "test") {
                WebViewWrapper(webView: appModel.webViewModel.webView)
            }
        }
    }
}
This is the appModel:
import SwiftUI
import WebKit

/// Maintains app-wide state
@MainActor
@Observable
class AppModel {
    let immersiveSpaceID = "ImmersiveSpace"

    enum ImmersiveSpaceState {
        case closed
        case inTransition
        case open
    }

    var immersiveSpaceState = ImmersiveSpaceState.closed
    public let webViewModel = WebViewModel()
}

@MainActor
final class WebViewModel {
    let webView = WKWebView()

    func loadViz(_ addressStr: String) {
        guard let url = URL(string: addressStr) else { return }
        webView.load(URLRequest(url: url))
    }
}

struct WebViewWrapper: UIViewRepresentable {
    let webView: WKWebView

    func makeUIView(context: Context) -> WKWebView {
        webView
    }

    func updateUIView(_ uiView: WKWebView, context: Context) {}
}
And finally, the ContentView, where I added a button to load the webpage:
import SwiftUI

struct ContentView: View {
    @Environment(AppModel.self) private var appModel

    var body: some View {
        VStack {
            ToggleImmersiveSpaceButton()
            Button("Go") {
                appModel.webViewModel.loadViz("http://apple.com")
            }
        }
        .padding()
    }
}
I have created a simple app that enters and exits an immersive space. I have not changed the basic code that gets created when you start a new visionOS project.
I have connected a Magic Mouse to the Apple Vision Pro over Bluetooth.
I have added a simple call to print(GCMouse.mice()), and I have also tried print(GCController.controllers()), inside the body closure when the RealityView is launched. If I do not make the GCMouse/GCController call, everything works fine.
However, with this perfect storm (Magic Mouse connected, enter the immersive space, call GCMouse/GCController, exit the immersive space), the onDisappear closure is never called, so I cannot reset my ToggleImmersiveSpaceButton out of the disabled state it enters during the transition.
Only once I power off and disconnect the mouse is the onDisappear closure finally called.
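For reference, a minimal sketch of the setup (the type and property names come from the default visionOS template; the print calls are my only addition):
import SwiftUI
import RealityKit
import GameController

struct ImmersiveView: View {
    @Environment(AppModel.self) private var appModel

    var body: some View {
        // The only change from the template: querying GameController state
        // inside the body closure.
        let _ = print(GCMouse.mice())
        let _ = print(GCController.controllers())

        RealityView { content in
            // unchanged template content
        }
        .onDisappear {
            // Never called while the Magic Mouse stays connected.
            appModel.immersiveSpaceState = .closed
        }
    }
}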
I have attempted to profile the issue, and right before onDisappear is finally called (long after dismissImmersiveSpace, once the controllers are disconnected), I see two mentions of GameController deallocation/destruction:
-[GCMouse.cxx_destruct]
-[GCPhysicalInputProfile(Pooling) release]
I have been using ARKit to get hand-tracking data in a continuous loop by consuming the AnchorUpdateSequence.
I want to try out .predicted hand tracking, but it seems that the ARKitSession and HandTrackingProvider APIs do not let me enable this feature. Am I missing something?
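The only place I can find a .predicted option is RealityKit's AnchoringComponent tracking mode, not the ARKit provider itself (a sketch, assuming the visionOS 2 API):
import RealityKit

// Sketch: a RealityKit hand anchor can opt into predicted tracking
// (visionOS 2), but I see no equivalent switch on HandTrackingProvider.
let palmAnchor = AnchorEntity(
    .hand(.left, location: .palm),
    trackingMode: .predicted
)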
I have an app that uses GameController to read the inputs of a connected Joy-Con. However, the controller also interacts with the OS.
For example, when I press the Home button on the controller, it brings me to the home of my Vision Pro.
Is it possible to disable this interaction while still being able to read the controller inputs inside my app?
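For context, this is roughly how I read the inputs today (a sketch; I am using the extended gamepad profile and a toy handler here for illustration):
import GameController

// Sketch: observe controller connections and read the extended gamepad
// profile. The system still handles the Home button itself.
final class ControllerReader {
    private var observer: NSObjectProtocol?

    func start() {
        observer = NotificationCenter.default.addObserver(
            forName: .GCControllerDidConnect, object: nil, queue: .main
        ) { notification in
            guard let controller = notification.object as? GCController,
                  let gamepad = controller.extendedGamepad else { return }
            gamepad.valueChangedHandler = { _, element in
                print("input:", element)  // app-side reading works fine
            }
        }
    }
}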
I have a gRPC server running inside a task. When the user takes the headset off and puts it back on, the gRPC server no longer works.
I would like to detect this event so that I can cancel the task (which effectively shuts down the gRPC server).
I am also using a visual indicator to show the user whether the server is running, but it does not accurately reflect the server's state after the headset is removed and put back on.
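What I am considering is watching the SwiftUI scene phase and cancelling the task when the scene leaves .active (a sketch; serverTask is hypothetical, and the assumption that removing the headset deactivates the scene is mine):
import SwiftUI

struct ServerLifecycleView: View {
    @Environment(\.scenePhase) private var scenePhase
    @State private var serverTask: Task<Void, Never>?
    @State private var serverRunning = false

    var body: some View {
        // Hypothetical indicator standing in for my real status view.
        Circle()
            .fill(serverRunning ? .green : .red)
            .onChange(of: scenePhase) { _, newPhase in
                // Assumption: taking the headset off drives the scene out of
                // .active, so treat that as "stop the server".
                if newPhase != .active {
                    serverTask?.cancel()
                    serverTask = nil
                    serverRunning = false
                }
            }
    }
}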
I have been playing around with the idea of drawing directly onto the pixels of the Vision Pro, as I am working on a telepresence app that streams a live stereoscopic feed from an articulated robot neck to the wearer.
I was playing around in the Compositor Services demo and modified it to show the following.
I created a grid pattern using normalized device coordinates (-1 to 1) and it looks great when it shows up in the simulator as shown below.
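Concretely, the grid is built something like this (a simplified sketch of my modification; the number of divisions is arbitrary):
import simd

// Sketch: line segments for a grid in normalized device coordinates,
// spanning -1...1 on both axes (endpoints are paired per segment).
func makeGridVertices(divisions: Int = 10) -> [SIMD2<Float>] {
    var vertices: [SIMD2<Float>] = []
    for i in 0...divisions {
        let t = Float(i) / Float(divisions) * 2 - 1  // map 0...divisions to -1...1
        vertices += [SIMD2(t, -1), SIMD2(t, 1)]      // vertical line
        vertices += [SIMD2(-1, t), SIMD2(1, t)]      // horizontal line
    }
    return vertices
}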
I wanted to see the effects of lens distortion on the image, so I ran this modified app on the actual Vision Pro; it seems that each eye sees only a portion of the grid. I have included a screen capture of a screen recording taken inside the Vision Pro while running the modified app.
The lines appear straight, which tells me that some automatic pre-distortion correction must be applied (similar to the image shown below, taken from an AVP teardown that I cannot link here).
However, I am wondering why the grid appears cropped, and what defines the bounds of the frame?
When I first install and run the app, it requests authorization for hand-tracking data. But if I then go to Settings and disable hand tracking for the app, it never asks again; the output of requestAuthorization(for:) just says [handTracking: denied].
Any idea why the permission prompt only shows up once and then never again?
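For reference, this is the check I am running (a sketch; the output in the comment is what I actually see):
import ARKit

// Sketch: request hand-tracking authorization. After disabling hand tracking
// for the app in Settings, this resolves straight to denied with no prompt.
func checkHandTrackingAuthorization() async {
    let session = ARKitSession()
    let result = await session.requestAuthorization(for: [.handTracking])
    print(result)  // [handTracking: denied]
}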
I am new to learning about concurrency and I am working on an app that uses the HandTrackingProvider class.
In the Happy Beam sample code, there is a HeartGestureModel which holds a reference to a HandTrackingProvider() and writes to a struct called HandUpdates inside the HeartGestureModel class through the publishHandTrackingUpdates() function. On another thread, a function called computeTransformOfUserPerformedHeartGesture() reads the values of HandUpdates to determine whether the user is making the appropriate gesture.
My question is, how is the code handling the constant read and write to the HandUpdates struct?
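To make the question concrete, here is the structure as I understand it (a paraphrased sketch, not a verbatim copy of the sample):
import ARKit
import Observation

// Paraphrased sketch of the pattern I am asking about: one task writes the
// latest hand anchors into a plain struct while another function reads them.
@Observable
final class GestureModel {
    struct HandUpdates {
        var left: HandAnchor?
        var right: HandAnchor?
    }

    let handTracking = HandTrackingProvider()
    var latestHandTracking = HandUpdates(left: nil, right: nil)

    // Writer: runs in its own task, updating the struct on every anchor update.
    func publishHandTrackingUpdates() async {
        for await update in handTracking.anchorUpdates {
            switch update.anchor.chirality {
            case .left:  latestHandTracking.left = update.anchor
            case .right: latestHandTracking.right = update.anchor
            }
        }
    }

    // Reader: called elsewhere to inspect the most recent anchors.
    func currentAnchors() -> HandUpdates {
        latestHandTracking
    }
}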