Reply to Unexpected Behavior: PointerEvents do not permit simultaneous pencil and multitouch at the same time. Discussing Workarounds
NOTE: I read the spec (https://www.w3.org/TR/pointerevents3/#the-primary-pointer):

"Some devices, operating systems and user agents may ignore the concurrent use of more than one type of pointer input to avoid accidental interactions. For instance, devices that support both touch and pen interactions may ignore touch inputs while the pen is actively being used, to allow users to rest their hand on the touchscreen while using the pen (a feature commonly referred to as "palm rejection"). Currently, it is not possible for authors to suppress this behavior."

Since the iPad can handle simultaneous pencil and touch input natively, I don't see why the web version cannot. Please consider lifting this restriction, or providing a way for authors to opt out of it.
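For reference, this is what I mean by "natively": a minimal UIKit sketch (the class name and setup are illustrative) in which pencil and finger touches arrive concurrently in the same view, with no forced palm rejection at this level.

```swift
import UIKit

// Minimal sketch: a view that receives Apple Pencil and finger touches at the
// same time. UIKit delivers both concurrently when multiple touch is enabled.
class DualInputView: UIView {

    override init(frame: CGRect) {
        super.init(frame: frame)
        isMultipleTouchEnabled = true   // allow concurrent touches
    }

    required init?(coder: NSCoder) {
        super.init(coder: coder)
        isMultipleTouchEnabled = true
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        for touch in touches {
            switch touch.type {
            case .pencil:
                print("pencil down at \(touch.location(in: self))")
            case .direct:
                print("finger down at \(touch.location(in: self))")
            default:
                break
            }
        }
    }
}
```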
Topic: Safari & Web SubTopic: General
Jan ’25
Reply to WWDC 25 RemoteImmersiveSpace - Support for Passthrough Mode? RealityKit?
@Vision Pro Engineer Thanks Ricardo, that helps. It’s too bad that passthrough mode and RealityKit aren’t supported. Are these inherent limitations, or are they things a feedback request would be useful for? Could you share why mixed mode wasn’t possible?

Can you think of any potential temporary workarounds? I can imagine it being nice to do some extremely expensive preprocessing of geometry using compute and then transferring the results to the Vision Pro, regardless of the immersion mode. Is there another API I’m unaware of that could be used just for quickly transferring arbitrary buffer data to visionOS? Something easier than a plain network connection. Imagine doing some geometry processing on macOS and sending the results to the Vision Pro.
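For now the only route I can see is the plain networking one. Here is a minimal sketch of that fallback using the Network framework; the port number, the length-prefix framing, and the absence of error handling are assumptions for illustration, not an Apple-provided transfer API.

```swift
import Foundation
import Network

// Sketch only: the Mac pushes an arbitrary Data buffer to visionOS over TCP.
// The payload is prefixed with its length so the receiver knows how much to read.

// macOS side: connect and send the processed geometry buffer.
func sendBuffer(_ buffer: Data, toHost host: String, port: UInt16 = 7777) {
    let connection = NWConnection(host: NWEndpoint.Host(host),
                                  port: NWEndpoint.Port(rawValue: port)!,
                                  using: .tcp)
    connection.start(queue: .global())

    var length = UInt64(buffer.count).bigEndian
    var payload = Data(bytes: &length, count: MemoryLayout<UInt64>.size)
    payload.append(buffer)

    connection.send(content: payload, completion: .contentProcessed { error in
        if let error { print("send failed: \(error)") }
        connection.cancel()
    })
}

// visionOS side: listen, read the 8-byte length prefix, then the buffer itself.
func startReceiving(port: UInt16 = 7777, onBuffer: @escaping (Data) -> Void) throws {
    let listener = try NWListener(using: .tcp, on: NWEndpoint.Port(rawValue: port)!)
    listener.newConnectionHandler = { connection in
        connection.start(queue: .global())
        connection.receive(minimumIncompleteLength: 8, maximumLength: 8) { header, _, _, _ in
            guard let header else { return }
            let length = header.withUnsafeBytes { $0.load(as: UInt64.self) }.bigEndian
            connection.receive(minimumIncompleteLength: Int(length),
                               maximumLength: Int(length)) { body, _, _, _ in
                if let body { onBuffer(body) }
                connection.cancel()
            }
        }
    }
    listener.start(queue: .global())
}
```

Bonjour (NWBrowser plus a service advertised on the NWListener) would make discovery nicer, but a hard-coded address is enough to test the idea.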
Topic: Spatial Computing SubTopic: General
Jun ’25
Reply to WWDC 25 RemoteImmersiveSpace - Support for Passthrough Mode? RealityKit?
@Vision Pro Engineer I’m not sure if you’ll see this, but one more follow-up question: is the remote immersive space purely one-way, with the Mac rendering content for display on the Vision Pro without user interaction, or is there a way for the Vision Pro side of the app to receive user input and have the Mac side react to that input bidirectionally? It’s unclear what information the Mac side gets.

I suppose in the worst case I could use networking code to send Vision Pro-side user input, if that works as normal, but where would I write the Vision Pro input-handling code in remote immersive space mode? Can I refer to existing examples for a regular immersive space, more or less, or is there some kind of limitation? If I missed this functionality, then whoops; clarity would still be appreciated.
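To make that fallback concrete: if input does arrive on the visionOS side as usual, I would picture something like the following hypothetical sketch, reusing a Network framework connection like the one in my previous reply to push input events back to the Mac. The event shape, and the assumption that gestures work normally in this mode, are mine, not anything confirmed in the docs.

```swift
import Foundation
import Network

// Hypothetical: whether the visionOS side of a remote immersive space even
// receives this input is exactly my question; this only shows the networking
// fallback I would try if it does.

/// A minimal, Codable description of a user input event to ship back to the Mac.
struct RemoteInputEvent: Codable {
    var kind: String            // e.g. "tap", "drag"
    var position: SIMD3<Float>  // location in whatever space the event uses
    var timestamp: TimeInterval
}

/// visionOS side: serialize the event and send it over an already-open connection.
func send(_ event: RemoteInputEvent, over connection: NWConnection) {
    guard let payload = try? JSONEncoder().encode(event) else { return }
    connection.send(content: payload, completion: .contentProcessed { error in
        if let error { print("failed to send input event: \(error)") }
    })
}

/// Mac side: decode an incoming event and react to it.
func handleIncoming(_ data: Data) {
    guard let event = try? JSONDecoder().decode(RemoteInputEvent.self, from: data) else { return }
    print("user \(event.kind) at \(event.position)")
}
```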
Topic: Spatial Computing SubTopic: General
Jun ’25