Reply to Eye tracking permission
@Matt Cox I have similar qualms about the limitations on custom rendering. A lot of this could be partially solved by, as you suggest, allowing mesh streaming rather than just texture streaming. A better solution would be permitting custom Metal rendering outside of fully immersive mode. I can imagine Compositor Services + Metal exposing special visionOS CPU-side Metal calls that let the programmer specify where to render the camera data and what to occlude. For custom shaders (which we really will need at some point, since surface shaders are pretty limiting), there would need to be proper sandboxing so that reading the camera's color/depth couldn't leak back to the CPU. Some kind of Metal-builtin read/function-pointer support, perhaps? I think you ought to file a feature request, for what it's worth. We're not the only ones who've raised this point, and pointing to specific examples probably helps.
Topic: App & System Services SubTopic: Core OS Tags:
Mar ’24
Reply to Non-convex collision?
The solution, I think, is to decompose the concave mesh into convex meshes. If the mesh is static, you're in luck: optimal performance doesn't matter much, since you just need a result in a reasonable amount of time. Resave the result as a collection of convex meshes for faster reloading in the future. If the mesh is dynamic, you're kind of stuck doing the decomposition at runtime. This is a very normal thing to do; concave collision detection is more expensive than convex. Note: I think it would be useful to include a built-in decomposition algorithm in both the Metal Performance Shaders and RealityKit APIs. (Maybe file a feature request?)
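To illustrate the core predicate behind such a decomposition, here is a minimal pure-Swift sketch (2D for brevity; the types and function names are illustrative, not a RealityKit API). A polygon is convex exactly when the cross products of consecutive edge pairs all share one sign; a decomposition pass splits at vertices where the sign flips:

```swift
// Illustrative 2D convexity check. A concave decomposition pass would
// repeatedly split the polygon at reflex vertices (where the sign flips)
// until every piece passes this test.
struct Point {
    var x: Double
    var y: Double
}

func isConvex(_ polygon: [Point]) -> Bool {
    let n = polygon.count
    guard n >= 3 else { return false }
    var sign = 0.0
    for i in 0..<n {
        let a = polygon[i]
        let b = polygon[(i + 1) % n]
        let c = polygon[(i + 2) % n]
        // z-component of (b - a) x (c - b)
        let cross = (b.x - a.x) * (c.y - b.y) - (b.y - a.y) * (c.x - b.x)
        if cross != 0 {
            if sign == 0 {
                sign = cross
            } else if sign * cross < 0 {
                return false // reflex vertex: the mesh is concave here
            }
        }
    }
    return true
}
```

In 3D the same idea applies per edge against face normals, and in practice you'd reach for an established algorithm (e.g. approximate convex decomposition) rather than rolling your own.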
Topic: Spatial Computing SubTopic: General Tags:
Mar ’24
Reply to Metal API on visionOS?
I think it would be great for a future OS version to find a way to extend custom Metal renderers to passthrough mode, specifically the single-application mode (not the shared space), for simplicity and for sandboxing from other apps. This would allow many shader effects that are impossible with surface shaders alone. To be clear, I'm not asking for access to the camera, just some way to take advantage of occlusion and of seeing the real world automatically. I imagine you'd need to disable pixel readback or insert the privacy-restricted textures opaquely.
Topic: Graphics & Games SubTopic: General Tags:
Feb ’24
Reply to JavaScript Core Optimization on Mobile?
@eskimo BrowserKit would be perfect, and is in fact overkill, but I gather it's EU-only due to the new regulations. Too bad. I really just wanted to use JS as a stand-in for a scripting layer like Lua. Two things are unclear, though: does BrowserKit exist beyond iOS (only iOS is listed), and does it refuse to work even if I'm not uploading to the App Store? For example, there could be utility in having a web-based scripting layer just for local development.
Topic: Safari & Web SubTopic: General Tags:
Feb ’24
Reply to JavaScript Core Optimization on Mobile?
I want the ability to hook into native code for functionality that doesn't exist in the Safari browser, so WKWebView doesn't work. For example, WebXR with AR passthrough doesn't exist, even behind flags. Passing data between native code and WKWebView incurs too much latency for interactive-time work (this has been shown). So I think the only option is JSCore. Really, I just want to be able to script with optimized JS; I don't need the browser part.
Topic: Safari & Web SubTopic: General Tags:
Feb ’24
Reply to Restore window positions on visionOS
Wouldn’t it make sense to save windows’ positions relative to each other under a root hierarchy, rather than having them overlap when the app relaunches? In the visionOS internals, the cluster of windows could be saved under a root transform that records their relative positions. When the user returns to the app, the windows would be restored relative to the user’s gaze, using the saved hierarchy to position them as they were before, just re-anchored to the user’s new initial viewpoint.
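The scheme above can be sketched in a few lines of plain Swift. This is purely illustrative (the types and function names are mine, not a visionOS API): on suspend, store each window's offset from a cluster root; on relaunch, re-anchor the whole cluster at a new root placed in front of the user's gaze.

```swift
// Minimal sketch of save/restore of a window cluster relative to a root
// transform. Rotation is omitted for brevity; a real implementation would
// store full transforms.
struct Vec3: Equatable {
    var x: Double, y: Double, z: Double
    static func + (l: Vec3, r: Vec3) -> Vec3 { Vec3(x: l.x + r.x, y: l.y + r.y, z: l.z + r.z) }
    static func - (l: Vec3, r: Vec3) -> Vec3 { Vec3(x: l.x - r.x, y: l.y - r.y, z: l.z - r.z) }
}

// On suspend: persist each window's offset from the cluster root.
func saveCluster(windows: [Vec3], root: Vec3) -> [Vec3] {
    windows.map { $0 - root }
}

// On relaunch: rebuild the cluster at a new root in front of the user's
// current gaze, preserving the saved local hierarchy.
func restoreCluster(offsets: [Vec3], newRoot: Vec3) -> [Vec3] {
    offsets.map { newRoot + $0 }
}
```

The windows keep their arrangement relative to one another; only the cluster as a whole moves to the new viewpoint.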
Topic: App & System Services SubTopic: Core OS Tags:
Aug ’23
Reply to How to Convert a MTLTexture into a TextureResource?
I need a solution that uses textures I’ve created with a regular Metal renderer, not drawable textures. That is, I need arbitrarily sized textures (possibly many of them) that can be applied in a RealityKit scene. If DrawableQueue is somehow usable for this case (arbitrary-resolution textures, many of them, updated per frame), could someone share an example? The docs don’t show anything specific. Thanks!
Topic: Graphics & Games SubTopic: General Tags:
Aug ’23