Post

Replies

Boosts

Views

Activity

Reply to Eye tracking permission
@Matt Cox I have similar qualms about the limitations on custom rendering. I think a lot of this could be partially solved by, as you suggest, allowing mesh streaming rather than just texture streaming. A better solution would be permitting custom Metal rendering outside of fully immersive mode. I can imagine Compositor Services + Metal exposing special visionOS CPU-side Metal calls that let the programmer specify where to render the camera data and what to occlude. For custom shaders (which we really will need at some point, since surface shaders are pretty limiting), there'd need to be proper sandboxing so that reading the camera's color/depth couldn't leak back to the CPU. Some kind of Metal built-in read/function-pointer support? I think you ought to file a feature request, for what it's worth. We're not the only ones who've raised this point, and pointing to specific examples probably helps.
Topic: App & System Services SubTopic: Core OS Tags:
Mar ’24
Reply to Non-convex collision?
The solution, I think, is to decompose the concave mesh into convex meshes. If it's a static mesh, you're in luck: optimal performance doesn't matter much, since you just need a result in a reasonable amount of time. Resave it as a collection of convex meshes for reloading in the future. If it's a dynamic mesh, you're stuck doing the decomposition at runtime. Either way, this is a very normal thing to do; concave collision detection is much more expensive than convex. Note: I think it would be useful for both the Metal Performance Shaders and RealityKit APIs to include a built-in algorithm for this. (Maybe file a feature request?)
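As a rough sketch of the static-mesh path in RealityKit, assuming the decomposition itself was done offline (e.g. with a tool like V-HACD) and the convex pieces were re-saved as separate meshes (`setCollision` and `convexPieces` are hypothetical names):

```swift
import RealityKit

// Hedged sketch: the concave model was decomposed offline into convex
// piece meshes; build one convex-hull collision shape per piece.
func setCollision(on entity: ModelEntity,
                  convexPieces: [MeshResource]) async throws {
    var shapes: [ShapeResource] = []
    for piece in convexPieces {
        // Convex hull of an already-convex piece is the piece itself.
        shapes.append(try await ShapeResource.generateConvex(from: piece))
    }
    // A single CollisionComponent can carry the whole set of convex shapes.
    entity.components.set(CollisionComponent(shapes: shapes))
}
```

Since the set of shapes lives in one component, the physics system treats the union of convex hulls as the entity's collider, which approximates the original concave volume.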
Topic: Spatial Computing SubTopic: General Tags:
Mar ’24
Reply to Non-convex collision?
Are you able to access the mesh vertices, indices, etc. with the API? Worst case, you could create convex meshes from the concave mesh yourself.
Topic: Spatial Computing SubTopic: General Tags:
Mar ’24
Reply to Vision Pro - lets join forces to improve VisionOS platform
Custom Metal rendering pipelines in single-app passthrough mode, with occlusion/depth, custom shaders, and composition with the real world. (Note: this is not a request for camera access; the OS backend could handle that however possible, for example by sandboxing pixel reads in shaders to prevent insecure access to the data on the CPU.)
Mar ’24
Reply to Non-convex collision?
@JayDev85 Are you able to access the mesh vertices, indices, etc.? Worst case, you could create convex meshes from the concave mesh yourself.
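For the do-it-yourself route, the vertex data should be reachable through `MeshResource.contents`. A rough sketch (`allPositions` is a hypothetical helper; this assumes `part.positions.elements` exposes the buffer as `[SIMD3<Float>]`):

```swift
import RealityKit

// Hedged sketch: gather every vertex position from a MeshResource so it
// can be fed to your own convex-decomposition step.
func allPositions(of mesh: MeshResource) -> [SIMD3<Float>] {
    var points: [SIMD3<Float>] = []
    for model in mesh.contents.models {
        for part in model.parts {
            // `positions` is a typed buffer of SIMD3<Float> vertices.
            points.append(contentsOf: part.positions.elements)
        }
    }
    return points
}
```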
Topic: Spatial Computing SubTopic: General Tags:
Mar ’24
Reply to Metal API on visionOS?
I think it would be great for a future OS version to find a way to extend custom Metal renderers to passthrough mode. Specifically the single-application mode, not the shared space, for simplicity and sandboxing from other apps. This would allow many shader effects that are impossible with surface shaders alone. Note: this is without asking for camera access; just some way to take advantage of occlusion and of seeing the real world automatically. I imagine you'd need to disable pixel readback or opaquely insert the privacy-restricted textures.
Topic: Graphics & Games SubTopic: General Tags:
Feb ’24
Reply to JavaScript Core Optimization on Mobile?
@eskimo BrowserKit would be perfect (in fact, overkill), but I gather it's EU-only due to the new regulations. Too bad. I really just wanted to use JS as a stand-in for a scripting layer like Lua. Two things are unclear, however: does BrowserKit even exist beyond iOS (iOS is the only platform listed), and does it still refuse to work if I'm not uploading to the App Store? For example, a web-based scripting layer could be useful purely for local development.
Topic: Safari & Web SubTopic: General Tags:
Feb ’24
Reply to JavaScript Core Optimization on Mobile?
I want the ability to hook into native code for functionality that doesn't exist in the Safari browser, so WKWebView doesn't work. For example, WebXR with AR passthrough isn't available, even behind flags. Passing data from native code to a WKWebView incurs too much latency for interactive frame rates (this has been demonstrated). So I think the only option is JavaScriptCore. Really, I just want to script with optimized JS; I don't need the browser part.
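A minimal sketch of that browser-free setup with JavaScriptCore (`JSContext` and `setObject(_:forKeyedSubscript:)` are the real JSCore APIs; note that without a JIT entitlement, scripts run interpreter-only on iOS, which is exactly the optimization limit this thread is about):

```swift
import JavaScriptCore

// Hedged sketch: JavaScriptCore as a scripting layer with a native hook,
// no browser involved.
let context = JSContext()!

// Expose a native function to scripts under the name "log".
let log: @convention(block) (String) -> Void = { message in
    print("script says:", message)
}
context.setObject(log, forKeyedSubscript: "log" as NSString)

// Run a script that calls back into native code.
context.evaluateScript("log('hello from JS'); var answer = 6 * 7;")
let answer = context.objectForKeyedSubscript("answer")?.toInt32()  // 42
```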
Topic: Safari & Web SubTopic: General Tags:
Feb ’24
Reply to Metal Shading Language vs Reality Composer Pro Shader Graph vs Unity Shader Graph vs MaterialX Shader
Chiming in to say that this would be an excellent reason to support custom Metal shaders in the future: it would allow for easier porting of applications like this.
Topic: App & System Services SubTopic: Core OS Tags:
Feb ’24
Reply to Restore window positions on visionOS
Wouldn’t it make sense to save windows’ positions relative to each other under a root hierarchy, rather than having them overlap when the app is re-launched? In other words, in the visionOS internals, save the cluster of windows under a single root transform. When the user returns to the app, place that root relative to the user’s current gaze and restore each window from the saved hierarchy, so the windows reappear positioned as they were before, just repositioned with respect to the user’s new initial viewpoint.
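The transform math behind that proposal is simple; a sketch with simd (all names here are hypothetical):

```swift
import simd

// Sketch of the proposed restore logic.
// savedRoot:   the cluster's root transform captured at save time.
// savedWindow: one window's world transform captured at save time.
// newRoot:     a fresh root placed relative to the user's gaze at relaunch.
func restoredTransform(savedWindow: simd_float4x4,
                       savedRoot: simd_float4x4,
                       newRoot: simd_float4x4) -> simd_float4x4 {
    // The window's pose relative to the old root...
    let relative = savedRoot.inverse * savedWindow
    // ...re-applied under the new, gaze-aligned root.
    return newRoot * relative
}
```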
Topic: App & System Services SubTopic: Core OS Tags:
Aug ’23
Reply to Xcode 15 Beta Swift/C++ Interop with C++20
I’m not familiar enough to help with the problem, I suppose. I agree, it’s weird.
Topic: Programming Languages SubTopic: Swift Tags:
Aug ’23
Reply to Xcode 15 Beta Swift/C++ Interop with C++20
@AndrewKaster Wait, are we sure Apple Clang supports C++20 modules? If I recall correctly, support is only partial, so I wouldn't expect it to work well yet. I suspect the solution is to avoid modules for now.
Topic: Programming Languages SubTopic: Swift Tags:
Aug ’23
Reply to Xcode 15 Beta Swift/C++ Interop with C++20
Have you changed the compiler flags in the project settings/targets? If -std=c++20 doesn’t work, try -std=c++2b. Also, check the feature-support pages: char8_t requires Xcode 15.
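For reference, the relevant settings in an .xcconfig (or the target's Build Settings pane) would look something like this (setting names from memory; verify them against the Xcode build-settings reference):

```
// Xcode build settings (xcconfig syntax). "c++20"/"c++2b" availability
// depends on the toolchain; check the release notes for per-feature
// support (e.g. char8_t).
CLANG_CXX_LANGUAGE_STANDARD = c++20
SWIFT_OBJC_INTEROP_MODE = objcxx   // only when mixing Swift and C++
```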
Topic: Programming Languages SubTopic: Swift Tags:
Aug ’23
Reply to How to Convert a MTLTexture into a TextureResource?
I need a solution that uses textures I’ve created with a regular Metal renderer, not drawable-backed textures. That is, I need arbitrarily sized textures (possibly many of them) that can be applied in a RealityKit scene. If DrawableQueue is somehow usable for this case (arbitrary resolutions, many textures, per-frame updates), could someone share an example? The docs don’t show anything specific. Thanks!
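In case it helps others hitting the same question, my understanding of the DrawableQueue path looks roughly like this (hedged sketch; I haven't verified the exact initializer signatures, and the blit assumes the source and drawable match in pixel format and size):

```swift
import Metal
import RealityKit

// Hedged sketch: one DrawableQueue feeding a RealityKit TextureResource,
// updated each frame from a texture produced by a custom Metal renderer.
func attachQueue(to texture: TextureResource,
                 width: Int, height: Int) throws -> TextureResource.DrawableQueue {
    let descriptor = TextureResource.DrawableQueue.Descriptor(
        pixelFormat: .bgra8Unorm,
        width: width, height: height,
        usage: [.renderTarget, .shaderRead],
        mipmapsMode: .none)
    let queue = try TextureResource.DrawableQueue(descriptor)
    // Redirect the existing TextureResource so it samples from the queue.
    texture.replace(withDrawables: queue)
    return queue
}

// Per frame: blit the renderer's MTLTexture into the next drawable.
func push(_ source: MTLTexture,
          to queue: TextureResource.DrawableQueue,
          commandBuffer: MTLCommandBuffer) throws {
    let drawable = try queue.nextDrawable()
    guard let blit = commandBuffer.makeBlitCommandEncoder() else { return }
    blit.copy(from: source, to: drawable.texture)  // formats/sizes must match
    blit.endEncoding()
    drawable.present()  // hands the frame to RealityKit
}
```

One queue per material texture should cover the "many textures" case, at the cost of one blit per texture per frame.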
Topic: Graphics & Games SubTopic: General Tags:
Aug ’23
Reply to When using ARKit, why can’t you get the front-facing and back-facing camera feeds at once?
Might an engineer comment? I wonder whether this is a reasonable feature request or a hard limitation.
Topic: Spatial Computing SubTopic: ARKit Tags:
Aug ’23