Post · Replies · Boosts · Views · Activity

External Peripheral Support on Vision Pro?
Does the Vision Pro allow USB peripherals like cameras, microphones, or video feeds from an iPhone or iPad? Can I use AVFoundation to access external camera feeds or microphones? Note that I am not asking about the internal cameras, which I am aware are off-limits. One use case is to support multiple viewing angles, comparable to what we do with slide projectors. For example: draw on an iPad lying flat on your desk while wearing the Vision Pro in full passthrough mode, and simultaneously mirror the iPad’s screen onto multiple walls in real time with minimal latency (over a Thunderbolt connection), similar to how I can use QuickTime on macOS to mirror my iPad’s screen.
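For reference, this is roughly how external camera and microphone discovery works with AVFoundation on iOS 17 / macOS 14 (using the .external and .microphone device types); whether visionOS returns anything from this kind of query is exactly what I’m asking, so treat it as a sketch, not confirmed visionOS behavior.

```swift
import AVFoundation

// Sketch: external-device discovery as it works on iOS 17 / macOS 14.
// Whether visionOS ever surfaces devices here is the open question.
let discovery = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.external, .microphone],
    mediaType: nil,
    position: .unspecified
)

let session = AVCaptureSession()
if let camera = discovery.devices.first(where: { $0.hasMediaType(.video) }),
   let input = try? AVCaptureDeviceInput(device: camera),
   session.canAddInput(input) {
    session.addInput(input)

    // Frames arrive through the video data output's sample buffer delegate.
    let output = AVCaptureVideoDataOutput()
    if session.canAddOutput(output) {
        session.addOutput(output)
    }
    session.startRunning()
}
```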
1 · 0 · 645 · Feb ’24
Sample Code for visionOS Metal?
There is a tutorial for fully immersive Metal rendering on visionOS here (https://developer.apple.com/documentation/compositorservices/drawing_fully_immersive_content_using_metal?language=objc), but there is no downloadable sample project. Would Apple please provide sample code? The setup is non-trivial.
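In case it helps anyone else piecing this together, here is the skeleton I’ve gathered from that article; the RenderEngine type is a placeholder for the render loop the article describes, and the configuration is abbreviated, so this is a sketch rather than verified sample code.

```swift
import SwiftUI
import CompositorServices

// Abbreviated configuration, per the article: pick color/depth formats here
// (foveation and layer layout are also configured in this callback).
struct ContentStageConfiguration: CompositorLayerConfiguration {
    func makeConfiguration(capabilities: LayerRenderer.Capabilities,
                           configuration: inout LayerRenderer.Configuration) {
        configuration.colorFormat = .bgra8Unorm_srgb
        configuration.depthFormat = .depth32Float
    }
}

@main
struct FullyImmersiveMetalApp: App {
    var body: some Scene {
        ImmersiveSpace(id: "ImmersiveSpace") {
            // CompositorLayer hands us a LayerRenderer to drive with Metal.
            CompositorLayer(configuration: ContentStageConfiguration()) { layerRenderer in
                let renderThread = Thread {
                    // RenderEngine is a placeholder: the per-frame loop of
                    // querying frames, acquiring drawables, and encoding
                    // Metal commands is exactly what the sample should show.
                    let engine = RenderEngine(layerRenderer)
                    engine.renderLoop()
                }
                renderThread.name = "Render Thread"
                renderThread.start()
            }
        }
    }
}
```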
4 · 1 · 2.8k · Sep ’23
Xcode 15 Beta Bug? Breakpoints Duplicating/multiplying
In Xcode 15 beta 5, I am noticing that my breakpoints randomly seem to duplicate themselves, showing multiple copies of the exact same breakpoint. I have 3 targets in my project, and I wonder whether what I am seeing is a bug related to that. Similarly, I also see duplicates of the same symbol in the symbol navigator. I've attached a screenshot of several identical breakpoints (in this case placed in some Objective-C methods related to speech recognition). I haven't seen this happen in Xcode 14, or at least not as often. Has anyone else experienced this and/or filed a bug report? I've tried deleting DerivedData and the usual tricks.
0 · 0 · 1.2k · Aug ’23
When is full hand tracking available on Vision Pro?
I’m still a little unsure about the various spaces and capabilities. I’d like to make full use of hand tracking, joints and all. In the mode with passthrough and a single application present (not the Shared Space), is that available? (I am pretty sure the answer is “yes,” but I’d like to confirm.) And what is this mode called in the system: a Full Space with the mixed immersion style?
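For context, this is the shape of the hand-tracking code I’m hoping applies in that mode, pieced together from the visionOS ARKit documentation; it’s a sketch I haven’t been able to verify on-device, and it assumes the app has the hand-tracking usage description set and the user grants permission.

```swift
import ARKit

// Sketch: hand tracking from inside an ImmersiveSpace (Full Space).
// Assumes NSHandsTrackingUsageDescription is set and access is granted.
func trackHands() async throws {
    let session = ARKitSession()
    let handTracking = HandTrackingProvider()
    try await session.run([handTracking])

    for await update in handTracking.anchorUpdates {
        let anchor = update.anchor
        guard anchor.isTracked, let skeleton = anchor.handSkeleton else { continue }

        // Each joint's transform is relative to the hand anchor; compose with
        // the anchor's origin transform to get a world-space pose.
        let indexTip = skeleton.joint(.indexFingerTip)
        let worldFromJoint = anchor.originFromAnchorTransform * indexTip.anchorFromJointTransform
        _ = worldFromJoint // use for gestures, drawing, etc.
    }
}
```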
1 · 0 · 1.2k · Jul ’23
SFSpeechRecognizer specify timeout duration?
For my project, I would really benefit from continuous on-device speech recognition without the automatic timeout, or at least with a much longer one. In the WebKit Web Speech implementation, it looks like there are extra setters on SFSpeechRecognizer exposing exactly this functionality: https://github.com/WebKit/WebKit/blob/8b1a13b39bbaaf306c9d819c13b0811011be55f2/Source/WebCore/Modules/speech/cocoa/WebSpeechRecognizerTask.mm#L105 Is there a chance Apple could expose a programmable duration/timeout? If it’s available to Web Speech, why not to native applications?
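For anyone unfamiliar with the limitation, this is the standard on-device setup I’m using today (authorization requests and error handling omitted); the recognition task simply ends after the system’s built-in silence timeout, and I can’t find a public property to extend it.

```swift
import Speech
import AVFoundation

// Sketch: continuous on-device recognition as it works today. The task still
// ends when the system's silence timeout fires; there is no public knob
// equivalent to WebKit's internal setters.
final class ContinuousListener {
    private let audioEngine = AVAudioEngine()
    private let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))
    private var task: SFSpeechRecognitionTask?

    func start() throws {
        let request = SFSpeechAudioBufferRecognitionRequest()
        request.requiresOnDeviceRecognition = true
        request.shouldReportPartialResults = true

        let input = audioEngine.inputNode
        input.installTap(onBus: 0, bufferSize: 1024, format: input.outputFormat(forBus: 0)) { buffer, _ in
            request.append(buffer)
        }
        audioEngine.prepare()
        try audioEngine.start()

        task = recognizer?.recognitionTask(with: request) { result, _ in
            if let result { print(result.bestTranscription.formattedString) }
            // When the automatic silence timeout fires, this gets a final result
            // (or an error) and the whole pipeline has to be torn down and rebuilt.
        }
    }
}
```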
0 · 0 · 916 · Jul ’23
OSLog Request: split long output into structured and stdio instead of dropping long printouts entirely?
OSLog’s structured logging is nice, but the output length is limited compared with stdio’s. Currently, it looks like if I expect long, variable-length printouts, I’m forced to revert to using stdio. Or is this just an Xcode 15 beta 2 bug (discussed in the release notes), and fixed versions will match what stdio gives me? If not, could there be a way to configure OSLog to fall back to stdio dynamically based on whether the printout is too long? A custom fallback buffer allocator? Alternatively, what if I could still get the structured logging with the metadata, and use stdio for the rest of the message that doesn’t fit? That would be a nice way to guarantee structured logging info without dropping the entire message.
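In the meantime, the workaround I’m considering is a small wrapper that keeps OSLog for normal messages and falls back to stdio for oversized ones; a sketch that assumes a roughly 1 KB threshold, which is a guess rather than a documented limit.

```swift
import os

// Sketch of a workaround: route long messages to stdout so they aren't
// truncated, while keeping structured OSLog output for everything else.
// The ~1 KB threshold is a guess, not a documented limit.
struct HybridLogger {
    private let logger = Logger(subsystem: "com.example.myapp", category: "general")
    private let maxStructuredLength = 1024

    func log(_ message: String) {
        if message.utf8.count <= maxStructuredLength {
            logger.debug("\(message, privacy: .public)")
        } else {
            // Keep a short structured breadcrumb, dump the full payload to stdio.
            logger.debug("(long message, \(message.utf8.count) bytes; see stdout)")
            print(message)
        }
    }
}

// Usage
let log = HybridLogger()
log.log(String(repeating: "x", count: 5000))
```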
3 · 0 · 1.3k · Jul ’23
runtime texture input for ShaderGraphMaterial?
For the MaterialX shader graph, the given example hard-codes two textures for blending at runtime (https://developer.apple.com/documentation/visionos/designing-realitykit-content-with-reality-composer-pro#Build-materials-in-Shader-Graph). Can I instead generate textures at runtime and set them as dynamic inputs to the material, or must all textures be known when the material is created? If procedural texture-setting is possible, how is it done, given that the example shows a material with hard-coded textures? EDIT: It looks like the answer is “yes,” since setParameter accepts a TextureResource: https://developer.apple.com/documentation/realitykit/materialparameters/value/textureresource(_:)?changes=l_7 However, how do you turn an MTLTexture into a TextureResource?
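Following up on my own edit: the closest path I’ve found is to back the material parameter with a TextureResource.DrawableQueue and blit the MTLTexture into it; here is a sketch of that idea, where "inputTexture" is a stand-in for the actual shader-graph input name and "placeholder" is any bundled texture asset used only to seed the resource.

```swift
import RealityKit
import Metal

// Sketch: feed a runtime-generated MTLTexture into a ShaderGraphMaterial via
// TextureResource.DrawableQueue. Assumes the MTLTexture matches the queue's
// pixel format and size; setup and per-frame work are combined here for brevity.
func bindRuntimeTexture(to material: inout ShaderGraphMaterial,
                        source mtlTexture: MTLTexture,
                        commandQueue: MTLCommandQueue) throws {
    let descriptor = TextureResource.DrawableQueue.Descriptor(
        pixelFormat: .rgba8Unorm,                 // assumption: matches mtlTexture
        width: mtlTexture.width,
        height: mtlTexture.height,
        usage: [.shaderRead, .shaderWrite],
        mipmapsMode: .none
    )
    let queue = try TextureResource.DrawableQueue(descriptor)

    // Start from any placeholder texture resource, then swap in the queue.
    let placeholder = try TextureResource.load(named: "placeholder")
    placeholder.replace(withDrawables: queue)
    try material.setParameter(name: "inputTexture", value: .textureResource(placeholder))

    // Per frame: copy the generated MTLTexture into the next drawable.
    let drawable = try queue.nextDrawable()
    if let commandBuffer = commandQueue.makeCommandBuffer(),
       let blit = commandBuffer.makeBlitCommandEncoder() {
        blit.copy(from: mtlTexture, to: drawable.texture)
        blit.endEncoding()
        commandBuffer.commit()
    }
    drawable.present()
}
```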
1 · 2 · 1.2k · Jul ’23
What is the purpose of cameraOutput in visionOS’s RealityRenderer?
Related to “what you can do in visionOS,” what are all of these camera-related functionalities for? They aren’t yet described in the documentation:

https://developer.apple.com/documentation/realitykit/realityrenderer/cameraoutput/colortextures
https://developer.apple.com/documentation/realitykit/realityrenderer/cameraoutput/relativeviewport

What are the intended use cases? Is this the equivalent of render-to-texture? I also see some interop with raw Metal happening here.
1 · 0 · 964 · Jun ’23
RealityKit Object Masking / Providing Depth Map for Effects - Use Case Feature Request
In regular Metal, I can do all sorts of tricks with texture masking to create composite objects and effects, similar to CSG. Since, for now, AR mode in visionOS requires RealityKit and doesn't allow custom shaders, I'm a bit stuck. I'm fairly sure that what I want is currently impossible and requires a feature request, but here goes.

Here's a 2D example: say I have some fake circular flashlights shining into the scene, depth-wise, and everything else is black except for some rectangles that are "lit" by the circles (result shown in the attached screenshot).

How it works: in Metal, my per-instance data contain a texture index for a mask texture. The mask texture has an alpha of 0 wherever the instance should not be visible and an alpha of 1 otherwise. In an initial render pass, I draw the circular lights into this mask texture. In pass 2, I attach the full-screen mask texture (the circular lights) to all mesh instances that I want hidden in the darkness. A custom fragment shader multiplies the color that would otherwise be output by the alpha of the mask texture sampled at that fragment, i.e. out_color *= mask.a. With the way blending and clear colors are set up, wherever the mask alpha is 0 an object is hidden, and the background clear color is black. The second screenshot shows how the scene looks if I don't attach the masking texture; you can see that behind the scenes, the full rectangle is there.

In visionOS AR mode, the point is for the system to apply lighting, depth, and occlusion information to the world. For my effect to work, I need to generate an intermediate representation of my world (after pass 2) that shows some of that world in darkness. I know I can use Metal separately from RealityKit to prepare a texture and apply it to a RealityKit mesh using DrawableQueue. However, as far as I know there is no way to supply a full-screen depth buffer for RealityKit to mix with whatever it does with the AR passthrough depth and occlusion behind the scenes, so my Metal texture would just be a flat quad in the scene rather than something mixed with the world. Furthermore, I don't see a way to apply a full-screen quad to the scene at all.

I think my use case is impossible in visionOS AR mode without customizable Metal rendering (separate issue: I still think that in single-app full mode, it should be possible to grant camera access and custom rendering more securely) and/or a RealityKit feature that allows mixing in depth and occlusion textures for compositing.

I love these sorts of masking/texture effects because they're simple and elegant to pull off, and I can imagine several useful and fun experiences using this masking and custom depth info with AR passthrough. Please advise on how I could achieve this effect in the meantime. In any case, the specific feature request is the ability to provide full-screen depth and occlusion textures to RealityKit, so that Metal rendering can be used as a pre-pass with RealityKit as the final composition step.
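For concreteness, here is roughly what the Metal side of the two-pass masking looks like today (the fragment shader itself just does out_color *= mask.a). Pipeline states, meshes, and the drawable are assumed to exist elsewhere, so treat this as a sketch of the technique rather than a complete renderer.

```swift
import Metal
import QuartzCore

// Sketch of the two-pass masking setup described above.
// Pass 1 renders the "flashlight" circles into a full-screen mask texture;
// pass 2 binds that mask so every instance's fragment shader can do
// out_color *= mask.a, hiding anything outside the lit circles.
func encodeMaskedScene(commandBuffer: MTLCommandBuffer,
                       maskTexture: MTLTexture,              // full-screen alpha mask
                       drawable: CAMetalDrawable,
                       lightPipeline: MTLRenderPipelineState,
                       scenePipeline: MTLRenderPipelineState) {
    // Pass 1: draw the circular lights into the mask texture.
    let maskPass = MTLRenderPassDescriptor()
    maskPass.colorAttachments[0].texture = maskTexture
    maskPass.colorAttachments[0].loadAction = .clear
    maskPass.colorAttachments[0].clearColor = MTLClearColor(red: 0, green: 0, blue: 0, alpha: 0)
    maskPass.colorAttachments[0].storeAction = .store
    if let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: maskPass) {
        encoder.setRenderPipelineState(lightPipeline)
        // ... draw the circle quads here ...
        encoder.endEncoding()
    }

    // Pass 2: draw the scene over a black clear color; each instance samples
    // the mask, so fragments with mask.a == 0 vanish into the darkness.
    let scenePass = MTLRenderPassDescriptor()
    scenePass.colorAttachments[0].texture = drawable.texture
    scenePass.colorAttachments[0].loadAction = .clear
    scenePass.colorAttachments[0].clearColor = MTLClearColor(red: 0, green: 0, blue: 0, alpha: 1)
    scenePass.colorAttachments[0].storeAction = .store
    if let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: scenePass) {
        encoder.setRenderPipelineState(scenePipeline)
        encoder.setFragmentTexture(maskTexture, index: 0)
        // ... draw the masked mesh instances here ...
        encoder.endEncoding()
    }
}
```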
0 · 2 · 1.2k · Jun ’23
People Occlusion + Scene Understanding on visionOS
In ARKit for iPad, I could 1) build a mesh on top of the real world and 2) request a people-occlusion map for use in my application, so that people could move behind or in front of virtual content via compositing. However, in visionOS there is no ARFrame image to pass to the function that would generate the occlusion data. Is it possible to do people occlusion in visionOS? If so, how is it done: through a data provider, or automatically when passthrough is enabled? If it’s not possible, is this something that might get a solution in future updates as the platform develops? Being able to combine virtual content with the real world, with people able to interact with the content convincingly, is a really important aspect of AR, so it would make sense for this to be possible.
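For comparison, this is the iPadOS-style setup I was describing; it is standard ARKit-for-iOS code, not visionOS code, and the question is whether anything equivalent exists on visionOS.

```swift
import ARKit
import RealityKit

// iPadOS/iOS reference: scene mesh + people occlusion in one configuration.
// On visionOS there is no ARView/ARFrame like this, which is what prompts the question.
func configureIOSSession(for arView: ARView) {
    let configuration = ARWorldTrackingConfiguration()

    // 1) Build a mesh of the real world (LiDAR devices).
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
        configuration.sceneReconstruction = .mesh
    }

    // 2) Request a people-occlusion map (segmentation + estimated depth).
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
        configuration.frameSemantics.insert(.personSegmentationWithDepth)
    }

    arView.session.run(configuration)
    // Per frame, arView.session.currentFrame?.segmentationBuffer and
    // .estimatedDepthData carry the occlusion data used for compositing.
}
```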
4 · 1 · 1.9k · Jun ’23
Ground Shadows for Procedurally Generated Meshes in RealityKit
Apparently, shadows aren’t generated for procedural geometry in RealityKit: https://codingxr.com/articles/shadows-lights-in-realitykit/ Has this been fixed? My projects tend to involve a lot of procedurally generated meshes rather than imported models, and this will be even more important when visionOS is out. On a similar note, it used to be that ground shadows could not be controlled per entity. I’d like to enable or disable them per entity; is that possible? Since, for now, the only way to use passthrough AR on visionOS will be RealityKit, more flexibility will be required. I can’t simply implement my own approach.
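For illustration, this is roughly what I’d like to be able to write: a procedurally generated mesh with its own per-entity shadow toggle. I believe GroundingShadowComponent(castsShadow:) is the relevant per-entity control on visionOS, but I haven’t confirmed whether it respects procedural meshes, so treat this as a sketch.

```swift
import RealityKit

// Sketch: a runtime-generated mesh that opts into a ground shadow per entity.
// A box stands in for real procedural geometry; whether this works for
// arbitrary generated meshes is exactly what I'm asking.
func makeProceduralEntity() -> ModelEntity {
    let mesh = MeshResource.generateBox(size: 0.2)
    let material = SimpleMaterial(color: .cyan, roughness: 0.4, isMetallic: false)
    let entity = ModelEntity(mesh: mesh, materials: [material])

    // Per-entity toggle; I believe this is the visionOS-era API for grounding shadows.
    entity.components.set(GroundingShadowComponent(castsShadow: true))
    return entity
}
```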
16 · 1 · 3.0k · Jun ’23