Posts

Structured logging is great, but logging fails with variables?
I wanted to try structured logging with os_log in C++, but I found that it fails to print anything when given a format string and a variable, e.g.:

    void example(std::string& str) {
        os_log_info(OS_LOG_DEFAULT, "%s", str.c_str());
        os_log_debug(OS_LOG_DEFAULT, "%s", str.c_str());
        os_log_error(OS_LOG_DEFAULT, "%s", str.c_str());
    }

Each call prints a blank row in the console, with no text. How is this meant to work with variables? As far as I can tell, it currently only works with literals and constants. I'm looking forward to getting this working.
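A likely culprit: os_log treats dynamic strings as private by default and redacts them in the console, so annotating the specifier as "%{public}s" in the calls above should make the text appear. The same rule in Swift, as a minimal sketch (the subsystem and category names are placeholders):

    import os

    let logger = Logger(subsystem: "com.example.demo", category: "general")

    func example(_ str: String) {
        // Dynamic values are redacted unless explicitly marked public.
        logger.info("\(str, privacy: .public)")
    }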
10 replies · 0 boosts · 3.1k views · Jun ’23

Is Passthrough + Unbounded Volumes + My Own 3D Rendering in Metal Possible?
On visionOS, is a combination of full passthrough, unbounded volumes, and my own custom 3D rendering in Metal possible?

According to the RealityKit and Unity visionOS talk, towards the end, it's shown that an unbounded volume mode allows you to create full passthrough experiences with graphics rendering in 3D; essentially full 3D AR in which you can move around the space. It's also shown that you can get occlusion for the graphics. This is all great; however, I don't want to use RealityKit or Unity in my case. I would like to render to an unbounded volume using my own custom Metal renderer, and still get AR passthrough and the ability to walk around and composite virtual graphical content with the background. To reiterate, this is exactly what is shown in the video using Unity, but I'd like to use my own renderer instead of Unity or RealityKit. This doesn't require access to the video camera texture, which I know is unavailable. Having the flexibility to create passthrough-mode content in a custom renderer is super important for making an AR experience in which I have control over rendering.

One use case I have in mind is Wizard's Chess: you see the real world and can walk around a room-sized chessboard with virtual chess pieces mixed with the real world, and you can see the other player through passthrough as well. I'd also like to render graphics on my living room couches using scene-reconstruction mesh anchors, for example, to change the atmosphere. The video already shows several nice use cases, like being able to interact with a tabletop fantasy world with characters.

Is what I'm describing possible with Metal? Thanks!

EDIT: Also, if not volumes, then full spaces? I don't need access to the camera images that are off-limits. I would just like passthrough + composition with 3D Metal content + full ARKit tracking and occlusion features.
0 replies · 0 boosts · 1.3k views · Jun ’23

What is the purpose of CameraOutput in visionOS's RealityRenderer?
Related to "what you can do in visionOS": what are all of these camera-related functionalities for? (As of yet, they're not described in the documentation.)
https://developer.apple.com/documentation/realitykit/realityrenderer/cameraoutput/colortextures
https://developer.apple.com/documentation/realitykit/realityrenderer/cameraoutput/relativeviewport
What are the intended use cases? Is this the equivalent of render-to-texture? I also see some interop with raw Metal happening here.
1 reply · 0 boosts · 988 views · Jun ’23

When is full hand tracking available on Vision Pro?
I’m still a little unsure about the various spaces and capabilities. I’d like to make full use of hand tracking, joints and all. In the mode with passthrough and a single application present (not a shared space), is that available? (I am pretty sure that the answer is “yes,” but I’d like to confirm.) What is this mode called in the system? Mixed full-space?
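For reference, a minimal sketch of how hand-skeleton access looks with the visionOS ARKit API (ARKitSession plus HandTrackingProvider; to my understanding this requires an immersive space and the hand-tracking permission, and the joint name below is arbitrary):

    import ARKit

    func trackHands() async throws {
        let session = ARKitSession()
        let hands = HandTrackingProvider()
        try await session.run([hands])
        for await update in hands.anchorUpdates {
            guard let skeleton = update.anchor.handSkeleton else { continue }
            // Joint pose relative to the hand anchor.
            let tip = skeleton.joint(.indexFingerTip)
            _ = tip.anchorFromJointTransform
        }
    }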
1 reply · 0 boosts · 1.2k views · Jul ’23

When using ARKit, why can’t you get the front-facing and back-facing camera feeds at once?
I'd like to use ARKit world tracking and display both the back camera feed and the front camera feed, using the front feed as a PiP. This would work great for an internet streaming use case. However, it's impossible: as soon as ARKit is told to use one mode, the camera for the other side freezes/doesn't work. This page also says you have to pick one camera to show: https://developer.apple.com/documentation/arkit/arkit_in_ios/choosing_which_camera_feed_to_augment?language=objc

A question for the developers: why is this limitation in place? Are there any workarounds for the use case of ARKit world tracking + displaying the back camera feed + displaying the front camera feed as an overlay? It's possible to do this with plain camera initialization without ARKit (there's an official example); with ARKit, it no longer works. It's strange that I cannot access the front feed via one of the other frameworks, but I guess ARKit blocks that.
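For the non-ARKit route mentioned above, a minimal sketch of simultaneous front and back capture with AVCaptureMultiCamSession (outputs and preview layers omitted; this only works on devices reporting multi-cam support):

    import AVFoundation

    func makeDualCameraSession() -> AVCaptureMultiCamSession? {
        guard AVCaptureMultiCamSession.isMultiCamSupported else { return nil }
        let session = AVCaptureMultiCamSession()
        session.beginConfiguration()
        defer { session.commitConfiguration() }
        for position in [AVCaptureDevice.Position.back, .front] {
            // Wide-angle camera on each side; add outputs as needed.
            guard let device = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                       for: .video, position: position),
                  let input = try? AVCaptureDeviceInput(device: device),
                  session.canAddInput(input) else { return nil }
            session.addInput(input)
        }
        return session
    }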
3 replies · 0 boosts · 1.2k views · Dec ’24

Xcode 15 Beta Bug? Breakpoints Duplicating/Multiplying
In Xcode 15 beta 5, I'm noticing that my breakpoints randomly seem to duplicate themselves: multiple copies of the exact same breakpoint. I have three targets in my project, and I wonder whether what I'm experiencing is a bug related to that. Similarly, I also see duplicates of the same symbol in the symbol navigator. I've attached a screenshot of several identical breakpoints (in this case placed in some Objective-C methods related to speech recognition). I haven't seen this happen in Xcode 14, or at least not as often. Has anyone else experienced this and/or filed a bug report? I've tried deleting DerivedData and the usual tricks.
0 replies · 0 boosts · 1.2k views · Aug ’23

Bug? Xcode 16 macOS 15 SDK on macOS 14.5 Causes Metal Shader Colors to Be Wrong
I've been upgrading Xcode consistently for years and have never seen Metal shaders behave differently from one version to another, until now. On macOS 14.5 with the Xcode 16 beta, several color outputs suddenly turn out completely black where there should be color. All validation is on and nothing seems to be wrong (and hasn't been since maybe Xcode 11). I've attached two screenshots with exactly the same settings: the first shows the normal color scheme; the second, from Xcode 16, is buggy with black and transparent colors (so it seems like the colors are either overflowing or all 0s). Before I file a bug report or code-level support request, could I get some thoughts on how to debug this? The only clue I have is that I'm using bindless rendering to multiply color texture samples with color values from my vertex struct. But it still fails even if I use hard-coded values for the texture samples, meaning that somehow the color values are not being sent to the shader correctly? This is the most stable part of my rendering pipeline, so I'd be surprised if the issue were there. Thank you.
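One way to narrow this down is a programmatic GPU capture around the suspect pass, so the buffers and textures actually bound by the bindless path can be inspected in Xcode's Metal debugger. A minimal sketch (capture only works with Xcode attached or capture otherwise enabled):

    import Metal

    func withGPUCapture(device: MTLDevice, _ body: () -> Void) throws {
        let manager = MTLCaptureManager.shared()
        let descriptor = MTLCaptureDescriptor()
        descriptor.captureObject = device
        try manager.startCapture(with: descriptor)
        body()   // encode and commit the suspect pass here
        manager.stopCapture()
    }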
1 reply · 0 boosts · 1.4k views · Jun ’24

Ground Shadows for Program-Generated Meshes in RealityKit
Apparently, shadows aren't generated for procedural geometry in RealityKit: https://codingxr.com/articles/shadows-lights-in-realitykit/ Has this been fixed? My projects tend to involve a lot of procedurally generated meshes as opposed to imported models. This will be even more important when visionOS is out. On a similar note, it used to be that ground shadows were not per-entity. I'd like to enable or disable them per entity. Is that possible? Since currently the only way to use passthrough AR in visionOS will be to use RealityKit, more flexibility will be required; I can't simply apply my own preferences.
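On the per-entity question: newer RealityKit releases expose a grounding-shadow toggle as a component. Assuming it is available on the target OS, it looks roughly like this:

    import RealityKit

    func enableGroundShadow(for entity: Entity) {
        // Opt a single entity into grounding shadows.
        entity.components.set(GroundingShadowComponent(castsShadow: true))
    }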
16 replies · 1 boost · 3k views · Jun ’23

Apple Pencil Pro Squeeze API
I wonder if an Apple engineer could confirm: will the Apple Pencil Pro squeeze functionality be detectable through the current API, or will this be a future iPadOS extension to gesture recognizers / UIKit? I'd like to start playing with the functionality if it's surfaced through an existing event, though (long press?).
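For reference, squeeze appears to surface through UIPencilInteraction's delegate rather than an existing gesture; a minimal sketch, assuming the iOS 17.5 additions:

    import UIKit

    final class CanvasViewController: UIViewController, UIPencilInteractionDelegate {
        override func viewDidLoad() {
            super.viewDidLoad()
            let interaction = UIPencilInteraction()
            interaction.delegate = self
            view.addInteraction(interaction)
        }

        func pencilInteraction(_ interaction: UIPencilInteraction,
                               didReceiveSqueeze squeeze: UIPencilInteraction.Squeeze) {
            guard squeeze.phase == .ended else { return }
            print("Pencil Pro squeeze")   // react to the squeeze here
        }
    }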
1 reply · 0 boosts · 1.2k views · May ’24

Researcher in Spatial Computing / HCI Looking to Use Enterprise APIs on Vision Pro for HCI Research Only
I am a spatial computing / XR and human-computer interaction researcher at a private university. I am interested in using the Vision Pro's newly exposed camera access to develop and evaluate new algorithms for computational perception. (WWDC session here: https://developer.apple.com/wwdc24/10139) I understand this is targeted at large enterprises, but I would like to know whether, as a researcher affiliated with an educational institution, I could by some means develop private, for-development-only applications for the Vision Pro with the enterprise APIs enabled. The intent is not to publish apps, but rather to contribute to the research community through R&D. To my knowledge, however, I would be ineligible as a normal "business," as I do not employ 100+ employees. I am an independent researcher, and on occasion I collaborate with small research groups within my university that focus on this kind of camera-based perception algorithm development. Could someone from Apple comment? Thank you.
10 replies · 1 boost · 2.1k views · Jun ’24

Sample Project for WWDC24 10092 Metal with Passthrough?
It's great that we'll be able to use custom Metal renderers in passthrough mode on visionOS. https://developer.apple.com/wwdc24/10092 This is a lot of complicated setup, however, and it's unclear how occlusion and custom algorithms / raytracing will work in tandem with scene understanding. May we have a project template and/or sample, preferably with the C API and not just Swift? This would be much appreciated and helpful to everyone who wants this setup; I'd like to see the whole process. Thank you for introducing this feature!
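In the meantime, the basic entry point looks something like the following sketch: an ImmersiveSpace hosting a CompositorLayer in the mixed (passthrough) immersion style. The configuration values and render loop are placeholders, not a full implementation:

    import SwiftUI
    import CompositorServices

    struct PassthroughConfiguration: CompositorLayerConfiguration {
        func makeConfiguration(capabilities: LayerRenderer.Capabilities,
                               configuration: inout LayerRenderer.Configuration) {
            // Color/depth formats, foveation, and layout would be chosen here.
            configuration.colorFormat = .bgra8Unorm_srgb
        }
    }

    @main
    struct MetalPassthroughApp: App {
        var body: some Scene {
            ImmersiveSpace(id: "metal") {
                CompositorLayer(configuration: PassthroughConfiguration()) { layerRenderer in
                    // Spawn the render thread here; pull frames with
                    // layerRenderer.queryNextFrame() and encode with Metal.
                }
            }
            .immersionStyle(selection: .constant(.mixed), in: .mixed)
        }
    }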
3 replies · 1 boost · 1.2k views · Nov ’24

WKWebView 120Hz Support
I'm developing an application that needs smooth framerates within a WKWebView that interacts with native code. However, requestAnimationFrame is still throttled to 60Hz by default, even though all my target devices (the iPad Pro, for example) have supported 120Hz for a long time already. I noticed that the latest Safari in the 18.3 beta supports unlocked framerates, but only behind Safari feature flags, and to my knowledge those flags do not apply to WKWebView. Is there a way to enable unlocked framerates for requestAnimationFrame in WKWebView? (Calling JS at a faster rate from the native-code side almost certainly will not work, since WKWebView will still render at its own rate.) This is an experimental application for internal use, and I'm okay with temporary beta solutions if any are available.
2 replies · 1 boost · 824 views · Jan ’25

RealityKit Object Masking / Providing Depth Map for Effects - Use Case Feature Request
In regular Metal, I can do all sorts of tricks with texture masking to create composite objects and effects, similar to CSG. Since for now AR mode in visionOS requires RealityKit, without the ability to use custom shaders, I'm a bit stuck. I'm pretty sure that what I want is currently impossible and requires a feature request, but here goes.

Here's a 2D example: say I have some fake circular flashlights shining into the scene, depth-wise, and everything else is black except for some rectangles that are "lit" by the circles (see the attached result).

How it works: in Metal, my per-instance data contain a texture index for a mask texture. The mask texture has an alpha of 0 for spots where the instance should not be visible, and an alpha of 1 otherwise. In an initial render pass, I draw the circular lights to this mask texture. In pass 2, I attach the full-screen mask texture (the circular lights) to all mesh instances that I want hidden in the darkness. A custom fragment shader multiplies the color that would otherwise be output by the alpha of the full-screen mask sample at the given fragment, i.e. out_color *= mask.a. The way I have blending and clear colors set up, wherever the mask alpha is 0 an object is hidden, and the background clear color is black. The second attached screenshot shows the scene without the masking texture attached: behind the scenes, the full rectangle is there.

In visionOS AR mode, the point is for the system to apply lighting, depth, and occlusion information to the world. For my effect to work, I need to be able to generate an intermediate representation of my world (after pass 2) that shows some of that world in darkness. I know I can use Metal separately from RealityKit to prepare a texture and apply it to a RealityKit mesh using DrawableQueue. However, as far as I know there is no way to supply a full-screen depth buffer for RealityKit to mix with whatever it is doing with the AR passthrough depth and occlusion behind the scenes, so my Metal texture would just be a flat quad in the scene rather than something mixed with the world. Furthermore, I don't see a way to apply a full-screen quad to the scene, period.

I think my use case is impossible in visionOS AR mode without customizable Metal rendering (separate issue: I still think that in single full-app mode, it should be possible to grant access to the camera and custom rendering more securely) and/or a RealityKit feature enabling mixing of depth and occlusion textures for compositing. I love these sorts of masking effects because they are simple and elegant to pull off, and I can imagine creating several useful and fun experiences using this masking and custom depth info with AR passthrough.

Please advise on how I could achieve this effect in the meantime. In any case, the specific feature request is the ability to provide full-screen depth and occlusion textures to RealityKit, so that Metal rendering can serve as a pre-pass with RealityKit as the final composition step.
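For anyone trying the DrawableQueue route in the meantime, the setup looks roughly like this sketch (sizes are placeholders, and the depth-mixing limitation described above still applies):

    import RealityKit

    func attachDrawableQueue(to texture: TextureResource) throws -> TextureResource.DrawableQueue {
        let descriptor = TextureResource.DrawableQueue.Descriptor(
            pixelFormat: .bgra8Unorm,
            width: 1024, height: 1024,
            usage: [.renderTarget, .shaderRead],
            mipmapsMode: .none)
        let queue = try TextureResource.DrawableQueue(descriptor)
        texture.replace(withDrawables: queue)   // the queue now backs this texture
        // Per frame: try queue.nextDrawable(), encode Metal work into
        // drawable.texture, then drawable.present().
        return queue
    }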
0 replies · 2 boosts · 1.3k views · Jun ’23

Can Xcode 13.4 build to macOS 12.4?
Xcode 13.4 only provides an SDK for macOS 12.3, according to the release notes. Can I build for macOS 12.4 using the older point-release SDK? I would not want to update the OS if I could not build for it yet. Thanks.
1 reply · 0 boosts · 1.2k views · May ’22

Vision Pro Hand Tracking Availability in AR? Clarification?
Is full hand tracking on the Vision Pro available in passthrough AR (fully immersed with one application running), or only in fully immersive VR (no passthrough)?
0 replies · 2 boosts · 636 views · Jun ’23