runtime texture input for ShaderGraphMaterial?
For the MaterialX shader graph, the given example hard-codes the two textures it blends at runtime (https://developer.apple.com/documentation/visionos/designing-realitykit-content-with-reality-composer-pro#Build-materials-in-Shader-Graph). Can I instead generate textures at runtime and set them as dynamic inputs on the material, or must all textures be known when the material is created? If procedural texture-setting is possible, how is it done, given that the example only shows a material with hard-coded textures?

EDIT: It looks like the answer is "yes", since setParameter accepts a textureResource value: https://developer.apple.com/documentation/realitykit/materialparameters/value/textureresource(_:)?changes=l_7 However, how do you turn a MTLTexture into a TextureResource?
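Something like the following is what I'm picturing for the setParameter side. This is an untested sketch: it assumes the shader graph promotes a texture input named "DynamicTexture" (a hypothetical name) and that the runtime-generated image is available as a CGImage; the MTLTexture route is still the open question.

```swift
import RealityKit
import CoreGraphics

// Untested sketch. Assumes the Reality Composer Pro shader graph exposes a
// promoted texture input named "DynamicTexture" (hypothetical name) and that
// the runtime-generated image is available as a CGImage.
func applyRuntimeTexture(to entity: ModelEntity, image: CGImage) throws {
    guard var material = entity.model?.materials.first as? ShaderGraphMaterial else { return }

    // Wrap the runtime image in a TextureResource...
    let texture = try TextureResource.generate(from: image,
                                               options: .init(semantic: .color))

    // ...and pass it to the shader graph as a dynamic parameter.
    try material.setParameter(name: "DynamicTexture",
                              value: .textureResource(texture))

    entity.model?.materials = [material]
}
```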
Replies: 1 · Boosts: 2 · Views: 1.2k · Last activity: Jul ’23
Can Xcode 16 beta 4 build to iPadOS 18.1?
Does Xcode 16 beta 4 include the SDK for the iPadOS 18.1 beta, or should I keep my device on the iPadOS 18.0 betas (currently 18.0 beta 4)? I want to make sure I can continue building to my device if I update it to the 18.1 beta.
Replies: 3 · Boosts: 2 · Views: 967 · Last activity: Aug ’24
WWDC 25 RemoteImmersiveSpace - Support for Passthrough Mode? RealityKit?
This is related to the WWDC presentation "What's new in Metal rendering for immersive apps", specifically the macOS spatial streaming to visionOS feature (for reference: the page in the docs). The presentation demonstrates it using a fully immersive space and Metal rendering via Compositor Services. I'd like clarity on a few things:
- Is the remote device wireless, or must the visionOS device be connected via a wired connection?
- Is there a limit to the number of remote devices, and if not, could macOS render different content per remote device simultaneously?
- Can I use mixed mode with passthrough enabled, instead of only a fully immersive mode?
- Can I use RealityKit instead of Metal? If so, could someone provide or point to an example?
Replies: 5 · Boosts: 0 · Views: 735 · Last activity: Sep ’25
How to Convert a MTLTexture into a TextureResource?
What is the most efficient way to use an MTLTexture (created procedurally at runtime) as a RealityKit TextureResource? I update the MTLTexture every frame using regular Metal rendering, so it's not something I can do offline. Is there a way to wrap it without doing a copy? A specific example would be great. Thank you!
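One approach I've seen suggested is TextureResource.DrawableQueue, where RealityKit vends an MTLTexture to render into each frame instead of wrapping an existing one. A rough, untested sketch follows, assuming DrawableQueue is available on the target platform; the placeholder CGImage is supplied by the caller.

```swift
import RealityKit
import Metal
import CoreGraphics

// Rough, untested sketch of the DrawableQueue approach: RealityKit owns the
// texture, and each frame the Metal pass renders into the drawable it vends,
// which avoids a per-frame copy out of my own MTLTexture.
func makeStreamingTexture(placeholder: CGImage, width: Int, height: Int) throws
    -> (TextureResource, TextureResource.DrawableQueue) {
    let descriptor = TextureResource.DrawableQueue.Descriptor(
        pixelFormat: .bgra8Unorm,
        width: width,
        height: height,
        usage: [.renderTarget, .shaderRead],
        mipmapsMode: .none)
    let queue = try TextureResource.DrawableQueue(descriptor)

    // Start from any placeholder image, then redirect the resource to the queue.
    let resource = try TextureResource.generate(from: placeholder,
                                                options: .init(semantic: .color))
    resource.replace(withDrawables: queue)
    return (resource, queue)
}

// Per frame: grab the next drawable, render into its MTLTexture, then present it.
func updateTexture(using queue: TextureResource.DrawableQueue) throws {
    let drawable = try queue.nextDrawable()
    // Encode the usual Metal pass with drawable.texture as the render target...
    drawable.present()
}
```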
Replies: 5 · Boosts: 3 · Views: 2.6k · Last activity: Apr ’24
WebAR with visionOS 2.0?
visionOS 2.0 now enables passthrough with Metal: https://developer.apple.com/wwdc24/10092. This suggests it should be possible to implement WebXR's AR passthrough module in Safari. Is this already available, perhaps behind a flag?
Replies: 5 · Boosts: 3 · Views: 1.8k · Last activity: Jun ’24
macbook pro 16" 2021 magsafe issues macOS 12.0.1
Seemingly after the macOS 12.0.1 update, the MacBook Pro 16" (2021) is having widespread issues with the MagSafe charger while shut off: fast charging causes the charger to loop the connection sound over and over without actually charging. Discussion pages:
https://forums.macrumors.com/threads/2021-macbook-pro-16-magsafe-light-flashing-amber-and-power-chime-repeating-during-charging-when-off.2319925/
https://www.reddit.com/r/macbookpro/comments/qi4i9w/macbook_pro_16_m1_pro_2021_magsafe_3_charge_issue/
https://www.reddit.com/r/macbookpro/comments/qic7t7/magsafe_charging_problem_2021_16_macbook_pro_read/
Most people suspect it's a firmware/OS issue. Is Apple aware of this, and is it being worked on? Has anyone tried the latest 12.1 beta as well?
Replies: 0 · Boosts: 0 · Views: 803 · Last activity: Nov ’21
People Occlusion + Scene Understanding on visionOS
In ARKit for iPad, I could 1) build a mesh on top of the real world and 2) request a people occlusion map for use in my application, so people could move behind or in front of virtual content via compositing. However, in visionOS there is no ARFrame image to pass to the function that would generate the occlusion data. Is it possible to do people occlusion in visionOS? If so, how is it done: through a data provider, or is it automatic when passthrough is enabled? If it's not possible, is this something that might have a solution in future updates as the platform develops? Being able to combine virtual content with the real world, with people able to interact with the content convincingly, is a really important aspect of AR, so it would make sense for this to be possible.
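For reference, this is roughly the iPad-side setup I'm describing (a minimal sketch, not my exact code):

```swift
import ARKit
import RealityKit

// Minimal sketch of the iPad setup described above: scene reconstruction gives
// the mesh over the real world, and person segmentation (with depth) gives the
// data used to composite people in front of or behind virtual content.
func runSession(in arView: ARView) {
    let configuration = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
        configuration.sceneReconstruction = .mesh          // 1) mesh on top of the real world
    }
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
        configuration.frameSemantics.insert(.personSegmentationWithDepth)   // 2) people occlusion data
    }
    arView.session.run(configuration)
    // Each ARFrame then exposes segmentationBuffer / estimatedDepthData,
    // which is the per-frame image data that has no obvious visionOS equivalent.
}
```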
Replies: 4 · Boosts: 1 · Views: 2.0k · Last activity: Jun ’23
Eye Tracking as a General API Feature for iPadOS 18+?
I'd like to use the eye tracking feature in the latest iPadOS 18 update as more than an accessibility feature, i.e. as another input modality that can be detected by event + enum checks, similar to how we can detect and distinguish between touch and Apple Pencil inputs. This would make it a lot easier to control and interact with iPad-based AR experiences that involve walking around, regardless of whether eye tracking is enabled for accessibility. When walking, it's challenging to hold the device and interact with the screen via touch or pencil at all; eye tracking plus speech as input modalities could assist here. It would also help us create non-immersive AR experiences that parallel visionOS experiences that use eye tracking.

I propose an API option for enabling eye tracking (and an optional in-app calibration dialog), as well as a specific UIControl subclass that simply detects when the eyes look at the control, using the standard begin/changed/end events. My specific use case is that I'd like to treat eye-tracking-enabled UI elements or game objects differently depending on whether they are being looked at.

For example, to select game objects while using speech recognition, suppose we have 4 buttons with the same name in the 4 corners of the screen; call them "corner" buttons. With my proposed invisible UI element for gaze detection, I can create 4 large rectangular regions on the screen. Then, if the user says "select the corner", the system could parse this command and disambiguate between the 4 corners by checking which of the rectangular regions I'm currently looking at. (Note: the idea would be to make the gaze regions rather large to compensate for error.) This is just a simple example, but the advantage over other methods like dwell is that it could be a lot faster.

Another simple example: using the same rectangular regions, instead of speech input I could hold a button placed in just one spot on the screen and look around the screen with my gaze to produce a laser beam for some kind of game, or to draw curves (which I might smooth out to reduce inaccuracy). This could also help someone who does not have their hands available. These cases would require the ability to get the coordinates of the eye gaze; otherwise, the simpler approach of triggering UIControl elements might work for coarse selection.

Would other developers find this useful as well? I'd like to propose this feature in Feedback Assistant, but I'm also opening up a little discussion here in case someone sees this. In short, I propose (see the sketch below for what I have in mind):
- a formal eye-tracking API for iPadOS 18+ that allows turning tracking on/off within the app, with the necessary user permissions;
- begin/changed/ended events similar to the existing events in UIKit, including screen coordinates, with a way to identify that an event came from eye tracking;
- alternatively, at minimum, an invisible UIControl subclass that can detect when the eyes enter/leave its region.
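To make the proposal concrete, here is a purely hypothetical sketch of the kind of API surface I have in mind; none of these types or members exist in UIKit today.

```swift
import UIKit

// Purely hypothetical: a UIControl subclass that would fire standard control
// events when the user's gaze enters, moves within, or leaves its bounds.
// Nothing here exists in UIKit today; it only illustrates the proposal.
final class GazeRegion: UIControl {
    // Imagined callback, analogous to touch begin/changed/ended events,
    // carrying the gaze point in screen coordinates.
    var onGazeChanged: ((CGPoint) -> Void)?
}

// Hypothetical usage: four large corner regions used to disambiguate the
// spoken command "select the corner" by checking which region is being looked at.
func installCornerRegions(in view: UIView) -> [GazeRegion] {
    let half = CGSize(width: view.bounds.width / 2, height: view.bounds.height / 2)
    let origins = [CGPoint(x: 0, y: 0),
                   CGPoint(x: half.width, y: 0),
                   CGPoint(x: 0, y: half.height),
                   CGPoint(x: half.width, y: half.height)]
    return origins.map { origin in
        let region = GazeRegion(frame: CGRect(origin: origin, size: half))
        region.isUserInteractionEnabled = false   // invisible, gaze-only region
        view.addSubview(region)
        return region
    }
}
```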
Replies: 0 · Boosts: 6 · Views: 1.9k · Last activity: Jun ’24
Example for New WWDC 2023 Metal Features?
Will a new example project be released to cover the new raytracing features in session WWDC2023-10128?
Replies: 0 · Boosts: 2 · Views: 933 · Last activity: Jun ’23
CustomMaterial in RealityKit visionOS?
I thought that RealityKit's CustomMaterial didn't exist on visionOS, but it's listed here: https://developer.apple.com/documentation/realitykit/custommaterial. Has something changed, and can it in fact be used in mixed/AR passthrough mode? What is the situation?
Replies: 1 · Boosts: 1 · Views: 1k · Last activity: Jun ’23
On Vision Pro, what is the max walkable distance in unbounded passthrough mode?
In passthrough mode on the Vision Pro, what is the maximum distance one can walk from the origin while keeping stable tracking?
Replies: 3 · Boosts: 2 · Views: 1.6k · Last activity: Jul ’23
SFSpeechRecognizer Broken in iPadOS 15.0?
I updated Xcode to Xcode 13 and iPadOS to 15.0. Now my previously working application using SFSpeechRecognizer fails to start, regardless of whether I'm using on-device mode or not. I use the delegate approach, and although the plist is set up correctly (authorization succeeds and I get the orange circle indicating the microphone is on), the delegate method speechRecognitionTask(_:didFinishSuccessfully:) always reports false, with no particular error message to go along with it. I also downloaded the official example from Apple's documentation pages, the SpokenWord SFSpeechRecognition sample project; unfortunately, it no longer works either. I'm working on a time-sensitive project and don't know where to go from here. How can we troubleshoot this? If it's an issue with Apple's API update, or if something has changed in the initial setup, I really need to know as soon as possible. Thanks.
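For context, this is roughly the delegate-based flow I'm using (a simplified sketch; authorization request, audio session setup, and error handling omitted):

```swift
import Speech
import AVFoundation

// Simplified sketch of the delegate-based setup described above;
// authorization, audio session configuration, and error handling omitted.
final class Recognizer: NSObject, SFSpeechRecognitionTaskDelegate {
    private let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))!
    private let audioEngine = AVAudioEngine()
    private var request: SFSpeechAudioBufferRecognitionRequest?
    private var task: SFSpeechRecognitionTask?

    func start() throws {
        let request = SFSpeechAudioBufferRecognitionRequest()
        request.requiresOnDeviceRecognition = false   // fails the same way either on- or off-device
        self.request = request

        let inputNode = audioEngine.inputNode
        let format = inputNode.outputFormat(forBus: 0)
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
            request.append(buffer)
        }
        audioEngine.prepare()
        try audioEngine.start()

        task = recognizer.recognitionTask(with: request, delegate: self)
    }

    // This is the callback that now always reports successfully == false.
    func speechRecognitionTask(_ task: SFSpeechRecognitionTask,
                               didFinishSuccessfully successfully: Bool) {
        print("didFinishSuccessfully:", successfully)
    }

    func speechRecognitionTask(_ task: SFSpeechRecognitionTask,
                               didHypothesizeTranscription transcription: SFTranscription) {
        print(transcription.formattedString)
    }
}
```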
Replies: 9 · Boosts: 1 · Views: 4.2k · Last activity: Apr ’22
Sample Code for visionOS Metal?
There is a project tutorial for visionOS Metal rendering in immersive mode here (https://developer.apple.com/documentation/compositorservices/drawing_fully_immersive_content_using_metal?language=objc), but there is no downloadable sample project. Would Apple please provide sample code? The setup is non-trivial.
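For anyone searching, this is roughly the skeleton the article describes (an untested sketch of just the entry point; the non-trivial part, the Metal render loop driven by LayerRenderer, is exactly where a downloadable sample would help):

```swift
import SwiftUI
import CompositorServices

// Untested sketch of the app entry point from the article: an ImmersiveSpace
// hosting a CompositorLayer, whose closure receives the LayerRenderer that
// drives the Metal render loop on a dedicated thread.
@main
struct FullyImmersiveMetalApp: App {
    var body: some Scene {
        ImmersiveSpace(id: "ImmersiveSpace") {
            CompositorLayer { layerRenderer in
                // Spawn a render thread here: query frames from layerRenderer,
                // wait for the optimal input time, update, encode with Metal, submit.
            }
        }
        .immersionStyle(selection: .constant(.full), in: .full)
    }
}
```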
Replies: 4 · Boosts: 1 · Views: 2.9k · Last activity: Sep ’23