
Ground Shadows for Procedurally Generated Meshes in RealityKit
Apparently, shadows aren’t generated for procedural geometry in RealityKit: https://codingxr.com/articles/shadows-lights-in-realitykit/ Has this been fixed? My projects tend to involve a lot of procedurally generated meshes rather than imported models, and this will matter even more once visionOS is out. On a related note, it used to be that ground shadows were not per-entity. I’d like to enable or disable them per entity. Is that possible? Since the only way to use passthrough AR on visionOS will be RealityKit, more flexibility will be required; I can’t simply apply my own preferences.
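For concreteness, the kind of entity I’m creating looks roughly like the sketch below; GroundingShadowComponent is the newer per-entity opt-in I’d like to rely on (visionOS 1.0+ / iOS 17+), assuming it applies to procedural meshes at all.

    import RealityKit

    // Sketch: a procedurally generated entity with the per-entity grounding
    // shadow opt-in (GroundingShadowComponent, visionOS 1.0+ / iOS 17+).
    func makeProceduralBox() -> ModelEntity {
        let mesh = MeshResource.generateBox(size: 0.2)                       // procedural geometry
        let entity = ModelEntity(mesh: mesh, materials: [SimpleMaterial()])
        entity.components.set(GroundingShadowComponent(castsShadow: true))  // enable per entity
        return entity
    }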
Replies: 16 · Boosts: 1 · Views: 3.0k · Jun ’23
Structured logging is great, but logging fails with variables?
I wanted to try structured logging with os_log from C++, but I found that it fails to print anything when the format string takes a variable, e.g.:

    void example(std::string& str) {
        os_log_info(OS_LOG_DEFAULT, "%s", str.c_str());
        os_log_debug(OS_LOG_DEFAULT, "%s", str.c_str());
        os_log_error(OS_LOG_DEFAULT, "%s", str.c_str());
    }

Each call produces a blank row in the console with no message text. How is this meant to work with variables? As far as I can tell, it currently only works with literals and constants. I'm looking forward to getting this working.
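If the cause is the default privacy redaction of dynamic strings, then I’d expect the Swift equivalent below, with the interpolation explicitly marked public, to print the text (the C/C++ os_log macros accept the same %{public}s modifier); is that the intended approach? Subsystem and category names here are just placeholders.

    import os

    // Swift equivalent with the interpolated value explicitly marked public.
    let logger = Logger(subsystem: "com.example.demo", category: "general")

    func example(_ str: String) {
        logger.info("value: \(str, privacy: .public)")
        logger.debug("value: \(str, privacy: .public)")
        logger.error("value: \(str, privacy: .public)")
    }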
Replies: 10 · Boosts: 0 · Views: 3k · Jun ’23
Researcher in Spatial Computing / HCI Looking to Use Enterprise APIs on Vision Pro for HCI Research Only
I am a spatial computing / XR and human-computer interaction researcher at a private university. I am interested in using the Vision Pro's newly exposed camera access to develop and evaluate new algorithms for computational perception. (WWDC session here: https://developer.apple.com/wwdc24/10139) I understand this is targeted at large enterprises, but I would like to know whether, as a researcher affiliated with an educational institution, I could develop private, for-development-only applications for the Vision Pro with the enterprise APIs enabled. The intent is not to publish apps, but rather to contribute to the research community through R&D. To my knowledge, however, I would be ineligible as a normal "business" because I do not employ 100+ people. I am an independent researcher, and on occasion I collaborate with small research groups within my university that focus on this kind of camera-based perception algorithm development. Could someone from Apple comment? Thank you.
Replies: 10 · Boosts: 1 · Views: 1.9k · Jun ’24
SFSpeechRecognizer Broken in iPadOS 15.0?
I updated Xcode to Xcode 13 and iPadOS to 15.0. Now my previously working application using SFSpeechRecognizer fails to start, regardless of whether I'm using on-device mode. I use the delegate approach, and although the plist is set up correctly (authorization succeeds and I get the orange circle indicating the microphone is on), the delegate method speechRecognitionTask(_:didFinishSuccessfully:) is always called with false, and there is no particular error message to go along with it. I also downloaded the official sample from Apple's documentation pages (the SpokenWord SFSpeechRecognition example project); unfortunately, it no longer works either. I'm working on a time-sensitive project and don't know where to go from here. How can we troubleshoot this? If it's an issue with Apple's API update, or something has changed in the required setup, I really need to know as soon as possible. Thanks.
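For reference, my setup is roughly the trimmed sketch below (locale and names are placeholders; authorization is requested and granted before start is called).

    import Speech
    import AVFoundation

    // Trimmed sketch of the delegate-based setup described above; the reported
    // symptom is didFinishSuccessfully == false with no accompanying error.
    final class Recognizer: NSObject, SFSpeechRecognitionTaskDelegate {
        private let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))
        private let audioEngine = AVAudioEngine()
        private let request = SFSpeechAudioBufferRecognitionRequest()
        private var task: SFSpeechRecognitionTask?

        func start(onDevice: Bool) throws {
            request.requiresOnDeviceRecognition = onDevice
            request.shouldReportPartialResults = true

            // Feed microphone buffers into the recognition request.
            let input = audioEngine.inputNode
            let format = input.outputFormat(forBus: 0)
            input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
                self.request.append(buffer)
            }
            audioEngine.prepare()
            try audioEngine.start()

            task = recognizer?.recognitionTask(with: request, delegate: self)
        }

        // Called with false and no error in my case.
        func speechRecognitionTask(_ task: SFSpeechRecognitionTask,
                                   didFinishSuccessfully successfully: Bool) {
            print("didFinishSuccessfully:", successfully, "error:", String(describing: task.error))
        }

        func speechRecognitionTask(_ task: SFSpeechRecognitionTask,
                                   didHypothesizeTranscription transcription: SFTranscription) {
            print("hypothesis:", transcription.formattedString)
        }
    }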
Replies: 9 · Boosts: 1 · Views: 4.1k · Apr ’22
WWDC 25 RemoteImmersiveSpace - Support for Passthrough Mode? RealityKit?
This is related to the WWDC presentation "What's new in Metal rendering for immersive apps", specifically the macOS spatial streaming to visionOS feature (for reference: the page in the docs). The presentation demonstrates it with a fully immersive space rendered in Metal via Compositor Services. I'd like clarity on a few things: Is the remote connection wireless, or must the visionOS device be connected via a wired connection? Is there a limit to the number of remote devices, and if not, could macOS render different content per remote device simultaneously? Can I also use mixed mode with passthrough enabled, instead of only a fully immersive mode? Can I use RealityKit instead of Metal? If so, may I have an example, or would someone point me to one?
Replies: 5 · Boosts: 0 · Views: 550 · Sep ’25
People Occlusion + Scene Understanding on visionOS
In ARKit for iPad, I could (1) build a mesh on top of the real world and (2) request a people-occlusion map for use with my application, so people could move behind or in front of virtual content via compositing. However, in visionOS there is no ARFrame image to pass to the function that would generate the occlusion data. Is it possible to do people occlusion in visionOS? If so, how is it done: through a data provider, or is it automatic when passthrough is enabled? If it's not possible, is this something that might get a solution in future updates as the platform develops? Being able to combine virtual content and the real world, with people able to interact with the content convincingly, is a really important aspect of AR, so it would make sense for this to be possible.
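For concreteness, the iPad-era setup I'm referring to in points (1) and (2) looks roughly like this sketch; I'm not claiming any of it carries over to visionOS.

    import ARKit

    // Sketch of the iPad configuration: (1) scene mesh plus (2) people-occlusion
    // frame semantics on a world-tracking session.
    func makeConfiguration() -> ARWorldTrackingConfiguration {
        let config = ARWorldTrackingConfiguration()
        if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
            config.sceneReconstruction = .mesh
        }
        if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
            config.frameSemantics.insert(.personSegmentationWithDepth)
        }
        return config
    }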
Replies: 4 · Boosts: 1 · Views: 1.9k · Jun ’23
Sample Code for visionOS Metal?
There is a tutorial for visionOS Metal rendering in immersive mode here (https://developer.apple.com/documentation/compositorservices/drawing_fully_immersive_content_using_metal?language=objc), but there is no downloadable sample project. Would Apple please provide sample code? The setup is non-trivial.
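In the meantime, the rough skeleton I've pieced together from the session looks like the sketch below, with the Metal engine itself left as placeholder comments; that engine is exactly the non-trivial part an official sample would cover.

    import SwiftUI
    import CompositorServices

    // Rough skeleton only: an immersive space whose content is a CompositorLayer
    // that hands a LayerRenderer to a custom Metal render loop.
    @main
    struct FullyImmersiveMetalApp: App {
        var body: some Scene {
            ImmersiveSpace(id: "metal") {
                CompositorLayer { layerRenderer in
                    // Hand layerRenderer to a render loop on a dedicated thread:
                    // query frames, predict timing, render with Metal, present.
                    let renderThread = Thread {
                        // e.g. create the engine with layerRenderer, then loop over frames
                    }
                    renderThread.name = "Render Thread"
                    renderThread.start()
                }
            }
        }
    }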
Replies: 4 · Boosts: 1 · Views: 2.8k · Sep ’23
JavaScript Core Optimization on Mobile?
Years ago, JSCore on non-macOS platforms had JIT disabled, leading to much worse performance than would be possible with JIT on. Has anything changed recently to permit greater optimizations for JSCore on mobile platforms (iPadOS, visionOS)? My guess is "no", since the docs still list only macOS under the MAP_JIT flag, but as far as I know Apple could still choose to enable JSCore optimizations behind the scenes even without exposing this option to developers.
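For context, the workloads I care about are ordinary JSContext evaluations like the illustrative sketch below (the loop is just a stand-in for real scripts, and the numbers mean nothing by themselves).

    import JavaScriptCore
    import Foundation

    // Illustrative only: a plain JSContext evaluation of the kind whose
    // throughput depends on whether the higher JIT tiers are available.
    let context: JSContext = JSContext()
    let script = "var total = 0; for (var i = 0; i < 5000000; i++) { total += i % 7; } total;"
    let start = Date()
    let result = context.evaluateScript(script)
    print("result:", result?.toInt32() ?? 0, "elapsed (s):", Date().timeIntervalSince(start))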
Replies: 4 · Boosts: 0 · Views: 1.6k · Feb ’24
OSLog Request: split long output into structured and stdio instead of dropping long printouts entirely?
OSLog’s structured logging is nice, but the output length is limited compared with stdio’s. Currently, it looks like if I expect long, variable-length printouts, I’m forced to revert to stdio. Or is this just an Xcode 15 beta 2 bug (discussed in the release notes), and fixed versions will match what stdio gives me? If not, could there be a way to configure OSLog to fall back to stdio dynamically based on whether the printout is too long? A custom fallback buffer allocator? Alternatively, what if I could still get the structured logging with the metadata, and use stdio for the rest of the message that doesn’t fit? That would be a nice way to guarantee the structured logging info without dropping the entire message.
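As a sketch of the fallback behavior I have in mind (the 1,000-character threshold below is an arbitrary placeholder, not the actual OSLog limit):

    import os

    let logger = Logger(subsystem: "com.example.demo", category: "dump")

    // Keep structured logging for short payloads; fall back to stdio when the
    // payload would otherwise be truncated.
    func logPossiblyLongMessage(_ message: String, limit: Int = 1000) {
        if message.count <= limit {
            logger.info("\(message, privacy: .public)")
        } else {
            // Emit a short structured marker with the metadata, then the full
            // payload via stdio so nothing is dropped.
            logger.info("long payload (\(message.count) characters) follows on stdout")
            print(message)
        }
    }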
Replies: 3 · Boosts: 0 · Views: 1.3k · Jul ’23
When using ARKit, why can’t you get the front-facing and back-facing camera feeds at once?
I’d like to use ARKit world tracking and display both the back camera feed and the front camera feed, using the front feed as a PiP. This would work great for an internet streaming use case. However, it’s impossible: as soon as ARKit is told to use one camera, the feed from the other side freezes or doesn’t work. This page also says you have to pick one camera to show: https://developer.apple.com/documentation/arkit/arkit_in_ios/choosing_which_camera_feed_to_augment?language=objc A question to the developers: why is this limitation in place? Are there any workarounds for the use case of ARKit world tracking plus displaying the back camera feed plus displaying the front camera feed as an overlay? It’s possible to do this with plain camera initialization without ARKit (there’s an official example), but with ARKit it no longer works. It’s strange that I cannot access the front feed via one of the other frameworks; I guess ARKit blocks it.
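For reference, the non-ARKit route I mentioned looks roughly like the sketch below, using AVCaptureMultiCamSession on supported devices (no preview or output wiring shown).

    import AVFoundation

    // Sketch: run the front and back cameras together without ARKit.
    func makeMultiCamSession() -> AVCaptureMultiCamSession? {
        guard AVCaptureMultiCamSession.isMultiCamSupported else { return nil }
        let session = AVCaptureMultiCamSession()
        session.beginConfiguration()
        defer { session.commitConfiguration() }
        for position in [AVCaptureDevice.Position.back, .front] {
            guard
                let device = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: position),
                let input = try? AVCaptureDeviceInput(device: device),
                session.canAddInput(input)
            else { continue }
            session.addInput(input)
        }
        return session
    }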
Replies: 3 · Boosts: 0 · Views: 1.1k · Dec ’24