Is full hand tracking on the Vision Pro available in passthrough AR (fully immersed in a single running application, with passthrough on), or only in fully immersive VR (no passthrough)?
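For context, by "full hand tracking" I mean the ARKit hand-tracking data provider shown in the sessions; roughly this kind of access (pieced together from the session material, not verified on device):

import ARKit

// Sketch of the hand-tracking access I have in mind (visionOS ARKit, as I understand it).
func trackHands() async throws {
    let session = ARKitSession()
    let handTracking = HandTrackingProvider()
    try await session.run([handTracking])
    for await update in handTracking.anchorUpdates {
        let anchor = update.anchor // HandAnchor: chirality, skeleton joints, transform
        _ = anchor
    }
}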
Apparently, shadows aren't generated for procedural geometry in RealityKit:
https://codingxr.com/articles/shadows-lights-in-realitykit/
Has this been fixed? My projects tend to involve a lot of procedurally generated meshes as opposed to imported models. This will be even more important when visionOS is out.
On a similar note, it used to be that ground shadows were not per-entity. I'd like to be able to enable or disable them per entity. Is that possible?
Since currently the only way to use passthrough AR on visionOS will be to use RealityKit, more flexibility will be required; I can't simply apply my own rendering preferences.
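To make the procedural case concrete, here is a minimal sketch of the kind of mesh I mean and the per-entity control I'm hoping for; the shadow component at the end is a made-up placeholder, not a real RealityKit API:

import RealityKit

// Build a simple procedural quad the way my projects do, instead of loading an authored model.
func makeProceduralEntity() throws -> ModelEntity {
    var descriptor = MeshDescriptor(name: "proceduralQuad")
    descriptor.positions = MeshBuffer([
        SIMD3<Float>(-0.5, 0, -0.5), SIMD3<Float>(0.5, 0, -0.5),
        SIMD3<Float>(0.5, 0, 0.5), SIMD3<Float>(-0.5, 0, 0.5)
    ])
    descriptor.primitives = .triangles([0, 1, 2, 0, 2, 3])

    let mesh = try MeshResource.generate(from: [descriptor])
    let entity = ModelEntity(mesh: mesh, materials: [SimpleMaterial()])

    // What I'd like to be able to write, per entity (placeholder name, not real API):
    // entity.components.set(CastsGroundShadowComponent(enabled: true))
    return entity
}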
On visionOS, is a combination of full passthrough, unbounded volumes, and my own custom 3D rendering in Metal possible?
According to the RealityKit and Unity visionOS talk, towards the end, it's shown that an unbounded volume mode allows you to create full passthrough experiences with 3D graphics rendering — essentially full 3D AR in which you can move around the space. It's also shown that you can get occlusion for the graphics. This is all great; however, I don't want to use RealityKit or Unity in my case. I would like to be able to render to an unbounded volume using my own custom Metal renderer, and still get AR passthrough and the ability to walk around and composite virtual graphical content with the background. To reiterate, this is exactly what is shown in the video using Unity, but I'd like to use my own renderer instead of Unity or RealityKit.
This doesn’t require access to the video camera texture, which I know is unavailable.
Having the flexibility to create passthrough-mode content in a custom renderer is super important for making an AR experience in which I have control over rendering.
One use case I have in mind is Wizard's Chess: you see the real world and can walk around a room-sized chessboard with virtual chess pieces mixed into the real world, and you can see the other player through passthrough as well. I'd also like to render graphics on my living room couches using scene reconstruction mesh anchors, for example, to change the atmosphere.
The video already shows several nice use cases like being able to interact with a tabletop fantasy world with characters.
Is what I’m describing possible with Metal? Thanks!
EDIT: Also, if not volumes, then what about full spaces? I don't need access to the camera images that are off-limits. I would just like passthrough + compositing with 3D Metal content + full ARKit tracking and occlusion features.
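To make it concrete, here is roughly the shape of what I'd like to be able to write. Every type and callback name below is a hypothetical placeholder for whatever the real API might be, just to illustrate the kind of hook I'm asking for:

import Metal
import simd

// Hypothetical sketch only: the per-frame entry point is a placeholder, not real visionOS API.
final class WizardsChessRenderer {
    let device = MTLCreateSystemDefaultDevice()!
    lazy var queue = device.makeCommandQueue()!

    // Imagined per-frame hook: the system hands me the current head pose and a color
    // texture that it will composite over passthrough (with occlusion), and I encode
    // my own draw calls into it. I never need the camera image itself.
    func renderFrame(headPose: simd_float4x4, into colorTarget: MTLTexture) {
        guard let commandBuffer = queue.makeCommandBuffer() else { return }
        // ... encode the chessboard, pieces, and couch-anchored effects here ...
        commandBuffer.commit()
    }
}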
In ARKit for iPad, I could 1) build a mesh on top of the real world and 2) request a people occlusion map for use in my application, so people could move behind or in front of virtual content via compositing. However, in visionOS there is no ARFrame image to pass to the function that would generate the occlusion data. Is it possible to do people occlusion in visionOS? If so, how is it done: through a data provider, or automatically when passthrough is enabled? If it's not possible, is this something that might have a solution in future updates as the platform develops? Being able to combine virtual content with the real world, with people able to interact with that content convincingly, is a really important aspect of AR, so it would make sense for this to be possible.
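For reference, the iPadOS flow I'm describing is roughly this (simplified, from memory):

import ARKit
import Metal

// iPadOS-style setup: request person segmentation with depth alongside scene reconstruction,
// then generate a matte texture from each ARFrame for compositing (requires a LiDAR device).
let configuration = ARWorldTrackingConfiguration()
if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
    configuration.sceneReconstruction = .mesh
}
configuration.frameSemantics.insert(.personSegmentationWithDepth)

let device = MTLCreateSystemDefaultDevice()!
let matteGenerator = ARMatteGenerator(device: device, matteResolution: .full)

func occlusionMatte(for frame: ARFrame, commandBuffer: MTLCommandBuffer) -> MTLTexture {
    // On visionOS there is no ARFrame (and no camera image) to feed into this step,
    // which is where my question comes from.
    return matteGenerator.generateMatte(from: frame, commandBuffer: commandBuffer)
}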
I wanted to try structured logging with os_log in C++, but I found that it fails to print anything when given a format string and a variable, e.g.:
#include <os/log.h>
#include <string>

// Logs the same dynamic string at three severity levels.
void example(const std::string& str)
{
    os_log_info(OS_LOG_DEFAULT, "%s", str.c_str());
    os_log_debug(OS_LOG_DEFAULT, "%s", str.c_str());
    os_log_error(OS_LOG_DEFAULT, "%s", str.c_str());
}
Each of these prints a blank row in the console, with no text. How is this meant to work with variables? As far as I can tell, it currently only works with string literals and constants.
I'm looking forward to getting this working.
I noticed new C++23 features, such as the multidimensional subscript operator, mentioned in the Xcode beta release notes, but I don't see a way to enable C++23 in the build flags. What is the correct flag, or is C++23 unusable in Apple Clang?
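For reference, my best guess at the setting would be something like the following in an xcconfig, though I haven't been able to confirm that this Xcode accepts it:

// Guess only; not confirmed to work with this Xcode beta.
CLANG_CXX_LANGUAGE_STANDARD = c++2b
// or, failing that, passing the flag directly:
OTHER_CPLUSPLUSFLAGS = $(inherited) -std=c++2b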
Xcode 13.4 only provides the macOS 12.3 SDK, according to the release notes. Can I target macOS 12.4 using the lower point-release SDK? I don't want to update the OS if I can't yet build for it.
Thanks.
I am seeing that, seemingly after the macOS 12.0.1 update, the 2021 16" MacBook Pro is having widespread issues with the MagSafe charger when the machine is shut off: fast charging causes the charger to loop the connection chime over and over without actually charging.
Discussion pages:
https://forums.macrumors.com/threads/2021-macbook-pro-16-magsafe-light-flashing-amber-and-power-chime-repeating-during-charging-when-off.2319925/
https://www.reddit.com/r/macbookpro/comments/qi4i9w/macbook_pro_16_m1_pro_2021_magsafe_3_charge_issue/
https://www.reddit.com/r/macbookpro/comments/qic7t7/magsafe_charging_problem_2021_16_macbook_pro_read/
Most people suspect it's a firmware/OS issue. Is Apple aware of this, and is it being worked on?
Has anyone tried this with the latest 12.1 beta as well?
I updated Xcode to Xcode 13 and iPadOS to 15.0.
Now my previously working application using SFSpeechRecognizer fails to start, regardless of whether I use on-device mode or not.
I use the delegate approach, and it looks like although the plist is set up correctly (authorization succeeds and I get the orange circle indicating the microphone is on), the delegate method speechRecognitionTask(_:didFinishSuccessfully:) is always called with false, and there is no particular error message to go along with it.
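A simplified version of my setup looks roughly like this (trimmed to the relevant parts):

import Speech
import AVFoundation

final class Recognizer: NSObject, SFSpeechRecognitionTaskDelegate {
    private let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))!
    private let audioEngine = AVAudioEngine()
    private let request = SFSpeechAudioBufferRecognitionRequest()
    private var task: SFSpeechRecognitionTask?

    func start(onDevice: Bool) throws {
        request.requiresOnDeviceRecognition = onDevice

        // Feed microphone audio into the recognition request.
        let inputNode = audioEngine.inputNode
        inputNode.installTap(onBus: 0, bufferSize: 1024,
                             format: inputNode.outputFormat(forBus: 0)) { buffer, _ in
            self.request.append(buffer)
        }
        audioEngine.prepare()
        try audioEngine.start()

        // Delegate-based task, as opposed to the result-handler variant.
        task = recognizer.recognitionTask(with: request, delegate: self)
    }

    // This is the callback that now always reports false on iPadOS 15 / Xcode 13,
    // with no accompanying error that I can find.
    func speechRecognitionTask(_ task: SFSpeechRecognitionTask,
                               didFinishSuccessfully successfully: Bool) {
        print("didFinishSuccessfully:", successfully)
    }
}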
I also downloaded the official example from Apple's documentation pages:
SpokenWord SFSpeechRecognition example project page
Unfortunately, it also does not work anymore.
I'm working on a time-sensitive project and don't know where to go from here. How can we troubleshoot this? If it's an issue with Apple's API update or something has changed in the initial setup, I really need to know as soon as possible.
Thanks.
I’m very interested in trying to have an iOS and watchOS device pair communicate and want to know if it’s possible for the iOS device to get the direction to the watchOS device. (I cannot try this because I don’t have an Apple Watch yet.)
I’m looking at the documentation here and am not sure how to interpret the wording: nearby interaction docs
Nearby Interaction on iOS provides a peer device’s distance and direction, whereas Nearby Interaction on watchOS provides only a peer device's distance.
I’m not sure what is considered the peer.
Let’s assume I’m communicating over a custom server and not using an iOS companion app. Is the above saying that:
A: iOS will send watchOS the distance from the iOS device to the watchOS device, and watchOS will send out its distance and direction to the iOS device? (i.e. Nearby Interaction on iOS receives the distance and direction of any other device, regardless of whether it's a phone or a watch, but watchOS only gets distance.)
B: The watch receives distance and direction to the phone, and the phone receives only the distance to the watch.
C: The iOS device only gets the distance to the watchOS device, and the watchOS device only gets the distance to the iOS device, period.
May I have clarification?
A secondary question is how often and how accurate the distance and directions are calculated and sent, but first things first.
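To make the options concrete: on the iOS side, once a session is running with the watch's discovery token (exchanged over my own server), I'd expect to read something like the following, and my question is essentially whether direction would always be nil when the peer is a watch:

import NearbyInteraction

// Delegate on the phone side; the session is assumed to have been started with an
// NINearbyPeerConfiguration built from the watch's discovery token.
final class PhoneSideDelegate: NSObject, NISessionDelegate {
    func session(_ session: NISession, didUpdate nearbyObjects: [NINearbyObject]) {
        guard let watch = nearbyObjects.first else { return }
        print("distance (m):", watch.distance ?? .nan)
        if let direction = watch.direction {
            print("direction (unit vector):", direction)
        } else {
            print("direction unavailable (is this always the case for a watch peer?)")
        }
    }
}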
I’m looking forward to a reply. That would help very much and inform my decision to develop for watchOS. I have some neat project ideas that require option A or B to be true.
Thanks for your time!
Topic: App & System Services
SubTopic: General
Tags: WatchKit, watchOS, Nearby Interaction, wwdc21-10165
Is there a relationship between AVAudioTime's hostTime here: docs link
and the system uptime? docs link
I’d like to be able to convert between them, but I’m not sure how they're related, if at all.
Specifically, I'd like the hostTime in terms of systemUptime because several other APIs offer systemUptime timestamps.
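My untested assumption is that both ultimately come from the same monotonic host clock, in which case a conversion along these lines might line up; I haven't been able to confirm this:

import AVFoundation
import Foundation

// Guess: hostTime is in mach_absolute_time ticks, and systemUptime is the same clock
// expressed in seconds, so converting one to seconds should land close to the other
// when sampled at the same moment.
let hostTimeNow = mach_absolute_time()
let hostTimeSeconds = AVAudioTime.seconds(forHostTime: hostTimeNow)
let uptime = ProcessInfo.processInfo.systemUptime

print("host time as seconds:", hostTimeSeconds)
print("systemUptime:", uptime)
print("difference:", hostTimeSeconds - uptime) // should be near zero if the assumption holds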
Thank you.
Is it possible to feed ReplayKit custom live-stream data, e.g. a CVPixelBuffer created from a Metal texture, and stream that to YouTube? My use case is to give the broadcaster hidden UI manipulation controls that the stream audience cannot see. (Think of a DJ: no one gets to see all the DAW controls on the DJ's laptop, and they don't need to, because that's not part of the experience.)
If it's possible, might anyone be able to help figure out the correct way to implement this? From what I can tell, ReplayKit doesn't let you send custom data, in which case, what else can be done?
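For reference, the broadcast extension entry point I'm aware of only hands me buffers the system already captured; I don't see a path for pushing in my own pixel buffers:

import ReplayKit
import CoreMedia

// Broadcast upload extension handler: the system calls this with the buffers it captured
// from the screen and microphones. As far as I can tell, there is no call going the other
// way where I could supply a CVPixelBuffer rendered from my own Metal texture.
final class SampleHandler: RPBroadcastSampleHandler {
    override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer,
                                      with sampleBufferType: RPSampleBufferType) {
        switch sampleBufferType {
        case .video:
            break // system-captured video only; my hidden UI controls would be visible here
        case .audioApp, .audioMic:
            break
        @unknown default:
            break
        }
    }
}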
I forgot to ask this during my lab session, but I noticed iPadOS is not listed under supported OSes under the GroupActivities documentation page.
iPadOS supports FaceTime, but is it that GroupActivities doesn't work on iPadOS? This would be a crying shame, since one of the examples specifically involved drawing collaboratively, and the iPad is the perfect device for that use case.
EDIT: Coordinate media experiences with Group Activities mentions iPadOS support, in which case the first page I linked might just be missing an OS entry.