I notice new C++23 features such as the multidimensional subscript operator mentioned in the Xcode beta release notes, but I don’t see a way to enable C++23 in the build settings. What is the correct flag, or is C++23 unusable in Apple Clang?
I thought that RealityKit’s CustomMaterial didn’t exist in visionOS, but it’s listed here: https://developer.apple.com/documentation/realitykit/custommaterial
Can it in fact be used in mixed/AR passthrough mode? Has something changed?
What is the situation?
Related to “what you can do in visionOS,” what are all of these camera-related functionalities for? (They are not yet described in the documentation.)
https://developer.apple.com/documentation/realitykit/realityrenderer/cameraoutput/colortextures
https://developer.apple.com/documentation/realitykit/realityrenderer/cameraoutput/relativeviewport
What are the intended use cases? Is this the equivalent of render-to-texture? I also see some interop with raw Metal happening here.
In full immersive (VR) mode on visionOS, if I want to use Compositor Services and a custom Metal renderer, can I still get passthrough imagery of the user’s hands so that my hands appear as they do in reality? If so, how?
If not, is this a valid feature request in the short term? It’s purely for aesthetic reasons: I’d like to see my own hands, even in immersive mode.
For the MaterialX shader graph, the given example hard-codes two textures for blending at runtime (https://developer.apple.com/documentation/visionos/designing-realitykit-content-with-reality-composer-pro#Build-materials-in-Shader-Graph).
Can I instead generate textures at runtime and set them as dynamic inputs to the material, or must all textures be known when the material is created? If procedural texture-setting is possible, how is it done, given that the example shows a material with hard-coded textures?
EDIT: It looks like the answer is “yes,” since setParameter accepts a TextureResource: https://developer.apple.com/documentation/realitykit/materialparameters/value/textureresource(_:)?changes=l_7
However, how do you turn an MTLTexture into a TextureResource?
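In case it helps clarify what I’m after, here is a minimal Swift sketch of the flow I have in mind. “/Root/DynamicMaterial”, “BaseTexture”, and the placeholder texture are made-up names from my own scene, and the DrawableQueue part is just my guess at how Metal-generated pixels could become a TextureResource:

```swift
import Metal
import RealityKit
import RealityKitContent // the Reality Composer Pro content package from the app template

// Sketch only: material/parameter names are placeholders from my own project,
// and DrawableQueue is my guess at the MTLTexture -> TextureResource bridge.
func applyDynamicTexture(to entity: ModelEntity) async throws {
    // Load the Shader Graph material authored in Reality Composer Pro.
    var material = try await ShaderGraphMaterial(named: "/Root/DynamicMaterial",
                                                 from: "Scene.usda",
                                                 in: realityKitContentBundle)

    // Any TextureResource can be handed to a texture input at runtime.
    let placeholder = try TextureResource.load(named: "Placeholder")
    try material.setParameter(name: "BaseTexture", value: .textureResource(placeholder))

    // For Metal-generated pixels: back the resource with a DrawableQueue,
    // then render into drawable.texture (an MTLTexture) each frame.
    let descriptor = TextureResource.DrawableQueue.Descriptor(pixelFormat: .bgra8Unorm,
                                                              width: 1024,
                                                              height: 1024,
                                                              usage: [.renderTarget, .shaderRead],
                                                              mipmapsMode: .none)
    let queue = try TextureResource.DrawableQueue(descriptor)
    placeholder.replace(withDrawables: queue)

    let drawable = try queue.nextDrawable()
    // ... encode Metal work targeting drawable.texture here ...
    drawable.present()

    entity.model?.materials = [material]
}
```

If DrawableQueue isn’t the intended bridge on visionOS, a pointer to the right API would be great.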
Topic: Graphics & Games · SubTopic: General · Tags: AR / VR, visionOS, Reality Composer Pro, Shader Graph Editor
I’m still a little unsure about the various spaces and their capabilities. I’d like to make full use of hand tracking, joints and all. In the mode with passthrough and a single application present (not the Shared Space), is that available? (I’m fairly sure the answer is “yes,” but I’d like to confirm.) What is this mode called in the system? A mixed-immersion Full Space?
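Concretely, the kind of access I’m hoping for in that mode is the ARKit hand-tracking provider with full joint data, something like the sketch below (I haven’t confirmed the single-app passthrough mode allows it):

```swift
import ARKit

// Sketch of the hand-joint access I'm after in the passthrough, single-app space
// (run from inside an open ImmersiveSpace; needs the hand-tracking permission).
// Whether this provider is available in that mode is exactly what I'd like to confirm.
func trackHands() async throws {
    let session = ARKitSession()
    let provider = HandTrackingProvider()
    try await session.run([provider])

    for await update in provider.anchorUpdates {
        let anchor = update.anchor
        guard anchor.isTracked, let skeleton = anchor.handSkeleton else { continue }
        // Full joint access, e.g. the index fingertip pose in world space.
        let indexTip = skeleton.joint(.indexFingerTip)
        let worldTransform = anchor.originFromAnchorTransform * indexTip.anchorFromJointTransform
        _ = worldTransform // drive interactions / custom rendering from here
    }
}
```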
Does the Vision Pro allow USB peripherals like cameras, microphones, or video feeds from an iPhone or iPad? Can I use AVFoundation to access external camera feeds or microphones? Note that I am not asking about the internal cameras, which I am aware are off-limits.
One use case is to support multiple viewing angles, comparable to what we do with slide projectors. For example: draw on an iPad lying flat on your desk while wearing the Vision Pro in full passthrough mode, and simultaneously mirror the iPad’s screen onto multiple walls in real time with minimal latency (over a Thunderbolt connection), similar to how I can use QuickTime on macOS to mirror my iPad’s screen.
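For concreteness, this is the sort of discovery call I’d want to make on visionOS; the .external and .microphone device types are just what I’d expect coming from iOS 17 / macOS 14, and whether this returns anything on the Vision Pro is exactly my question:

```swift
import AVFoundation

// The kind of AVFoundation device discovery I'd like to use on Vision Pro.
// Device types are my assumption from iOS 17 / macOS 14; whether visionOS
// exposes any external devices this way is the open question.
func findExternalDevices() -> [AVCaptureDevice] {
    let discovery = AVCaptureDevice.DiscoverySession(
        deviceTypes: [.external, .microphone],
        mediaType: nil,              // both audio and video
        position: .unspecified
    )
    for device in discovery.devices {
        print(device.localizedName, device.deviceType.rawValue)
    }
    return discovery.devices
}
```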
Does Video Toolbox’s compression session yield data I can decompress on a different device that doesn’t have Apple’s decoder, i.e. can I send the data over the network to devices that aren’t necessarily Apple’s?
Or is the format proprietary rather than, for example, regular H.264?
If I can decompress without Video Toolbox, could I get a reference to some examples of how to do this using cross-platform APIs? Maybe FFmpeg has something?
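To make the question concrete, here is roughly the encoder setup I have in mind; my (unconfirmed) assumption is that asking for kCMVideoCodecType_H264 produces standard H.264, AVCC-packaged in the CMSampleBuffer, which FFmpeg could decode after repackaging to Annex B:

```swift
import VideoToolbox
import CoreMedia

// Rough sketch of the compression session I have in mind. My assumption is that
// requesting kCMVideoCodecType_H264 produces standard H.264 (length-prefixed NAL
// units in the sample buffer, SPS/PPS in the format description), so a non-Apple
// receiver could decode it after converting to Annex B start codes.
func makeEncoder(width: Int32, height: Int32) -> VTCompressionSession? {
    var session: VTCompressionSession?
    let status = VTCompressionSessionCreate(
        allocator: kCFAllocatorDefault,
        width: width,
        height: height,
        codecType: kCMVideoCodecType_H264,
        encoderSpecification: nil,
        imageBufferAttributes: nil,
        compressedDataAllocator: nil,
        outputCallback: nil,          // using the per-frame outputHandler variant instead
        refcon: nil,
        compressionSessionOut: &session
    )
    guard status == noErr, let session else { return nil }

    // Constrained profiles/levels tend to travel better across third-party decoders.
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_ProfileLevel,
                         value: kVTProfileLevel_H264_Main_AutoLevel)
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_RealTime,
                         value: kCFBooleanTrue)
    return session
}

func encode(_ pixelBuffer: CVPixelBuffer, with session: VTCompressionSession, at time: CMTime) {
    VTCompressionSessionEncodeFrame(session,
                                    imageBuffer: pixelBuffer,
                                    presentationTimeStamp: time,
                                    duration: .invalid,
                                    frameProperties: nil,
                                    infoFlagsOut: nil) { _, _, sampleBuffer in
        guard let sampleBuffer else { return }
        // The data buffer holds length-prefixed NAL units; SPS/PPS live in the format
        // description (CMVideoFormatDescriptionGetH264ParameterSetAtIndex). My plan is
        // to repackage as Annex B before networking the frames to FFmpeg/libavcodec.
        _ = sampleBuffer
    }
}
```

If that assumption about the bitstream is wrong, that is exactly what I’d like to know.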
I wonder if an Apple engineer could confirm: will the Apple Pencil Pro squeeze functionality be detectable through the current API, or will this require a future iPadOS extension to gesture recognizers / UIKit? I’d like to start playing with the functionality if it surfaces behind an existing event, though. (A long press, perhaps?)
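In the meantime, this is what I’m experimenting with: the existing UIPencilInteraction double-tap callback plus a pencil-only long-press recognizer, to see whether a squeeze surfaces through either (pure speculation on my part):

```swift
import UIKit

// What I'm currently poking at: the existing UIPencilInteraction hook (double-tap)
// and a long-press recognizer restricted to pencil touches, to see whether a
// squeeze shows up behind either of them.
final class CanvasViewController: UIViewController, UIPencilInteractionDelegate {
    override func viewDidLoad() {
        super.viewDidLoad()

        let pencilInteraction = UIPencilInteraction()
        pencilInteraction.delegate = self
        view.addInteraction(pencilInteraction)

        let longPress = UILongPressGestureRecognizer(target: self, action: #selector(handleLongPress(_:)))
        longPress.allowedTouchTypes = [NSNumber(value: UITouch.TouchType.pencil.rawValue)]
        view.addGestureRecognizer(longPress)
    }

    func pencilInteractionDidTap(_ interaction: UIPencilInteraction) {
        print("Pencil double-tap") // does a squeeze arrive here, or somewhere new?
    }

    @objc private func handleLongPress(_ recognizer: UILongPressGestureRecognizer) {
        print("Pencil long press:", recognizer.state.rawValue)
    }
}
```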
I've been upgrading Xcode consistently for years and have never seen Metal shaders behave differently from one version to another until now.
On macOS 14.5 with the Xcode 16 beta, several color outputs suddenly render completely black where there should be color. All validation is on and nothing appears to be wrong (and hasn't been since maybe Xcode 11).
I've attached two screenshots. The first shows the normal color scheme; the second is from Xcode 16. The settings are exactly the same.
Normal:
Buggy, with black and transparent colors (so it seems the colors are either overflowing or all zeros):
Before I file a bug report or a code-level support request, may I have some thoughts on how to debug this? The only clue I have is that I'm using bindless rendering to multiply color texture samples with color values from my vertex struct. But it still fails even if I use hard-coded values for the texture samples, which suggests the color values are not being sent to the shader correctly. This is the most stable part of my rendering pipeline, so I'm surprised the issue would be there.
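For context, the host-side bindless setup is roughly the following (heavily simplified; the names and buffer indices are from my own code):

```swift
import Metal

// Simplified version of the host-side bindless binding described above.
// The fragment shader reads the texture out of an argument buffer and
// multiplies its sample by the per-vertex color.
func encodeDraw(encoder: MTLRenderCommandEncoder,
                device: MTLDevice,
                fragmentFunction: MTLFunction,
                colorTexture: MTLTexture,
                vertexBuffer: MTLBuffer,
                vertexCount: Int) {
    // Argument buffer holding the texture handle (argument index 0 in the shader).
    let argEncoder = fragmentFunction.makeArgumentEncoder(bufferIndex: 0)
    let argBuffer = device.makeBuffer(length: argEncoder.encodedLength, options: [])!
    argEncoder.setArgumentBuffer(argBuffer, offset: 0)
    argEncoder.setTexture(colorTexture, index: 0)

    // Indirectly referenced resources must be made resident explicitly.
    encoder.useResource(colorTexture, usage: .read, stages: .fragment)

    encoder.setVertexBuffer(vertexBuffer, offset: 0, index: 1)   // per-vertex positions + colors
    encoder.setFragmentBuffer(argBuffer, offset: 0, index: 0)    // the argument buffer itself
    encoder.drawPrimitives(type: .triangle, vertexStart: 0, vertexCount: vertexCount)
}
```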
Thank you.
I'm developing an application that needs smooth framerates within a WKWebView that interacts with native code. However, requestAnimationFrame is by default still throttled to 60 Hz, even though all of my target devices (the iPad Pro, for example) have supported 120 Hz for a long time. I noticed that the latest Safari in the 18.3 beta supports unlocked framerates, but only behind Safari feature flags, and to my knowledge those flags do not apply to WKWebView. Is there a way to enable unlocked requestAnimationFrame framerates in WKWebView? (Calling JS at a faster rate from the native side will almost certainly not work, since WKWebView will still render at its own rate.)
This is an experimental application for internal use and I'm okay if there are temporary beta solutions available.
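For completeness, the native-side workaround I referred to (and doubt will help) would be driving evaluateJavaScript from a 120 Hz CADisplayLink, roughly like this; window.nativeTick is a placeholder hook the page would define:

```swift
import UIKit
import WebKit

// The native-side workaround I mention above and expect not to help: a 120 Hz
// CADisplayLink that pokes the page via evaluateJavaScript. Even if the callback
// fires at 120 Hz, WKWebView presumably still composites at its own rate.
final class WebTicker {
    private let webView: WKWebView
    private var displayLink: CADisplayLink?

    init(webView: WKWebView) {
        self.webView = webView
    }

    func start() {
        let link = CADisplayLink(target: self, selector: #selector(tick))
        link.preferredFrameRateRange = CAFrameRateRange(minimum: 80, maximum: 120, preferred: 120)
        link.add(to: .main, forMode: .common)
        displayLink = link
    }

    @objc private func tick() {
        // "window.nativeTick" is a placeholder the page would have to define.
        webView.evaluateJavaScript("window.nativeTick && window.nativeTick(performance.now());")
    }

    func stop() {
        displayLink?.invalidate()
        displayLink = nil
    }
}
```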
For many years, I've noticed that although I can handle continuous, simultaneous Apple Pencil and touch input in native code using UIKit, Safari and WKWebView's PointerEvents only seem to allow one input type at a time: putting the Apple Pencil down blocks touch input until it is lifted, and touch input blocks Apple Pencil input. It's as though requiresExclusiveTouchType has been set in the underlying WebKit implementation. There are decades of research (e.g. https://dl.acm.org/doi/10.1145/1866029.1866036) and several existing native applications in production showing that multimodal inputs open up many unique and useful applications and interactions; even a simple "hold object with finger" + "draw with stylus" control scheme is the norm. I recently built a native application using simultaneous multimodal inputs, but it is impossible to port to the web because of this unexpected behavior of PointerEvents (touch events and mouse events exhibit the same behavior). I've researched and attempted every flag, setting, and CSS rule I could find to get this working, but I believe the behind-the-scenes implementation is what's blocking simultaneous touch types.
This is unexpected and undesired behavior because it's inconsistent with the native behavior. If it's unintended, it's a high-priority fix for creating better user experiences on iPad. If it's intended, I don't believe that's reasonable (even if the capability is more complex and used mostly by more advanced applications). Please expose a way to support simultaneous touch types in iPadOS/iOS in both Safari and WKWebView.
At minimum, may we have a discussion on how to support the desired behavior? The simplest solution I can think of is a WebKit-specific boolean in Safari and WKWebView, called something like requiresExclusiveTouchType, that defaults to true to keep the current behavior and can be set to false to get the more flexible behavior I'm expecting.
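For comparison, the native baseline I keep referring to, where one view receives pencil and finger input simultaneously, is straightforward in UIKit:

```swift
import UIKit

// The native baseline I'm comparing against: one view receiving pencil and finger
// touches at the same time, distinguished by UITouch.type. Nothing here requires
// exclusivity between touch types.
final class SketchView: UIView {
    override init(frame: CGRect) {
        super.init(frame: frame)
        isMultipleTouchEnabled = true
    }

    required init?(coder: NSCoder) {
        super.init(coder: coder)
        isMultipleTouchEnabled = true
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        for touch in touches {
            switch touch.type {
            case .pencil:
                drawStroke(at: touch.location(in: self))   // stylus keeps drawing...
            case .direct:
                panCanvas(to: touch.location(in: self))    // ...while a finger holds/moves the canvas
            default:
                break
            }
        }
    }

    private func drawStroke(at point: CGPoint) { /* ... */ }
    private func panCanvas(to point: CGPoint) { /* ... */ }
}
```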
A lab I very much wanted to attend this Friday was assigned at the absolute worst time, conflicting with a critical business meeting. It was a gamble. Is there any possibility of rescheduling within a more specific time range?
Thank you for your time.