Dear App Review Team,
We are raising serious concerns regarding the ongoing review process for the app (App ID: 6737148404), which has once again been escalated to the App Review Board—just days after the previous issue was finally resolved.
We submitted a new build on June 18 containing a standard feature update. The app entered "In Review" promptly, but after three days with no communication, it now appears to have been escalated to the board again. This pattern is deeply troubling.
What’s particularly alarming is that it appears the current reviewer lacks the necessary context or understanding of the app, and instead of engaging with us or taking ownership of the review, they’ve defaulted to escalating it without providing any rationale or feedback. This repeated hand-off to the board without accountability or explanation is effectively grinding our release process to a halt.
Based on past experience, we suspect we are now waiting for another board meeting early next week—which means yet another week lost to silence and uncertainty.
This is not a scalable or sustainable process. This is one of the top performing apps on visionOS, and yet every iteration—regardless of how minor—is delayed by escalations and inaction. These delays are damaging to our business, destabilizing to our user experience, and increasingly eroding our trust in the App Review process.
We urgently request:
Immediate clarity on the current review status
A direct line of communication with someone accountable for the review
An explanation as to why nearly every update requires board involvement
We’ve complied fully with all App Store guidelines. All standard support channels have already been exhausted without resolution.
This cycle must change. We are ready and willing to work collaboratively, but we need responsiveness and consistency from Apple in return.
App ID: 6737148404
visionOS
Discuss developing for spatial computing and Apple Vision Pro.
Posts under visionOS tag
Apple, please provide access to face tracking blend shapes on visionOS, just like you do on iOS.
You have the best eye and face tracking implementation on the market, please let us use it. There is a sizable audience who will buy the headset just for it.
I personally know multiple people who are not buying the headset simply because you locked those features out.
No raw camera access is needed, just abstracted blend-shape values. You will make the headset so much more useful if you do this simple thing.
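For reference, the iOS abstraction being asked for here already looks like this; a minimal sketch, assuming a running face-tracking ARSession on a device with a TrueDepth camera:

import ARKit

// ARFaceAnchor exposes abstracted blend-shape coefficients (0.0 to 1.0)
// without exposing raw camera frames, which is exactly the shape of API
// this post is requesting on visionOS.
class FaceTrackingDelegate: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let faceAnchor as ARFaceAnchor in anchors {
            let jawOpen = faceAnchor.blendShapes[.jawOpen]?.floatValue ?? 0
            let blinkLeft = faceAnchor.blendShapes[.eyeBlinkLeft]?.floatValue ?? 0
            print("jawOpen: \(jawOpen), eyeBlinkLeft: \(blinkLeft)")
        }
    }
}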
Is it possible to position windows on the floor by changing some setting? Currently, they cannot be placed on the floor due to drag limitations.
Is there any way to extend the video recording time in Reality Composer Pro beyond the 3:00 limit, such as by editing preferences in Terminal or through some other workaround?
Is there any way to use the strap and a USB-C cable as a live video stream input source that would mirror to Quicktime or some other video capture tool?
I am assuming there is no online documentation or user manual for the strap, but please correct me if I'm wrong. Thank you.
Looking at CPImmersiveScene, its superclass is, surprisingly, UIWindowScene.
Is this intentional? Should I treat it as a window scene and provide a UIWindow for it?
Or is it only an implementation detail for managing the internal CPSceneLayerEventWindow and UITextEffectsWindow?
Hello all, I saw this interesting visionOS app: https://apps.apple.com/us/app/splitscreen-multi-display/id6478007837
I was wondering if there was any documentation on the Swift APIs that were used to create this app.
A recent WWDC session, "Learn about Apple Immersive Video technologies," showed an Apple Spatial Audio Format Panner plug-in for Pro Tools. The presenter stated that it's available on a per-user license.
Where can users access this?
Hello,
I'm currently developing for visionOS using Xcode's latest beta version.
I have a question regarding Widget Previews for visionOS 26:
When I create a new Widget Extension target directly from a visionOS project, the generated code does not include the #Preview macro.
Following the documentation, I manually added the #Preview macro to a Widget created within a visionOS project, but Xcode then displays an error stating that "This platform does not support previewing widgets."
My interpretation is that Widget Previews are currently not supported for Widgets created specifically for visionOS in this beta version. Is this understanding correct? Or am I missing a specific way to implement previews for visionOS Widgets, or is there a particular project setting I might have overlooked?
Any clarification or guidance on this matter would be greatly appreciated.
Thank you.
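For context, the documented preview pattern that produces the error looks roughly like the following; MyWidget, MyProvider, and MyEntry are stand-ins for the types the Widget Extension template normally generates:

import WidgetKit
import SwiftUI

struct MyEntry: TimelineEntry {
    let date: Date
}

struct MyProvider: TimelineProvider {
    func placeholder(in context: Context) -> MyEntry { MyEntry(date: .now) }
    func getSnapshot(in context: Context, completion: @escaping (MyEntry) -> Void) {
        completion(MyEntry(date: .now))
    }
    func getTimeline(in context: Context, completion: @escaping (Timeline<MyEntry>) -> Void) {
        completion(Timeline(entries: [MyEntry(date: .now)], policy: .never))
    }
}

struct MyWidget: Widget {
    var body: some WidgetConfiguration {
        StaticConfiguration(kind: "MyWidget", provider: MyProvider()) { entry in
            Text(entry.date, style: .time)
        }
    }
}

// This is the macro that, on the current visionOS 26 beta, triggers
// "This platform does not support previewing widgets."
#Preview(as: .systemSmall) {
    MyWidget()
} timeline: {
    MyEntry(date: .now)
}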
How can I create a view like the one in the Photos app, where you can select an item to fill the window and then drag or swipe between items? I have a working prototype, and it works for photos, but once I get to a video, the gesture no longer works.
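One direction worth trying, sketched here under the assumption that a paging ScrollView keeps the swipe gesture at the container level rather than letting VideoPlayer capture it (MediaItem is a hypothetical model type):

import SwiftUI
import AVKit

struct MediaItem: Identifiable {
    let id = UUID()
    let url: URL
    let isVideo: Bool
}

struct MediaPager: View {
    let items: [MediaItem]

    var body: some View {
        ScrollView(.horizontal) {
            LazyHStack(spacing: 0) {
                ForEach(items) { item in
                    Group {
                        if item.isVideo {
                            VideoPlayer(player: AVPlayer(url: item.url))
                        } else {
                            AsyncImage(url: item.url) { image in
                                image.resizable().scaledToFit()
                            } placeholder: {
                                ProgressView()
                            }
                        }
                    }
                    // One item per page, sized to the container.
                    .containerRelativeFrame(.horizontal)
                }
            }
            .scrollTargetLayout()
        }
        .scrollTargetBehavior(.paging)
    }
}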
I'm using a class that builds the per-view projection from tangents to render with RealityKit on visionOS, but on visionOS 26 it causes a crash in the app, and there is no documentation on how to implement cp_drawable_compute_projection. I have tried a few options, but without success. Could you help me implement it?
The relevant part of the code is:
return drawable.views.map { view in
    let userViewpointMatrix = (simdDeviceAnchor * view.transform).inverse
    let projectionMatrix = ProjectiveTransform3D(
        leftTangent: Double(view.tangents[0]),
        rightTangent: Double(view.tangents[1]),
        topTangent: Double(view.tangents[2]),
        bottomTangent: Double(view.tangents[3]),
        nearZ: Double(drawable.depthRange.y),
        farZ: Double(drawable.depthRange.x),
        reverseZ: true
    )
    let screenSize = SIMD2(x: Int(view.textureMap.viewport.width),
                           y: Int(view.textureMap.viewport.height))
    return ModelRendererViewportDescriptor(viewport: view.textureMap.viewport,
                                           projectionMatrix: .init(projectionMatrix),
                                           viewMatrix: userViewpointMatrix * translationMatrix * rotationMatrix
                                               * scalingMatrix * commonUpCalibration,
                                           screenSize: screenSize)
}
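In case it helps while the documentation is missing, here is a hedged sketch of what the visionOS 26 path might look like. It assumes the Compositor Services Swift wrapper drawable.computeProjection(convention:viewIndex:), which supersedes the removed per-view tangents; verify the exact signature against the current SDK before relying on it:

// Hedged sketch: let Compositor Services compute the projection instead of
// building a ProjectiveTransform3D from view.tangents (gone in visionOS 26).
// The convention value and signature are assumptions; check the SDK headers.
return drawable.views.enumerated().map { index, view in
    let userViewpointMatrix = (simdDeviceAnchor * view.transform).inverse
    let projectionMatrix = drawable.computeProjection(convention: .rightUpBack,
                                                      viewIndex: index)
    let screenSize = SIMD2(x: Int(view.textureMap.viewport.width),
                           y: Int(view.textureMap.viewport.height))
    // Adjust the projectionMatrix conversion below if your descriptor expects
    // a different matrix type than simd_float4x4.
    return ModelRendererViewportDescriptor(viewport: view.textureMap.viewport,
                                           projectionMatrix: .init(projectionMatrix),
                                           viewMatrix: userViewpointMatrix * translationMatrix * rotationMatrix
                                               * scalingMatrix * commonUpCalibration,
                                           screenSize: screenSize)
}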
Hello,
For the GuessTogether source code, it seems like the code assumes that you're already in a FaceTime call before pressing the custom SharePlay button (labeled "Play Guess Together"). If I'm not already in a FaceTime call, my Apple Vision Pro and the visionOS simulator both do nothing beyond logging warnings. Is this intended behavior?
If so, how do I make it so that pressing the button can also initiate FaceTime calls? Is this allowed?
Thank you!
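Not an authoritative answer, but the standard GroupActivities activation flow is sketched below; GuessTogetherActivity is a stand-in for the sample's actual activity type. Outside a FaceTime call, activate() is supposed to present system UI for starting a session rather than silently doing nothing, provided SharePlay is enabled in Settings:

import GroupActivities

// Hypothetical stand-in for the sample's GroupActivity type.
struct GuessTogetherActivity: GroupActivity {
    var metadata: GroupActivityMetadata {
        var meta = GroupActivityMetadata()
        meta.title = "Guess Together"
        meta.type = .generic
        return meta
    }
}

func startGame(activity: GuessTogetherActivity) async {
    switch await activity.prepareForActivation() {
    case .activationPreferred:
        // Prompts the user to start a FaceTime/SharePlay session if needed.
        do {
            _ = try await activity.activate()
        } catch {
            print("Activation failed: \(error)")
        }
    case .activationDisabled:
        print("SharePlay is unavailable right now.")
    case .cancelled:
        break
    @unknown default:
        break
    }
}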
I'm running into a persistent visual issue while deploying a floral corridor scene to Apple Vision Pro using Unity 6.0 with URP and Metal. The issue only appears on the Vision Pro device — everything looks fine in the Unity Editor.
Issue Description
When the frame rate drops to around 60–70 FPS, noticeable distortion artifacts appear around the edges of foliage models. It seems like the background meshes (behind the plants) get warped and leak through the edges of the foliage. Although this is most visible around the leaves, even solid objects like standard URP wall or box models show distorted edges when the issue occurs.
All the foliage uses Opaque or Alpha Clipping materials.
Things I've Tried
Changing the foliage materials to Transparent mode: the distortion around the edges disappears, but using Transparent for a large number of foliage assets is not ideal for performance or sorting complexity.
Reducing the number of foliage objects — with only a few plants in the scene and the frame rate staying around 100 FPS, the distortion disappears. However, this isn’t a practical solution for a full environment.
Possible Cause?
I came across this note in the Unity documentation:
"Ensure depth-buffer for each pixel is non-zero - on visionOS, the depth buffer is used for reprojection. To ensure visual effects like skyboxes and shaders are displayed beautifully, ensure that some value is written to the depth for each pixel."
Could this be related to the issue? Is it possible that Alpha Clipping with low pixel coverage leads to some pixels not writing to the depth buffer, which then causes problems during Vision Pro’s reprojection or foveated rendering? However, even when I disable Alpha Clipping entirely, the distortion issue still persists, so it may not be solely caused by clipping itself.
Project Setup
Unity 6.0 (URP)
Depth Texture: Enable
Using Metal as the graphics backend
Running on real Vision Pro hardware (not simulator)
Any advice on how to avoid these distortion issues on Vision Pro would be greatly appreciated.
Thanks!
Does anyone have a template of an Apple Projected Media Profile format description, or a file of a stereo wideFOV video?
Use case: I have two compatible cameras that I stereo-sync, and I want to move the projection information from the compatible video into the spatial video that combines them.
Every version I can come up with crashes the Apple Vision Pro, and when viewing it as spatial in Tahoe I just get a black screen.
I am trying to run widgets on visionOS 26. Specifically, I am trying to pin them to the simulator room's walls, but I am unable to do so.
Is this a limitation with the visionOS simulator right now, or am I missing a trick here?
Can I apply .scrollInputBehavior(.enabled, for: .look) to a WebView (wrapped UIViewRepresentable) in a visionOS 26 app?
I tried it myself but couldn't get it to work, so I would like to know if there is any way to do this.
Best regards.
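For comparison, here is a sketch under the assumption that the modifier behaves as documented for native SwiftUI scroll views, while a wrapped WKWebView scrolls with its own internal UIScrollView that the SwiftUI modifier presumably cannot reach; that mismatch would explain the failed attempt:

import SwiftUI
import WebKit

// The modifier applied where it is documented to work: a SwiftUI ScrollView.
struct LookScrollList: View {
    var body: some View {
        ScrollView {
            VStack {
                ForEach(0..<50) { i in
                    Text("Row \(i)")
                }
            }
        }
        .scrollInputBehavior(.enabled, for: .look) // visionOS 26
    }
}

// The wrapped web view: scrolling happens inside WKWebView's own
// UIScrollView, which the SwiftUI modifier does not appear to affect.
struct WebView: UIViewRepresentable {
    let url: URL
    func makeUIView(context: Context) -> WKWebView {
        let webView = WKWebView()
        webView.load(URLRequest(url: url))
        return webView
    }
    func updateUIView(_ uiView: WKWebView, context: Context) {}
}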
Has anyone experienced convexCast causing a crash, and what might be behind it?
Here's the call stack:
With the new ImagePresentationComponent in visionOS 26, how can text/overlays be shown on top of the image as seen in the Spatial Gallery app?
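One generic technique, offered as an assumption rather than knowledge of how Spatial Gallery is built, is to position a RealityView attachment in front of the entity that carries the ImagePresentationComponent:

import SwiftUI
import RealityKit

struct CaptionedImageView: View {
    var body: some View {
        RealityView { content, attachments in
            let imageEntity = Entity()
            // ... configure imageEntity with ImagePresentationComponent here ...
            content.add(imageEntity)

            if let caption = attachments.entity(for: "caption") {
                // Offset the caption below and slightly in front of the image.
                caption.position = [0, -0.25, 0.05]
                imageEntity.addChild(caption)
            }
        } attachments: {
            Attachment(id: "caption") {
                Text("Golden Gate Bridge, 2024")
                    .padding()
                    .glassBackgroundEffect()
            }
        }
    }
}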
Hi ~
May I know how long it took to receive your visionOS developer kit?
Thanks
Hi Nathaniel,
I spoke with you yesterday in the WWDC lab. Thanks for chatting with me! Is it possible to get a link to a doc that lists some key metrics I'd find in a RealityKit trace, so I know whether a metric is exceeding limits and probably causing a problem? Right now, I just see numbers and have no idea if a metric is high or low :). This is specifically for a visionOS app.
Thanks,
Bob
I want a model to show an effect similar to HoverEffect, but not triggered by looking at it. Instead, clicking a button elsewhere should make the corresponding model appear highlighted. How can this be achieved?
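HoverEffectComponent only responds to gaze, so one workaround is to approximate the highlight by toggling an emissive tint on the model's materials from the button action. A minimal sketch, assuming the entity is a ModelEntity whose materials can be swapped for PhysicallyBasedMaterial:

import RealityKit
import UIKit

// Hedged workaround: fake the hover glow by brightening the material's
// emissive channel when the button is pressed, and restoring it after.
func setHighlight(_ entity: ModelEntity, on: Bool) {
    guard var model = entity.model else { return }
    model.materials = model.materials.map { material in
        // Reuse the existing PBR material where possible; otherwise start fresh.
        var pbr = (material as? PhysicallyBasedMaterial) ?? PhysicallyBasedMaterial()
        pbr.emissiveColor = .init(color: on ? .white : .black)
        pbr.emissiveIntensity = on ? 0.5 : 0.0
        return pbr
    }
    entity.model = model
}

A SwiftUI Button action elsewhere in the scene can then call setHighlight(targetEntity, on: true) to light the model up and setHighlight(targetEntity, on: false) to clear it.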