Hello,
Let me ask you a question about Apple Immersive Video.
https://www.apple.com/newsroom/2024/07/new-apple-immersive-video-series-and-films-premiere-on-vision-pro/
I am currently considering implementing a feature to play Apple Immersive Video as a background scene in the app I developed, using 3DCG-created content converted into Apple Immersive Video format.
First, I would like to know if it is possible to integrate Apple Immersive Video into an app.
Could you provide information about the required software and the integration process for incorporating Apple Immersive Video into an app?
It would be great if you could also share any helpful website resources.
I am considering creating Apple Immersive Video content and would like to know about the necessary equipment and software for producing both live-action footage and 3DCG animation videos.
As I mentioned earlier, I’m planning to play Apple Immersive Video as a background in the app. In doing so, I would also like to place some 3D models as RealityKit entities and spatial audio elements.
I’m also planning to develop the visionOS app as a mixed Full Space experience. Is an immersive viewing experience with Apple Immersive Video possible in a mixed Full Space? In other words, does Apple Immersive Video support the mixed immersion style?
I’ve asked several questions, and that’s all for now. Thank you in advance!
"Volumes allow an app to display 3D content in defined bounds, sharing the space with other apps."
What does it mean for Volumes to share the space with other apps? What are the benefits of being able to do this?
Do you mean Shared Space?
I don't understand Shared Space very well to begin with.
"they can be viewed from different angles."
Does this mean that, because the content is 3D and has depth, I can see that depth when I change the viewing angle?
That seems obvious to me, since it is 3D content.
Is this related to Volumes?
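For reference, here is a minimal sketch of how I currently understand a volume is declared: a volumetric WindowGroup whose bounded 3D content appears in the Shared Space next to other apps' windows. The app name, the "Globe" model, and the 0.5 m size below are my own placeholders.

import SwiftUI
import RealityKit

@main
struct VolumeDemoApp: App {
    var body: some Scene {
        // A volumetric window: 3D content rendered inside a bounded box that the user
        // can walk around and view from different angles while other apps remain visible.
        WindowGroup {
            Model3D(named: "Globe")   // placeholder USDZ in the app bundle
        }
        .windowStyle(.volumetric)
        .defaultSize(width: 0.5, height: 0.5, depth: 0.5, in: .meters)
    }
}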
Hi,
I have a question about Immersion Style. It is about progressive.
I understand that by specifying the progressive immersion style, it is possible to mix the mixed and full styles, but when is this used? For example, is it for cases like the WWDC23 video, where a person watching a movie on a screen sees the room gradually replaced by a space environment, or where the room gets darker and darker as the Digital Crown is turned until it is completely dark?
Please let me know if you have a video, sample code, or explanation that shows an example of progression.
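For reference, here is a minimal sketch of how I understand the progressive style is specified; the ImmersiveSpace id "Immersive" is a placeholder.

import SwiftUI
import RealityKit

@main
struct ProgressiveDemoApp: App {
    @State private var style: ImmersionStyle = .progressive

    var body: some Scene {
        ImmersiveSpace(id: "Immersive") {
            RealityView { content in
                // Placeholder: add entities here.
            }
        }
        // With .progressive, turning the Digital Crown gradually expands the portal
        // from the passthrough (mixed) surroundings toward full immersion.
        .immersionStyle(selection: $style, in: .progressive)
    }
}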
By the way, is it possible for the application to receive an event when the Digital Crown is operated?
Thanks.
Sadao Tokuyama
https://1planet.co.jp/
https://twitter.com/tokufxug
Hi
I am observing Scene Phases, but no scene-phase event is issued when an Alert is presented. Is this a known bug?
https://developer.apple.com/videos/play/wwdc2023/10111/?time=784
In the following video, a center value is obtained from the proxy, but in my project this produces a compile error because center cannot be found.
https://developer.apple.com/videos/play/wwdc2023/10111/?time=861
GeometryReader3D { proxy in
    ZStack {
        Earth(
            earthConfiguration: model.solarEarth,
            satelliteConfiguration: [model.solarSatellite],
            moonConfiguration: model.solarMoon,
            showSun: true,
            sunAngle: model.solarSunAngle,
            animateUpdates: animateUpdates
        )
        .onTapGesture {
            if let translation = proxy.transform(in: .immersiveSpace)?.translation {
                model.solarEarth.position = Point3D(translation)
            }
        }
    }
}
Also, model.solarEarth.position is a Point3D, so solarEarth does not appear to be a plain Entity. I'm quite confused because the code shown in the session is fragmented and I'm not even sure whether it works as written, or whether this is a bug. Investigating and verifying this is taking me several days to a week.
Hi,
I am currently developing a Full Space app. I have a question about how to display an Entity or ModelEntity in front of the user. I want to move the Entity or ModelEntity in front of the user not only at the initial placement, but also when the user takes an action such as tapping. (Animation is not required.) When a reset button is tapped, I want to run the same placement logic that puts the content in front of the user.
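For reference, here is a minimal sketch of the approach I am considering, assuming ARKit's WorldTrackingProvider is used to read the device pose; the 1.5 m offset and the function names are my own placeholders.

import ARKit
import RealityKit
import QuartzCore

let session = ARKitSession()
let worldTracking = WorldTrackingProvider()

func startTracking() async throws {
    try await session.run([worldTracking])
}

// Move the entity roughly 1.5 m in front of the device and turn it toward the user.
func placeInFrontOfUser(_ entity: Entity) {
    guard let device = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else { return }
    let deviceTransform = Transform(matrix: device.originFromAnchorTransform)
    let forward = -deviceTransform.matrix.columns.2   // -Z is the device's forward axis
    let position = deviceTransform.translation
        + SIMD3<Float>(forward.x, forward.y, forward.z) * 1.5
    entity.setPosition(position, relativeTo: nil)
    entity.look(at: deviceTransform.translation, from: position, relativeTo: nil)
}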
Thanks.
Sadao Tokuyama
https://twitter.com/tokufxug
https://www.linkedin.com/in/sadao-tokuyama/
https://1planet.co.jp/tech-blog/category/applevisionpro
I have filed Feedback about this: when viewing Model3D, Entity, ModelEntity, and AR Quick Look content in the latest visionOS simulator, they appear dimmed.
https://feedbackassistant.apple.com/feedback/13235272
image:
https://ibb.co/GVLBKv7
Hello,
I am posting this in the hope that you can give me some advice on what I would like to achieve.
What I would like to achieve is to download a USDZ 3D model from a web server within the visionOS app and display it in a Shared Space volume (volumetric window) sized to fit the downloaded USDZ model.
Currently, after downloading the USDZ file and generating a ModelEntity from it, I call openWindow to present a volumetric WindowGroup, and the view presented by openWindow adds the ModelEntity to the RealityViewContent of its RealityView.
The USDZ downloaded this way appears in the volume on visionOS without any problems. However, the size of the downloaded USDZ models is not uniform, so a model may not fit in the volume.
I am trying to pass an appropriate size value for defaultSize to the WindowGroup via a Binding when calling openWindow, but I am not sure which property of ModelEntity provides an appropriate value for defaultSize.
The position in the attached image is also not what I want; if possible I would like to move the model down.
I would appreciate your advice on sizing and positioning the downloaded USDZ so that it fits in the volume. Incidentally, I tried a plain window style and found that it displayed the USDZ ModelEntity at a much larger scale compared to the volume, so I have decided not to support a plain window.
If there is any information on how to properly set the position and size of USDZ files in visionOS and RealityKit, I would appreciate it if you could provide that as well.
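For reference, here is a minimal sketch of an alternative I am considering: instead of resizing the volume to match the model, scale the downloaded ModelEntity so it fits a fixed-size volume and rest it on the volume's floor. The 0.5 m side length and the assumption that the RealityView origin sits at the volume's center are my own placeholders.

import RealityKit

// Scale the model so its largest dimension fits the volume, then rest it on the bottom.
func fit(_ model: ModelEntity, intoCubeOfSide side: Float = 0.5) {
    let bounds = model.visualBounds(relativeTo: nil)
    let maxExtent = max(bounds.extents.x, bounds.extents.y, bounds.extents.z)
    guard maxExtent > 0 else { return }

    let scale = side / maxExtent
    model.scale = SIMD3<Float>(repeating: scale)

    // Center horizontally and place the scaled model's lowest point at the volume's floor.
    let scaledMinY = (bounds.center.y - bounds.extents.y / 2) * scale
    model.position = SIMD3<Float>(-bounds.center.x * scale,
                                  -side / 2 - scaledMinY,
                                  -bounds.center.z * scale)
}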
Best regards.
Sadao Tokuyama
https://twitter.com/tokufxug
https://1planet.co.jp/tech-blog/category/applevisionpro
Does the defaultSize of WindowGroup have a minimum and maximum value? If so, could you please tell me what they are?
If I apply a texture with an alpha channel to a model created in Blender and run it in Reality Composer Pro or on visionOS, the front-to-back rendering of the transparent areas produces unintended results. Details are below.
I exported a USDC file of a Blender-created cylindrical object with a PNG (with alpha) texture applied to the inside, and then imported it into Reality Composer Pro.
When multiple objects that make extensive use of transparent textures are placed in front of and behind each other, the following behaviors were observed in the transparent areas:
・The transparent areas do not become transparent
・The transparent areas become transparent together with the image behind them
・The order of the images becomes incorrect
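One workaround I am considering (my own assumption, not confirmed advice) is to switch the material from alpha blending to alpha clipping via an opacity threshold, so the result no longer depends on draw order; this assumes PhysicallyBasedMaterial's opacityThreshold is appropriate here.

import RealityKit

// Replace alpha blending with alpha clipping: fragments below the threshold are discarded,
// which avoids front/back sorting artifacts at the cost of hard transparency edges.
func applyAlphaClipping(to model: ModelEntity, threshold: Float = 0.5) {
    guard var materials = model.model?.materials else { return }
    for index in materials.indices {
        if var pbr = materials[index] as? PhysicallyBasedMaterial {
            pbr.opacityThreshold = threshold
            materials[index] = pbr
        }
    }
    model.model?.materials = materials
}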
Best regards.
Topic: Spatial Computing
SubTopic: Reality Composer Pro
Tags: USDZ, RealityKit, Reality Composer Pro, visionOS
Hi,
I have a question.
In visionOS, when a user looks at a button and performs a pinch gesture with their index finger and thumb, the button responds. By default, this works with both the left and right hands. However, I want to disable the pinch gesture when performed with the left hand while keeping it functional with the right hand.
I understand that the system settings allow users to configure input for both hands, the left hand only, or the right hand only. However, I would like to control this behavior within the app itself.
Is this possible?
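For reference, here is a minimal sketch of what I have been considering, assuming SpatialEventGesture's chirality can be used to ignore left-hand input; performAction() is a placeholder.

import SwiftUI

struct RightHandOnlyButton: View {
    var body: some View {
        Text("Tap with the right hand")
            .padding()
            .glassBackgroundEffect()
            .gesture(
                SpatialEventGesture()
                    .onEnded { events in
                        // Only react to events reported as coming from the right hand.
                        for event in events where event.chirality == .right {
                            performAction()
                        }
                    }
            )
    }

    private func performAction() {
        print("Right-hand pinch detected")
    }
}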
Best regards.
I have a question about Apple’s preinstalled visionOS app “Encounter Dinosaurs.”
In this app, the dinosaurs are displayed over the real-world background, but the PhysicallyBasedMaterial (PBM) in RealityKit doesn’t appear to respond to the actual brightness of the environment.
Even when I change the lighting in the room, the dinosaurs’ brightness and shading remain almost the same.
If this behavior is intentional — for example, if the app disables real-world lighting influence or uses a fixed lighting setup — could someone explain how and why it’s implemented that way?
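My current guess about how such a fixed look could be achieved (purely an assumption about the technique, not about what Encounter Dinosaurs actually does) is an image-based light attached to the content, so that shading comes from a fixed environment texture rather than the real room; "studio_env" is a placeholder resource name.

import RealityKit

// Attach a fixed image-based light so PBR materials are lit by a bundled environment
// texture instead of the estimated real-world lighting.
func applyFixedLighting(to root: Entity) async throws {
    let environment = try await EnvironmentResource(named: "studio_env")
    var light = ImageBasedLightComponent(source: .single(environment), intensityExponent: 1.0)
    light.inheritsRotation = true
    root.components.set(light)
    root.components.set(ImageBasedLightReceiverComponent(imageBasedLight: root))
}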
I’m currently developing a visionOS app that includes an RCP scene with a large USDZ file (around 2GB).
Each time I make adjustments to the CG model in Blender, I export it as USDZ again, place it in the RCP scene, and then build the app using Xcode.
However, because the USDZ file is quite large, the build process takes a long time, significantly slowing down my development speed.
For example, I’d like to know if there are any effective ways to:
Improve overall build performance
Reduce the time between updating the USDZ file and completing the build
Any advice or best practices for optimizing this workflow would be greatly appreciated.
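For reference, one workaround I am considering (my own assumption, not an established best practice) is to keep the large USDZ out of the RCP scene and load it at runtime, so that editing the model in Blender does not force the Reality Composer Pro package to be reprocessed on every build; "LargeModel.usdz" is a placeholder.

import Foundation
import RealityKit

// Load the large USDZ directly from the app bundle at runtime instead of referencing it
// inside the Reality Composer Pro scene.
func loadLargeModel() async throws -> Entity {
    guard let url = Bundle.main.url(forResource: "LargeModel", withExtension: "usdz") else {
        throw CocoaError(.fileNoSuchFile)
    }
    return try await Entity(contentsOf: url)
}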
Best regards,
Sadao
I have two questions regarding releasing an app that uses an in-app browser (WKWebView) on the App Store worldwide.
Question 1: Encryption usage
Our app uses WKWebView and relies on standard encryption. Should this be declared as using encryption during the App Store submission?
Question 2: If the answer to Question 1 is YES
If it must be declared as using encryption, do we need to prepare and upload additional documentation when submitting the app in France?
Also, would this require us to redo the entire build and upload process, even for an app version that has already been uploaded?
Goal / request:
We want to release an app using WKWebView worldwide, including France. We would like to understand all the necessary steps and requirements for completing the App Store release without unexpected rework.
Best regards,
P.S.: A similar question was posted a few years ago, but it seems there was no response.
https://developer.apple.com/forums/thread/725047
Sadao
Hello,
I am currently considering developing a Full Space app that enables a shared visionOS experience with nearby users.
Intended Features
A Mixed Full Space app in which dozens of 3D models are placed in the space.
These 3D models may play embedded animations when tapped, be programmatically moved or rotated, or be controlled via Reality Composer Pro timelines.
The app also includes audio, spatial audio, videos with audio, and videos without audio, which are rendered as VideoTextures on planes and played back in the space.
Some media elements play automatically, while others are triggered by user interaction.
However, it is unclear whether AVPlaybackCoordinator supports shared playback across multiple types of media, such as:
audio only
spatial audio
video without audio
video with audio
I am also unsure whether there are alternative or recommended approaches for synchronizing playback in this scenario.
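For reference, here is a minimal sketch of the wiring I have in mind, assuming a custom GroupActivity (SharedSceneActivity is a placeholder) and that every synchronized item is played through an AVPlayer; media that is not AVPlayer-based would presumably need custom session messages instead.

import AVFoundation
import GroupActivities

struct SharedSceneActivity: GroupActivity {
    var metadata: GroupActivityMetadata {
        var meta = GroupActivityMetadata()
        meta.title = "Shared Scene"
        meta.type = .generic
        return meta
    }
}

// Attach each player's playback coordinator to the group session so playback stays in sync.
func join(session: GroupSession<SharedSceneActivity>, players: [AVPlayer]) {
    for player in players {
        player.playbackCoordinator.coordinateWithSession(session)
    }
    session.join()
}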
Questions
Is it technically possible to implement the experience described above using visionOS?
Are there any important implementation considerations or limitations that should be taken into account?
For example, when two participants experience the app simultaneously, how is the content positioned for each participant?
Is the spatial placement of content shared across participants, or is it positioned relative to each participant’s viewpoint?
For nearby participants, is it necessary to register a spatial Persona? My understanding is that spatial Personas are not visible for nearby users during the experience; is this correct?
When experiencing SharePlay with nearby users, is it possible to share the experience without registering the other participant’s contact information?
I have watched the following session, but I was unable to fully understand the feasibility of the above use case or the concrete implementation details:
https://developer.apple.com/videos/play/wwdc2025/318/
Thank you.
Hi,
I have one question.
When creating a web page, is there a way to determine that it is being accessed from Safari on visionOS? I would also like to know the user agent string of Safari on visionOS. If there is more than one way to determine this, such as with JavaScript or on the web server, please share all of them. Use cases include changing the page layout for Safari on visionOS, changing the processing when dynamically generating HTML pages on a web server, and deciding whether to offer Quick Look.
Best regards.
Sadao Tokuyama
https://1planet.co.jp/
https://twitter.com/tokufxug