Hi,
I am currently developing a Full Space app, and I have a question about displaying an Entity or ModelEntity in front of the user. I want to move the Entity or ModelEntity to a position in front of the user, not only at the initial display, but also whenever the user takes an action such as tapping. (Animation is not required.) For example, I want to rerun the initial placement, putting the content back in front of the user, when a reset button is tapped.
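One approach I am considering is querying the device pose with ARKit and placing the entity along the device's forward direction. This is only a sketch based on my reading of the APIs, assuming an ARKitSession with a WorldTrackingProvider is already running in the Full Space:

```swift
import ARKit
import QuartzCore
import RealityKit

// Sketch: reposition `entity` about one metre in front of the user,
// assuming a WorldTrackingProvider is already running in the space.
func moveInFrontOfUser(_ entity: Entity,
                       worldTracking: WorldTrackingProvider,
                       distance: Float = 1.0) {
    guard let device = worldTracking.queryDeviceAnchor(
        atTimestamp: CACurrentMediaTime()) else { return }

    let deviceTransform = Transform(matrix: device.originFromAnchorTransform)
    // The device looks down its local -Z axis.
    let forward = deviceTransform.rotation.act(SIMD3<Float>(0, 0, -distance))
    entity.setPosition(deviceTransform.translation + forward, relativeTo: nil)
}
```

The idea would be to call this once when the scene appears and again from the reset button's action.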
Thanks.
Sadao Tokuyama
https://twitter.com/tokufxug
https://www.linkedin.com/in/sadao-tokuyama/
https://1planet.co.jp/tech-blog/category/applevisionpro
I have filed Feedback about this: when viewing Model3D, Entity, ModelEntity, and AR Quick Look content in the latest visionOS simulator, they appear dimmed.
https://feedbackassistant.apple.com/feedback/13235272
image:
https://ibb.co/GVLBKv7
Hello,
I am posting in the hope that you can give me some advice on what I would like to achieve.
What I would like to achieve is to download a USDZ 3D model from a web server within a visionOS app and display it in a Shared Space volume (volumetric window) whose size is set to fit the downloaded model.
Currently, after downloading the USDZ file and creating a ModelEntity from it, I call openWindow to open a volumetric WindowGroup; in the view presented by openWindow, the downloaded ModelEntity is added to the RealityViewContent of a RealityView.
A USDZ downloaded this way appears in the volume on visionOS without any problems. However, the size of the downloaded USDZ models is not uniform, so a model may not fit in the volume.
I am trying to open the WindowGroup with an appropriate size passed via a Binding to defaultSize, but I am not sure which property of ModelEntity provides the appropriate value for defaultSize.
In the attached image the position is not correct; I would like to move the model down if possible.
I would appreciate your advice on sizing and positioning the downloaded USDZ so that it fits in the volume. Incidentally, I tried a plane-style window and found that the USDZ ModelEntity displayed at a much larger scale than in the volume, so I have decided not to support a plane-style window.
If there is any information on how to properly set the position and size of USDZ files with visionOS and RealityKit, I would appreciate it if you could provide that as well.
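For reference, this is the kind of sizing logic I have been experimenting with. It is only a sketch: instead of varying defaultSize per model, it scales the model to fit a fixed 1 m volume (the WindowGroup declaration in the comment is my assumption):

```swift
import RealityKit

// Sketch: scale a downloaded ModelEntity to fit inside a fixed-size
// volume, assumed to be declared elsewhere as:
//   WindowGroup(id: "usdz-volume") { ... }
//       .windowStyle(.volumetric)
//       .defaultSize(width: 1, height: 1, depth: 1, in: .meters)
func fitModelToVolume(_ model: ModelEntity, margin: Float = 0.9) {
    let bounds = model.visualBounds(relativeTo: nil)   // size in metres
    let maxExtent = max(bounds.extents.x, bounds.extents.y, bounds.extents.z)
    guard maxExtent > 0 else { return }

    let factor = margin / maxExtent   // fit the longest side, with margin
    model.scale *= SIMD3<Float>(repeating: factor)
    // Re-centre the scaled model on the volume's origin so it does not
    // float off-centre.
    model.position = -bounds.center * factor
}
```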
Best regards.
Sadao Tokuyama
https://twitter.com/tokufxug
https://1planet.co.jp/tech-blog/category/applevisionpro
Does the defaultSize of WindowGroup have a minimum and maximum value? If so, could you please tell me what they are?
If I apply an image texture with alpha to a model created in Blender and view it in Reality Composer Pro or on visionOS, the front-to-back rendering of the transparent areas is incorrect. Details are below.
I exported a USDC file of a Blender-created cylindrical object with a PNG (with alpha) texture applied to the inside, and then imported it into Reality Composer Pro.
When multiple objects that make extensive use of transparent textures are placed in front of and behind each other, the following behaviors were observed in the transparent areas:
・The transparent areas do not become transparent
・The transparent areas become transparent together with the image behind them
・The order of the images becomes incorrect
Best regards.
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
USDZ
RealityKit
Reality Composer Pro
visionOS
Hi,
I have a question.
In visionOS, when a user looks at a button and performs a pinch gesture with their index finger and thumb, the button responds. By default, this works with both the left and right hands. However, I want to disable the pinch gesture when performed with the left hand while keeping it functional with the right hand.
I understand that the system settings allow users to configure input for both hands, the left hand only, or the right hand only. However, I would like to control this behavior within the app itself.
Is this possible?
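The closest thing I have found is filtering events by hand myself, rather than disabling the system gesture. This is a sketch, assuming the chirality property of SpatialEventCollection.Event behaves as I expect:

```swift
import SwiftUI

// Sketch: respond only to right-hand pinches by inspecting each
// spatial event's chirality, instead of using a plain TapGesture.
struct RightHandOnlyButton: View {
    var action: () -> Void

    var body: some View {
        Text("Tap me")
            .gesture(
                SpatialEventGesture()
                    .onEnded { events in
                        for event in events where event.chirality == .right {
                            action()   // left-hand pinches are ignored
                        }
                    }
            )
    }
}
```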
Best regards.
I have a question about Apple’s preinstalled visionOS app “Encounter Dinosaurs.”
In this app, the dinosaurs are displayed over the real-world background, but the PhysicallyBasedMaterial (PBM) in RealityKit doesn’t appear to respond to the actual brightness of the environment.
Even when I change the lighting in the room, the dinosaurs’ brightness and shading remain almost the same.
If this behavior is intentional — for example, if the app disables real-world lighting influence or uses a fixed lighting setup — could someone explain how and why it’s implemented that way?
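For example, I could imagine a fixed lighting setup being implemented roughly like this. This is only a sketch of one possible technique; "studioLighting" is a hypothetical EnvironmentResource, and I do not know what the app actually does:

```swift
import RealityKit

// Sketch: give an entity a fixed image-based light so its shading no
// longer follows the real-world environment. "studioLighting" is a
// hypothetical EnvironmentResource bundled with the app.
func applyFixedLighting(to entity: Entity) async throws {
    let environment = try await EnvironmentResource(named: "studioLighting")
    entity.components.set(
        ImageBasedLightComponent(source: .single(environment)))
    entity.components.set(
        ImageBasedLightReceiverComponent(imageBasedLight: entity))
}
```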
I’m currently developing a visionOS app that includes an RCP scene with a large USDZ file (around 2GB).
Each time I make adjustments to the CG model in Blender, I export it as USDZ again, place it in the RCP scene, and then build the app using Xcode.
However, because the USDZ file is quite large, the build process takes a long time, significantly slowing down my development speed.
For example, I’d like to know if there are any effective ways to:
Improve overall build performance
Reduce the time between updating the USDZ file and completing the build
Any advice or best practices for optimizing this workflow would be greatly appreciated.
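One workaround I am considering in the meantime is loading the USDZ from disk at runtime during development, so Xcode does not have to reprocess the 2 GB asset on every build. A sketch, where `modelURL` is a placeholder for a file outside the app bundle (e.g. in Application Support):

```swift
import RealityKit
import SwiftUI

// Sketch: during development, load the large USDZ at runtime instead
// of baking it into the Reality Composer Pro scene, so the asset is
// not reprocessed on every Xcode build.
struct DevModelView: View {
    let modelURL: URL   // placeholder for a local file URL

    var body: some View {
        RealityView { content in
            if let model = try? await Entity(contentsOf: modelURL) {
                content.add(model)
            }
        }
    }
}
```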
Best regards,
Sadao
I have two questions regarding releasing an app that uses an in-app browser (WKWebView) on the App Store worldwide.
Question 1: Encryption usage
Our app uses WKWebView and relies on standard encryption. Should this be declared as using encryption during the App Store submission?
Question 2: If the answer to Question 1 is YES
If it must be declared as using encryption, do we need to prepare and upload additional documentation when submitting the app in France?
Also, would this require us to redo the entire build and upload process, even for an app version that has already been uploaded?
Goal / request:
We want to release an app using WKWebView worldwide, including France. We would like to understand all the necessary steps and requirements for completing the App Store release without unexpected rework.
Best regards,
P.S.: A similar question was posted a few years ago, but it seems there was no response.
https://developer.apple.com/forums/thread/725047
Sadao
Hi
One point I would like to ask.
What specs are needed, and which Mac would you recommend, for Apple Vision Pro development? We use Xcode, RealityKit, ARKit, Reality Composer Pro, the Unity editor with visionOS support, and MaterialX.
If possible, which notebook and desktop models do you recommend?
Best regards
Sadao Tokuyama
https://1planet.co.jp/
Hi,
Will the Apple Vision Pro developer kit be sold only in the US? I live in Japan. Is it possible to get the Apple Vision Pro developer kit in Japan?
Best regards.
Sadao Tokuyama
https://1planet.co.jp/
https://twitter.com/tokufxug
Hi,
If I write the sample code for the gesture from the following document and try to run it, an error occurs.
https://developer.apple.com/documentation/visionos/adding-3d-content-to-your-app
Value of type '(@escaping (TapGesture.Value) -> Void) -> _EndedGesture' has no member 'targetedToAnyEntity'
Is something missing?
Xcode 15.0 beta 2 (15A516b)
visionOS 1.0 beta
import SwiftUI
import RealityKit
import RealityKitContent
struct SphereSecondView: View
{
@State var scale = false
/*var tap: some Gesture {
TapGesture()
.onEnded { _ in
print("Tap")
scale.toggle()
}
}*/
var body: some View {
RealityView {content in
let model = ModelEntity(
mesh: .generateSphere(radius: 0.1),
materials: [SimpleMaterial(color: .yellow, isMetallic: true)])
content.add(model)
} update: { content in
if let model = content.entities.first {
model.transform.scale = scale ? [2.0, 2.0, 2.0] : [1.0, 1.0, 1.0]
}
}
// targetedToAnyEntity() must be applied to the gesture itself,
// before onEnded, not to onEnded's result.
.gesture(TapGesture()
    .targetedToAnyEntity()
    .onEnded { _ in
        scale.toggle()
    })
}
}
#Preview{
SphereSecondView()
}
Best regards.
Sadao Tokuyama
https://1planet.co.jp/
https://twitter.com/tokufxug
Hi,
I have one question.
How do I trigger MagnifyGesture's onChanged event in the visionOS simulator? I have tried various operations, but the onChanged event never fires.
https://developer.apple.com/videos/play/wwdc2023/10111/?time=994
@main
struct WorldApp: App {
@State private var currentStyle: ImmersionStyle = .mixed
var body: some Scene {
ImmersiveSpace(id: "solar") {
SolarSystem()
.simultaneousGesture(MagnifyGesture()
.onChanged { value in
let scale = value.magnification
// Check the larger threshold first; otherwise the .full branch is unreachable.
if scale > 10 {
currentStyle = .full
} else if scale > 5 {
currentStyle = .progressive
} else {
currentStyle = .mixed
}
}
)
}
.immersionStyle(selection:$currentStyle, in: .mixed, .progressive, .full)
}
}
Thanks.
Hello,
Let me ask you a question about Apple Immersive Video.
https://www.apple.com/newsroom/2024/07/new-apple-immersive-video-series-and-films-premiere-on-vision-pro/
I am currently considering implementing a feature in my app that plays Apple Immersive Video as a background scene, using 3DCG content converted into the Apple Immersive Video format.
First, I would like to know if it is possible to integrate Apple Immersive Video into an app.
Could you provide information about the required software and the integration process for incorporating Apple Immersive Video into an app?
It would be great if you could also share any helpful website resources.
I am considering creating Apple Immersive Video content and would like to know about the necessary equipment and software for producing both live-action footage and 3DCG animation videos.
As I mentioned earlier, I’m planning to play Apple Immersive Video as a background in the app. In doing so, I would also like to place some 3D models as RealityKit entities and spatial audio elements.
I’m also planning to develop the visionOS app as a Full Space Mixed experience. Is it possible to have an immersive viewing experience with Apple Immersive Video in Full Space Mixed mode? Does Apple Immersive Video support Full Space Mixed?
I’ve asked several questions, and that’s all for now. Thank you in advance!
The sample code in visionOS's WWDC23 session "Take SwiftUI to the next dimension" makes extensive use of GestureState, but there is no sample code showing its full usage, and the video does not explain it either.
I cannot proceed with my understanding without more information about this.
https://developer.apple.com/videos/play/wwdc2023/10113/?time=969
URL of the capture of the part of the video where GestureState is used. (An error occurred when uploading the image.)
https://imgur.com/a/ZAeWk2k
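For comparison, this is my current understanding of how @GestureState is normally used in SwiftUI. This is a generic example, not the session's code: the state updates while the gesture is active and resets automatically when it ends.

```swift
import SwiftUI

// Generic @GestureState example: `offset` tracks the drag while it is
// active and automatically resets to .zero when the gesture ends.
struct DragCircle: View {
    @GestureState private var offset: CGSize = .zero

    var body: some View {
        Circle()
            .frame(width: 60, height: 60)
            .offset(offset)
            .gesture(
                DragGesture()
                    .updating($offset) { value, state, _ in
                        state = value.translation
                    }
            )
    }
}
```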
Sincerely,
Sadao Tokuyama
https://twitter.com/tokufxug
https://www.linkedin.com/in/sadao-tokuyama/