Hi
There is one point I would like to ask about.
What specs are needed, and which Mac would you recommend, for Apple Vision Pro development? We use Xcode, RealityKit, ARKit, Reality Composer Pro, the Unity Editor version that supports visionOS development, and MaterialX.
If possible, what notebook and desktop models do you recommend?
Best regards
Sadao Tokuyama
https://1planet.co.jp/
Hi,
I am currently watching the "Create immersive Unity apps" video from WWDC23, and a question came up while watching it.
First, consider the following explanation from the session:
Because you're using Unity to create volumetric content that participates in the shared space, a new concept called a volume camera lets you control how your scene is brought into the real world.
A volume camera can create two types of volumes, bounded and unbounded, each with different characteristics.
Your application can switch between the two at any time.
https://developer.apple.com/videos/play/wwdc2023/10088/?time=465
Your unbounded volume displays in a full space on this platform and allows your content to fully blend with passthrough for a more immersive experience.
https://developer.apple.com/videos/play/wwdc2023/10088/?time=568
At first, the session explains that there are two types of volumes for content in the Shared Space: bounded and unbounded.
However, when it reaches the description of the unbounded volume, the term changes to Full Space.
Is Full Space, rather than Shared Space, the correct term for an unbounded volume?
Best regards.
P.S.
I felt uncomfortable with the title "Create immersive Unity apps". The first half of the presentation was about Unity development and the Shared Space's bounded volume, and bounded-volume apps feel far from immersive to me.
Apple's definition of "immersive" in spatial computing seems vague.
Sadao Tokuyama
https://1planet.co.jp/
https://twitter.com/tokufxug
Hi,
I have a question about Apple Vision Pro's support for Unity programmable shaders.
Shaders applied to Material are not supported.
RenderTextures are supported. (Can be used as texture input to Shader Graph for display through RealityKit.)
Regarding the above, does this apply to all of Shared Space, Full Space, and Full Immersive Space?
Or is Full Immersive Space irrelevant here because it uses Metal rather than RealityKit?
Best regards.
Sadao Tokuyama
https://1planet.co.jp/
https://twitter.com/tokufxug
Hi,
I run a fully immersive space in the visionOS simulator with full immersion enabled, but passthrough is still visible; the behavior is the same as mixed.
Is this a visionOS simulator bug that prevents passthrough from being disabled in full immersion?
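For reference, this is roughly how I am setting it up (a minimal sketch; the scene id and content are placeholders, not my actual project code):
import SwiftUI
import RealityKit

@main
struct FullImmersionTestApp: App {
    // Restricting the allowed styles to .full is what I expect to disable passthrough.
    @State private var style: ImmersionStyle = .full

    var body: some Scene {
        ImmersiveSpace(id: "fullSpace") {
            RealityView { content in
                // Placeholder content; the point here is only the immersion style.
            }
        }
        .immersionStyle(selection: $style, in: .full)
    }
}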
Thanks.
Sadao Tokuyama
https://1planet.co.jp/
https://twitter.com/tokufxug
Hi,
I have a question about ImmersionStyle, specifically .progressive.
I understand that specifying progressive makes it possible to mix mixed and full immersion, but when is it actually used? For example, is it the case shown in the WWDC23 video where someone watching a movie on a screen has the room gradually replaced by the immersive environment, or the case where turning the Digital Crown gradually darkens the room until it goes completely dark?
Please let me know if there is a video, sample code, or explanation that shows an example of progressive.
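For context, this is the kind of declaration I am experimenting with (a minimal sketch based on my reading of the immersionStyle documentation; the scene id and content are placeholders):
import SwiftUI
import RealityKit

@main
struct ProgressiveDemoApp: App {
    // Starting from .progressive; my understanding is that the Digital Crown then
    // controls how much of the surroundings the immersive content covers.
    @State private var style: ImmersionStyle = .progressive

    var body: some Scene {
        ImmersiveSpace(id: "theater") {
            RealityView { content in
                // Placeholder content.
            }
        }
        .immersionStyle(selection: $style, in: .progressive, .full)
    }
}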
Incidentally, is it possible for the application to receive an event when the Digital Crown is operated?
Thanks.
Sadao Tokuyama
https://1planet.co.jp/
https://twitter.com/tokufxug
Hi,
I have one question. In the visionOS simulator, the window shows the same shadow whether or not the following shadow modifier is set in the ContentView.
https://developer.apple.com/documentation/SwiftUI/View/shadow(color:radius:x:y:)
Even if I apply the shadow in the ContentView and change the color, radius, x, and y values, there is no change at all.
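For reference, this is roughly what I am applying (a minimal sketch; the view and values are arbitrary placeholders):
import SwiftUI

struct ContentView: View {
    var body: some View {
        Text("Hello, visionOS")
            .padding()
            // Arbitrary values; changing them produces no visible difference for me.
            .shadow(color: .red, radius: 20, x: 10, y: 10)
    }
}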
It seems the shadow modifier is not taking effect. Is this because I am running in the visionOS simulator?
Best regards.
Sadao Tokuyama
https://1planet.co.jp/
https://twitter.com/tokufxug
Hello,
I am writing to you today to inquire about my application for the Apple Vision Pro Developer Kit. I submitted my application three times, but each time the application page displayed the text "%%USE_I18N_STRING_ERROR%%". After I submitted my application, the page I was redirected to said "We'll be back soon". I am not sure if my application was successful. I have not received any email confirmation.
Would you be able to check the status of my application and let me know if there is anything else I need to do?
Thank you for your time and consideration.
Sincerely,
Sadao Tokuyama
https://twitter.com/tokufxug
https://www.linkedin.com/in/sadao-tokuyama/
Hi,
I watched the WWDC23 session video, "Create 3D models for Quick Look spatial experiences."
https://developer.apple.com/videos/play/wwdc2023/10274/
In the video, I understood that the scale of models displayed using visionOS's AR Quick Look is determined by referencing the "metersPerUnit" value in USDZ files. I tried to find tools to set the "metersPerUnit" in 3D software or tools to view the "metersPerUnit" in USDZ files, but I couldn't find any. I believe adjusting the "metersPerUnit" in USDZ is crucial to achieve real-world scale when displaying models through visionOS's AR Quick Look. If anyone knows of apps or tools that can reference USDZ's "metersPerUnit" or 3D editor apps or tools that allow exporting with the "metersPerUnit" value properly reflected, I would greatly appreciate the information.
Best regards.
Sadao Tokuyama
https://twitter.com/tokufxug
https://www.linkedin.com/in/sadao-tokuyama/
The source code shown in the visionOS WWDC23 session "Take SwiftUI to the next dimension" suddenly makes extensive use of GestureState. However, there is no complete sample code showing how GestureState is used, and the video does not explain it either.
I cannot make progress in my understanding without more information about this.
https://developer.apple.com/videos/play/wwdc2023/10113/?time=969
Here is a capture of the part of the video where GestureState is used (an error occurred when I tried to upload the image directly):
https://imgur.com/a/ZAeWk2k
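In the meantime, this is my current understanding of @GestureState from the SwiftUI documentation, as a minimal sketch unrelated to the session's sample (the view and property names are my own placeholders):
import SwiftUI

struct DragExampleView: View {
    // @GestureState resets to its initial value automatically when the gesture ends.
    @GestureState private var dragOffset: CGSize = .zero

    var body: some View {
        Circle()
            .frame(width: 100, height: 100)
            .offset(dragOffset)
            .gesture(
                DragGesture()
                    .updating($dragOffset) { value, state, _ in
                        state = value.translation
                    }
            )
    }
}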
Sincerely,
Sadao Tokuyama
https://twitter.com/tokufxug
https://www.linkedin.com/in/sadao-tokuyama/
Hi
Regarding scene phases: no event is issued when an Alert is presented. Is this a known bug?
https://developer.apple.com/videos/play/wwdc2023/10111/?time=784
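For reference, this is roughly how I am testing it (a minimal sketch; the view and names are placeholders):
import SwiftUI

struct PhaseTestView: View {
    @Environment(\.scenePhase) private var scenePhase
    @State private var showAlert = false

    var body: some View {
        Button("Show alert") { showAlert = true }
            .alert("Test", isPresented: $showAlert) {
                Button("OK") { }
            }
            .onChange(of: scenePhase) { _, newPhase in
                // I expected this to fire while the alert is presented, but it does not.
                print("Scene phase changed to \(newPhase)")
            }
    }
}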
In the following video, a center value is obtained, but a compile error occurs in my project because center is not found.
https://developer.apple.com/videos/play/wwdc2023/10111/?time=861
GeometryReader3D { proxy in
    ZStack {
        Earth(
            earthConfiguration: model.solarEarth,
            satelliteConfiguration: [model.solarSatellite],
            moonConfiguration: model.solarMoon,
            showSun: true,
            sunAngle: model.solarSunAngle,
            animateUpdates: animateUpdates
        )
        .onTapGesture {
            if let translation = proxy.transform(in: .immersiveSpace)?.translation {
                model.solarEarth.position = Point3D(translation)
            }
        }
    }
}
Also, model.solarEarth.position is assigned a Point3D, so solarEarth does not appear to be a plain Entity, does it? I am quite confused because the code is fragmented and I am not sure whether it even works, or whether this is a bug, so investigating and verifying is taking me several days to a week.
Hi,
I have one question.
How do I trigger MagnifyGesture's onChanged event in the visionOS simulator? I have tried various operations, but the onChanged event never fires.
https://developer.apple.com/videos/play/wwdc2023/10111/?time=994
@main
struct WorldApp: App {
    @State private var currentStyle: ImmersionStyle = .mixed

    var body: some Scene {
        ImmersiveSpace(id: "solar") {
            SolarSystem()
                .simultaneousGesture(MagnifyGesture()
                    .onChanged { value in
                        let scale = value.magnification
                        if scale > 5 {
                            currentStyle = .progressive
                        } else if scale > 10 {
                            currentStyle = .full
                        } else {
                            currentStyle = .mixed
                        }
                    }
                )
        }
        .immersionStyle(selection: $currentStyle, in: .mixed, .progressive, .full)
    }
}
Thanks.
Hi,
I implemented it as shown in the link below, but it does not animate.
https://developer.apple.com/videos/play/wwdc2023/10080/?time=1220
The following message was displayed:
No bind target found for played animation.
import SwiftUI
import RealityKit

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            if let entity = try? await ModelEntity(named: "toy_biplane_idle") {
                let bounds = entity.model!.mesh.bounds.extents
                entity.components.set(CollisionComponent(shapes: [.generateBox(size: bounds)]))
                entity.components.set(HoverEffectComponent())
                entity.components.set(InputTargetComponent())

                if let toy = try? await ModelEntity(named: "toy_drummer_idle") {
                    let orbit = OrbitAnimation(
                        name: "orbit",
                        duration: 30,
                        axis: [0, 1, 0],
                        startTransform: toy.transform,
                        bindTarget: .transform,
                        repeatMode: .repeat)
                    if let animation = try? AnimationResource.generate(with: orbit) {
                        toy.playAnimation(animation)
                    }
                    content.add(toy)
                }
                content.add(entity)
            }
        }
    }
}
Hello,
I am posting in the hope of getting some advice on what I am trying to achieve.
What I want to achieve is to download a USDZ 3D model from a web server within my visionOS app and display it in a Shared Space volume (volumetric window) whose size is set to fit the downloaded model.
Currently, after downloading the USDZ and creating a ModelEntity from it, I call openWindow to open a volumetric WindowGroup, and in the View opened by openWindow I add the downloaded ModelEntity to the RealityViewContent of a RealityView.
With this approach, the downloaded USDZ appears in the volume on visionOS without any problems. However, the sizes of the downloaded USDZ models are not uniform, so a model may not fit in the volume.
I am trying to open the WindowGroup with an appropriate size passed to defaultSize via a Binding, but I am not sure which property of ModelEntity gives an appropriate value for defaultSize.
As the attached image shows, the position is also not correct; if possible I would like to move the model down.
I would appreciate your advice on sizing and positioning the downloaded USDZ so that it fits in the volume. Incidentally, I also tried a plain window style and found that it displayed the USDZ ModelEntity at a much larger scale compared to the volume, so I have decided not to support the plain window style.
If there is any other information on how to properly set the position and size of USDZ models in visionOS and RealityKit, I would appreciate that as well.
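For what it is worth, this is the direction I am currently experimenting with (a minimal sketch based on my assumptions: that visualBounds(relativeTo:) gives the model's extents and that the volume's coordinate origin is at its center; the function name and the one-meter default are my own placeholders):
import RealityKit

// Scale a downloaded ModelEntity so its largest dimension fits a volume of the
// given side length, then push it down toward the bottom of the volume.
// Assumes the volume's coordinate origin is at its center.
func fitIntoVolume(_ entity: ModelEntity, side: Float = 1.0) {
    let bounds = entity.visualBounds(relativeTo: nil)
    let extents = bounds.extents
    let maxExtent = max(extents.x, max(extents.y, extents.z))
    guard maxExtent > 0 else { return }

    entity.scale = SIMD3<Float>(repeating: side / maxExtent)

    // Recompute bounds after scaling and rest the model's base near the volume's bottom.
    let scaled = entity.visualBounds(relativeTo: nil)
    entity.position.y = -side / 2 - scaled.min.y
}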
Best regards.
Sadao Tokuyama
https://twitter.com/tokufxug
https://1planet.co.jp/tech-blog/category/applevisionpro
Hi,
How can I check the value set for defaultSize in WindowGroup?
Does the defaultSize of WindowGroup have a minimum and maximum value? If so, could you please tell me what they are?
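For reference, this is the kind of declaration I mean (a minimal sketch; ModelVolumeView and the size values are placeholders):
import SwiftUI

@main
struct VolumeApp: App {
    var body: some Scene {
        WindowGroup(id: "modelVolume") {
            ModelVolumeView() // placeholder content view
        }
        .windowStyle(.volumetric)
        // The values I pass here are what I would like to confirm at runtime.
        .defaultSize(width: 0.8, height: 0.8, depth: 0.8, in: .meters)
    }
}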