The new Mac Virtual Display feature in visionOS 2 offers a curved/panoramic window. I was wondering whether this is simply a property that can be applied to a window, or whether it involves an immersive space or SceneKit/RealityKit?
I am getting the error "Initializing hosting entity without a context" in the console when I build and run my game in Xcode 16.0 beta, targeting visionOS 2.0 (22N5252n).
I'm not sure where the error is originating.
Is there a way to get CLHeading (or an equivalent) in visionOS so I can more accurately determine the direction in which a user is facing?
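For context, the closest I have found so far is querying the device pose through ARKit, which gives orientation relative to the app's world origin rather than magnetic or true north. A minimal sketch, assuming a WorldTrackingProvider that is already running in an ImmersiveSpace:

import ARKit
import QuartzCore
import simd

// Sketch: approximate the user's facing direction from the device pose.
// This is relative to the app's coordinate origin, not north, which is
// exactly why a CLHeading equivalent would still be needed.
func facingDirection(using worldTracking: WorldTrackingProvider) -> SIMD3<Float>? {
    guard let deviceAnchor = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
        return nil
    }
    let transform = deviceAnchor.originFromAnchorTransform
    // The device looks down its local -Z axis.
    let forward = -SIMD3<Float>(transform.columns.2.x, transform.columns.2.y, transform.columns.2.z)
    return simd_normalize(forward)
}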
Topic: Spatial Computing, SubTopic: General
Thank you again for pushing the web forward in visionOS 2, super exciting!
The latest WWDC24 video touched on VR experiences for visionOS 2.0 using WebXR; however, there was no mention of passthrough AR experiences.
Samples such as this one are not supported:
https://immersive-web.github.io/webxr-samples/immersive-ar-session.html
In Settings > Safari, there is a feature flag for the AR WebXR module, but enabling it did not seem to change anything.
Is this the expected behavior at this time? Any developer preview(s) we could try?
I downloaded Xcode 16 and updated my macOS to 15, but I keep getting this error when trying to build the game in the simulator or on a device:
[xrsimulator] Exception thrown: The operation couldn’t be completed. (realitytool.RKAssetsCompiler.RKAssetsCompilerError error 3.)
I tested the new visionOS object tracking and it worked really well.
I created a reference object using Create ML, and it detected the object reliably.
My question is: does this also work on iOS, and if not right now, is it planned to work on iOS in the future?
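For comparison, and assuming on my part that the two formats are not interchangeable, iOS already has object detection via ARReferenceObject (.arobject) files and ARWorldTrackingConfiguration, which is a different path from the Create ML .referenceobject used by visionOS object tracking. A sketch of that existing iOS route, with "AR Resources" as a placeholder asset catalog group name:

import ARKit

// iOS-style object detection: load .arobject reference objects from an
// asset catalog group and hand them to a world-tracking configuration.
// Detected objects are then reported as ARObjectAnchor instances.
func makeObjectDetectionConfiguration() -> ARWorldTrackingConfiguration {
    let configuration = ARWorldTrackingConfiguration()
    configuration.detectionObjects = ARReferenceObject.referenceObjects(
        inGroupNamed: "AR Resources", bundle: nil) ?? []
    return configuration
}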
Topic: Spatial Computing, SubTopic: ARKit
The object tracking feature has been announced for visionOS 2.0+. Is there any support for it in Unity PolySpatial, or is it only available in Swift and Xcode?
When I wanted to call the Reality Composer Pro scene containing Object Tracking, I tried the following code:
RealityView { content in
    if let model = try? await Entity(named: "Scene", in: realityKitContentBundle) {
        content.add(model)
    }
}
Obviously, this is wrong; we need to add some configuration that enables object tracking in the RealityView. What do we need to add?
Note: I have seen https://developer.apple.com/videos/play/wwdc2024/10101/, but I don't know much about it.
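To make the question concrete, here is my rough understanding of what needs to run alongside the RealityView, based on the session: an ARKitSession with an ObjectTrackingProvider built from the .referenceobject file, whose anchor updates then drive the content. This is only a sketch; "MyObject" is a placeholder file name:

import ARKit
import RealityKit
import RealityKitContent
import SwiftUI

struct ObjectTrackingView: View {
    @State private var session = ARKitSession()

    var body: some View {
        RealityView { content in
            if let model = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                content.add(model)
            }
        }
        .task {
            // Load the reference object produced by Create ML (placeholder name).
            guard let url = Bundle.main.url(forResource: "MyObject", withExtension: "referenceobject"),
                  let referenceObject = try? await ReferenceObject(from: url) else { return }
            let provider = ObjectTrackingProvider(referenceObjects: [referenceObject])
            do {
                try await session.run([provider])
                for await update in provider.anchorUpdates {
                    // Attach or move entities using update.anchor.originFromAnchorTransform here.
                    print("Object anchor update:", update.event)
                }
            } catch {
                print("Object tracking failed:", error)
            }
        }
    }
}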
Hi, I'm very new to 3D and am currently porting a SwiftUI iOS app to visionOS 2.0.
I saw WWDC24 feature Blender in multiple spatial videos, and have begun integrating Blender models and animations into my visionOS app (I would also like to integrate skeletons and programmatic rigging; more on that later).
I'm wondering if there are “Best Practices” for this workflow - from Blender to USD to RCP 2.0 to visionOS 2 in Xcode. I’ve cobbled together the following that has some obvious holes:
I’ve been able to find some pre-rigged and pre-animated models online that can serve as a great starting point. As a reference, here is a free model from SketchFab - a simple rigged skeleton with 6 built in animations:
https://sketchfab.com/3d-models/skeleton-character-low-poly-8856e0138f424d68a8e0b40e185951f6
When exporting to USD from Blender, I haven’t been able to export more than one animation per USD file. Is there a workflow to export multiple animations in a single USDC file, or is this just not possible?
As a temporary workaround, here is a Python script I've been using to loop through all Blender animations and export a model for each animation:
import bpy
import os

# Set the directory where you want to save the USD files
output_directory = "/path/to/export"

# Ensure the directory exists
if not os.path.exists(output_directory):
    os.makedirs(output_directory)

# Function to export current scene as USD
def export_scene_as_usd(output_path, start_frame, end_frame):
    bpy.context.scene.frame_start = start_frame
    bpy.context.scene.frame_end = end_frame

    # Export the scene as a USD file
    bpy.ops.wm.usd_export(
        filepath=output_path,
        export_animation=True
    )

# Save the current scene name
original_scene = bpy.context.scene.name

# Iterate through each action and export it as a USD file
for action in bpy.data.actions:
    # Create a new scene for each action
    bpy.context.window.scene = bpy.data.scenes[original_scene].copy()
    new_scene = bpy.context.scene

    # Link the action to all relevant objects
    for obj in new_scene.objects:
        if obj.animation_data is not None:
            obj.animation_data.action = action

    # Determine the frame range for the action
    start_frame, end_frame = action.frame_range

    # Export the scene as a USD file
    output_path = os.path.join(output_directory, f"{action.name}.usdc")
    export_scene_as_usd(output_path, int(start_frame), int(end_frame))

    # Delete the temporary scene to free memory
    bpy.data.scenes.remove(new_scene)

print("Export completed.")
I have also been able to successfully export rigging armatures as a single skeleton, with each "bone" importing correctly into Reality Composer Pro 2.0 when exporting/importing manually.
I would like to have all of these animations available in a single scene to be used in a RealityView in visionOS - so I have placed all animation models in a RCP scene and created named Timeline Action animations for each, showing the correct model and hiding the rest when triggering specific animations.
I apply materials/textures to each so they appear the same, using Shader Graph.
Then in SwiftUI I use notifications (as shown here - https://forums.developer.apple.com/forums/thread/756978) to trigger each RCP Timeline Action animation from code.
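For reference, this is roughly the notification-posting helper I use, following that thread; the identifier string must match the notification name configured on the trigger in Reality Composer Pro:

import Foundation
import RealityKit

// Posts the notification that RCP's Behaviors/Timeline system listens for.
func triggerTimeline(named identifier: String, from entity: Entity) {
    guard let scene = entity.scene else { return }
    NotificationCenter.default.post(
        name: Notification.Name("RealityKit.NotificationTrigger"),
        object: nil,
        userInfo: [
            "RealityKit.NotificationTrigger.Scene": scene,
            "RealityKit.NotificationTrigger.Identifier": identifier
        ]
    )
}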
Two questions:
Is there a better way than having multiple models of the same skeleton, each with a different animation, in a scene in order to trigger multiple animations? Or would this require recreating the Blender animations using skeleton rigging and keyframes within RCP Timelines?
If I want to programmatically create custom animations and move parts of the skeleton/armature, do I need to do this by defining custom components in RCP, using IKRig, and defining the movement of each of the "bones" in Xcode?
I’m looking for any tips/tricks/workflow from experienced engineers or 3D artists that can create a more efficient/optimized workflow using Blender, USD, RCP 2 and visionOS 2 with SwiftUI.
Thanks so much, I appreciate any help! I am very excited about all the new tools that keep evolving to make spatial apps really fun to build!
Topic: Spatial Computing, SubTopic: Reality Composer Pro
Tags: RealityKit, Reality Composer Pro, visionOS
I started a visionOS app using Apple's new "App Environment" template, and when I looked at the UV mapping for the half SkyDome, the bottom edge had a UV 'Y' value of 0.318.
Naively, I had assumed the bottom edge of a half dome would have a UV 'Y' value of 0.5 (halfway up the texture map).
Is this the standard UV mapping for half a SkyDome?
It has caused some issues when I've applied some HDRIs.
In the Discover RealityKit APIs for iOS, macOS, and visionOS session, there was a slide at the end highlighting new features not covered in the video. One of them was surface subdivision, but I have not been able to find any documentation or APIs that support this feature. Does anyone have further details on how this works in RealityKit?
Hello,
I have an iOS app that is using SwiftUI but the gesture code is written using UIGestureRecognizer. When I run this app on visionOS using the "Designed for iPad" destination and try to use any of my gestures I see this warning in the console:
Trying to convert coordinates between views that are in different UIWindows, which isn't supported. Use convertPoint:fromCoordinateSpace: instead.
But I don't see any visible problems with the gestures.
I see this warning printed after the gesture takes place but before any of our gesture methods are called. So now I am wondering whether this is something we need to deal with, or internal work that needs to happen in UIKit.
Does anyone have any thoughts on this?
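For what it's worth, here is a sketch of the conversion style the warning seems to suggest, going through the screen's coordinate space instead of converting directly between views in different windows; gestureView and targetView are placeholders for our actual views:

import UIKit

// Convert a gesture location between views that may live in different windows
// by routing through a shared UICoordinateSpace (the screen's).
func convertLocation(_ point: CGPoint, from gestureView: UIView, to targetView: UIView) -> CGPoint? {
    guard let screenSpace = targetView.window?.screen.coordinateSpace else { return nil }
    let pointOnScreen = gestureView.convert(point, to: screenSpace)
    return targetView.convert(pointOnScreen, from: screenSpace)
}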
Hello! I’ve got a USDZ export from Maya pipeline working with animation, and they load up nicely in the Vision Pro.
I’ve been checking out the animated sample files on the Augmented Reality/Quick Look sample page, specifically the first three at the top of the page.
I would like to know how they were created. I’m a 3D modeler and animator, not a programmer, so I'm dipping my toe into RCP and Xcode/SwiftUI, but could use some informative tutorials on proper workflow. For example, in the Lunar Rover sample, lines emanate from the model and then text windows appear. Would I need to create all these extras inside Reality Composer Pro? I’d like to start creating immersive, narrative experiences (both in a volume, and fully immersive), but for prototyping I want to learn the proper way to add this type of functionality. I think I remember seeing something to do with “schemas” involved. I’m assuming there might be some coding to set up in RCP for when items are selected, so that an associated animation is triggered. Can anyone point me towards the relevant documentation to help me get started? Remember, I don’t code. ;)
Here are my recent Vision Pro experimentations.
https://youtube.com/playlist?list=PLCH753rZ9r6eqXxpIemaSlcyYxjFgR210&si=P_7AY2aL97Upm61i
I’m also proficient with Unreal Engine, but getting content packaged and over to AVP is still not ready for prime time, so I’m exploring the native approach.
Thanks for helping point me in the right direction!
Topic: Spatial Computing, SubTopic: Reality Composer Pro
I'm having trouble pairing my Apple Vision Pro with my MacBook Pro M3. The MacBook Pro is on Sonoma 14.6, and I have tested pairing with both a visionOS 1.2 and a 2.0 Vision Pro, but it still doesn't work. I have a Mac mini that pairs and connects fine to the headsets. These are the steps I have tried so far on the Vision Pro and MacBook Pro to pair them, with no success:
Put both on the same Windows Wi-Fi hotspot
Put both on the same iPhone hotspot
Put both on another Wi-Fi hotspot
Tried to clear remote devices; still not recognized
Tried to turn developer mode off and on; still nothing
Tried to reset network settings
Tried to restart the headset
Tried to restart Xcode
Tried to restart the Mac (just after the restart, the headset showed up; I clicked pair and typed in the code, but the headset then stayed "disconnected" and couldn't connect to the Mac)
Tried to restart both the Mac and the headset
Tried to rename the headset
Tried to switch Macs
Tried one headset on at a time
Tried to clean the build folder
Deleted the contents of ~/Library/Developer/Xcode/DerivedData
Tried sudo defaults write /Library/Preferences/com.apple.mDNSResponder.plist NoMulticastAdvertisements -bool true
Tried to deactivate the firewall
I have a visionOS app that plays audio using AVAudioEngine and presents both a window and an immersive space. If I close the window, the audio session gets interrupted and attempting to restart the session and audio engine has no effect. I need to dismiss the app, then reopen it, which reopens the main window, in order for audio to start playing again.
This happens in all visionOS 2 betas. Note that I have background audio enabled for my app.
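To be specific, this is roughly the restart we attempt when the interruption ends, and in the scenario above it has no effect (audioEngine is our AVAudioEngine, and the handler is registered for AVAudioSession.interruptionNotification):

import AVFoundation

// Called from our observer for AVAudioSession.interruptionNotification.
func handleAudioInterruption(_ notification: Notification, audioEngine: AVAudioEngine) {
    guard let info = notification.userInfo,
          let typeValue = info[AVAudioSessionInterruptionTypeKey] as? UInt,
          let type = AVAudioSession.InterruptionType(rawValue: typeValue),
          type == .ended else { return }
    do {
        // Reactivate the session and restart the engine after the interruption.
        try AVAudioSession.sharedInstance().setActive(true)
        try audioEngine.start()
    } catch {
        print("Failed to restart audio after interruption:", error)
    }
}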
Hello All,
I'm desperate to find a solution and I need your help, please.
I've created a simple cube in visionOS. I can grab it with my hand (close my hand on it) and move it pretty much where I want. But I would like to throw it (for example, like a basketball): not push it, but have it in my hand and throw it away from me with a velocity and direction matching my hand movement (with fingers opened to release it).
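Here is the kind of release step I imagine, as a sketch only: it assumes the cube already has a dynamic PhysicsBodyComponent and a CollisionComponent, and that releaseVelocity has been estimated from the last few hand positions before the fingers open.

import RealityKit

// Hand the measured hand velocity over to the physics engine on release.
func throwEntity(_ cube: Entity, with releaseVelocity: SIMD3<Float>) {
    if var body = cube.components[PhysicsBodyComponent.self] {
        // Make sure the cube is simulated again after being carried by hand.
        body.mode = .dynamic
        cube.components.set(body)
    }
    cube.components.set(PhysicsMotionComponent(linearVelocity: releaseVelocity))
}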
Please point me in the right direction to do that.
Cheers and thanks,
Mathis
Topic: Spatial Computing, SubTopic: ARKit
We've recently discovered that our app crashes on startup on the latest visionOS 2.0 beta 5 (22N5297g) build. In fact, the entire field of view would dim down and visionOS would then restart, showing the Apple logo. Interestingly, no app crash is reported by Xcode during debug.
After investigation, we have isolated the issue to a specific USDZ asset in our app. Loading it in a sample, blank project also causes visionOS to reliably crash, or become extremely unresponsive with rendering artifacts everywhere.
This looks like a potentially serious issue. Even if the asset is problematic, loading it should not crash the entire OS. We have filed feedback FB14756285, along with a demo project. Hopefully someone can take a look. Thanks!
Hello experts, and question seekers,
I have been trying to get Gaussian splats working with RealityKit; however, it has not worked out for me so far.
The library I use for Gaussian splatting: https://github.com/scier/MetalSplatter
My idea was to use the renderers provided by RealityKit (aka RealityRenderer) https://developer.apple.com/documentation/realitykit/realityrenderer and the renderer provided by MetalSplatter (aka. SplatRenderer) https://github.com/scier/MetalSplatter/blob/main/MetalSplatter/Sources/SplatRenderer.swift
Then, with a custom render pipeline, I would be able to compose the outputs of the two renderers. This would make it possible, for example, to build immersive scenery from realistic environment scans rendered as Gaussian splats, with RealityKit providing the features needed to build extra scenery around the splats, e.g. dynamic 3D models inside the Gaussian splats.
However, the problem is that I am currently not able to do this with the current implementation of RealityRenderer.
First, RealityRenderer seems to be an API that only renders colour information onto a texture, which at first glance might be useful, but it misses important information such as depth and stencil.
Second, even with that in mind, I am currently not able to execute RealityRenderer.updateAndRender, due to the following error messages:
Could not resolve material name 'engine:BuiltinRenderGraphResources/Common/realityRendererBackground.rematerial' in bundle at '/Users//Library/Developer/CoreSimulator/Devices//data/Containers/Bundle/Application//.app'. Loading via asset path.
exiting spatial tracking service update thread because wait returned 37
I was able to build a custom Metal view with UIViewRepresentable, MTKView, and MTKViewDelegate, enabling me to build a custom rendering pipeline, by utilising some of the Metal developer workflows.
Reference: https://developer.apple.com/documentation/xcode/metal-developer-workflows/
Inside draw(in view: MTKView), in a class conforming to MTKViewDelegate:
guard let currentDrawable = view.currentDrawable else {
    return
}

let realityRenderer = try! RealityRenderer()
try! realityRenderer.updateAndRender(
    deltaTime: 0.0,
    cameraOutput: .init(.singleProjection(colorTexture: currentDrawable.texture)),
    whenScheduled: { realityRenderer in
        print("Rendering scheduled")
    },
    onComplete: { realityRenderer in
        print("Rendering completed")
    }
)
Can you please tell me what I am doing wrong?
Is there any solution that enables me to use RealityKit with, for example, Gaussian splats?
Any help is greatly appreciated.
All the best,
Ethem Kurt
I tried "WWDC24: Build compelling spatial photo and video experiences | Apple" and it can successfully capture spatial video.
But I found that the video from my app differs from the iPhone's built-in camera app in the following ways:
Videos captured with the iPhone's built-in camera app tend to have a more natural or warmer tone, while videos taken with my app appear whiter or cooler in color temperature.
In videos recorded using the iPhone's built-in camera app, the left eye image is typically sharper than the right eye image. However, in my app, this is reversed: the right eye image is clearer than the left eye image.
I've noticed that when I cover the wide-angle lens while shooting, the entire preview screen in my app becomes brighter. However, this doesn't occur when using the iPhone's built-in camera app.
Are there any APIs or parameters to make my app behave more like the iPhone's built-in app? I have tried "whiteBalanceMode" and "exposureMode", but no luck.
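For completeness, this is the kind of configuration I have already tried on the capture device, without closing the gap (videoDevice is our AVCaptureDevice for the spatial format):

import AVFoundation

// Attempt to match the built-in app by letting the device auto-manage
// white balance and exposure.
func applyAutoColorSettings(to videoDevice: AVCaptureDevice) {
    do {
        try videoDevice.lockForConfiguration()
        if videoDevice.isWhiteBalanceModeSupported(.continuousAutoWhiteBalance) {
            videoDevice.whiteBalanceMode = .continuousAutoWhiteBalance
        }
        if videoDevice.isExposureModeSupported(.continuousAutoExposure) {
            videoDevice.exposureMode = .continuousAutoExposure
        }
        videoDevice.unlockForConfiguration()
    } catch {
        print("Could not lock the capture device for configuration:", error)
    }
}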
This is all about using notifications to trigger actions from RCP's new Timeline system. After watching Compose interactive 3D content in Reality Composer Pro, I am actually starting to get confused about why there was a need to use Entity.applyTapForBehaviors in code to trigger content in the Behaviors component, given that in the Behaviors component we have already chosen OnTap to allow a "Tap Notification" to trigger our action (on a selected target object).
Then I guess that by selecting the OnCollision trigger, I should write something like CollisionEvent.entityA.applyCollisionForBehaviors, which doesn't exist. And of course the collision on my object won't trigger this action (because I only set things up in RCP, not in code).
Setting aside that this post has pointed out we could use the Behaviors component's OnNotification trigger for now:
I found that I could still use the OnTap trigger but actually call Entity.applyTapForBehaviors from within my subscribed collision Began event handler. That actually works better than OnCollision.
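Roughly, the workaround looks like this (a sketch: content is the RealityViewContent, cube is the entity whose Behaviors component has the OnTap trigger, and the returned subscription needs to be retained):

import RealityKit
import SwiftUI

// Fire the entity's OnTap behavior (and its Timeline) from a collision event.
func subscribeCollisionWorkaround(content: RealityViewContent, cube: Entity) -> EventSubscription {
    content.subscribe(to: CollisionEvents.Began.self, on: cube) { event in
        // Manually "tap" the entity so the Behaviors component's OnTap trigger runs.
        event.entityA.applyTapForBehaviors()
    }
}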
So what are the design principles here? And how could I trigger a collision notification so that my Behaviors component's OnCollision trigger actually works?