Hi,
is there a way in visionOS to anchor an entity to the POV via RealityKit?
I need an entity which is always fixed to the 'camera'.
I'm aware that this is discouraged from a design perspective as it can be visually distracting. In my case though I want to use it to attach a fixed collider entity, so that the camera can collide with objects in the scene.
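For illustration, this is roughly the setup I have in mind (a minimal sketch, assuming a head/POV anchor target is available and behaves the way I hope; names are placeholders):
let headAnchor = AnchorEntity(.head)

// Collider that should follow the wearer's point of view.
let cameraCollider = Entity()
cameraCollider.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.2)]))
headAnchor.addChild(cameraCollider)

content.add(headAnchor) // content being the RealityViewContent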
Edit:
ARView on iOS has a lot of very useful helper properties and functions like cameraTransform (https://developer.apple.com/documentation/realitykit/arview/cameratransform)
How would I get this information on visionOS? RealityView's content does not seem to offer anything comparable.
An example use case: I would like to add an entity to the scene at my user's eye level, essentially depending on their height.
I found https://developer.apple.com/documentation/realitykit/realityrenderer which has an activeCamera property but so far it's unclear to me in which context RealityRenderer is used and how I could access it.
Appreciate any hints, thanks!
Hello,
I've been tinkering with PortalComponent on visionOS a bit but noticed that the content of the WorldComponent is always clipped to the mesh geometry of whatever entities have the PortalComponent applied. Now I'm wondering if there is any way or trick to allow contents of the portal to peek out – similar to the Encounter Dinosaurs experience on Vision Pro (I assume it also uses PortalComponent?).
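For reference, my basic setup looks roughly like this (simplified sketch, entity names are placeholders):
// World holding the portal's content.
let world = Entity()
world.components.set(WorldComponent())
world.addChild(dinosaurEntity) // placeholder for the actual content

// Plane acting as the portal surface; the portal content is clipped to this mesh.
let portalPlane = ModelEntity(mesh: .generatePlane(width: 1.0, height: 1.0))
portalPlane.components.set(PortalComponent(target: world))

content.add(world)
content.add(portalPlane)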
I saw that PortalComponent has a clippingPlane property (https://developer.apple.com/documentation/realitykit/portalcomponent/clippingplane-swift.property). But so far I haven't been able to achieve a perceptible visual difference with it.
If possible I would like to avoid hacky tricks using duplicate meshes or similar to achieve this.
Thanks for any hints!
Hello,
I'm getting started with Xcode Cloud for my project, but since I upgraded to the macOS Sequoia beta and Xcode 16, it refuses to archive builds for TestFlight.
Somewhere very late in the build process I get the following error:
realitytool requires Metal for this operation and it is not available in this build environment
The log says this happens at:
Compile Skybox urban.skybox
My project uses RealityKit. How can I fix this issue?
Thanks!
Hello,
creating a simple-as-it-gets Slider in SwiftUI and then running that app on Mac Catalyst with the macOS idiom enabled, the app crashes:
struct ContentView: View {
    @State private var sliderValue: Double = 0.4

    var body: some View {
        VStack {
            Slider(value: $sliderValue)
        }
        .padding()
    }
}
running this will result in an exception:
_setMinimumEnabledValue: is not supported on UISlider when running Catalyst apps in the Mac idiom. See UIBehavioralStyle for possible alternatives.
This is obviously not ideal and also apparently not documented.
Is there a workaround for this?
It used to work on macOS Sonoma.
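The only workaround I can think of so far (untested sketch) would be wrapping UISlider myself and forcing the pad behavioral style that the exception message points at:
import SwiftUI
import UIKit

struct PadStyleSlider: UIViewRepresentable {
    @Binding var value: Double

    func makeUIView(context: Context) -> UISlider {
        let slider = UISlider()
        // Keep the UIKit slider behavior instead of bridging to NSSlider.
        slider.preferredBehavioralStyle = .pad
        slider.addTarget(context.coordinator, action: #selector(Coordinator.valueChanged(_:)), for: .valueChanged)
        return slider
    }

    func updateUIView(_ uiView: UISlider, context: Context) {
        uiView.value = Float(value)
    }

    func makeCoordinator() -> Coordinator { Coordinator(value: $value) }

    final class Coordinator: NSObject {
        let value: Binding<Double>
        init(value: Binding<Double>) { self.value = value }
        @objc func valueChanged(_ sender: UISlider) { value.wrappedValue = Double(sender.value) }
    }
}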
macOS 26 RC
Xcode 26 RC
FB20191635
Thanks!
Hi,
since iOS 15 I've repeatedly noticed the console warning »ARSessionDelegate is retaining X ARFrames. This can lead to future camera frames being dropped« even for rather simple projects using RealityKit and ARKit. Could someone from the ARKit team please elaborate on what causes this warning and what can be done to avoid it?
If I remember correctly I didn't even assign an ARSessionDelegate.
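For reference, the pattern I understand usually triggers this warning is holding on to frames in the delegate callback, which I don't think I'm doing anywhere. Roughly:
import ARKit

class SessionHandler: NSObject, ARSessionDelegate {
    // Anti-pattern: strong references to ARFrames (and their pixel buffers)
    // prevent ARKit from recycling its fixed-size frame pool.
    var retainedFrames: [ARFrame] = []

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        retainedFrames.append(frame)
    }
}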
Thank you!
Hello,
I have a macOS Catalyst app that I have now begun updating and building against the iOS 18/macOS Sequoia SDKs. Most things appear to be working just fine as before, apart from my NSToolbar.
At the root of my app I am presenting a UISplitViewController which gets a custom SidebarViewController and a UITabBarController as its viewControllers.
Then at some point in the app's lifecycle the UITabBarController presents another ViewController modally. I then associate the controller's window with a custom NSToolbar like this:
let toolbar = NSToolbar(identifier: "mainToolbar")
toolbar.displayMode = .iconAndLabel
toolbar.delegate = self
toolbar.allowsUserCustomization = false

// titleBar being the presented controller's window scene titlebar:
guard let titleBar = view.window?.windowScene?.titlebar else { return }
titleBar.toolbarStyle = .automatic
titleBar.titleVisibility = .hidden
titleBar.toolbar = toolbar
I also disable automatic NSToolbar hosting via: https://developer.apple.com/documentation/uikit/uinavigationbardelegate/3987959-navigationbarnstoolbarsection (returning .none).
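Concretely, that part looks like this in my UINavigationBarDelegate (sketch from memory, MyViewController stands in for my actual controller):
extension MyViewController: UINavigationBarDelegate {
    func navigationBarNSToolbarSection(_ navigationBar: UINavigationBar) -> UINavigationBar.NSToolbarSection {
        return .none
    }
}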
Now all of this worked fine on macOS Sonoma and previous versions but on Sequoia my custom toolbar refuses to show up.
My suspicion is that it has something to do with the new tab and sidebar behaviour introduced with the new SDKs (https://developer.apple.com/documentation/uikit/uinavigationbardelegate/3987959-navigationbarnstoolbarsection).
For now within my UITabBarController I was able to revert to the old look using:
if #available(iOS 18.0, *) {
mode = .tabSidebar
sidebar.isHidden = true
isTabBarHidden = true
}
This results in a look similar to the previous macOS version, but my NSToolbar unfortunately remains hidden.
Is there an easy fix for this? Since I am a solo developer I would prefer to spend my available resources currently on other features and adopt the new tab/sidebars a couple months down the line.
Appreciate any help and hints, thanks!
There used to be a toolbar here on the right side. ↑
The ShaderGraph Node Blurred Background (RealityKit) – https://developer.apple.com/documentation/shadergraph/realitykit/blurred-background-(realitykit) works fine within the RealityComposer Pro 2 editor but isn't working on iOS 18 or macOS 15. Instead of the blurred content it just renders as opaque in a single color (Screenshot 2).
Interestingly, it also fails to render within Reality Composer Pro when no other entities are in the scene, e.g. when only a background skybox is set.
Expected Behavior: It would be great if this node worked the same way as it does on visionOS since this would allow for really interesting and nice effects for scenes.
Feedback ID: FB15081190
A ShaderGraphMaterial with an Occlusion Surface Output generated with RealityComposer 2 fails to load on iOS 18 and macOS 15 with the following error:
RealityFoundation.ShaderGraphMaterial.LoadError.invalidTypeFound (https://developer.apple.com/documentation/realitykit/shadergraphmaterial/loaderror/invalidtypefound)
This happens with both https://developer.apple.com/documentation/shadergraph/realitykit/occlusion-surface-(realitykit) and https://developer.apple.com/documentation/shadergraph/realitykit/shadow-receiving-occlusion-surface-(realitykit)
RealityView { content in
    do {
        let bgEntity = ModelEntity(mesh: .generateCone(height: 0.5, radius: 0.1), materials: [SimpleMaterial(color: .red, isMetallic: true)])
        bgEntity.position.z = -0.2
        content.add(bgEntity)

        let occlusionMaterial = try await ShaderGraphMaterial(named: "/Root/OcclusionMaterial", from: "OcclusionMaterial")
        let testEntity = ModelEntity(mesh: .generateSphere(radius: 0.4), materials: [occlusionMaterial])
        content.add(testEntity)
        content.cameraTarget = testEntity
    } catch {
        print("Shader Graph Load Error:")
        dump(error)
    }
}
.realityViewCameraControls(.orbit)
.edgesIgnoringSafeArea(.all)
Feedback ID: FB15081296
Devices running iOS 18 using RealityKit do not seem to receive lighting supplied via ARKit Environment Texturing (https://developer.apple.com/documentation/arkit/arworldtrackingconfiguration/2977509-environmenttexturing).
Instead just a default IBL is used by RealityKit.
This happens with RealityView as well as ARView.
It also happens when I explicitly opt-in to environment texturing:
let worldTrackingConfig = ARWorldTrackingConfiguration()
worldTrackingConfig.environmentTexturing = .automatic
arView.session.run(worldTrackingConfig)
Even the Xcode AR Template has this issue.
I'm attaching a screenshot of the sample app running on iOS 18 where it's broken and from iOS 17 where it works as expected.
I hope this can get resolved quickly since I see it as a major regression.
Feedback ID: FB15091335
UPDATE:
It works on my older iPhone XS (iOS 18 22A5282m)
Broken on iPad Pro (11-inch) (3rd generation) (iPadOS 18.0 (22A5350a))
Maybe it's related to LiDAR?
Thank you!
iOS 17 (works):
iOS 18 (broken):
Hi,
We've been leveraging App Clips on iOS for a while now to distribute native-app-quality AR experiences (utilising ARKit and RealityKit) with the accessibility of a website.
This has been a crucial differentiator for us and is a core driver for our business.
Since our authoring tools also allow running the same AR experiences on Vision Pro, it would be amazing if they could be triggered by App Clips there as well. We've gotten this feedback from clients and users multiple times, and since there already seems to be some basic App Clips support integrated into the system (e.g. when registering the custom lens inserts), we would immensely appreciate it if this feature could be opened up for 3rd party developers as well.
Associated feedback ID: FB13348462
Thank you!
I have a working Xcode Cloud setup for my iOS and macOS targets, and I'm trying to add visionOS support. The issue is that Firebase requires using their source distribution for visionOS (instead of their default binary distribution).
Locally, this works by launching Xcode with:
open -a Xcode --env FIREBASE_SOURCE_FIRESTORE project.xcodeproj
For Xcode Cloud, I've added a ci_post_clone.sh script that sets this environment variable for visionOS builds:
#!/bin/bash

if [[ $CI_PRODUCT_PLATFORM == "xrOS" ]]; then
    echo "Running setup for visionOS..."
    export FIREBASE_SOURCE_FIRESTORE=1
fi
However, I'm getting the following error during build:
an out-of-date resolved file was detected at /.../project.xcworkspace/xcshareddata/swiftpm/Package.resolved, which is not allowed when automatic dependency resolution is disabled
Setting FIREBASE_SOURCE_FIRESTORE=1 changes which SPM dependencies are required:
Normal setup uses: abseil-cpp-binary, grpc-binary
Source distribution needs: abseil-cpp-swiftpm, grpc-ios, boringssl-swiftpm
What's the recommended way to handle this in Xcode Cloud when maintaining builds for all platforms? Should I be using separate workflows with different branches for different platforms? Or is there a better approach?
System:
Xcode 16.2
Using SPM for dependency management
Firebase iOS SDK 10.29.0
Building for iOS, macOS, and visionOS
Thanks in advance for any guidance!
Sidebars in Mac Catalyst apps running with the UIDesignRequiresCompatibility flag render their active items with a white background tint – resulting in labels and icons not being visible.
macOS Tahoe 26.1 Beta 3 (25B5062e)
FB20765036
Example (Apple Developer App):
When assigning a ManipulationComponent to an Entity, SceneEvents.WillRemoveEntity will be called for that Entity.
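Minimal repro sketch (inside a RealityView make closure; names are placeholders):
let entity = ModelEntity(mesh: .generateSphere(radius: 0.1))
content.add(entity)

// Fires unexpectedly for `entity` as soon as the component is assigned below.
let subscription = content.subscribe(to: SceneEvents.WillRemoveEntity.self) { event in
    print("WillRemoveEntity:", event.entity)
}

entity.components.set(ManipulationComponent())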
Expected behavior: the Entity is not removed from the Scene (not even temporarily), and no SceneEvents are triggered as a result of assigning a ManipulationComponent.
FB20872220
Hello,
in this project https://developer.apple.com/documentation/arkit/content_anchors/tracking_and_visualizing_faces there is some sample code that describes how to map the camera feed to an object with SceneKit and a shader modifier.
I would like to know if there is an easy way to achieve the same thing with a CustomMaterial and RealityKit 2.
Specifically I'm interested in what would be the best way to pass in the background of the RealityKit environment as a texture to the custom shader.
In SceneKit this was really easy as one could just do the following:
material.diffuse.contents = sceneView.scene.background.contents
As the texture input for custom material requires a TextureResource I would probably need a way to create a CGImage from the background or camera feed on the fly.
What I've tried so far is accessing the captured image from the camera feed and creating a CGImage from the pixel buffer like so:
guard
    let frame = arView.session.currentFrame,
    let cameraFeedTexture = CGImage.create(pixelBuffer: frame.capturedImage),
    let textureResource = try? TextureResource.generate(from: cameraFeedTexture, withName: "cameraFeedTexture", options: .init(semantic: .color))
else {
    return
}

// assign texture
customMaterial.custom.texture = .init(textureResource)
// requires VideoToolbox for VTCreateCGImageFromCVPixelBuffer
import VideoToolbox

extension CGImage {
    public static func create(pixelBuffer: CVPixelBuffer) -> CGImage? {
        var cgImage: CGImage?
        VTCreateCGImageFromCVPixelBuffer(pixelBuffer, options: nil, imageOut: &cgImage)
        return cgImage
    }
}
This seems wasteful though and is also quite slow.
Is there any other way to accomplish this efficiently or would I need to go the post processing route?
In the sample code the displayTransform for the view is also being passed as an SCNMatrix4. CustomMaterial's custom.value only accepts a SIMD4 though. Is there another way to pass in the matrix?
Another idea I've had was to create a CustomMaterial from an OcclusionMaterial which already seems to contain information about the camera feed but so far had no luck with it.
Thanks for the support!
Hello,
PhysicallyBasedMaterial in RealityKit 2 contains a blending property to adjust the transparency of a material.
Is there a way to animate this over time to fade entities in and out?
I've tried the new FromToByAnimation API but could not figure out if there is a supported BindPath for the transparency.
Ideally what I would like to achieve is something similar to SceneKit's SCNAction.fadeIn(duration: …), which also worked on a whole node.
I figured I could also go the route of a custom fragment shader here, though that seems overkill.
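Another fallback I've considered is stepping the blending value myself on every frame, though that feels clunky. Rough sketch, assuming a ModelEntity called entity and that I keep the subscription alive:
var fadeProgress: Float = 0
let fadeDuration: Float = 2.0

let subscription = arView.scene.subscribe(to: SceneEvents.Update.self) { event in
    guard fadeProgress < 1 else { return }
    fadeProgress = min(1, fadeProgress + Float(event.deltaTime) / fadeDuration)

    guard var model = entity.model,
          var material = model.materials.first as? PhysicallyBasedMaterial else { return }
    material.blending = .transparent(opacity: .init(floatLiteral: fadeProgress))
    model.materials = [material]
    entity.model = model
}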
As Reality Composer also supports fade actions, I would assume that this is at least supported behind the scenes.
Thanks for any help!