Hello,
PhysicallyBasedMaterial in RealityKit 2 contains a blending property to adjust the transparency of a material.
Is there a way to animate this over time to fade entities in and out?
I've tried the new FromToByAnimation API but could not figure out if there is a supported BindPath for the transparency.
Ideally what I would like to achieve is something similar to SceneKit's SCNAction.fadeIn(duration:), which also worked on a whole node.
I figured I could also go the route of a custom fragment shader here, though that seems overkill.
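For now the only workaround I can think of is reassigning the material with an updated opacity on every frame myself – a rough sketch of what I mean (assuming the entity uses a PhysicallyBasedMaterial; setOpacity is just an illustration, not an existing API):

import RealityKit

// Sketch: manually fading a ModelEntity by reassigning its material
// every frame – exactly what I'd hope a built-in animation could replace.
func setOpacity(_ opacity: Float, on entity: ModelEntity) {
    guard var material = entity.model?.materials.first as? PhysicallyBasedMaterial else { return }
    material.blending = .transparent(opacity: .init(floatLiteral: opacity))
    entity.model?.materials = [material]
}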
As Reality Composer also supports fade actions I would assume that this is at least supported behind the scenes.
Thanks for any help!
Hello,
in this project https://developer.apple.com/documentation/arkit/content_anchors/tracking_and_visualizing_faces there is some sample code that describes how to map the camera feed to an object with SceneKit and a shader modifier.
I would like to know if there is an easy way to achieve the same thing with a CustomMaterial and RealityKit 2.
Specifically I'm interested in what would be the best way to pass in the background of the RealityKit environment as a texture to the custom shader.
In SceneKit this was really easy as one could just do the following:
material.diffuse.contents = sceneView.scene.background.contents
As the texture input for CustomMaterial requires a TextureResource, I would probably need a way to create a CGImage from the background or camera feed on the fly.
What I've tried so far is accessing the captured image from the camera feed and creating a CGImage from the pixel buffer like so:
guard
    let frame = arView.session.currentFrame,
    let cameraFeedTexture = CGImage.create(pixelBuffer: frame.capturedImage),
    let textureResource = try? TextureResource.generate(from: cameraFeedTexture, withName: "cameraFeedTexture", options: .init(semantic: .color))
else {
    return
}

// Assign the generated texture to the custom material.
customMaterial.custom.texture = .init(textureResource)

import VideoToolbox

extension CGImage {
    /// Creates a CGImage from a CVPixelBuffer via VideoToolbox.
    public static func create(pixelBuffer: CVPixelBuffer) -> CGImage? {
        var cgImage: CGImage?
        VTCreateCGImageFromCVPixelBuffer(pixelBuffer, options: nil, imageOut: &cgImage)
        return cgImage
    }
}
This seems wasteful though and is also quite slow.
Is there any other way to accomplish this efficiently or would I need to go the post processing route?
In the sample code the displayTransform for the view is also being passed as an SCNMatrix4. CustomMaterial's custom.value only accepts a SIMD4<Float> though. Is there another way to pass in the matrix?
Another idea I've had was to create a CustomMaterial from an OcclusionMaterial, which already seems to contain information about the camera feed, but so far I've had no luck with it.
Thanks for the support!
Hello,
I have a cylinder with an UnlitMaterial and a base color set. Now I want to apply a gradient as the alpha mask so I get this kind of GTA-checkpoint-like halo look.
The code:
var baseMaterial = UnlitMaterial(color: UIColor.red)
baseMaterial.blending = .transparent(opacity: .init(scale: 100, texture: .init(maskTextureResource))) // maskTextureResource is the gradient mask
baseMaterial.opacityThreshold = 0
This works but unfortunately leads to some ugly visible gradient banding. I've also tried to play with the scale of the blending texture but that did not help.
As an alternative approach I tried to solve this via a custom surface shader.
Code below:
[[visible]]
void gradientShader(realitykit::surface_parameters params)
{
    auto surface = params.surface();
    float2 uv = params.geometry().uv0();

    float h = 0.5; // adjust position of middleColor
    half startAlpha = 0.001;
    half middleAlpha = 1;
    half endAlpha = 0.001;

    // Blend from startAlpha to middleAlpha below h, and from
    // middleAlpha to endAlpha above h, based on the v coordinate.
    half alpha = mix(mix(startAlpha, middleAlpha, half(uv.y / h)),
                     mix(middleAlpha, endAlpha, half((uv.y - h) / (1.0 - h))),
                     half(step(h, uv.y)));

    surface.set_emissive_color(half3(params.material_constants().emissive_color()));
    surface.set_base_color(half3(params.material_constants().base_color_tint()));
    surface.set_opacity(alpha);
}
The result looks really nice and smooth, but unfortunately this now also culls the inner part of the cylinder, even on the semitransparent parts.
What I want is to have the effect applied with both the outer and inner part of the cylinder visible, so the transparent part of the outside lets you see through to the inside.
I've got this working by using a PhysicallyBasedMaterial instead of an UnlitMaterial (which does not support blending out of the box) but again ran into the issue with the banding.
On my Custom Material faceCulling is set to .none.
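For reference, this is roughly how the CustomMaterial is set up (a simplified sketch; the shader name matches the function above):

import Metal
import RealityKit

// Simplified material setup (sketch); the shader lives in the default library.
func makeGradientMaterial() throws -> CustomMaterial {
    let device = MTLCreateSystemDefaultDevice()!
    let library = device.makeDefaultLibrary()!
    let surfaceShader = CustomMaterial.SurfaceShader(named: "gradientShader", in: library)
    var material = try CustomMaterial(surfaceShader: surfaceShader, lightingModel: .unlit)
    material.faceCulling = .none
    material.blending = .transparent(opacity: .init(scale: 1.0))
    return material
}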
Here is how it currently looks – as you can see in the left one the alpha mask is not smooth and has banding artefacts:
Thank you for any help!
Hello,
I have a use case where I need to download and compile Metal shaders on demand, as strings or .metal files.
These should then be used for CustomMaterials and/or postprocessing within RealityKit.
Essentially this boils down to having raw source code that needs to be compiled at runtime.
My plan was to use the method makeLibrary(source:options:completionHandler:) to accomplish this.
The problem is that I get the following error during compilation:
RealityKitARExperienceAssetProvider: An error occured while trying to compile shader library »testShaderLibrary«
- Error Domain=MTLLibraryErrorDomain Code=3 "program_source:2:10: fatal error: 'RealityKit/RealityKit.h' file not found
#include <RealityKit/RealityKit.h>
My code for creating the library looks like this (simplified example):
let librarySourceString: String = """
#include <metal_stdlib>
#include <RealityKit/RealityKit.h>

using namespace metal;

[[visible]]
void mySurfaceShader(realitykit::surface_parameters params)
{
    params.surface().set_base_color(half3(1, 1, 1));
}
"""

mtlDevice.makeLibrary(source: librarySourceString, options: nil) { library, error in
    if let error = error {
        dump(error)
        return
    }
    // do something with library
}
So I'm wondering if there's a way to tell the Metal compiler how to resolve this reference to the RealityKit header file?
Would I need to replace that part of the source string maybe with an absolute path to the RealityKit framework (if so – how would I get this at runtime)?
Appreciate any hints - thanks!
Hi,
is there a way to force RealityKit to compile/prewarm and cache all shaders that will be used within a Scene in advance – ideally in the background?
This would be useful for adding complex models to the scene, which sometimes can cause quite a few dropped frames even on the newest devices (at least I assume the initial delay when displaying them is caused by shader compilation), but also for CustomMaterials. Note this also happens with models that are loaded asynchronously.
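For context, the models are loaded roughly like this (simplified sketch; the model name and anchor are placeholders):

import Combine
import RealityKit

// Simplified: even with asynchronous loading, the first time the model
// becomes visible there is a noticeable hitch, presumably shader compilation.
var cancellable: AnyCancellable?
cancellable = Entity.loadModelAsync(named: "ComplexModel")
    .sink { completion in
        if case .failure(let error) = completion { dump(error) }
    } receiveValue: { model in
        anchor.addChild(model) // anchor: an AnchorEntity already in arView.scene
    }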
Thanks!
I've just implemented some necessary fixes to our app to ensure compatibility with iOS 15 and RealityKit 2.
Now when I deploy the app via Xcode 13 RC (Version 13.0 (13A233)) I get the following crash right at launch:
dyld: Library not loaded: /System/Library/Frameworks/RealityFoundation.framework/RealityFoundation
Referenced from: /private/var/containers/Bundle/Application/…
Reason: image not found
dyld: launch, loading dependent libraries
DYLD_LIBRARY_PATH=/usr/lib/system/introspection
DYLD_INSERT_LIBRARIES=/Developer/usr/lib/libBacktraceRecording.dylib:/Developer/usr/lib/libMainThreadChecker.dylib:/Developer/Library/PrivateFrameworks/DTDDISupport.framework/libViewDebuggerSupport.dylib
Is this a known issue?
I've already tried deleting Derived Data and clearing the project but the problem persists.
The minimum deployment target is iOS 13 for the main app and iOS 14 for the AppClip.
All iOS 15 related fixes are wrapped into if #available(iOS 15.0, *) { … }
This is a pretty major problem for us as we now can't send out TestFlight builds or upload to App Store Connect for Monday.
Thanks!
Hello,
I've been looking all over the place but so far I haven't found a trivial way to grab ARView's current timestamp – basically the elapsed time since the scene started rendering.
I can access that in the surface and geometry shaders, but I would like to pass a timestamp as a parameter in order to drive shader animations. I feel that's more efficient than injecting animation progress manually on every frame, especially if there are lots of objects with that shader.
So far what I've done is subscribing to the scene's Update event and using the delta time to calculate the elapsed time myself. But this is quite error prone and tends to break when I present the scene a second time (e.g. closing and reopening the AR experience).
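A simplified version of what I'm doing now (the material update in the comment is just an illustration):

import Combine
import RealityKit

var sceneTime: TimeInterval = 0
var updateSubscription: Cancellable?

// Accumulate elapsed time from the per-frame delta; this is the part
// that breaks when the scene is presented a second time.
updateSubscription = arView.scene.subscribe(to: SceneEvents.Update.self) { event in
    sceneTime += event.deltaTime
    // e.g. pass it into a CustomMaterial:
    // customMaterial.custom.value = SIMD4<Float>(Float(sceneTime), 0, 0, 0)
}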
The only other option I found was using a render callback and grabbing the time property from the PostProcessContext. That works well, but do I really have to go that route?
Would be great if there is an easy way to achieve this.
Pretty much an equivalent to this: https://developer.apple.com/documentation/scenekit/scnscenerenderer/1522680-scenetime
NOTE: I'm not looking for the timestamp of the ARSessions current frame.
Thank you!
Hello,
in our app we are downloading some user generated content (.reality files and USDZs) and displaying it within the app.
This worked without issues in iOS 14 but with iOS 15 (release version) there have been a lot of issues with certain .reality files. As far as I can see USDZ files still work.
I've created a little test project and the error message log is not really helpful.
2021-10-01 19:42:30.207645+0100 RealityKitAssetTest-iOS15[3239:827718] [Assets] Failed to load asset of type 'RealityFileAsset', error:Could not find archive entry named assets/Scéna17_9dfa3d0.compiledscene.
2021-10-01 19:42:30.208097+0100 RealityKitAssetTest-iOS15[3239:827598] [Assets] Failed to load asset path '#18094855536753608259'
2021-10-01 19:42:30.208117+0100 RealityKitAssetTest-iOS15[3239:827598] [Assets] AssetLoadRequest failed because asset failed to load '#18094855536753608259'
2021-10-01 19:42:30.307040+0100 RealityKitAssetTest-iOS15[3239:827598] throwing -10878
2021-10-01 19:42:30.307608+0100 RealityKitAssetTest-iOS15[3239:827598] throwing -10878
2021-10-01 19:42:30.307712+0100 RealityKitAssetTest-iOS15[3239:827598] throwing -10878
2021-10-01 19:42:30.307753+0100 RealityKitAssetTest-iOS15[3239:827598] throwing -10878
2021-10-01 19:42:30.307790+0100 RealityKitAssetTest-iOS15[3239:827598] throwing -10878
2021-10-01 19:42:30.307907+0100 RealityKitAssetTest-iOS15[3239:827598] throwing -10878
2021-10-01 19:42:30.307955+0100 RealityKitAssetTest-iOS15[3239:827598] throwing -10878
2021-10-01 19:42:30.308155+0100 RealityKitAssetTest-iOS15[3239:827598] throwing -10878
2021-10-01 19:42:30.308194+0100 RealityKitAssetTest-iOS15[3239:827598] throwing -10878
▿ Failed to load loadRequest.
- generic: "Failed to load loadRequest."
Basic code structure that is used for loading:
cancellable = Entity.loadAsync(named: entityName, in: .main)
    .sink { completion in
        switch completion {
        case .failure(let error):
            dump(error)
            print("Done")
        case .finished:
            print("Finished loading")
        }
    } receiveValue: { entity in
        print("Entity: \(entity)")
    }
Is there any way to force it to load in a mode that enforces compatibility?
As mentioned this only happens on iOS 15. Even ARQuickLook can't display the files anymore (no issues on iOS 14).
Thanks for any help!
Hello,
On iOS 16 when I'm retrieving an existing material from a model entity and updating its blending property to .transparent(opacity: …), the color or baseColor texture gets removed after reassigning the updated material.
My use case is that I want to fade in a ModelEntity through a custom System and therefore need to repeatedly reassign the opacity value. I've tested this with UnlitMaterial and PhysicallyBasedMaterial – both suffer from this issue.
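A minimal version of the update that triggers the problem (simplified sketch):

import RealityKit

func setOpacity(_ opacity: Float, on modelEntity: ModelEntity) {
    guard var material = modelEntity.model?.materials.first as? PhysicallyBasedMaterial else { return }
    // Reassigning the material with the updated opacity drops the
    // baseColor texture on iOS 16 (works as expected on iOS 15).
    material.blending = .transparent(opacity: .init(floatLiteral: opacity))
    modelEntity.model?.materials = [material]
}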
On iOS 15 this works as expected. Please let me know if there is any workaround, as this seems to me like a major regression and ideally I need this to work once iOS 16 gets released to the public.
The radar number including a sample project is: FB11420976
Thank you!
Hi,
please let me know if I should rather file feedback for this, but I figured it's worth flagging one way or another:
Test Xcode Version: 14.0 beta 6 (14A5294g)
1. Project »Altering RealityKit Rendering with Shader Functions«
This project crashes right away when running it on a device (iOS 15 and 16).
Screenshot:
2. Project »Using object capture assets in RealityKit«
Suffers from pretty bad performance when run on a device – barely scratching 20-25 fps on an iPhone 12 Pro, on an iPhone XS even less.
Screenshot:
As these are official sample projects I feel like they should work flawlessly out of the box.
Best
Arthur
Hello,
after a recent Apple Maps update last week the satellite imagery data now looks a lot worse than before.
What previously looked lifelike and lush now looks very sad and wintery. Also the contrast seems way too extreme.
Attached is a sample image.
FB: FB11716831
Any chance that this could be reverted to the old version?
Hello,
I recently converted from manual dictionary-based JSON serialisation to Codable and noticed that this resulted in a pretty significant growth in binary size (I looked that up via the App Thinning Size Report).
The difference between the Codable and non-Codable versions is ~800 KB.
As our app also supports App Clips I now can't fulfill the 10MB Universal Bundle Size limit anymore.
Is there any way I can make this a little more lean? Would it help to manually implement all Codable methods instead of relying on compiler synthesis?
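For illustration, this is the kind of manual implementation I have in mind (hypothetical Item type):

import Foundation

// Hypothetical example: hand-written Codable conformance instead of
// letting the compiler synthesize init(from:) and encode(to:).
struct Item: Codable {
    let id: Int
    let name: String

    private enum CodingKeys: String, CodingKey {
        case id, name
    }

    init(from decoder: Decoder) throws {
        let container = try decoder.container(keyedBy: CodingKeys.self)
        id = try container.decode(Int.self, forKey: .id)
        name = try container.decode(String.self, forKey: .name)
    }

    func encode(to encoder: Encoder) throws {
        var container = encoder.container(keyedBy: CodingKeys.self)
        try container.encode(id, forKey: .id)
        try container.encode(name, forKey: .name)
    }
}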
Thanks for any hints!
Hello,
I've done a couple tests and noticed that when I run an ARKit session with, for example, a world- or geo-tracking configuration that has environment texturing set to either .manual or .automatic, and I walk around with the device for an extended distance, there is a pretty noticeable increase in memory usage. I'm not implying that it's a leak, but it seems like the system creates lots and lots of environment probes and does not remove older ones.
An example:
This is a barebones Xcode RealityKit starter project that spawns a couple cubes with an ARGeoTracking configuration. After ~7 minutes of walking around (roughly a distance of 100-200 meters) it uses an additional 300 MB of RAM.
I've seen cases where users walked around for a while and the app eventually crashed because of this.
When environment texturing is disabled RAM usage pretty much stays the same, no matter how far I walk.
Is there a recommended way to handle this? Should I remove probes manually after a while (as sketched below) or eventually disable environment texturing altogether at some point (will that preserve the current cube map)?
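For the manual route, this is roughly what I have in mind (a sketch, assuming .manual environment texturing where the app creates the probe anchors itself; the limit value is arbitrary):

import ARKit

// Sketch: with .manual environment texturing the app adds
// AREnvironmentProbeAnchors itself, so it could also drop the
// oldest ones to cap memory.
var probes: [AREnvironmentProbeAnchor] = []

func addProbe(to session: ARSession, at transform: simd_float4x4, limit: Int = 8) {
    let probe = AREnvironmentProbeAnchor(transform: transform, extent: SIMD3<Float>(repeating: 5))
    probes.append(probe)
    session.add(anchor: probe)

    // Remove the oldest probes once we exceed the limit.
    while probes.count > limit {
        session.remove(anchor: probes.removeFirst())
    }
}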
I would appreciate any guidance.
With the Xcode example project you can easily recreate the issue by modifying it just a little and then walking around for a while with statistics enabled.
class ViewController: UIViewController {
    @IBOutlet var arView: ARView!

    override func viewDidLoad() {
        super.viewDidLoad()
        arView.automaticallyConfigureSession = false
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)

        let config = ARWorldTrackingConfiguration()
        config.environmentTexturing = .manual
        arView.session.run(config)

        // Load the "Box" scene from the "Experience" Reality File
        let boxAnchor = try! Experience.loadBox()

        // Add the box anchor to the scene
        arView.scene.anchors.append(boxAnchor)
    }
}
Hello,
is there a recommended way to render menu items, e.g. in a SwiftUI ContextMenu, with icons (SF Symbols)?
Let's say I have the following setup:
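(Reconstructed sketch – two plain buttons with SF Symbol labels:)

import SwiftUI

struct ContentView: View {
    var body: some View {
        Text("Right-click me")
            .contextMenu {
                // Both labels use an SF Symbol; on Catalyst the
                // symbol does not render in the menu.
                Button(action: {}) {
                    Label("Copy", systemImage: "doc.on.doc")
                }
                Button(action: {}) {
                    Label("Delete", systemImage: "trash")
                }
            }
    }
}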
Both buttons render fine on native macOS (e.g. Sonoma) but Catalyst refuses to render the symbol at all. I tried every possible combination I could think of. The only way I found was to directly copy and paste a symbol from the SF Symbols app and inline it with the label string as unicode. Unfortunately I have a couple custom SF symbols so this isn't really an option for me.
I feel like this is a perfectly valid use case, as it makes the menu visually a lot easier to scan.
With UIKit and Ventura this at least worked for menu bar items but now also seems broken on Sonoma.
I would greatly appreciate any hints. Thanks!
Hello,
I have a macOS Catalyst app that I now began updating and building against the iOS 18/macOS Sequoia SDKs. Most things appear to be working just fine as before, apart from my NSToolbar.
At the root of my app I am presenting a UISplitViewController which gets a custom SidebarViewController and a UITabBarController as its viewControllers.
Then at some point in the app's lifecycle the UITabBarController presents another view controller modally. I then associate that controller's window with a custom NSToolbar like this:
let toolbar = NSToolbar(identifier: "mainToolbar")
toolbar.displayMode = .iconAndLabel
toolbar.delegate = self
toolbar.allowsUserCustomization = false

// titleBar is the window scene's UITitlebar,
// i.e. view.window?.windowScene?.titlebar
titleBar.toolbarStyle = .automatic
titleBar.titleVisibility = .hidden
titleBar.toolbar = toolbar
I also disable automatic NSToolbar hosting via: https://developer.apple.com/documentation/uikit/uinavigationbardelegate/3987959-navigationbarnstoolbarsection (returning .none).
Now all of this worked fine on macOS Sonoma and previous versions but on Sequoia my custom toolbar refuses to show up.
My suspicion is that it has something to do with the new tab and sidebar behaviour introduced with the new SDKs (https://developer.apple.com/documentation/uikit/uinavigationbardelegate/3987959-navigationbarnstoolbarsection).
For now within my UITabBarController I was able to revert to the old look using:
if #available(iOS 18.0, *) {
    mode = .tabSidebar
    sidebar.isHidden = true
    isTabBarHidden = true
}
This results in a look similar to the previous macOS version but my NSToolbar unfortunately remains hidden.
Is there an easy fix for this? Since I am a solo developer I would prefer to spend my available resources currently on other features and adopt the new tab/sidebars a couple months down the line.
Appreciate any help and hints, thanks!
There used to be a toolbar here on the right side. ↑