Post

Replies

Boosts

Views

Activity

Transparent blending with gradient texture leads to color banding (RealityKit 2)
Hello, I have a cylinder with an UnlitMaterial and a base color set. Now I want to apply a gradient as the alpha mask so I get this kind of halo, GTA-like checkpoint look. The code:

var baseMaterial = UnlitMaterial(color: UIColor.red)
// maskTextureResource is the gradient mask
baseMaterial.blending = .transparent(opacity: .init(scale: 100, texture: .init(maskTextureResource)))
baseMaterial.opacityThreshold = 0

This works but unfortunately leads to some ugly visible gradient banding. I've also tried playing with the scale of the blending texture, but that did not help. As an alternative approach I tried to solve this via a custom surface shader. Code below:

[[visible]]
void gradientShader(realitykit::surface_parameters params)
{
    auto surface = params.surface();
    float2 uv = params.geometry().uv0();
    float h = 0.5; // adjust position of middleColor
    half startAlpha = 0.001;
    half middleAlpha = 1;
    half endAlpha = 0.001;
    half alpha = mix(mix(startAlpha, middleAlpha, half(uv.y / h)),
                     mix(middleAlpha, endAlpha, half((uv.y - h) / (1.0 - h))),
                     half(step(h, uv.y)));
    surface.set_emissive_color(half3(params.material_constants().emissive_color()));
    surface.set_base_color(half3(params.material_constants().base_color_tint()));
    surface.set_opacity(alpha);
}

The result looks really nice and smooth, but unfortunately this now also culls the inner part of the cylinder, even on the semitransparent parts. What I want is to have the effect applied to both the outer and inner part of the cylinder, so that the transparent part of the outside lets you see through to the inside. I've got this working by using a PhysicallyBasedMaterial instead of an UnlitMaterial (which does not support blending out of the box), but again ran into the banding issue. On my CustomMaterial, faceCulling is set to .none. Here is how it currently looks – as you can see, in the left one the alpha mask is not smooth and has banding artefacts: Thank you for any help!
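For context, a rough sketch of how such a CustomMaterial can be set up with face culling disabled (the library handling is a placeholder and the blending call is an assumption about the CustomMaterial API, not the exact code from the project):

import Metal
import RealityKit

// Sketch: build the gradient material from the surface shader above.
// `library` is assumed to be an MTLLibrary that contains "gradientShader".
func makeGradientMaterial(library: MTLLibrary) throws -> CustomMaterial {
    let surfaceShader = CustomMaterial.SurfaceShader(named: "gradientShader", in: library)
    var material = try CustomMaterial(surfaceShader: surfaceShader, lightingModel: .unlit)
    material.faceCulling = .none  // draw the inside of the cylinder as well
    material.blending = .transparent(opacity: .init(floatLiteral: 1.0))  // final opacity comes from set_opacity() in the shader
    return material
}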
5
0
1.9k
Sep ’21
Create MTLLibrary from raw String for use within RealityKit
Hello, I have a use case where I need to download and compile Metal shaders on demand as strings or .metal files. These should then be used for CustomMaterials and/or post-processing within RealityKit. Essentially this boils down to having raw source code that needs to be compiled at runtime. My plan was to use the method makeLibrary(source:options:completionHandler:) to accomplish this. The problem is that I get the following error during compilation:

RealityKitARExperienceAssetProvider: An error occured while trying to compile shader library »testShaderLibrary« - Error Domain=MTLLibraryErrorDomain Code=3 "program_source:2:10: fatal error: 'RealityKit/RealityKit.h' file not found #include <RealityKit/RealityKit.h>

My code for creating the library looks like this (simplified example):

let librarySourceString: String = """
#include <metal_stdlib>
#include <RealityKit/RealityKit.h>

using namespace metal;

[[visible]]
void mySurfaceShader(realitykit::surface_parameters params)
{
    params.surface().set_base_color(half3(1, 1, 1));
}
"""

mtlDevice.makeLibrary(source: librarySourceString, options: nil) { library, error in
    if let error = error {
        dump(error)
        return
    }
    // do something with library
}

So I'm wondering if there's a way to tell the Metal compiler how to resolve this reference to the RealityKit header file? Would I need to replace that part of the source string maybe with an absolute path to the RealityKit framework (if so – how would I get this at runtime)? Appreciate any hints - thanks!
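One alternative worth mentioning (an assumption on my side, not a confirmed fix for the missing header) would be to download a precompiled .metallib instead of raw source and load it directly, which avoids header resolution at runtime altogether:

import Foundation
import Metal

// Sketch: load an already-compiled metallib from a downloaded file.
// No source compilation happens here, so RealityKit.h never needs to be resolved on device.
func loadDownloadedLibrary(at url: URL, device: MTLDevice) throws -> MTLLibrary {
    try device.makeLibrary(URL: url)
}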
1
0
1.8k
Nov ’21
Precompile/prewarm shaders to avoid jank
Hi, is there a way to force RealityKit to compile/prewarm and cache all shaders that will be used within a scene in advance – ideally in the background? This would be useful for adding complex models to the scene, which can sometimes cause quite a few dropped frames even on the newest devices (at least I assume the initial delay when displaying them is caused by shader compilation), but also for CustomMaterials. Note that this also happens with models that are loaded asynchronously. Thanks!
1
0
1.2k
Nov ’21
ARView Frame Timestamp/Elapsed Time
Hello, I've been looking all over the place but so far I haven't found a trivial way to grab ARView's current timestamp – basically the elapsed time since the scene started rendering. I can access that in the surface and geometry shaders, but I would like to pass a timestamp as a parameter in order to drive shader animations. I feel that's more efficient than injecting animation progress manually on every frame, especially if there are lots of objects with that shader. So far what I've done is subscribe to the scene's Update event and use the delta time to calculate the elapsed time myself. But this is quite error-prone and tends to break when I present the scene a second time (e.g. closing and reopening the AR experience). The only other option I found was using a render callback and grabbing the time property from the PostProcessContext. That works well, but do I really have to go that route? It would be great if there were an easy way to achieve this – pretty much an equivalent to this: https://developer.apple.com/documentation/scenekit/scnscenerenderer/1522680-scenetime NOTE: I'm not looking for the timestamp of the ARSession's current frame. Thank you!
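For reference, a minimal sketch of the Update-event bookkeeping described above (type and property names are just illustrative):

import Combine
import RealityKit

final class SceneClock {
    private(set) var elapsedTime: TimeInterval = 0
    private var subscription: Cancellable?

    // Accumulate elapsed time from the per-frame delta of the scene's Update event.
    func start(in arView: ARView) {
        elapsedTime = 0
        subscription = arView.scene.subscribe(to: SceneEvents.Update.self) { [weak self] event in
            self?.elapsedTime += event.deltaTime
        }
    }

    func stop() {
        subscription?.cancel()
        subscription = nil
    }
}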
0
0
996
Jan ’22
Updating blending property to .transparent(opacity: …) removes material color texture (iOS 16 Beta)
Hello, on iOS 16, when I retrieve an existing material from a model entity and update its blending property to .transparent(opacity: …), the color or baseColor texture gets removed after reassigning the updated material. My use case is that I want to fade in a ModelEntity through a custom System and therefore need to repeatedly reassign the opacity value. I've tested this with UnlitMaterial and PhysicallyBasedMaterial – both suffer from this issue. On iOS 15 this works as expected. Please let me know if there is any workaround, as this seems like a major regression to me, and ideally I need this to work once iOS 16 gets released to the public. The radar number including a sample project is: FB11420976 Thank you!
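For context, a stripped-down sketch of the kind of per-frame update that triggers the issue (entity lookup and alpha value simplified, not the exact code from the sample project):

import RealityKit

// Sketch: reassign the material with an updated opacity on each system update.
func applyFade(to modelEntity: ModelEntity, alpha: Float) {
    guard var material = modelEntity.model?.materials.first as? UnlitMaterial else { return }
    // On iOS 16 this reassignment drops the material's color texture; on iOS 15 the texture is kept.
    material.blending = .transparent(opacity: .init(floatLiteral: alpha))
    modelEntity.model?.materials = [material]
}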
4
0
1.4k
Sep ’22
RealityKit Sample Project Issues; Performance and Crashes
Hi, please let me know if I should rather file feedback for this, but I figured it's worth flagging one way or another. Test Xcode version: 14.0 beta 6 (14A5294g)

1. Project »Altering RealityKit Rendering with Shader Functions«
This project crashes right away when running it on a device (iOS 15 and 16). Screenshot:

2. Project »Using object capture assets in RealityKit«
Suffers from pretty bad performance when run on a device – barely scratching 20–25 fps on an iPhone 12 Pro, and even less on an iPhone XS. Screenshot:

As these are official sample projects, I feel they should work flawlessly out of the box. Best, Arthur
3
0
1.1k
Sep ’22
Satellite Map Imagery Regression in Berlin Area (Germany)
Hello, after a recent Apple Maps update last week the satellite imagery now looks a lot worse than before. What previously looked lifelike and lush now looks very sad and wintery. Also, the contrast seems way too extreme. Attached is a sample image. FB: FB11716831 Any chance that this could be reverted to the old version?
0
0
1.1k
Oct ’22
Codable Conformance results in significant binary size increase
Hello, I recently converted from manual dictionary-based JSON serialisation to Codable and noticed that this resulted in a pretty significant growth in binary size (I looked that up via the App Thinning Size Report). The difference between the Codable and non-Codable version is ~800 KB. As our app also supports App Clips, I now can't stay within the 10 MB universal bundle size limit anymore. Is there any way I can make this a little leaner? Would it help to manually implement all Codable methods instead of relying on compiler synthesis? Thanks for any hints!
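To illustrate what I mean by a manual implementation, a sketch with a hypothetical model type (not our actual code):

import Foundation

// Hypothetical model: hand-written Codable conformance instead of the compiler-synthesized one.
struct Product: Codable {
    let id: Int
    let name: String

    private enum CodingKeys: String, CodingKey {
        case id, name
    }

    init(from decoder: Decoder) throws {
        let container = try decoder.container(keyedBy: CodingKeys.self)
        id = try container.decode(Int.self, forKey: .id)
        name = try container.decode(String.self, forKey: .name)
    }

    func encode(to encoder: Encoder) throws {
        var container = encoder.container(keyedBy: CodingKeys.self)
        try container.encode(id, forKey: .id)
        try container.encode(name, forKey: .name)
    }
}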
0
0
872
Nov ’22
Environment Texturing – Reflection Probe Management
Hello, I've done a couple of tests and noticed that when I run an ARKit session with, for example, a world- or geo-tracking configuration that has environment texturing set to either .manual or .automatic, and I walk around with the device for an extended distance, there is a pretty noticeable increase in memory usage. I'm not implying that it's a leak, but it seems like the system creates lots and lots of environment probes and does not remove older ones. An example: this is a barebones Xcode RealityKit starter project that spawns a couple of cubes with an ARGeoTracking configuration. After ~7 minutes of walking around (roughly a distance of 100–200 meters) it uses an additional 300 MB of RAM. I've seen cases where users walked around for a while and the app eventually crashed because of this. When environment texturing is disabled, RAM usage pretty much stays the same no matter how far I walk. Is there a recommended way to handle this? Should I remove probes manually after a while, or eventually disable environment texturing altogether at some point (and will that preserve the current cube map)? I would appreciate any guidance. With the Xcode example project you can easily recreate the issue by modifying it just a little and then walking around for a while with statistics enabled.

class ViewController: UIViewController {

    @IBOutlet var arView: ARView!

    override func viewDidLoad() {
        super.viewDidLoad()
        arView.automaticallyConfigureSession = false
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)

        let config = ARWorldTrackingConfiguration()
        config.environmentTexturing = .manual
        arView.session.run(config)

        // Load the "Box" scene from the "Experience" Reality File
        let boxAnchor = try! Experience.loadBox()

        // Add the box anchor to the scene
        arView.scene.anchors.append(boxAnchor)
    }
}
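In case it helps the discussion, this is roughly what I imagine manual probe management would look like with .manual texturing – a sketch with arbitrary probe extent and budget, not something verified to solve the memory growth:

import ARKit

// Sketch: place a probe around the current camera position and drop the oldest ones
// once a fixed budget is exceeded.
final class ProbeManager {
    private var probes: [AREnvironmentProbeAnchor] = []
    private let maxProbes = 8

    func addProbe(at transform: simd_float4x4, to session: ARSession) {
        let probe = AREnvironmentProbeAnchor(transform: transform, extent: SIMD3<Float>(repeating: 5))
        session.add(anchor: probe)
        probes.append(probe)

        if probes.count > maxProbes {
            let oldest = probes.removeFirst()
            session.remove(anchor: oldest)
        }
    }
}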
0
0
1.3k
Dec ’22
OcclusionMaterial renders plain black when custom skybox environment is set
Using OcclusionMaterial on macOS and iOS works fine in non-AR mode when I set the background to just a simple color (https://developer.apple.com/documentation/realitykit/arview/environment-swift.struct/color), but when I set a custom skybox (https://developer.apple.com/documentation/realitykit/arview/environment-swift.struct/background-swift.struct/skybox(_:)) the OcclusionMaterial renders as fully black. I would expect it to properly occlude the content and show the skybox behind it through. This happens with both ARView and RealityView, on the current iOS/macOS betas as well as on older systems, e.g. iOS 17 and macOS Sonoma. Feedback ID: FB15081053
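A minimal repro sketch of the setup, assuming skyboxResource is an EnvironmentResource that has already been loaded elsewhere:

import RealityKit

// Sketch: non-AR ARView with a skybox background and a plain occluder.
func setUpScene(arView: ARView, skyboxResource: EnvironmentResource) {
    arView.environment.background = .skybox(skyboxResource)

    // With a plain color background the occluder behaves as expected;
    // with the skybox it renders fully black instead of showing the skybox through.
    let occluder = ModelEntity(mesh: .generateBox(size: 0.3), materials: [OcclusionMaterial()])
    let anchor = AnchorEntity(world: .zero)
    anchor.addChild(occluder)
    arView.scene.anchors.append(anchor)
}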
0
0
450
Sep ’24
Symbol not found: _$sSo22CLLocationCoordinate2DVSE12CoreLocationMc when building for visionOS 2.5 with Xcode 16.3
Hello, I'm encountering a runtime crash when building my visionOS app with Xcode 16.3 for visionOS 2.5. Our existing App Store/TestFlight app also crashes instantly on visionOS 2.5 when opened, but works fine on e.g. visionOS 2.4. The app builds successfully but crashes on launch with this symbol lookup error (slightly adjusted because the forum complained regarding sensitive data):

Symbol not found: _$sSo22CLLocationCoordinate2DVSE12CoreLocationMc
Referenced from: <XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX> /private/var/containers/Bundle/Application/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/MyApp.app/MyApp.debug.dylib
Expected in: <XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX> /usr/lib/swift/libswiftCoreLocation.dylib
dyld config: DYLD_LIBRARY_PATH=/usr/lib/system/introspection DYLD_INSERT_LIBRARIES=/usr/lib/libLogRedirect.dylib:/usr/lib/libBacktraceRecording.dylib:/usr/lib/libMainThreadChecker.dylib:/System/Library/PrivateFrameworks/GPUToolsCapture.framework/GPUToolsCapture:/usr/lib/libViewDebuggerSupport.dylib

I've already implemented my own Codable conformance for CLLocationCoordinate2D:

extension CLLocationCoordinate2D: Codable {
    // implementation details...
}

This worked fine on previous visionOS/Xcode versions. Has anyone encountered this issue or found a solution?

System details:
macOS version: 15.3.2
Xcode version: 16.3
visionOS target: 2.5

Thank you!
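In case it's useful to anyone hitting the same thing, one workaround I'm considering (just an idea, not a confirmed fix) is dropping the retroactive conformance and encoding coordinates through a small wrapper type instead:

import CoreLocation

// Hypothetical wrapper that owns the Codable conformance instead of extending CLLocationCoordinate2D.
struct CodableCoordinate: Codable {
    var latitude: CLLocationDegrees
    var longitude: CLLocationDegrees

    init(_ coordinate: CLLocationCoordinate2D) {
        latitude = coordinate.latitude
        longitude = coordinate.longitude
    }

    var coordinate: CLLocationCoordinate2D {
        CLLocationCoordinate2D(latitude: latitude, longitude: longitude)
    }
}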
2
0
100
May ’25
CustomMaterial disable unlit tone mapping
Hi, since iOS 18, UnlitMaterial and ShaderGraphMaterial have the option to disable tone mapping, e.g. via https://developer.apple.com/documentation/realitykit/unlitmaterial/init(applypostprocesstonemap:) Is it possible to do the same for CustomMaterial? I tried initializing a CustomMaterial based on an UnlitMaterial where tone mapping is disabled, like so:

let unlitMat = UnlitMaterial(applyPostProcessToneMap: false)
let customMaterial = try CustomMaterial(
    from: unlitMat,
    surfaceShader: surfaceShader,
    geometryModifier: geometryModifier
)

but that does not seem to work. The colors of my texture still look altered in comparison to a plain UnlitMaterial or a ShaderGraphMaterial where it's disabled. Any hints? Thank you!
1
0
118
Jun ’25
Shared/GroupImmersive Space – Query Local Device Transform
Hi, I am in the process of implementing SharePlay in our app. The shared experience opens an immersive space and we set systemCoordinator.configuration.supportsGroupImmersiveSpace = true so visionOS establishes a shared coordinate space for the immersive space. From the docs: "To achieve consistent positioning of RealityKit entities across multiple devices in an immersive space during a SharePlay session". There are cases where we want to position content in front of the user (independent of the shared session, and for each user individually). Normally we use the transform retrieved via worldTrackingProvider.queryDeviceAnchor.originFromAnchorTransform to position content in front of the user (plus some Z offset and smooth interpolation). This works fine in non-SharePlay instances and the device transform is where I would expect it to be, but during the FaceTime call deviceAnchor.originFromAnchorTransform seems to use the shared origin of the immersive space, and then I end up with a transform that might be offset. Here is a video of the issue in action: https://streamable.com/205r2p The blue rect is placed using AnchorEntity(.head, trackingMode: .continuous). This works regardless of the call and the entity is always placed based on the head position. The green rect is adjusted on every frame using the transform I get from worldTrackingProvider.queryDeviceAnchor. As you can see, it's offset. Is there any way I can query this transform locally for the user during a FaceTime call? I would also like to know if it's possible to disable this automatic entity transform syncing behavior. Setting entity.synchronization = nil results in the entity not showing up at all. https://developer.apple.com/documentation/realitykit/synchronizationcomponent Is SynchronizationComponent only relevant for the legacy MultipeerConnectivity approach? Thank you!
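For reference, a sketch of the head-anchored placement that keeps working during the call (mesh, material and offsets are arbitrary illustration values):

import RealityKit
import SwiftUI

// Sketch: place a panel roughly 1 m in front of the user's head,
// independent of the shared immersive space origin.
func addHeadAnchoredPanel(to content: RealityViewContent) {
    let headAnchor = AnchorEntity(.head, trackingMode: .continuous)
    let panel = ModelEntity(mesh: .generatePlane(width: 0.4, height: 0.25),
                            materials: [UnlitMaterial(color: .blue)])
    panel.position = [0, 0, -1]  // 1 m in front of the head anchor
    headAnchor.addChild(panel)
    content.add(headAnchor)
}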
2
0
270
Oct ’25
SPM Traits not working correctly with multiple targets in same Xcode workspace (App + App Clip)
Hi, I've just migrated to Swift Tools 6.2 and package traits, and I'm encountering an issue when using traits with multiple targets in the same Xcode workspace.

Setup:
Main iOS app target
App Clip target
Both consume the same local packages (e.g., UIComponents)

What I'm trying to achieve:
Main app imports packages without the COMPACT_BUILD trait
App Clip imports packages with the COMPACT_BUILD trait enabled

Package configuration (simplified):

// UIComponents/Package.swift
let package = Package(
    name: "UIComponents",
    platforms: [.iOS(.v18)],
    traits: [
        .trait(name: "COMPACT_BUILD", description: "Minimal build for App Clips"),
    ],
    // ...
    targets: [
        .target(
            name: "UIComponents",
            dependencies: [...],
            swiftSettings: [
                .define("COMPACT_BUILD", .when(traits: ["COMPACT_BUILD"])),
            ]
        ),
    ]
)

In the code:

#if COMPACT_BUILD
// Excluded from App Clip
#endif

The consumer packages:
Main app's package imports without trait: .package(path: "../UIComponents")
App Clip's package imports with trait: .package(path: "../UIComponents", traits: ["COMPACT_BUILD"])

The problem: When building the main app target, the COMPACT_BUILD compiler condition is unexpectedly active — even though the main app's dependency chain never enables that trait. It seems like the trait enabled by the App Clip target is "leaking" into the main app build. I confirmed this by adding #error("COMPACT_BUILD is active") — it triggers when building the main app, which shouldn't happen. If I disable the App Clip target from the build scheme, the main app builds correctly with COMPACT_BUILD not defined. I am also able to build the App Clip separately.

Environment:
Xcode 26.2
swift-tools-version: 6.2
iOS 26.2

Questions:
Is this expected behavior with Xcode's SPM integration? Are traits resolved workspace-wide rather than per-target?
Is there a workaround to have different trait configurations for different targets consuming the same package?
Or do I need to fall back to separate package targets (e.g., UIComponents and UIComponentsCompact) to achieve this, as in the sketch below?

Any guidance would be appreciated. Thanks!
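For completeness, a sketch of that fallback – splitting the package into two products so each app target picks the variant it needs (names and settings are illustrative only):

// swift-tools-version: 6.2
import PackageDescription

// Sketch of the fallback: two separate products instead of one trait-gated target.
let package = Package(
    name: "UIComponents",
    platforms: [.iOS(.v18)],
    products: [
        .library(name: "UIComponents", targets: ["UIComponents"]),
        .library(name: "UIComponentsCompact", targets: ["UIComponentsCompact"]),
    ],
    targets: [
        // Full build, imported by the main app.
        .target(name: "UIComponents"),
        // Minimal build, imported by the App Clip and compiled with the COMPACT_BUILD flag.
        .target(
            name: "UIComponentsCompact",
            swiftSettings: [.define("COMPACT_BUILD")]
        ),
    ]
)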
0
0
34
3d
Set camera feed as texture input for CustomMaterial
Hello, in this project https://developer.apple.com/documentation/arkit/content_anchors/tracking_and_visualizing_faces there is some sample code that describes how to map the camera feed to an object with SceneKit and a shader modifier. I would like to know if there is an easy way to achieve the same thing with a CustomMaterial and RealityKit 2. Specifically, I'm interested in what would be the best way to pass the background of the RealityKit environment as a texture to the custom shader. In SceneKit this was really easy, as one could just do the following:

material.diffuse.contents = sceneView.scene.background.contents

As the texture input for a CustomMaterial requires a TextureResource, I would probably need a way to create a CGImage from the background or camera feed on the fly. What I've tried so far is accessing the captured image from the camera feed and creating a CGImage from the pixel buffer like so:

guard
    let frame = arView.session.currentFrame,
    let cameraFeedTexture = CGImage.create(pixelBuffer: frame.capturedImage),
    let textureResource = try? TextureResource.generate(from: cameraFeedTexture, withName: "cameraFeedTexture", options: .init(semantic: .color))
else {
    return
}

// assign texture
customMaterial.custom.texture = .init(textureResource)

extension CGImage {
    public static func create(pixelBuffer: CVPixelBuffer) -> CGImage? {
        var cgImage: CGImage?
        VTCreateCGImageFromCVPixelBuffer(pixelBuffer, options: nil, imageOut: &cgImage)
        return cgImage
    }
}

This seems wasteful though and is also quite slow. Is there any other way to accomplish this efficiently, or would I need to go the post-processing route? In the sample code the displayTransform for the view is also being passed as an SCNMatrix4. CustomMaterial's custom.value only accepts a SIMD4, though. Is there another way to pass in the matrix? Another idea I've had was to create a CustomMaterial from an OcclusionMaterial, which already seems to contain information about the camera feed, but so far I've had no luck with it. Thanks for the support!
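Regarding the post-processing route mentioned above, a bare-bones sketch of what accessing the rendered frame there looks like – this just copies the frame through and does not yet feed the camera image into a CustomMaterial:

import Metal
import RealityKit

// Sketch: a post-process callback that blits the rendered frame straight to the output.
// context.sourceColorTexture contains the frame (including the camera feed) as an MTLTexture.
func installPassthroughPostProcess(on arView: ARView) {
    arView.renderCallbacks.postProcess = { context in
        guard let blit = context.commandBuffer.makeBlitCommandEncoder() else { return }
        blit.copy(from: context.sourceColorTexture, to: context.targetColorTexture)
        blit.endEncoding()
    }
}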
8
0
3.9k
Aug ’21