Reply to Set camera feed as texture input for CustomMaterial
Hi, so I dug a little bit into DrawableQueue and got it working. It seems really powerful, and there is apparently no measurable performance hit. However, I've got one little issue: the rendered texture of my drawable looks a little too bright or oversaturated. I assume this is some kind of color mapping issue? My setup looks like the following.

First I set up my DrawableQueue:

let descriptor = TextureResource.DrawableQueue.Descriptor(
    pixelFormat: .rgba8Unorm,
    width: 1440,
    height: 1440,
    usage: .unknown,
    mipmapsMode: .none
)
…
let queue = try TextureResource.DrawableQueue(descriptor)

Next I set up the MTLRenderPipelineDescriptor (and from it the MTLRenderPipelineState):

let pipelineDescriptor = MTLRenderPipelineDescriptor()
pipelineDescriptor.sampleCount = 1
pipelineDescriptor.colorAttachments[0].pixelFormat = .rgba8Unorm
pipelineDescriptor.depthAttachmentPixelFormat = .invalid
…

Then, on each frame, I convert the current frame's pixel buffer into Metal textures, as in the ARKit-with-Metal Xcode sample:

guard
    let drawable = try? drawableQueue.nextDrawable(),
    let commandBuffer = commandQueue?.makeCommandBuffer(),
    let renderPipelineState = renderPipelineState,
    let frame = arView?.session.currentFrame
else {
    return
}

// Update vertex coordinates with the display transform.
updateImagePlane(frame: frame)

let pixelBuffer = frame.capturedImage

// Convert the captured image into Metal textures (one per plane).
guard
    !(CVPixelBufferGetPlaneCount(pixelBuffer) < 2),
    let capturedImageTextureY = createTexture(
        fromPixelBuffer: pixelBuffer,
        pixelFormat: .r8Unorm,
        planeIndex: 0
    ),
    let capturedImageTextureCbCr = createTexture(
        fromPixelBuffer: pixelBuffer,
        pixelFormat: .rg8Unorm,
        planeIndex: 1
    )
else {
    return
}

let renderPassDescriptor = MTLRenderPassDescriptor()
renderPassDescriptor.colorAttachments[0].texture = drawable.texture
renderPassDescriptor.colorAttachments[0].loadAction = .load
renderPassDescriptor.colorAttachments[0].storeAction = .store
renderPassDescriptor.renderTargetWidth = textureResource.width
renderPassDescriptor.renderTargetHeight = textureResource.height

guard let renderEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptor) else {
    return
}

renderEncoder.pushDebugGroup("DrawCapturedImage")
renderEncoder.setCullMode(.none)
renderEncoder.setRenderPipelineState(renderPipelineState)
renderEncoder.setVertexBuffer(imagePlaneVertexBuffer, offset: 0, index: 0)
renderEncoder.setFragmentTexture(capturedImageTextureY, index: 1)
renderEncoder.setFragmentTexture(capturedImageTextureCbCr, index: 2)
renderEncoder.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: 4)
renderEncoder.popDebugGroup()
renderEncoder.endEncoding()

commandBuffer.present(drawable)
commandBuffer.commit()

In the fragment shader of my quad-mapped texture I perform the ycbcrToRGBTransform.
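(For completeness, the createTexture(fromPixelBuffer:pixelFormat:planeIndex:) helper used above is the usual ARKit-with-Metal approach; here is a minimal sketch, assuming a capturedImageTextureCache created once with CVMetalTextureCacheCreate:)

import CoreVideo
import Metal

// Assumption: created once in the renderer's init, e.g.
// CVMetalTextureCacheCreate(nil, nil, device, nil, &capturedImageTextureCache)
var capturedImageTextureCache: CVMetalTextureCache!

func createTexture(fromPixelBuffer pixelBuffer: CVPixelBuffer,
                   pixelFormat: MTLPixelFormat,
                   planeIndex: Int) -> MTLTexture? {
    let width = CVPixelBufferGetWidthOfPlane(pixelBuffer, planeIndex)
    let height = CVPixelBufferGetHeightOfPlane(pixelBuffer, planeIndex)

    var cvTexture: CVMetalTexture?
    let status = CVMetalTextureCacheCreateTextureFromImage(
        nil, capturedImageTextureCache, pixelBuffer, nil,
        pixelFormat, width, height, planeIndex, &cvTexture
    )
    guard status == kCVReturnSuccess, let cvTexture = cvTexture else { return nil }
    return CVMetalTextureGetTexture(cvTexture)
}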
Then finally, in my CustomMaterial fragment (surface) shader, I just sample the texture and display it:

[[visible]]
void cameraMappingSurfaceShader(realitykit::surface_parameters params)
{
    auto surface = params.surface();
    float2 uv = params.geometry().uv0();
    // Flip uvs vertically.
    uv.y = 1.0 - uv.y;
    half4 color = params.textures().custom().sample(samplerBilinear, uv);
    surface.set_emissive_color(color.rgb);
}

Almost everything looks fine; it's just a slight difference in brightness. Do I maybe need to work with a different pixel format? As a test I also loaded a simple image as a texture resource and then replaced it via DrawableQueue and a Metal texture with the same image. This gave me similar results (too bright). Encoding the display transform matrix will be the next step, but for now I'd like to get this working properly. Thanks for any help!
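(For context, this is roughly how the queue, texture resource and CustomMaterial are wired together in my setup – a minimal sketch, with the placeholder asset name and the surrounding function being assumptions:)

import RealityKit

// Sketch: attach a DrawableQueue to a TextureResource and expose it through
// the custom texture slot, which is what params.textures().custom() samples.
func makeDrawableBackedMaterial(
    surfaceShader: CustomMaterial.SurfaceShader
) throws -> (material: CustomMaterial, queue: TextureResource.DrawableQueue) {
    let descriptor = TextureResource.DrawableQueue.Descriptor(
        pixelFormat: .rgba8Unorm,
        width: 1440,
        height: 1440,
        usage: .unknown,
        mipmapsMode: .none
    )
    let queue = try TextureResource.DrawableQueue(descriptor)

    // Start from any placeholder texture, then redirect it to the queue.
    let textureResource = try TextureResource.load(named: "placeholder")
    textureResource.replace(withDrawables: queue)

    var material = try CustomMaterial(surfaceShader: surfaceShader, lightingModel: .unlit)
    material.custom.texture = .init(textureResource)
    return (material, queue)
}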
Jul ’21
Reply to Set camera feed as texture input for CustomMaterial
Alright, so I tried adjusting the pixel format from rgba8Unorm to rgba8Unorm_srgb, but that didn't make much of a difference. From what I've read, the issue seems to be related to gamma correction, and RealityKit (as well as SceneKit) renders in linear color space? To work around this I tried converting the color in the fragment shader like so:

/*
 This conversion method is copied from section 7.7.7 of the Metal Shading Language Specification:
 https://developer.apple.com/metal/Metal-Shading-Language-Specification.pdf
*/
static float srgbToLinear(float c) {
    if (c <= 0.04045)
        return c / 12.92;
    else
        return powr((c + 0.055) / 1.055, 2.4);
}

[[visible]]
void cameraMappingSurfaceShader(realitykit::surface_parameters params)
{
    auto surface = params.surface();
    float2 uv = params.geometry().uv0();
    half4 color = params.textures().custom().sample(samplerBilinear, uv);
    half3 finalColor = color.rgb;
    finalColor.r = srgbToLinear(finalColor.r);
    finalColor.g = srgbToLinear(finalColor.g);
    finalColor.b = srgbToLinear(finalColor.b);
    surface.set_emissive_color(finalColor);
}

The result looks a lot better and is pretty close, but it is still slightly darker than the background camera feed of the ARView. Just as a test I tried adjusting the exposure a little and got quite close with this setting:

arView.environment.background = .cameraFeed(exposureCompensation: -0.35)

But that is of course a workaround I'd like to avoid. Attached is an image of how it currently looks.

Also, could you give me a hint how I'd do the encoding of the matrix into a texture? Could I write it into a CGImage and pass that as the texture resource? I inspected the display transform and it seems there are only a couple of relevant parameters, so I've tried the following:

// This uses a simd_float4x4 matrix retrieved via ARFrame's displayTransform(…
let encodedDisplayTransform: SIMD4<Float> = .init(
    x: displayTransform.columns.0.x,
    y: displayTransform.columns.0.y,
    z: displayTransform.columns.3.x,
    w: displayTransform.columns.3.y
)
customDrawableMaterial.custom.value = encodedDisplayTransform

// Put the remaining values into unused material parameters.
customDrawableMaterial.metallic.scale = displayTransform.columns.1.x
customDrawableMaterial.roughness.scale = displayTransform.columns.1.y

and then I reconstruct the matrix within the geometry modifier.
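(The shader-side reconstruction roughly has this shape – a rough sketch only: the geometry-modifier entry point name and the way the two spare values reach the shader are assumptions on my part; only custom_parameter() is guaranteed by the encoding shown above:)

#include <RealityKit/RealityKit.h>

// Hypothetical sketch: rebuild the 2D affine display transform from the
// values encoded on the Swift side and bake it into the mesh UVs.
[[visible]]
void cameraMappingGeometryModifier(realitykit::geometry_parameters params)
{
    // custom_parameter() carries columns 0.xy and 3.xy (a, b, tx, ty).
    float4 encoded = params.uniforms().custom_parameter();

    // Assumption: columns 1.x / 1.y (c, d) arrive by whatever channel the
    // material exposes to the geometry modifier; hard-coded here as a stand-in.
    float2 cd = float2(0.0, 1.0);

    // CGAffineTransform layout: x' = a*x + c*y + tx, y' = b*x + d*y + ty
    float3x3 displayTransform = float3x3(
        float3(encoded.x, encoded.y, 0.0),
        float3(cd.x,      cd.y,      0.0),
        float3(encoded.z, encoded.w, 1.0)
    );

    float2 uv = params.geometry().uv0();
    float2 transformedUV = (displayTransform * float3(uv, 1.0)).xy;
    params.geometry().set_uv0(transformedUV);
}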
Jul ’21
Reply to Set camera feed as texture input for CustomMaterial
Yeah, it's getting there, but I can't seem to figure out what the last missing step is. The conversion from YCbCr values to sRGB is performed as described here: https://developer.apple.com/documentation/arkit/arframe/2867984-capturedimage So I guess there is one last conversion that is missing. The srgbToLinear method described above brings it close but darkens it too much. Since the conversion matrix from the docs already states that it converts into sRGB, would I even need to care about Rec. 709 at all? Okay, thank you – it would be great if they have another suggestion or tip.
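(For reference, the conversion in question, as given in the linked capturedImage documentation and the ARKit-with-Metal template, is this matrix, applied in the fragment shader that renders into the drawable:)

// YCbCr -> RGB with sRGB primaries; the 0.5 chroma offset is baked into the
// last column, as shown in the ARFrame.capturedImage documentation.
constant float4x4 ycbcrToRGBTransform = float4x4(
    float4(+1.0000f, +1.0000f, +1.0000f, +0.0000f),
    float4(+0.0000f, -0.3441f, +1.7720f, +0.0000f),
    float4(+1.4020f, -0.7141f, +0.0000f, +0.0000f),
    float4(-0.7010f, +0.5291f, -0.8860f, +1.0000f)
);

// Usage: float4 rgb = ycbcrToRGBTransform * float4(ySample, cbcrSample.rg, 1.0);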
Jul ’21
Reply to Transparent blending with gradient texture leads to color banding (RealityKit 2)
Thanks for the tip, but unfortunately that does not make any difference. I've already tried a lot of variants. Should I create a sample project? I'm starting to think this issue might be exclusive to CustomMaterial. Based on these docs: https://developer.apple.com/metal/Metal-RealityKit-APIs.pdf

"[…] If there are two entities, one in front of the other, for example, and the closer entity is translucent, both of those entity's fragments contribute to the final color of that one pixel."

I feel like this should work? Otherwise, if I stay with the first variant, do you have any idea how I could get rid of the gradient banding?
Sep ’21
Reply to Transparent blending with gradient texture leads to color banding (RealityKit 2)
Alright, I'm an idiot. Apparently you also need to explicitly set the blending property on your CustomMaterial to make it work, which kind of makes sense but wasn't clear to me right away. So for example this:

guard var customMaterial = try? CustomMaterial(…) else { return }
customMaterial.blending = .transparent(opacity: .init(floatLiteral: 1)) // .transparent is important! – the float literal can be whatever
customMaterial.faceCulling = .none
customMaterial.emissiveColor = .init(color: .red)

works fine, with proper blending of faces behind the object. Whereas if you don't specify the blending, the default is .opaque, and if you then set the opacity in your fragment shader like so:

surface.set_opacity(0);

the CustomMaterial behaves similarly to an OcclusionMaterial (from my testing) and hides everything behind it. What I haven't found out yet is how to disable the color banding when setting a black-and-white gradient texture as the blending texture. Will investigate and maybe file feedback for that!
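(By "gradient texture as the blending texture" I mean something along these lines – just a sketch; the asset name is made up, and whether the opacity parameter takes a texture via init(scale:texture:) like the PBR material properties is an assumption on my part:)

// Sketch only – assumes CustomMaterial's opacity accepts a texture via
// init(scale:texture:), mirroring PhysicallyBasedMaterial.
let gradientTexture = try TextureResource.load(named: "blackToWhiteGradient") // assumed asset name
customMaterial.blending = .transparent(
    opacity: .init(scale: 1.0, texture: .init(gradientTexture))
)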
Sep ’21
Reply to RealityKit AnimationEvents question
If I understood your question correctly, something like this should work:

var targetTransform = entityToAnimate.transform
targetTransform.translation.x += 0.4

let animationPlaybackController = entityToAnimate.move(
    to: targetTransform,
    relativeTo: entityToAnimate.parent,
    duration: 3,
    timingFunction: .easeOut
)

// `subscriptions` is a stored Set<AnyCancellable> (import Combine) that keeps the subscription alive.
arView.scene.subscribe(to: AnimationEvents.PlaybackCompleted.self, on: entityToAnimate) { [weak self] event in
    if event.playbackController == animationPlaybackController {
        print("Animation completed!")
    }
}
.store(in: &subscriptions)
Nov ’21
Reply to How can i add a UIView on the material of the ModelEntity?
Hi, is your UIView static or does it change a lot? If it's just a static view, you could create a snapshot, extract its CGImage and convert that into a TextureResource (https://developer.apple.com/documentation/realitykit/textureresource/3768851-generate), which can then be mapped onto whatever model you want. In case it is animated, I assume you could render that view at your desired update interval into a Metal texture and then use the DrawableQueue API to update your material with the new texture: https://developer.apple.com/documentation/realitykit/textureresource/drawablequeue Maybe this GitHub sample I've made could help you with the latter option: https://github.com/arthurschiller/realitykit-drawable-queue
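(For the static case, a minimal sketch of what I mean – the UnlitMaterial and the helper name are just example choices:)

import RealityKit
import UIKit

struct SnapshotError: Error {}

// Snapshot a UIView into a CGImage and turn it into a TextureResource
// that can be assigned to a material.
func makeMaterial(from view: UIView) throws -> UnlitMaterial {
    let renderer = UIGraphicsImageRenderer(bounds: view.bounds)
    let image = renderer.image { context in
        view.layer.render(in: context.cgContext)
    }
    guard let cgImage = image.cgImage else {
        throw SnapshotError()
    }

    let texture = try TextureResource.generate(
        from: cgImage,
        options: .init(semantic: .color)
    )

    var material = UnlitMaterial()
    material.color = .init(tint: .white, texture: .init(texture))
    return material
}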
Nov ’21
Reply to Get coordinates of pivot point in ModelEntity
Hi, as far as I know there is currently no way to retrieve the pivot point directly. What you can do, though, is query the bounding box, calculate a Y offset based on the min and max positions of the bounding box's corners, and set that as the position of the entity. If you then wrap it into an empty parent entity, you should get the desired effect.

public extension Entity {

    enum PivotPosition {
        case top
        case center
        case bottom
    }

    func wrapEntityAndSetPivotPosition(to targetPosition: PivotPosition) -> Entity {
        setPivotPosition(to: targetPosition, animated: false)
        let entity = Entity()
        entity.addChild(self)
        return entity
    }

    func setPivotPosition(to targetPosition: PivotPosition, animated: Bool = false) {
        let boundingBox = visualBounds(relativeTo: nil)
        let min = boundingBox.min
        let max = boundingBox.max

        let yTranslation: Float
        switch targetPosition {
        case .top:
            yTranslation = -max.y
        case .center:
            yTranslation = -(min.y + (max.y - min.y) / 2)
        case .bottom:
            yTranslation = -min.y
        }

        let targetPosition = simd_float3(
            x: boundingBox.center.x * -1,
            y: yTranslation,
            z: boundingBox.center.z * -1
        )

        guard animated else {
            position = targetPosition
            return
        }

        guard isAnchored, parent != nil else {
            print("Warning: to set the entity's pivot position animated, make sure it is already anchored and has a parent set.")
            return
        }

        var translationTransform = transform
        translationTransform.translation = targetPosition
        move(to: translationTransform, relativeTo: parent, duration: 0.3, timingFunction: .easeOut)
    }
}

And the whole thing in action:

let boxModel = ModelEntity(mesh: .generateBox(size: 0.3))
let wrappedBoxEntity = boxModel.wrapEntityAndSetPivotPosition(to: .bottom)

let boxAnchor = AnchorEntity(plane: .horizontal)
boxAnchor.addChild(wrappedBoxEntity)
arView.scene.anchors.append(boxAnchor)

I agree, though, that there should be a dedicated API for this – as we have in SceneKit. I filed feedback a while ago.
Apr ’22