
Reply to Set camera feed as texture input for CustomMaterial
Yeah, it's getting there, but I can't figure out what the last missing step is. The conversion from YCbCr values to sRGB is done as described here: https://developer.apple.com/documentation/arkit/arframe/2867984-capturedimage So I guess there is one final conversion still missing. The srgbToLinear method described above brings it close, but it darkens the image too much. Since the conversion matrix from the docs already states that it converts into sRGB, do I even need to care about Rec. 709 at all?

Okay, thank you. It would be great if they have another suggestion or tip.
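For reference, here is a quick CPU-side comparison of the two transfer functions, since that is essentially what the Rec. 709 question comes down to. The constants are the standard sRGB and BT.709 decode curves (taken from the specs, not from the RealityKit docs), and the same math could be mirrored in the surface shader:

import Foundation

// sRGB electro-optical transfer function; the same curve as the
// srgbToLinear function from the Metal Shading Language spec.
func srgbToLinear(_ c: Double) -> Double {
    c <= 0.04045 ? c / 12.92 : pow((c + 0.055) / 1.055, 2.4)
}

// Inverse of the Rec. 709 (BT.709) camera OETF, for comparison.
func rec709ToLinear(_ c: Double) -> Double {
    c < 0.081 ? c / 4.5 : pow((c + 0.099) / 1.099, 1.0 / 0.45)
}

// The two curves are close but not identical, so the choice does
// shift the midtones slightly.
for c in stride(from: 0.0, through: 1.0, by: 0.25) {
    print(c, srgbToLinear(c), rec709ToLinear(c))
}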
Jul ’21
Reply to Set camera feed as texture input for CustomMaterial
Alright, so I tried adjusting the pixel format from rgba8Unorm to rgba8Unorm_srgb, but that didn't make much of a difference. From what I've read, the issue seems to be related to gamma correction, and RealityKit (like SceneKit) renders in linear color space? To work around this I tried converting the color in the fragment shader like so:

/* This conversion method is copied from section 7.7.7 of the
   Metal Shading Language Specification:
   https://developer.apple.com/metal/Metal-Shading-Language-Specification.pdf */
static float srgbToLinear(float c) {
    if (c <= 0.04045)
        return c / 12.92;
    else
        return powr((c + 0.055) / 1.055, 2.4);
}

[[visible]]
void cameraMappingSurfaceShader(realitykit::surface_parameters params)
{
    auto surface = params.surface();
    float2 uv = params.geometry().uv0();

    half4 color = params.textures().custom().sample(samplerBilinear, uv);

    // Convert each channel from sRGB to linear before emitting it.
    half3 finalColor = color.rgb;
    finalColor.r = srgbToLinear(finalColor.r);
    finalColor.g = srgbToLinear(finalColor.g);
    finalColor.b = srgbToLinear(finalColor.b);

    surface.set_emissive_color(finalColor);
}

The result looks a lot better and pretty close, but it is still slightly darker than the background camera feed of the ARView. Just as a test I adjusted the exposure a little and got quite close with this setting:

arView.environment.background = .cameraFeed(exposureCompensation: -0.35)

But that is of course a workaround I'd like to avoid. Attached is an image of how it currently looks.

Also, could you give me a hint how I'd do the encoding of the matrix into a texture? Could I write it into a CGImage and pass that as the texture resource? I inspected the display transform and it seems there are only a couple of relevant parameters, so I've tried the following:

// This uses a simd_float4x4 matrix retrieved via ARFrame.displayTransform(…
let encodedDisplayTransform: SIMD4<Float> = .init(
    x: displayTransform.columns.0.x,
    y: displayTransform.columns.0.y,
    z: displayTransform.columns.3.x,
    w: displayTransform.columns.3.y
)
customDrawableMaterial.custom.value = encodedDisplayTransform

// Put the remaining values into unused material parameters.
customDrawableMaterial.metallic.scale = displayTransform.columns.1.x
customDrawableMaterial.roughness.scale = displayTransform.columns.1.y

and then I reconstruct the matrix within the geometry modifier.
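For completeness, the reconstruction I mirror in the geometry modifier boils down to applying a 2x2 matrix plus a translation to the UV. Here it is as a CPU-side Swift sanity check of the packing above; the helper name and parameter layout are mine for illustration, with custom.value supplying the first column and the translation, and the two material scales supplying the second column:

import simd

// Rebuild the affine display transform from the six packed floats
// and apply it to a UV coordinate. The geometry modifier does the
// same math on the GPU.
func applyEncodedDisplayTransform(
    _ uv: SIMD2<Float>,
    encoded: SIMD4<Float>,    // custom.value: column 0 in .xy, translation in .zw
    column1: SIMD2<Float>     // metallic.scale and roughness.scale
) -> SIMD2<Float> {
    let column0 = SIMD2<Float>(encoded.x, encoded.y)
    let translation = SIMD2<Float>(encoded.z, encoded.w)
    return SIMD2<Float>(
        column0.x * uv.x + column1.x * uv.y + translation.x,
        column0.y * uv.x + column1.y * uv.y + translation.y
    )
}

// Example: verify that a corner UV ends up where the original
// displayTransform puts it before porting the math to Metal.
let mapped = applyEncodedDisplayTransform(
    SIMD2<Float>(0, 1),
    encoded: SIMD4<Float>(1, 0, 0, 0),   // placeholder values
    column1: SIMD2<Float>(0, 1)
)
print(mapped)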
Jul ’21
Reply to Set camera feed as texture input for CustomMaterial
Hi, so I dug a little bit into DrawableQueue and got it working. It seems really powerful, and there is apparently no measurable performance hit. However, I've got one little issue: the rendered texture of my drawable looks a little too bright, or oversaturated. I assume this is some kind of color mapping issue? My setup looks like the following.

First I set up my DrawableQueue:

let descriptor = TextureResource.DrawableQueue.Descriptor(
    pixelFormat: .rgba8Unorm,
    width: 1440,
    height: 1440,
    usage: .unknown,
    mipmapsMode: .none
)
…
let queue = try TextureResource.DrawableQueue(descriptor)

Next I set up the MTLRenderPipelineDescriptor and the corresponding MTLRenderPipelineState:

let pipelineDescriptor = MTLRenderPipelineDescriptor()
pipelineDescriptor.sampleCount = 1
pipelineDescriptor.colorAttachments[0].pixelFormat = .rgba8Unorm
pipelineDescriptor.depthAttachmentPixelFormat = .invalid
…

Then, at each frame, I convert the current frame's pixel buffer into Metal textures, like in the ARKit-with-Metal Xcode template:

guard
    let drawable = try? drawableQueue.nextDrawable(),
    let commandBuffer = commandQueue?.makeCommandBuffer(),
    let renderPipelineState = renderPipelineState,
    let frame = arView?.session.currentFrame
else {
    return
}

// Update the vertex coordinates with the display transform.
updateImagePlane(frame: frame)

let pixelBuffer = frame.capturedImage

// Convert the captured image into Metal textures (Y and CbCr planes).
guard
    !(CVPixelBufferGetPlaneCount(pixelBuffer) < 2),
    let capturedImageTextureY = createTexture(
        fromPixelBuffer: pixelBuffer,
        pixelFormat: .r8Unorm,
        planeIndex: 0
    ),
    let capturedImageTextureCbCr = createTexture(
        fromPixelBuffer: pixelBuffer,
        pixelFormat: .rg8Unorm,
        planeIndex: 1
    )
else {
    return
}

let renderPassDescriptor = MTLRenderPassDescriptor()
renderPassDescriptor.colorAttachments[0].texture = drawable.texture
renderPassDescriptor.colorAttachments[0].loadAction = .load
renderPassDescriptor.colorAttachments[0].storeAction = .store
renderPassDescriptor.renderTargetWidth = textureResource.width
renderPassDescriptor.renderTargetHeight = textureResource.height

guard let renderEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptor) else {
    return
}

renderEncoder.pushDebugGroup("DrawCapturedImage")
renderEncoder.setCullMode(.none)
renderEncoder.setRenderPipelineState(renderPipelineState)
renderEncoder.setVertexBuffer(imagePlaneVertexBuffer, offset: 0, index: 0)
renderEncoder.setFragmentTexture(capturedImageTextureY, index: 1)
renderEncoder.setFragmentTexture(capturedImageTextureCbCr, index: 2)
renderEncoder.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: 4)
renderEncoder.popDebugGroup()
renderEncoder.endEncoding()

commandBuffer.present(drawable)
commandBuffer.commit()

In the fragment shader of my quad-mapped texture I then perform the ycbcrToRGBTransform.
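For reference, the createTexture helper is essentially the one from the ARKit-with-Metal Xcode template, backed by a CVMetalTextureCache created once at setup. Sketched from memory here, so treat the exact shape as approximate:

import CoreVideo
import Metal

// Created once, e.g. next to the command queue:
// CVMetalTextureCacheCreate(nil, nil, device, nil, &capturedImageTextureCache)
var capturedImageTextureCache: CVMetalTextureCache!

func createTexture(fromPixelBuffer pixelBuffer: CVPixelBuffer,
                   pixelFormat: MTLPixelFormat,
                   planeIndex: Int) -> MTLTexture? {
    let width = CVPixelBufferGetWidthOfPlane(pixelBuffer, planeIndex)
    let height = CVPixelBufferGetHeightOfPlane(pixelBuffer, planeIndex)

    // Wrap the requested plane of the pixel buffer in a Metal texture
    // without copying the underlying memory.
    var cvTexture: CVMetalTexture?
    let status = CVMetalTextureCacheCreateTextureFromImage(
        kCFAllocatorDefault,
        capturedImageTextureCache,
        pixelBuffer,
        nil,
        pixelFormat,
        width,
        height,
        planeIndex,
        &cvTexture
    )
    guard status == kCVReturnSuccess, let cvTexture = cvTexture else {
        return nil
    }
    return CVMetalTextureGetTexture(cvTexture)
}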
Then, finally, in my CustomMaterial surface shader I just sample the texture and display it:

[[visible]]
void cameraMappingSurfaceShader(realitykit::surface_parameters params)
{
    auto surface = params.surface();
    float2 uv = params.geometry().uv0();

    // Flip the UVs vertically.
    uv.y = 1.0 - uv.y;

    half4 color = params.textures().custom().sample(samplerBilinear, uv);
    surface.set_emissive_color(color.rgb);
}

Almost everything looks fine; it's just a slight difference in brightness. Do I maybe need to work with a different pixel format? As a test I also loaded a simple image as a texture resource and then replaced it, via the DrawableQueue and a Metal texture, with the same image. That gave me similar results (too bright). The encoding of the display transform matrix will be the next step, but for now I'd like to get this part working properly. Thanks for any help!
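For reference, ruling out the pixel format only takes two changes: switching both the drawable queue descriptor and the pipeline's color attachment to the sRGB variant so they stay in sync. As mentioned in the reply above, this alone didn't make much of a difference for me, so it's just one experiment rather than a fix:

import Metal
import RealityKit

// Same setup as before, but with the sRGB variant of the pixel format
// on both sides so the drawable and the pipeline agree.
let descriptor = TextureResource.DrawableQueue.Descriptor(
    pixelFormat: .rgba8Unorm_srgb,
    width: 1440,
    height: 1440,
    usage: .unknown,
    mipmapsMode: .none
)

let pipelineDescriptor = MTLRenderPipelineDescriptor()
pipelineDescriptor.sampleCount = 1
pipelineDescriptor.colorAttachments[0].pixelFormat = .rgba8Unorm_srgb
pipelineDescriptor.depthAttachmentPixelFormat = .invalid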
Jul ’21