Currently, since this project is a work in progress, the image pipeline executes exactly once: there is no loop, and that single execution is what gets analyzed. During the execution, MTLCaptureManager captures the command buffer. Within the pipeline, there is exactly one spot where GPU-CPU synchronization occurs, via a shared event. The shared event, like the other resources in the pipeline, is created before the command buffer. All of the pipeline's resources are tracked by Metal (hazardTrackingMode = .tracked), though I hope to change this in the future and use heaps for better efficiency.
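For concreteness, the capture setup and the shared-event handshake look roughly like the sketch below. The event values (1 and 2), the listener queue label, and the commented-out processOnCPU() are illustrative placeholders, not the project's actual names:

import Foundation
import Metal

let device = MTLCreateSystemDefaultDevice()!
let commandQueue = device.makeCommandQueue()!

// Programmatic capture of the single pipeline execution.
let captureManager = MTLCaptureManager.shared()
let captureDescriptor = MTLCaptureDescriptor()
captureDescriptor.captureObject = commandQueue
try captureManager.startCapture(with: captureDescriptor)

// The shared event is created up front, alongside the other resources.
let sharedEvent = device.makeSharedEvent()!
let listener = MTLSharedEventListener(dispatchQueue: DispatchQueue(label: "event-listener"))

// The GPU signals value 1 when the first kernel's output is ready; the CPU
// does its work in the notification handler, then bumps the value to 2 so
// the GPU-side wait encoded before the second kernel can proceed.
sharedEvent.notify(listener, atValue: 1) { event, _ in
    // processOnCPU()  // placeholder for the CPU-side work
    event.signaledValue = 2
}

// ... build and commit the command buffers (see the overview below) ...

captureManager.stopCapture()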
Here is a brief overview of how the code is organized:
preloadResources()
// 1. Let CoreImage render the CGImage into the metal texture
let commandBufferDescriptor = MTLCommandBufferDescriptor()
commandBufferDescriptor.errorOptions = .encoderExecutionStatus // capture per-encoder errors
let ciCommandBuffer = commandQueue.makeCommandBuffer(descriptor: commandBufferDescriptor)!
let ciSourceImage = CIImage(cgImage: sourceImage)
ciContext.render(ciSourceImage,
                 to: sourceImageTexture,
                 commandBuffer: ciCommandBuffer,
                 bounds: sourceImageTexture.bounds2D,
                 colorSpace: CGColorSpaceCreateDeviceRGB())
ciCommandBuffer.commit()
// 2. Do the rest of the image processing
let commandBuffer = commandQueue.makeCommandBuffer(descriptor: commandBufferDescriptor)!
try imageProcessorA.encode(commandBuffer: commandBuffer,
                           sourceTexture: sourceImageTexture,
                           destinationTexture: sourceImageIntermediateTexture)
try imageProcessorA.encode(commandBuffer: commandBuffer,
                           sourceTexture: sourceImageIntermediateTexture,
                           destinationTexture: destinationImageTexture)
commandBuffer.commit()
imageProcessorA contains kernelA and kernelB and performs the synchronization as described above.
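To make that concrete, here is a rough sketch of what such an encode method could look like: two compute passes with the event handshake encoded between them at the command-buffer level. The pipeline-state names, the texture bindings, and the event values are assumptions, not the actual implementation:

import Metal

struct ImageProcessorA {
    let kernelAPipeline: MTLComputePipelineState  // assumed name
    let kernelBPipeline: MTLComputePipelineState  // assumed name
    let sharedEvent: MTLSharedEvent

    enum EncodeError: Error { case encoderCreationFailed }

    func encode(commandBuffer: MTLCommandBuffer,
                sourceTexture: MTLTexture,
                destinationTexture: MTLTexture) throws {
        // Pass 1: kernelA. (In the real pipeline this presumably writes an
        // internal intermediate that the CPU then inspects.)
        try dispatch(kernelAPipeline, on: commandBuffer,
                     reading: sourceTexture, writing: destinationTexture)

        // The handshake: the GPU signals the CPU, then stalls until the
        // shared event's notification handler bumps the value to 2.
        commandBuffer.encodeSignalEvent(sharedEvent, value: 1)
        commandBuffer.encodeWaitForEvent(sharedEvent, value: 2)

        // Pass 2: kernelB.
        try dispatch(kernelBPipeline, on: commandBuffer,
                     reading: sourceTexture, writing: destinationTexture)
    }

    private func dispatch(_ pipeline: MTLComputePipelineState,
                          on commandBuffer: MTLCommandBuffer,
                          reading source: MTLTexture,
                          writing destination: MTLTexture) throws {
        guard let encoder = commandBuffer.makeComputeCommandEncoder() else {
            throw EncodeError.encoderCreationFailed
        }
        encoder.setComputePipelineState(pipeline)
        encoder.setTexture(source, index: 0)
        encoder.setTexture(destination, index: 1)
        let width = pipeline.threadExecutionWidth
        let height = pipeline.maxTotalThreadsPerThreadgroup / width
        // dispatchThreads requires non-uniform threadgroup support.
        encoder.dispatchThreads(MTLSize(width: destination.width, height: destination.height, depth: 1),
                                threadsPerThreadgroup: MTLSize(width: width, height: height, depth: 1))
        encoder.endEncoding()
    }
}

The important detail is that encodeSignalEvent/encodeWaitForEvent sit between the two encoders on the command buffer, so the wait happens on the GPU timeline while the CPU work runs in the event's notification handler.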
I suppose I could schedule a technical review session with an engineer to provide more details about the project, if more context is needed to resolve the problem.