The problem might be this:
Core Image uses 16-bit float RGBA as its default working format. That means that whenever it needs an intermediate buffer during rendering, it creates a 4-channel 16-bit float surface to render into. It also means that your 1-channel unsigned-integer values are automatically mapped to float values in 0.0...1.0. That's probably where you lose precision.
There are a few options to circumvent this:
You could set the workingFormat context option to .L8 or .R8. However, this means all intermediate buffers will have that format. If you want to mix processing of the segmentation mask with other images, this won't work. If you only want to process the mask separately, you can set up a separate CIContext with this option. Note, however, that most built-in CIFilters assume a floating-point working format and might not perform well with this format.
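As a minimal sketch of that first option, you could create a dedicated context like this (assuming you route only the mask through it):

```swift
import CoreImage

// A separate context just for mask processing. With .R8 as the
// working format, intermediate buffers stay single-channel 8-bit
// instead of being converted to 16-bit float RGBA.
let maskContext = CIContext(options: [
    .workingFormat: CIFormat.R8
])
```

Renders of other (color) images should keep going through your regular context, since this one would also force them into a single-channel format.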
You can process your segmentation map with Metal (as you suggested) as part of your CIFilter pipeline using a CIImageProcessorKernel. For the kernel, you can set formatForInput(at:) and outputFormat to .R8. This tells Core Image that it doesn't need to convert the segmentation mask before passing it to your processor kernel. In the process(with:arguments:output:) method, you can access the input's Metal texture and perform custom Metal processing on it, rendering into the output's texture (which then also has the .R8 format). This way, you won't lose any precision.
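A rough sketch of such a kernel might look like this (the class name is hypothetical, and the actual Metal encoding is omitted since it depends on your processing):

```swift
import CoreImage
import Metal

class MaskProcessorKernel: CIImageProcessorKernel {

    // Ask Core Image to hand us the input texture in R8...
    override class func formatForInput(at input: Int32) -> CIFormat {
        return .R8
    }

    // ...and to allocate the output texture in R8 as well.
    override class var outputFormat: CIFormat {
        return .R8
    }

    override class func process(with inputs: [CIImageProcessorInput]?,
                                arguments: [String: Any]?,
                                output: CIImageProcessorOutput) throws {
        guard
            let commandBuffer = output.metalCommandBuffer,
            let inputTexture = inputs?.first?.metalTexture,
            let outputTexture = output.metalTexture
        else { return }
        // Encode your custom Metal work here, e.g. a compute pass that
        // reads inputTexture and writes outputTexture (both single-channel).
        _ = (commandBuffer, inputTexture, outputTexture)
    }
}
```

You would then invoke it inside your pipeline via MaskProcessorKernel.apply(withExtent:inputs:arguments:), which returns a CIImage you can pass to subsequent filters.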
I think the second option is the best choice here as you get the best of both worlds (custom Metal processing + CI integration).
Tip: You can always use CI_PRINT_TREE to check the format of the intermediate buffers CI is using during rendering.
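For instance, when launching your binary from a terminal (or via the environment variables in your Xcode scheme), you can enable the basic render-tree dump like this:

```shell
# "1" prints the basic filter/render graph to the console;
# higher values and additional keywords enable more detail.
export CI_PRINT_TREE=1
```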