Adjusting L16 pixel format values in custom CIFilter/CIKernel

Is there documentation describing the semantics of a Metal CIKernel function?

I have image data in which each pixel is a signed 16-bit integer. I need to map that data to color values in a number of ways, starting with a simple shift from signed to unsigned (e.g. the data in one image ranges from about -8,000 to +20,000, and I want to simply add 8,000 to each pixel's value).
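
For concreteness, here's roughly the kernel I imagine ending up with, assuming an L16 sample is just the raw 16-bit value divided by 65,535. That scale factor is purely a guess on my part (and the signed-to-unsigned question is exactly the part I'm unsure about), so treat this as an untested sketch:

Code Block
#include <metal_stdlib>
#include <CoreImage/CoreImage.h>
using namespace metal;

// Hypothetical sketch (mine, untested): if an L16 sample really is
// rawValue / 65535.0, then "add 8,000" becomes "add 8000/65535".
extern "C" coreimage::sample_h
shiftHeight(coreimage::sample_h s, coreimage::destination dest)
{
    const half shift = half(8000.0 / 65535.0);   // ≈ 0.122
    return coreimage::sample_h(s.rgb + shift, s.a);
}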

I've got a basic filter working, but it seems to treat the pixel values as floating point. I've tried both the sample_t and sample_h types in my kernel, along with some simple arithmetic:

Code Block
#include <metal_stdlib>
#include <CoreImage/CoreImage.h>
using namespace metal;

extern "C" coreimage::sample_h
heightShader(coreimage::sample_h inS, coreimage::destination inDest)
{
    // Nudge every channel (including alpha) up a little, just to see an effect.
    coreimage::sample_h r = inS + 0.1h;
    return r;
}


This has an effect, but I don't really know what's in inS. Is it a vector of four float16 values? What are the minimum and maximum? They seem to be clamped to 1.0 (and perhaps -1.0). I've told Core Image that my input image is CIFormat.L16, which is 16-bit luminance, so I imagine it's interpreting the bits as unsigned? Where, if anywhere, is this documented — that is, the correspondence between an input image's pixel format and the actual values that get passed to a filter kernel?
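
To make my confusion concrete, here's the arithmetic I'm currently assuming (again, the /65,535 mapping is a guess, not something I've found documented):

Code Block
// Guess: an L16 pixel arrives in the kernel as rawValue / 65535.0.
// Under that guess:
constant float kCountsPerUnit = 65535.0;            // kernel value 1.0 == 65,535 raw counts
constant float kTestNudge     = 0.1 * 65535.0;      // the "+ 0.1" above adds about 6,554 counts
constant float kDesiredShift  = 8000.0 / 65535.0;   // the "+8,000" I actually want is ≈ 0.122
// ...and the clamping I see at 1.0 would just be the top of the 16-bit range.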

Is there a type that would let me work on the integer values directly? This document implies that I can only work with floating-point values, but it doesn't say how they're mapped.
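
If the answer is that Core Image kernels only ever see floats, I'm wondering whether the integer route means skipping CIKernel entirely and reading the raw values in an ordinary Metal compute kernel — something like this sketch (my own, untested), assuming the data is loaded into an MTLTexture with the .r16Sint pixel format and the result is written to a float-compatible texture such as .r16Unorm:

Code Block
#include <metal_stdlib>
using namespace metal;

// Sketch: read the signed 16-bit values directly, apply the +8,000 shift,
// and write a normalized result. Names and texture bindings are hypothetical.
kernel void shiftHeights(texture2d<int, access::read>    inTex  [[texture(0)]],
                         texture2d<float, access::write> outTex [[texture(1)]],
                         uint2 gid [[thread_position_in_grid]])
{
    if (gid.x >= inTex.get_width() || gid.y >= inTex.get_height()) { return; }
    int raw     = inTex.read(gid).r;       // e.g. -8,000 ... +20,000
    int shifted = raw + 8000;              // 0 ... 28,000
    outTex.write(float4(float(shifted) / 65535.0), gid);
}

That feels like a lot of machinery compared with a CIFilter, though, so I'd much rather understand what Core Image is actually handing my kernel.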

Any help would be appreciated. Thanks.