Is there documentation describing the semantics of a Metal CIKernel function?
I have image data where each pixel is a signed 16-bit integer. I need to convert that into any number of color values, starting with a simple shift from signed to unsigned (e.g. the data in one image ranges from about -8,000 to +20,000, and I want to simply add 8,000 to each pixel's value).
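To be concrete, this is the kind of shift I mean if I did it on the CPU before handing the data to Core Image (just a sketch; the loader and the fixed 8,000 offset are only illustrative):
// Illustrative only: shift signed 16-bit samples into unsigned range.
let rawSamples: [Int16] = loadRawSamples()   // hypothetical loader
let offset: Int32 = 8_000                    // matches the example range above
let shifted: [UInt16] = rawSamples.map { sample in
    // Widen to Int32 so the addition can't overflow, then clamp to UInt16.
    UInt16(clamping: Int32(sample) + offset)
}
But I'd rather do this shift (and subsequent color mapping) in the kernel itself, which is why I'm asking about the kernel's value semantics.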
I've got a basic filter working, but it seems to treat the pixel values as floating point. I've tried using both the sample_t and sample_h types in my kernel, along with simple arithmetic:
extern "C"
coreimage::sample_h
heightShader(coreimage::sample_h inS, coreimage::destination inDest)
{
coreimage::sample_h r = inS + 0.1;
return r;
}
This has an effect, but I don't really know what's in inS. Is it a vector of four float16 values? What are the minimum and maximum values? They seem to be clamped to 1.0 (and perhaps to -1.0). I've told Core Image that my input image is CIFormat.L16, which is 16-bit luminance, so I imagine it's interpreting the bits as unsigned? Where, if anywhere, is the correspondence between an input image's pixel format and the actual values passed to a filter kernel documented?
Is there a type that lets me work on the integer values? This document, https://developer.apple.com/metal/MetalCIKLReference6.pdf, implies that I can only work with floating-point values, but it doesn't say how they're mapped.
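To make my assumption concrete: if an L16 pixel really is treated as unsigned and normalized to 0...1, I would expect something like the following (purely my guess; none of this comes from documentation):
// My working assumption about the mapping (unverified):
// a stored UInt16 value v shows up in the kernel as v / 65535.0.
let stored: UInt16 = 20_000
let kernelValue = Float(stored) / 65_535.0   // ≈ 0.305
// Under that assumption, "add 8,000 to each pixel" would have to become
// "add 8,000 / 65,535 ≈ 0.122" inside the kernel.
let kernelOffset = 8_000.0 / 65_535.0
Is that the actual mapping, or is it documented somewhere I've missed?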
Any help would be appreciated. Thanks.
I’m writing an app that, among other things, displays very large images (e.g. 106,694 x 53,347 pixels). These are GeoTIFF images, in this case containing digital elevation data for a whole planet. I will eventually need to be able to draw polygons on the displayed image.
There was a time when one would use CATiledLayer, but I wonder what is best today. I started this app in Swift/Cocoa, but I'm toying with the idea of starting over in SwiftUI (my biggest hesitation is that I have yet to upgrade to Big Sur).
The image data I have is in strips, with an integral number of image rows per strip. Strips are not guaranteed to be contiguous in the file. Pixel formats vary, but in the motivating use case they are 16 bits per pixel, with the values signifying meters of elevation. As a first approximation, I can simply display these values as a 16 bpp grayscale image.
Is the right thing to do to set up a Core Image pipeline? As I understand it, that should give me some automatic memory management, right?
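For concreteness, here is roughly what I imagine the Core Image side of that would look like for one decoded strip (a sketch only; the strip buffer, width, and rows-per-strip values are hypothetical):
import Foundation
import CoreImage

// Sketch: wrap one decoded strip of 16-bit grayscale samples in a CIImage.
let stripPixels: [UInt16] = decodeStrip()   // hypothetical TIFF-reader call
let stripWidth = 106_694
let rowsPerStrip = 64

let stripData = stripPixels.withUnsafeBufferPointer { Data(buffer: $0) }
let stripImage = CIImage(bitmapData: stripData,
                         bytesPerRow: stripWidth * MemoryLayout<UInt16>.stride,
                         size: CGSize(width: stripWidth, height: rowsPerStrip),
                         format: .L16,
                         colorSpace: nil)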
I’m hoping to find out the best approach before I spend a lot of time going down the wrong path.
I have some code calling this method:
mutating
func
readOffset() -> UInt64
{
    let offset: UInt64
    debugLog("readOffset")
    switch (self.formatVersion)
    {
    case .v42:
        // The older format stores a 32-bit offset; widen it to UInt64.
        let offset32: UInt32 = self.reader.get()
        offset = UInt64(offset32)

    case .v43:
        // The newer format stores the offset as a 64-bit value directly.
        offset = self.reader.get()
    }
    return offset
}
If I put a breakpoint on the switch statement, Xcode never stops there, and if the debugLog() call is commented out, I can't even step into the function at the call site; it just runs to the next breakpoint in my code, wherever that happens to be.
If I put the breakpoint on debugLog(), it stops at the breakpoint.
If I put breakpoints at the self.reader.get() calls, it stops at those breakpoints AND I can step into it.
This is a unit test targeting macOS, and optimization is -Onone.
Xcode 12.4 (12D4e) on Catalina 10.15.7 (19H524).