
Reply to CIColorCube sometimes producing no or broken output in macOS 13
It turns out the problem was caused by how we loaded the cube data. Previously, we did it like this:

```swift
let cubeImage: CGImage = ...

// Render the cube image into a 32-bit float context,
// since that's the data format needed by CIColorCube.
let pixelData = UnsafeMutablePointer<simd_float4>.allocate(capacity: cubeImage.width * cubeImage.height)
let bitmapInfo = CGImageAlphaInfo.premultipliedLast.rawValue | CGBitmapInfo.floatComponents.rawValue | CGBitmapInfo.byteOrder32Little.rawValue
let colorSpace = cubeImage.colorSpace ?? CGColorSpace(name: CGColorSpace.sRGB)!

guard let bitmapContext = CGContext(data: pixelData,
                                    width: cubeImage.width,
                                    height: cubeImage.height,
                                    bitsPerComponent: MemoryLayout<simd_float4.Scalar>.size * 8,
                                    bytesPerRow: MemoryLayout<simd_float4>.size * cubeImage.width,
                                    space: colorSpace,
                                    bitmapInfo: bitmapInfo)
else {
    assertionFailure("Failed to create bitmap context for conversion")
    return
}

bitmapContext.draw(cubeImage, in: CGRect(x: 0, y: 0, width: cubeImage.width, height: cubeImage.height))

let data = Data(bytesNoCopy: pixelData, count: bitmapContext.bytesPerRow * bitmapContext.height, deallocator: .free)
// pass data to filter
```

Note that we pre-allocated the `pixelData` buffer and gave it to the `CGContext` to render the cube image into. It seems that the data was corrupted or released too early in some cases, causing the erroneous behavior described above, even though we assumed that `Data(bytesNoCopy:...)` would take ownership of the buffer.

To fix this, we let `CGContext` create and manage its own buffer and copy the cube data out after the draw:

```swift
let cubeImage: CGImage = ...

// Render the cube image into a 32-bit float context,
// since that's the data format needed by CIColorCube.
let bitmapInfo = CGImageAlphaInfo.premultipliedLast.rawValue | CGBitmapInfo.floatComponents.rawValue | CGBitmapInfo.byteOrder32Little.rawValue
let colorSpace = cubeImage.colorSpace ?? CGColorSpace(name: CGColorSpace.sRGB)!

guard let bitmapContext = CGContext(data: nil, // let CGContext allocate and manage the buffer
                                    width: cubeImage.width,
                                    height: cubeImage.height,
                                    bitsPerComponent: MemoryLayout<simd_float4.Scalar>.size * 8,
                                    bytesPerRow: MemoryLayout<simd_float4>.size * cubeImage.width,
                                    space: colorSpace,
                                    bitmapInfo: bitmapInfo)
else {
    assertionFailure("Failed to create bitmap context for conversion")
    return
}

bitmapContext.draw(cubeImage, in: CGRect(x: 0, y: 0, width: cubeImage.width, height: cubeImage.height))

guard let pixelData = bitmapContext.data else {
    assertionFailure("Failed to get cube data")
    return
}

// Copy the data out of the context-owned buffer.
let data = Data(bytes: pixelData, count: bitmapContext.bytesPerRow * bitmapContext.height)
// pass data to filter
```
Topic: App & System Services SubTopic: Core OS Tags:
Oct ’22
Reply to [CIRAWFilterImpl semanticSegmentationHairMatte]: unrecognized selector sent to instance
This seems like a bug in the CIRAWFilter implementation. It would be great if you could file a bug report in the Feedback app for that. Thanks!

A conceptual note: CIRAWFilter is meant to be initialized with RAW image data. You are passing it PNG data, which is not what it was designed for. It's a bit surprising that it even works with non-RAW images.

If you want to read the auxiliary data embedded in an image, you can instead do the following:

```swift
let hairMatte = CIImage(contentsOf: imageFileURL, options: [CIImageOption.auxiliarySemanticSegmentationHairMatte: true])
```

This should work with most CIImage initializers that provide the options parameter. Though I'm not sure if it would work if you load the image with UIImage(named:), as it might strip the auxiliary data on load. Check out CIImageOption for the available aux data to load.
Topic: Media Technologies SubTopic: General Tags:
Jan ’23
Reply to CIFilter documentation for CIMaximumComponent?
For Core Image documentation in general, I can recommend cifilter.io, though it does not list the newest filters. You can also check out the Filter Magic app, which lets you play with most CIFilters and has a lot of documentation.

As for CIMaximumComponent and CIMinimumComponent: they take the max/min of the R, G, and B values and return a pixel with all channels set to that value. Some examples:

RGB(1.0, 0.0, 0.0) -> max: RGB(1.0, 1.0, 1.0) | min: RGB(0.0, 0.0, 0.0)
RGB(0.5, 0.7, 0.3) -> max: RGB(0.7, 0.7, 0.7) | min: RGB(0.3, 0.3, 0.3)

So yes, they turn the image into grayscale, but it might not be what you want, since the resulting value doesn't represent the perceived lightness of the color. You might want to check out CIPhotoEffectMono, CIPhotoEffectNoir, and CIPhotoEffectTonal for more natural grayscale conversions.
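For reference, applying the filter takes no parameters besides the input image. A minimal sketch (the input image here is just a placeholder; use whatever CIImage you have):

```swift
import CoreImage

// Hypothetical input; any CIImage works here.
let inputImage = CIImage(color: .red).cropped(to: CGRect(x: 0, y: 0, width: 1, height: 1))

// CIMaximumComponent takes only an input image, no other parameters.
let filter = CIFilter(name: "CIMaximumComponent")!
filter.setValue(inputImage, forKey: kCIInputImageKey)

// Each output pixel is max(R, G, B) replicated into all channels,
// so pure red (1, 0, 0) becomes white (1, 1, 1).
let grayImage = filter.outputImage
```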
Topic: Media Technologies SubTopic: General Tags:
Jan ’23
Reply to EDR doesn't work on iOS?
To enable EDR rendering, all we do is set the colorPixelFormat to MTLPixelFormatRGBA16Float (note: RGBA, not BGRA) and wantsExtendedDynamicRangeContent to YES. We don't change the colorSpace, since it is already set to extended linear sRGB after setting the other properties. As soon as we render pixel values outside [0...1], the screen switches to EDR mode, and the potential and current HDR headroom adjust accordingly.
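As a sketch, the setup described above looks roughly like this, assuming you render into a CAMetalLayer (the surrounding object names are illustrative, not from the original post):

```swift
import Metal
import QuartzCore

let metalLayer = CAMetalLayer()
metalLayer.device = MTLCreateSystemDefaultDevice()

// RGBA16Float (not BGRA) is required for EDR content.
metalLayer.pixelFormat = .rgba16Float

// Opt in to extended dynamic range. The layer's colorspace is
// already extended linear sRGB at this point, so we leave it alone.
metalLayer.wantsExtendedDynamicRangeContent = true

// Rendering pixel values above 1.0 now switches capable displays into EDR mode.
```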
Topic: Graphics & Games SubTopic: Metal Tags:
Mar ’23
Reply to High CPU usage with CoreImage vs Metal
Every time you render a CIImage with a CIContext, CI does a filter graph analysis to determine the best path for rendering the image (determining intermediates, region of interest, kernel concatenation, etc.). This can be quite CPU-intensive. If you only have a few simple operations to perform on your image, and you can easily implement them in Metal directly, you are probably better off using that. However, I would also suggest you file Feedback with the Core Image team and report your findings. We also observe a very heavy CPU load in our apps, caused by Core Image. Maybe they find a way to further optimize the graph analysis – especially for consecutive render calls with the same instructions.
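One mitigation worth noting (my own suggestion, not part of the original question): create the CIContext once and reuse it across renders, since the context carries internal caches that are lost if you create a fresh one per frame. A minimal sketch:

```swift
import CoreImage

// Create the context once, e.g. as a long-lived property,
// so Core Image can reuse its internal caches and GPU resources.
let sharedContext = CIContext()

func render(_ image: CIImage) -> CGImage? {
    // Reusing the same context avoids re-creating resources on
    // every call; the per-render graph analysis still happens.
    sharedContext.createCGImage(image, from: image.extent)
}
```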
Topic: Graphics & Games SubTopic: General Tags:
Jul ’23