Post

Replies

Boosts

Views

Activity

How can I get CALayer to render sublayers correctly when generating PDF?
I am trying to generate an image of a CALayer containing a number of sublayers positioned at specific points, but CALayer.render(in:) does not honour the zPosition of the sublayers. It works fine on screen, but when rendering to PDF it seems to draw them in the order they were created. The sublayers are positioned (x, y, angle) on the drawing layer. One solution seems to be to override the render(in:) method on the drawing layer, which works except that the sublayers are rendered in the wrong position: they all end up in the bottom-left corner (0, 0) and are not rotated correctly. If I don't override this method they are positioned correctly but in the wrong z order - i.e. ones that should be at the bottom (zPosition = 0) are at the top. What am I missing here? It seems I need to position the sublayers correctly somehow in the render(in:) function - how do I do this? These sublayers have already been positioned on screen and all I am trying to do is generate an image of the drawing. This is done using the following function.
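One approach worth trying - this is a sketch, not the original code; the DrawingLayer class and the geometry handling are my assumptions - is to override render(in:) so that it draws the sublayers sorted by zPosition, applying each sublayer's position, transform and anchor point to the context before rendering it:

```swift
import QuartzCore

// Hypothetical drawing layer: renders sublayers back-to-front by
// zPosition, reproducing each sublayer's placement in the context.
class DrawingLayer: CALayer {
    override func render(in ctx: CGContext) {
        // Draw this layer's own content first.
        draw(in: ctx)

        // Sort sublayers back-to-front by zPosition.
        let ordered = (sublayers ?? []).sorted { $0.zPosition < $1.zPosition }
        for sublayer in ordered {
            ctx.saveGState()
            // Move to the sublayer's position, apply its transform
            // about the anchor point, then offset so the sublayer's
            // origin lands in the right place.
            ctx.translateBy(x: sublayer.position.x, y: sublayer.position.y)
            ctx.concatenate(CATransform3DGetAffineTransform(sublayer.transform))
            ctx.translateBy(x: -sublayer.bounds.width * sublayer.anchorPoint.x,
                            y: -sublayer.bounds.height * sublayer.anchorPoint.y)
            sublayer.render(in: ctx)
            ctx.restoreGState()
        }
    }
}
```

Note that sublayer.render(in:) will still render that sublayer's own children in creation order, so nested layers may need the same treatment.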
2
0
1.8k
May ’21
Bug with CIContext.writeHEIFRepresentation() API
I get a lot of HEIF images with diagonal lines through them when using this API to generate output files. The same files output in other formats show no sign of the diagonal lines. The problem only seems to occur when the input CIImage is cropped, and then only for certain crop dimensions. Can someone confirm they see the same issue? I have created a test Playground which includes two sample RAW files. You will need to change the path to the input files and also change the crop dimensions to test different combinations. It seems the diagonal lines are only caused by certain width values - see the few crop examples in the Playground. I have logged a bug with Apple (FB9096406) but it would be good to get confirmation this is a bug. I have tried cropping with both the CICrop filter and the CIImage.cropped(to:) API and get the same results. A zip file with the test playground for reproducing the issue is at the following link - I can't post the actual link, so use https:// to download the file from xxxx://duncangroenewald.com/files/CoreImageHEIFExporterBug.zip. I see some others have posted links on this forum; why is it I can't seem to do that?
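For anyone who wants to try reproducing this without downloading the Playground, a minimal sketch of the failing path (the file paths and crop rect here are placeholders, not the actual values from the Playground):

```swift
import CoreImage

// Sketch: load an image, crop it, write it out as HEIF.
// Paths and crop dimensions are placeholders - certain crop
// widths appear to trigger the diagonal-line artefacts.
let context = CIContext()
let inputURL = URL(fileURLWithPath: "/path/to/input.ARW")
let outputURL = URL(fileURLWithPath: "/path/to/output.heic")

if let image = CIImage(contentsOf: inputURL) {
    let cropped = image.cropped(to: CGRect(x: 0, y: 0, width: 4001, height: 3000))
    let colorSpace = CGColorSpace(name: CGColorSpace.displayP3)!
    try? context.writeHEIFRepresentation(of: cropped,
                                         to: outputURL,
                                         format: .RGBA8,
                                         colorSpace: colorSpace,
                                         options: [:])
}
```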
0
0
904
May ’21
How can I extract the data from the output image of CIAreaHistogramFilter
I would like to get arrays of red, green and blue histogram data from the output of CIAreaHistogramFilter. My current approach is not working. According to the docs, CIAreaHistogramFilter returns an image with width = bin count (256 in my case) and height = 1, so each pixel contains the count of the RGB values for that bin.

if let areahistogram = self.areaHistogramFilter(ciImage) {
    let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
                 kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
    var pixelBuffer: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                     Int(areahistogram.extent.size.width),
                                     Int(areahistogram.extent.size.height),
                                     kCVPixelFormatType_32ARGB, attrs, &pixelBuffer)
    guard status == kCVReturnSuccess else { return }
    self.hContext.render(areahistogram, to: pixelBuffer!)
    CVPixelBufferLockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
    let int32Buffer = unsafeBitCast(CVPixelBufferGetBaseAddress(pixelBuffer!),
                                    to: UnsafeMutablePointer<UInt32>.self)
    let int32PerRow = CVPixelBufferGetBytesPerRow(pixelBuffer!)
    var data = [Int]()
    for i in 0..<256 {
        // Get BGRA value for each pixel
        let BGRA = int32Buffer[i]
        data.append(Int(BGRA))
        let red = (BGRA >> 16) & 0xFF
        let green = (BGRA >> 8) & 0xFF
        let blue = BGRA & 0xFF
        os_log("data[\(i)]:\(BGRA) red: \(red) green: \(green) blue: \(blue)")
    }
    CVPixelBufferUnlockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
}

This results in zeros:

data[0]:0 red: 0 green: 0 blue: 0
data[1]:0 red: 0 green: 0 blue: 0
...
data[255]:134678783 red: 7 green: 7 blue: 7

This similarly produces a bunch of zeros:

let baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer!)
let buffer = baseAddress!.assumingMemoryBound(to: UInt8.self)
for i in stride(from: 0, to: 256 * 4, by: 4) {
    let blue = buffer[i]
    let green = buffer[i + 1]
    let red = buffer[i + 2]
    os_log("data[\(i)]: red: \(red) green: \(green) blue: \(blue)")
}

As does this variation, which seems simpler:

var red = [UInt8]()
var green = [UInt8]()
var blue = [UInt8]()
for i in 0..<256 {
    // Get BGRA value for each pixel
    let BGRA = int32Buffer[i]
    withUnsafeBytes(of: BGRA.bigEndian) {
        red.append($0[0])
        green.append($0[1])
        blue.append($0[2])
    }
}
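One thing that can be checked independently of Core Image is the bit unpacking itself. For kCVPixelFormatType_32ARGB the bytes sit in memory as A, R, G, B in that order, so on a little-endian machine reading a pixel as a UInt32 reverses them - which would explain shifts like the ones above pulling out the wrong channels. Indexing the raw bytes sidesteps endianness entirely; a self-contained sketch (the helper name is mine):

```swift
import Foundation

// Unpack the four channel bytes of a packed 32-bit ARGB pixel.
// withUnsafeBytes yields the bytes exactly as they sit in memory,
// so raw[0..3] match the buffer's A, R, G, B order regardless of
// the machine's endianness.
func unpackARGB(_ pixel: UInt32) -> (a: UInt8, r: UInt8, g: UInt8, b: UInt8) {
    withUnsafeBytes(of: pixel) { raw in
        (a: raw[0], r: raw[1], g: raw[2], b: raw[3])
    }
}
```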
2
0
1.8k
May ’21
What is the correct way to get a RAW file's image size
According to the Core Image documentation the following API should return the RAW image's native output size.

let filter = CIFilter(imageURL: url, options: nil)
let value = filter.value(forKey: CIRAWFilterOption.outputNativeSize.rawValue) as? CIVector

However this seems to always return the camera's native sensor resolution rather than the actual image size contained in the RAW file. For example, if I use this API on a RAW file that was shot at a 16:9 aspect ratio on a Sony 19, the image size should be 6000 x 3376 but this API call returns 6000 x 4000. Is this a bug or am I missing something - is there another API call to get the actual image size? Note that the EXIF data does contain the correct image size.
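Since the EXIF data does hold the correct size, one workaround is to read the stored pixel dimensions through ImageIO instead of the CIRAWFilter options. A hedged sketch (the helper is mine, not an Apple API):

```swift
import Foundation
import ImageIO

// Read the pixel dimensions recorded in the file's metadata via
// ImageIO, rather than asking CIRAWFilterOption.outputNativeSize.
func imageSize(at url: URL) -> CGSize? {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil),
          let props = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [CFString: Any],
          let width = props[kCGImagePropertyPixelWidth] as? Int,
          let height = props[kCGImagePropertyPixelHeight] as? Int
    else { return nil }
    return CGSize(width: width, height: height)
}
```

Whether ImageIO reports the cropped 16:9 size or the full sensor size for a given RAW format would need checking against the actual files.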
1
0
1k
May ’21
Help debugging Thread 1: EXC_BAD_ACCESS (code=1, address=0x28) crash
How do I go about debugging this crash? Crash log - https://developer.apple.com/forums/content/attachment/fb4f1046-e867-4c87-ae6f-1d8ce690fdf7
2
0
1.4k
May ’21