Core Image has the concept of a Region of Interest (ROI) that allows for nice optimizations during processing. For instance, if a filtered image is cropped before rendering, Core Image can tell the filters to only process that cropped region of the image. This means no pixels are processed that would be discarded by the cropping anyway.
Here is an example:
let blurred = ciImage.applyingGaussianBlur(sigma: 5)
let cropped = blurred.cropped(to: CGRect(x: 100, y: 100, width: 200, height: 200))
First, we apply a Gaussian blur filter to the whole image, then we crop the result to a smaller rect. The corresponding filter graph looks like this:
Even though the extent of the image is rather large, the ROI of the crop is propagated back to the filter so that it only processes the pixels within the rendered region.
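For illustration, rendering the cropped image (a minimal sketch; the CIContext setup is simplified here) only triggers processing for that 200×200 region:
let context = CIContext()
// Rendering only the cropped extent: the blur's ROI is limited to (roughly)
// this rect plus the blur radius, so pixels outside it are never processed.
let result = context.createCGImage(cropped, from: cropped.extent)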
Now to my problem: Core Image can also cache intermediate results of a filter chain. In fact, it does that automatically. This improves performance when, for example, only a parameter of a filter in the middle of the chain changes before rendering again: everything before that filter stays the same, so a cached intermediate result can be used.
Core Image also has a mechanism for explicitly defining such a caching point by using insertingIntermediate(cache: true). But I noticed that this doesn't play nicely with ROI propagation.
For example, if I change the example above like this:
let blurred = ciImage.applyingGaussianBlur(sigma: 5)
let cached = blurred.insertingIntermediate(cache: true)
let cropped = cached.cropped(to: CGRect(x: 100, y: 100, width: 200, height: 200))
the filter graph looks like this:
As you can see, the blur filter suddenly wants to process the whole image, regardless of the cropping that happens afterward. The inserted cached intermediate always requires the whole input image as its ROI.
I find this a bit confusing. It prevents us from inserting explicit caching points into our pipeline, since we also support non-destructive cropping using the method described above. Processing all those unneeded pixels makes performance too low and memory consumption too high.
Is there a way to insert an explicit caching point into the pipeline that correctly propagates the ROI?
A few of our users reported that images saved with our apps disappear from their library in Photos after a few seconds. All of them own a Mac with an old version of macOS, and all of them have iCloud syncing enabled for Photos.
Our apps use Core Image to process images. Core Image will transfer most of the input's metadata to the output. While we thought this was generally a good idea, this seems to be causing the issue:
The old version of Photos (or even iPhoto?) that is running on the Mac seems to think that the output image of our app is a duplicate of the original image that was loaded into our app. As soon as the iCloud sync happens, the Mac removes the image from the library, even when it's in sleep mode. When the Mac is turned off or disconnected from the internet, the images stay in the library—until the Mac comes back online.
This seems to be caused by the output's metadata, but we couldn't figure out which fields cause the old Photos version to detect the new image as a duplicate. It's also very hard to reproduce without installing an old macOS on some machine.
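We haven't found the culprit yet; one way to narrow it down might be to diff the Image I/O properties of the input and output files (just a sketch, with inputURL and outputURL assumed to point to the two files):
import Foundation
import ImageIO

// Sketch: dump all Image I/O properties of a file so the Exif/TIFF/etc.
// dictionaries of input and output can be compared side by side.
func imageProperties(at url: URL) -> [CFString: Any] {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil),
          let properties = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [CFString: Any] else {
        return [:]
    }
    return properties
}

let inputProperties = imageProperties(at: inputURL)   // original image
let outputProperties = imageProperties(at: outputURL) // image processed by our app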
Does anyone know which metadata field we need to change so the output is not considered a duplicate?
We recently had to change our MLModel's architecture to include custom layers, which means the model can't run on the Neural Engine anymore.
After the change, we observed a lot of crashes being reported on A13 devices. It turns out that the memory consumption when running the prediction with the new model on the GPU is much higher than before, when it was running on the Neural Engine. Before, the peak memory load was ~350 MB; now it spikes to over 2 GB, leading to a crash most of the time. This only seems to happen on the A13. When forcing the model to run only on the CPU, the memory consumption is still high, but the same as running the old model on the CPU (~750 MB peak). All tested on iOS 16.1.2.
We profiled the process in Instruments and found that there are a lot of memory buffers allocated by Core ML that are not freed after the prediction.
The allocation stack trace for those buffers is the following:
We ran the same model on a different device and found the same buffers in Instruments, but there they are only 4 KB in size. It seems Core ML is massively over-allocating memory when running on the A13 GPU.
So far, we limit the model to run only on the CPU for those devices (see the sketch below), but this is far from ideal. Is there any other model setting or workaround that we can use to avoid this issue?
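For reference, the workaround we currently apply looks roughly like this (a simplified sketch; MyModel stands in for the generated Core ML model class):
import CoreML

// Simplified sketch: restrict the model to the CPU on affected devices so the
// A13 GPU path (and its memory spike) is avoided.
let configuration = MLModelConfiguration()
configuration.computeUnits = .cpuOnly   // instead of the default .all
let model = try MyModel(configuration: configuration)   // MyModel = placeholder name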
Is there a way to observe the currentEDRHeadroom property of UIScreen for changes? KVO is not working for this property...
I understand that I can query the current headroom in the draw(...) method to adapt the rendering. However, our apps only render on-demand when the user changes parameters. But we would also like to re-render when the current EDR headroom changes to adapt the tone mapping to the new environment.
The only solution we've found so far is to continuously poll the screen for changes (see the sketch below), which doesn't seem ideal. It would be better if the property were observable via KVO or if there were a system notification to listen for.
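For completeness, the polling workaround looks roughly like this (just a sketch; screen is assumed to be the UIScreen backing the window we render into):
import UIKit

// Sketch: periodically compare the current EDR headroom and trigger a
// re-render only when it actually changed.
var lastHeadroom = screen.currentEDRHeadroom

Timer.scheduledTimer(withTimeInterval: 0.5, repeats: true) { _ in
    let headroom = screen.currentEDRHeadroom
    guard headroom != lastHeadroom else { return }
    lastHeadroom = headroom
    // re-render here with tone mapping adapted to the new headroom
}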
Thanks!
When using the heif10Representation and writeHEIF10Representation APIs of CIContext, the resulting image doesn’t contain an alpha channel.
When using the heifRepresentation and writeHEIFRepresentation APIs, the alpha channel is properly preserved, i.e., the resulting HEIC will contain a urn:mpeg:hevc:2015:auxid:1 auxiliary image. This image is missing when exporting as HEIF10.
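For reference, the two calls in question look roughly like this (a sketch; context, image, url, and colorSpace are assumed to exist):
// 8-bit HEIF: the alpha channel is preserved as an auxiliary image.
try context.writeHEIFRepresentation(of: image,
                                    to: url,
                                    format: .RGBA8,
                                    colorSpace: colorSpace)

// 10-bit HEIF: the auxiliary alpha image is missing from the output.
try context.writeHEIF10Representation(of: image,
                                      to: url,
                                      colorSpace: colorSpace)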
Is this a bug, or is it intentional? If I understand the spec correctly, HEIF10 should be able to support alpha via an auxiliary image (like HEIF8).
When initializing a CIColor with a dynamic UIColor (like the system colors that resolve differently based on light/dark mode) on macOS 14 (Mac Catalyst), the resulting CIColor is invalid/uninitialized. For instance:
po CIColor(color: UIColor.systemGray2)
→ <uninitialized>
po CIColor(color: UIColor.systemGray2.resolvedColor(with: .current))
→ <CIColor 0x60000339afd0 (0.388235 0.388235 0.4 1) ExtendedSRGB>
But also, not all colors work even when resolved:
po CIColor(color: UIColor.systemGray.resolvedColor(with: .current))
→ <uninitialized>
I think this is caused by the color space of the resulting UIColor:
po UIColor.systemGray.resolvedColor(with: .current)
→ kCGColorSpaceModelRGB 0.596078 0.596078 0.615686 1
po UIColor.systemGray2.resolvedColor(with: .current)
→ UIExtendedSRGBColorSpace 0.388235 0.388235 0.4 1
This worked correctly before in macOS 13.
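A possible workaround, based on the color space theory above (just a sketch, not verified everywhere): convert the resolved color to extended sRGB before creating the CIColor.
import UIKit
import CoreImage

// Sketch: resolve the dynamic color, convert its CGColor to extended sRGB,
// and only then hand it to Core Image.
let resolved = UIColor.systemGray.resolvedColor(with: .current)
if let extendedSRGB = CGColorSpace(name: CGColorSpace.extendedSRGB),
   let converted = resolved.cgColor.converted(to: extendedSRGB,
                                              intent: .defaultIntent,
                                              options: nil) {
    let ciColor = CIColor(cgColor: converted)
}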
Tips presented using the popoverTip view modifier can't be styled using other tip view modifiers (as of beta 8).
For instance, the last two modifiers don't have any effect here:
Image(systemName: "wand.and.stars")
    .popoverTip(tip)
    .tipBackground(.red)
    .tipCornerRadius(30)
It will look like this:
Whereas applying the same modifiers to a TipView changes its look:
TipView(tip, arrowEdge: .bottom)
    .tipBackground(.red)
    .tipCornerRadius(30)
Is this intended behavior? How can we change the appearance of popover tips?
Is it possible to get the camera intrinsic matrix for a captured single photo on iOS?
I know that one can get the cameraCalibrationData from an AVCapturePhoto, which also contains the intrinsicMatrix. However, this is only provided when using a constituent (i.e., multi-camera) capture device and setting virtualDeviceConstituentPhotoDeliveryEnabledDevices to multiple devices (or enabling isDualCameraDualPhotoDeliveryEnabled on older iOS versions). Then photoOutput(_:didFinishProcessingPhoto:error:) is called multiple times, delivering one photo for each specified camera. Those photos then contain the calibration data.
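For context, that constituent-device route looks roughly like this (a sketch; virtualDevice and photoOutput are assumed to be a virtual capture device and a configured AVCapturePhotoOutput):
import AVFoundation

// Sketch of the multi-camera route: request constituent photo delivery plus
// calibration data delivery, then read the intrinsics from each photo.
let settings = AVCapturePhotoSettings()
settings.virtualDeviceConstituentPhotoDeliveryEnabledDevices = virtualDevice.constituentDevices
settings.isCameraCalibrationDataDeliveryEnabled = true   // needs to be supported by the output
photoOutput.capturePhoto(with: settings, delegate: self)

// photoOutput(_:didFinishProcessingPhoto:error:) is then called once per
// constituent camera, and each photo carries the calibration data:
// let intrinsics = photo.cameraCalibrationData?.intrinsicMatrix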
As far as I know, there is no way to get the calibration data for a normal, single-camera photo capture.
I also found that one can set isCameraIntrinsicMatrixDeliveryEnabled on a capture connection that leads to an AVCaptureVideoDataOutput. The buffers that arrive at the delegate of that output then contain the intrinsic matrix via the kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix attachment. However, this requires adding another output to the capture session, which feels quite wasteful just for getting this piece of metadata. Also, I would somehow need to figure out which buffer was temporally closest to when the actual photo was taken.
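That second route looks roughly like this (a sketch; session is assumed to be a configured AVCaptureSession):
import AVFoundation

// Sketch: enable intrinsic matrix delivery on the video connection, then read
// the attachment from incoming sample buffers in the delegate.
let videoOutput = AVCaptureVideoDataOutput()
if session.canAddOutput(videoOutput) {
    session.addOutput(videoOutput)
}
if let connection = videoOutput.connection(with: .video),
   connection.isCameraIntrinsicMatrixDeliverySupported {
    connection.isCameraIntrinsicMatrixDeliveryEnabled = true
}

// In captureOutput(_:didOutput:from:), the matrix arrives as an attachment:
// let data = CMGetAttachment(sampleBuffer,
//                            key: kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix,
//                            attachmentModeOut: nil) as? Data
// `data` then contains a matrix_float3x3.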
Is there a better, simpler way to get the camera intrinsic matrix for a single photo capture?
If not, is there a way to calculate the matrix based on the image's metadata?
With iOS 18, TipKit got explicit support for syncing tip state via iCloud.
However, before that, TipKit already did iCloud syncing implicitly, as far as I know.
How does the new explicit syncing relate to the previous mechanism? Do we have to enable iCloud syncing manually now to retain the functionality in iOS 18? Is there a way to sync with the state that was already stored by TipKit in iCloud on iOS 17?
The new .photos AssistantSchema for intents allows integrating App Intents for Photos-related actions with Apple Intelligence. I was wondering if it would be possible to create intents that do not require full library access.
Our app supports loading images from Photos via the PHPicker, which doesn't require any user permission. Now we want to support the .photos.openAsset schema in an app intent to allow interactions like "Open this image in BeCasso and apply preset X".
Would that be possible without full library access?
There seems to be an issue in iOS 18 / macOS 15 related to image thumbnail generation and/or HEIC.
We are transcoding JPEG images to HEIC when they are loaded into our app (HEIC has a much lower memory footprint when loaded by Core Image, for some reason). We use Image I/O for that:
guard let source = CGImageSourceCreateWithURL(inputURL as CFURL, nil),
      let destination = CGImageDestinationCreateWithURL(outputURL as CFURL, UTType.heic.identifier as CFString, 1, nil) else {
    throw <error>
}
let primaryImageIndex = CGImageSourceGetPrimaryImageIndex(source)
CGImageDestinationAddImageFromSource(destination, source, primaryImageIndex, nil)
When we use CGImageDestinationAddImageFromSource, we get the following warnings on the console:
createImage:1445: *** ERROR: bad image size (0 x 0) rb: 0
CGImageSourceCreateThumbnailAtIndex:5195: *** ERROR: CGImageSourceCreateThumbnailAtIndex[0] - 'HJPG' - failed to create thumbnail [-67] {alw:-1, abs: 1 tra:-1 max:4620}
writeImageAtIndex:1025: ⭕️ ERROR: '<app>' is trying to save an opaque image (4620x3466) with 'AlphaPremulLast'. This would unnecessarily increase the file size and will double (!!!) the required memory when decoding the image --> ignoring alpha.
It seems that CGImageDestinationAddImageFromSource is trying to extract/create a thumbnail, which fails somehow.
I rewrote the last part like this:
guard let primaryImage = CGImageSourceCreateImageAtIndex(source, primaryImageIndex, nil),
      let properties = CGImageSourceCopyPropertiesAtIndex(source, primaryImageIndex, nil) else {
    throw <error>
}
CGImageDestinationAddImage(destination, primaryImage, properties)
This doesn't cause any warnings.
An issue that might be related has been reported here.
I've also heard from others having issues with CGImageSourceCreateThumbnailAtIndex.
When using the imagePlaygroundSheet modifier in SwiftUI, the system presents an image playground in a fixed size. Especially on macOS, this modal is rather small and doesn't utilize the available space efficiently.
Is there a way to make the modal bigger, or allow the user to resize the dialog? I tried presentationDetents, but this would need to be applied to the content of the sheet, which is provided by the system...
I guess this question applies to other system-provided sheets like the photo picker as well.
One of our users reported a very strange bug where our app freezes and eventually crashes on some screen transitions.
From different crash logs, we could determine that the app freezes when we call view.layoutIfNeeded() to animate constraint changes. It then gets killed by the watchdog 10 seconds later:
Exception Type: EXC_CRASH (SIGKILL)
Exception Codes: 0x0000000000000000, 0x0000000000000000
Termination Reason: FRONTBOARD 2343432205
<RBSTerminateContext| domain:10 code:0x8BADF00D explanation:scene-update watchdog transgression: app<bundleID(2A01F261-3554-44C0-B5A9-EBEB446484AD)>:6921 exhausted real (wall clock) time allowance of 10.00 seconds
ProcessVisibility: Background
ProcessState: Running
WatchdogEvent: scene-update
WatchdogVisibility: Background
WatchdogCPUStatistics: (
"Elapsed total CPU time (seconds): 24.320 (user 18.860, system 5.460), 29% CPU",
"Elapsed application CPU time (seconds): 10.630, 12% CPU"
) reportType:CrashLog maxTerminationResistance:Interactive>
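For context, the call that triggers the hang follows the standard constraint-animation pattern (a simplified sketch, not the exact production code):
// Simplified sketch of the transition code: change a constraint, then animate
// the resulting layout pass. The app hangs inside layoutIfNeeded().
someConstraint.constant = newValue
UIView.animate(withDuration: 0.3) {
    view.layoutIfNeeded()
}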
The crash stack trace looks slightly different, depending on the UI transition that is happening. Here are the two we observed so far. Both are triggered by the layoutIfNeeded() call.
Thread 0 name: Dispatch queue: com.apple.main-thread
Thread 0 Crashed:
0 CoreAutoLayout 0x1b09f90e4 -[NSISEngine valueForEngineVar:] + 8
1 UIKitCore 0x18f919478 -[_UIViewLayoutEngineRelativeAlignmentRectOriginCache origin] + 372
2 UIKitCore 0x18f918f18 -[UIView _nsis_center:bounds:inEngine:forLayoutGuide:] + 1372
3 UIKitCore 0x18f908e9c -[UIView(Geometry) _applyISEngineLayoutValuesToBoundsOnly:] + 248
4 UIKitCore 0x18f9089e0 -[UIView(Geometry) _resizeWithOldSuperviewSize:] + 148
5 CoreFoundation 0x18d0cd6a4 __NSARRAY_IS_CALLING_OUT_TO_A_BLOCK__ + 24
6 CoreFoundation 0x18d0cd584 -[__NSArrayM enumerateObjectsWithOptions:usingBlock:] + 432
7 UIKitCore 0x18f8e62b0 -[UIView(Geometry) resizeSubviewsWithOldSize:] + 128
8 UIKitCore 0x18f977194 -[UIView(AdditionalLayoutSupport) _is_layout] + 124
9 UIKitCore 0x18f976c2c -[UIView _updateConstraintsAsNecessaryAndApplyLayoutFromEngine] + 800
10 UIKitCore 0x18f903944 -[UIView(CALayerDelegate) layoutSublayersOfLayer:] + 2728
11 QuartzCore 0x18ec15498 CA::Layer::layout_if_needed(CA::Transaction*) + 496
12 UIKitCore 0x18f940c10 -[UIView(Hierarchy) layoutBelowIfNeeded] + 312
Thread 0 name: Dispatch queue: com.apple.main-thread
Thread 0 Crashed:
0 QuartzCore 0x18ec2cfe0 -[CALayer animationForKey:] + 176
1 UIKitCore 0x18fa5b258 UniqueAnimationKeyForLayer + 192
2 UIKitCore 0x18fa5ab7c __67-[_UIViewAdditiveAnimationAction runActionForKey:object:arguments:]_block_invoke_2 + 468
3 UIKitCore 0x18fa5ba5c -[_UIViewAdditiveAnimationAction runActionForKey:object:arguments:] + 1968
4 QuartzCore 0x18eb9e938 CA::Layer::set_bounds(CA::Rect const&, bool) + 428
5 QuartzCore 0x18eb9e760 -[CALayer setBounds:] + 132
6 UIKitCore 0x18f941770 -[UIView _backing_setBounds:] + 64
7 UIKitCore 0x18f940404 -[UIView(Geometry) setBounds:] + 340
8 UIKitCore 0x18f908f84 -[UIView(Geometry) _applyISEngineLayoutValuesToBoundsOnly:] + 480
9 UIKitCore 0x18f9089e0 -[UIView(Geometry) _resizeWithOldSuperviewSize:] + 148
10 CoreFoundation 0x18d0cd6a4 __NSARRAY_IS_CALLING_OUT_TO_A_BLOCK__ + 24
11 CoreFoundation 0x18d132488 -[__NSSingleObjectArrayI enumerateObjectsWithOptions:usingBlock:] + 92
12 UIKitCore 0x18f8e62b0 -[UIView(Geometry) resizeSubviewsWithOldSize:] + 128
13 UIKitCore 0x18f977194 -[UIView(AdditionalLayoutSupport) _is_layout] + 124
14 UIKitCore 0x18f976c2c -[UIView _updateConstraintsAsNecessaryAndApplyLayoutFromEngine] + 800
15 UIKitCore 0x18f916258 -[UIView(Hierarchy) layoutSubviews] + 204
16 UIKitCore 0x18f903814 -[UIView(CALayerDelegate) layoutSublayersOfLayer:] + 2424
17 QuartzCore 0x18ec15498 CA::Layer::layout_if_needed(CA::Transaction*) + 496
18 UIKitCore 0x18f940c10 -[UIView(Hierarchy) layoutBelowIfNeeded] + 312
So far, we only know of one iPad Air M1 where this is happening. But we don't know how many users experience this issue without reporting it.
Does anyone know what could cause Auto Layout or Core Animation to block in those calls? We have no clue so far...