Is there a way to observe the currentEDRHeadroom property of UIScreen for changes? KVO is not working for this property...
I understand that I can query the current headroom in the draw(...) method to adapt the rendering. However, our apps only render on demand when the user changes parameters, and we would also like to re-render when the current EDR headroom changes so we can adapt the tone mapping to the new environment.
The only solution we've found so far is to continuously poll the screen for changes, which doesn't seem ideal. It would be better if the property were observable via KVO or if there were a system notification to listen for.
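For reference, our polling workaround looks roughly like this (a minimal sketch; the observer class, the timer interval, and the change threshold are made up for illustration):
// Minimal polling sketch (assumed workaround, not an official API):
// periodically read the screen's current EDR headroom and trigger a
// re-render only when it actually changed.
final class EDRHeadroomObserver {
    private var lastHeadroom: CGFloat = 1.0
    private var timer: Timer?

    func start(on screen: UIScreen, onChange: @escaping (CGFloat) -> Void) {
        timer = Timer.scheduledTimer(withTimeInterval: 0.5, repeats: true) { [weak self] _ in
            guard let self else { return }
            let headroom = screen.currentEDRHeadroom
            // Arbitrary threshold to avoid re-rendering on tiny fluctuations.
            if abs(headroom - self.lastHeadroom) > 0.01 {
                self.lastHeadroom = headroom
                onChange(headroom)
            }
        }
    }

    func stop() {
        timer?.invalidate()
        timer = nil
    }
}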
Thanks!
When initializing a CIColor with a dynamic UIColor (like the system colors that resolve differently based on light/dark mode) on macOS 14 (Mac Catalyst), the resulting CIColor is invalid/uninitialized. For instance:
po CIColor(color: UIColor.systemGray2)
→ <uninitialized>
po CIColor(color: UIColor.systemGray2.resolvedColor(with: .current))
→ <CIColor 0x60000339afd0 (0.388235 0.388235 0.4 1) ExtendedSRGB>
But also, not all colors work even when resolved:
po CIColor(color: UIColor.systemGray.resolvedColor(with: .current))
→ <uninitialized>
I think this is caused by the color space of the resulting UIColor:
po UIColor.systemGray.resolvedColor(with: .current)
→ kCGColorSpaceModelRGB 0.596078 0.596078 0.615686 1
po UIColor.systemGray2.resolvedColor(with: .current)
→ UIExtendedSRGBColorSpace 0.388235 0.388235 0.4 1
This worked correctly on macOS 13.
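As a workaround, explicitly converting the resolved color's CGColor to extended sRGB before constructing the CIColor seems to be an option (a sketch; the helper name is mine, and I haven't verified this against all system colors):
import CoreImage
import UIKit

// Workaround sketch: resolve the dynamic color, then force its CGColor into
// extended sRGB before handing it to Core Image.
func ciColor(from dynamicColor: UIColor, traits: UITraitCollection = .current) -> CIColor? {
    let resolved = dynamicColor.resolvedColor(with: traits)
    guard let srgb = CGColorSpace(name: CGColorSpace.extendedSRGB),
          let converted = resolved.cgColor.converted(to: srgb, intent: .defaultIntent, options: nil) else {
        return nil
    }
    return CIColor(cgColor: converted)
}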
Tips presented using the popoverTip view modifier can't be styled using other tip view modifiers (as of beta 8).
For instance, the last two modifiers don't have any effect here:
Image(systemName: "wand.and.stars")
    .popoverTip(tip)
    .tipBackground(.red)
    .tipCornerRadius(30)
It will look like this:
Whereas applying the same modifiers to a TipView changes its look:
TipView(tip, arrowEdge: .bottom)
    .tipBackground(.red)
    .tipCornerRadius(30)
Is this intended behavior? How can we change the appearance of popup tips?
In iOS 17.1 (and 17.2 beta), the arrowEdge parameter of the SwiftUI popoverTip doesn't work anymore.
This code
button
    .popoverTip(tip, arrowEdge: .bottom)
looks like this on iOS 17.0
and like this on 17.1 and up.
I checked permittedArrowDirections of the corresponding UIPopoverPresentationController (via the Memory Graph): It's .down on iOS 17.0 and .any (the default) on 17.1. It seems the parameter of popoverTip is not properly propagated to the popover controller anymore.
I have an IOSurface and I want to turn it into a CIImage. However, the constructor of CIImage takes an IOSurfaceRef instead of an IOSurface.
On most platforms, this is not an issue because the two types are toll-free bridgeable... except for Mac Catalyst, where this fails.
I observed the same back in Xcode 13 on macOS, but there I could force-cast the IOSurface to an IOSurfaceRef:
let image = CIImage(ioSurface: surface as! IOSurfaceRef)
This cast fails at runtime on Catalyst.
I found that unsafeBitCast(surface, to: IOSurfaceRef.self) actually works on Catalyst, but it feels very wrong.
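For now we hide that cast in a small helper so the workaround is at least contained in one place (just a sketch built on the unsafeBitCast above; the initializer name is mine, and the cast relies on the assumption that both types wrap the same underlying object):
import CoreImage
import IOSurface

extension CIImage {
    // Workaround sketch for Mac Catalyst, where the as!-cast from IOSurface
    // to IOSurfaceRef fails at runtime. unsafeBitCast works in practice, but
    // this is not backed by any documented guarantee.
    convenience init(bridging surface: IOSurface) {
        self.init(ioSurface: unsafeBitCast(surface, to: IOSurfaceRef.self))
    }
}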
Am I missing something? Why aren't the types bridgeable on Catalyst?
Also, there should ideally be an init for CIImage that takes an IOSurface instead of a ref.
When using the imagePlaygroundSheet modifier in SwiftUI, the system presents an image playground sheet at a fixed size. Especially on macOS, this modal is rather small and doesn't use the available space efficiently.
Is there a way to make the modal bigger, or allow the user to resize the dialog? I tried presentationDetents, but this would need to be applied to the content of the sheet, which is provided by the system...
I guess this question applies to other system-provided sheets like the photo picker as well.
Some users reported that their images are not loading correctly in our app. After a lot of debugging we identified the following:
This only happens when the app is built for Mac Catalyst. Not on iOS, iPadOS, or “real” macOS (AppKit).
The images in question have unusual color spaces. We observed the issue for uRGB and eciRGB v2.
Those images are rendered correctly in Photos and Preview on all platforms.
When displayed inside a UIImageView or a SwiftUI Image, the images render correctly.
The issue only occurs when loading the image via Core Image.
When comparing the different Core Image render graphs between AppKit (working) and Catalyst (faulty) builds, they look identical—except for the result.
Mac (AppKit):
Catalyst:
Something seems to be off when Core Image tries to load an image with a foreign color space on Catalyst.
We identified a workaround: By using a CGImageDestination to transcode the image using the kCGImageDestinationOptimizeColorForSharing option, Image I/O will convert the image to sRGB (or similar) and Core Image is able to load the image correctly. However, one potentially loses fidelity this way.
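In code, the workaround looks roughly like this (a sketch; the function name, the TIFF container choice, and the error handling are simplifications of mine):
import CoreImage
import ImageIO
import UniformTypeIdentifiers

// Workaround sketch: transcode the image via Image I/O with
// kCGImageDestinationOptimizeColorForSharing so the color space is converted
// to sRGB (or similar) before Core Image loads it. Note the possible loss
// of fidelity mentioned above.
func ciImageBySanitizingColorSpace(of url: URL) -> CIImage? {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil) else { return nil }
    let data = NSMutableData()
    guard let destination = CGImageDestinationCreateWithData(
        data as CFMutableData, UTType.tiff.identifier as CFString, 1, nil
    ) else { return nil }
    let options = [kCGImageDestinationOptimizeColorForSharing: true] as CFDictionary
    CGImageDestinationAddImageFromSource(destination, source, 0, options)
    guard CGImageDestinationFinalize(destination) else { return nil }
    return CIImage(data: data as Data)
}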
Or might there be a better workaround?
Topic: Media Technologies
SubTopic: Photos & Camera
Tags: Image I/O, Photos and Imaging, Core Image, Core Graphics
We have a lot of users reporting to us that they can't load images into our app. They just "see the spinner spin indefinitely". We now think we found the reason why:
When trying to load an asset via the PHPickerViewController that is not yet downloaded from iCloud while there is no active internet connection, the loadFileRepresentation method of the item provider just stalls without reporting any progress or error. The timeout for this seems to be 5 minutes, which is way too long.
The same is true if the user disabled cellular data for Photos and attempts to load a cloud asset while not on Wi-Fi.
Steps to reproduce:
have a photo in iCloud that is not yet downloaded
activate Airplane Mode
open the picker and select that photo
see when loadFileRepresentation will return
Since it is clear that without an internet connection the asset can’t be downloaded, I would hope to be informed via a delegate method of the picker or the loadFileRepresentation callback that there was an error trying to load the asset. (FB9221090)
Right now we are attempting to solve this by adding an extra timer and a network check. But this will not catch the "no cellular data allowed" case.
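The mitigation we are experimenting with looks roughly like this (a sketch; the function name and the 10-second timeout are arbitrary choices of mine, and as said it can't distinguish the "no cellular data allowed" case):
import PhotosUI
import UniformTypeIdentifiers

// Sketch of the mitigation: race loadFileRepresentation against a manual
// timeout so the UI can show an error instead of spinning for minutes.
func loadImageData(from provider: NSItemProvider,
                   timeout: TimeInterval = 10,
                   completion: @escaping (Result<Data, Error>) -> Void) {
    var didFinish = false
    let finish: (Result<Data, Error>) -> Void = { result in
        DispatchQueue.main.async {
            guard !didFinish else { return }
            didFinish = true
            completion(result)
        }
    }

    let progress = provider.loadFileRepresentation(forTypeIdentifier: UTType.image.identifier) { url, error in
        // The URL is only valid inside this handler, so read the data here.
        if let url, let data = try? Data(contentsOf: url) {
            finish(.success(data))
        } else {
            finish(.failure(error ?? URLError(.cannotLoadFromNetwork)))
        }
    }

    DispatchQueue.main.asyncAfter(deadline: .now() + timeout) {
        progress.cancel()
        finish(.failure(URLError(.timedOut)))
    }
}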
Please consider some callback mechanism to the API so we can inform the user what the problem might be. Thanks!
Topic: Media Technologies
SubTopic: Photos & Camera
Tags: PhotoKit, Cloud and Local Storage, Photos and Imaging, wwdc21-10046
Adding to my previous post, it would be great if the PHPicker would indicate whether an asset is only available in the cloud and needs to be downloaded first. This would give the user a hint that loading might take longer and could cause network traffic.
Right now, it's unclear to the user (and to us developers) that an asset needs to be downloaded. A small cloud icon would help a lot, I think. (FB9221095)
Thanks for considering!
Topic: Media Technologies
SubTopic: Photos & Camera
Tags: PhotoKit, Cloud and Local Storage, Photos and Imaging, wwdc21-10046
We are implementing a CIImageProcessorKernel that uses an MTLRenderCommandEncoder to perform some mesh-based rendering into the output’s metalTexture. This works on iOS but crashes on macOS because the texture’s usage there does not always include renderTarget: sometimes the output’s texture can be used as a render target, sometimes not. It seems Core Image’s internal texture cache contains both kinds of textures, and which one we get depends on the order in which the filters are executed.
So far we have only observed this on macOS (on different Macs, including M1 and the macOS 12 beta), but not on iOS (not even on an M1 iPad).
We would expect to always be able to use the output’s texture as render target so we can use it as a color attachment for the render pass.
Is there some way to configure a CIImageProcessorKernel to always get renderTarget output textures? Or do we really need to render into a temporary texture and blit the result into the output texture? This would be a huge waste of memory and time…
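For context, the fallback we would like to avoid would look roughly like this inside process(with:arguments:output:) (a sketch; the helper names are mine, and it accepts the extra copy we are trying to get rid of):
import CoreImage
import Metal

// Sketch of the temporary-texture fallback: if the output texture can't be
// used as a render target, render into a scratch texture and blit the result
// back into the output texture.
func renderTargetTexture(for output: CIImageProcessorOutput,
                         device: MTLDevice) -> (target: MTLTexture, needsBlit: Bool)? {
    guard let outputTexture = output.metalTexture else { return nil }
    if outputTexture.usage.contains(.renderTarget) {
        return (outputTexture, false)
    }
    let descriptor = MTLTextureDescriptor.texture2DDescriptor(
        pixelFormat: outputTexture.pixelFormat,
        width: outputTexture.width,
        height: outputTexture.height,
        mipmapped: false)
    descriptor.usage = [.renderTarget, .shaderRead]
    guard let scratch = device.makeTexture(descriptor: descriptor) else { return nil }
    return (scratch, true)
}

// After encoding the render pass into the scratch texture, copy it back:
func blitIfNeeded(from scratch: MTLTexture, to output: CIImageProcessorOutput) {
    guard let commandBuffer = output.metalCommandBuffer,
          let outputTexture = output.metalTexture,
          let blit = commandBuffer.makeBlitCommandEncoder() else { return }
    blit.copy(from: scratch, to: outputTexture)
    blit.endEncoding()
}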
I would like to know if there are some best practices for integrating Core ML models into a Core Image pipeline, especially when it comes to support for tiling.
We are using a CIImageProcessorKernel for integrating an MLModel-based filtering step into our filter chain. The wrapping CIFilter that actually calls the kernel handles the scaling of the input image to the size the model input requires.
In the roi(forInput:arguments:outputRect:) method the kernel signals that it always requires the full extent of the input image in order to produce an output (since MLModels don't support tiling).
In the process(with:arguments:output:) method, the kernel is performing the prediction of the model on the input pixel buffer and then copies the result into the output buffer.
This works well until the filter chain gets more and more complex and the input images become larger. At that point, Core Image wants to perform tiling to stay within its memory limits. It can't tile the kernel's input image since we defined the ROI to be the whole image.
However, it is still calling the process(…) method multiple times, each time demanding a different tile/region of the output to be rendered. But since the model doesn't support producing only a part of the output, we effectively have to process the whole input image again for each output tile that should be produced.
We already tried caching the result of the model run between consecutive calls to process(…). However, we are unable to tell whether the next call still belongs to the same render (just for a different tile) or is a different render entirely, potentially with a different input image.
If we had access to the digest that Core Image computes for an image during processing, we could detect whether the input changed between calls to process(…). But this digest is not part of CIImageProcessorInput.
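To illustrate, the kind of cache we mean looks roughly like this (a simplified sketch; the class name and the keying on the input pixel buffer's identity are my own constructions, and that keying is exactly what we can't rely on across renders):
import CoreImage
import CoreVideo

// Simplified sketch: remember the last model output keyed by the input pixel
// buffer. Fragile, because there is no reliable way to tell whether two
// process(…) calls belong to the same render.
final class ModelOutputCache {
    private var cachedInput: CVPixelBuffer?
    private var cachedOutput: CVPixelBuffer?

    func output(for input: CVPixelBuffer,
                compute: (CVPixelBuffer) throws -> CVPixelBuffer) rethrows -> CVPixelBuffer {
        if let cachedInput, let cachedOutput, cachedInput === input {
            return cachedOutput
        }
        let result = try compute(input)
        cachedInput = input
        cachedOutput = result
        return result
    }
}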
What is the best practice here to avoid needless reevaluation of the model? How does Apple handle that in their ML-based filters like CIPersonSegmentation?
The ROI callback that is passed to a CIKernel’s apply(…) method seems to be referenced beyond the render call and is not released properly. That also means that any captured state is retained longer than expected.
I noticed this in a camera capture scenario because the capture session stopped delivering new frames after the initial batch. The output ran out of buffers because they were not properly returned to the pool. I was capturing the filter’s input image in the ROI callback like in this simplified case:
override var outputImage: CIImage? {
    guard let inputImage = inputImage else { return nil }
    let roiCallback: CIKernelROICallback = { _, _ in
        return inputImage.extent
    }
    return Self.kernel.apply(extent: inputImage.extent, roiCallback: roiCallback, arguments: [inputImage])
}
While it is avoidable in this case, it is also very unexpected that the ROI callback is retained longer than needed for rendering the output image. Even when not capturing a lot of state, this would still unnecessarily accumulate over time.
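(For completeness, the avoidance in this case is to capture only the extent value instead of the image, sketched here:)
// Capture only the value-type extent so the callback doesn't retain the image.
let extent = inputImage.extent
let roiCallback: CIKernelROICallback = { _, _ in extent }
return Self.kernel.apply(extent: extent, roiCallback: roiCallback, arguments: [inputImage])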
Note that calling ciContext.clearCaches() does actually seem to release the captured ROI callbacks. But I don’t want to do that after every frame since there are also resources worth caching.
Is there a reason why Core Image caches the ROI callbacks beyond the rendering calls they are involved in?
While trying to re-create the CIFilterCam demo shown in the WWDC session, I hit a roadblock when trying to access a hardware camera from inside my extension.
Can I simply use an AVCaptureSession + AVCaptureDeviceInput + AVCaptureVideoDataOutput to get frames from an actual hardware camera and pass them to the extension's stream? If yes, when should I ask for camera access permissions?
It seems the extension code is run as soon as I install the extension, but I never get prompted for access permission. Do I need to set up the capture session lazily? What's the best practice for this use case?
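For reference, the lazy setup I have in mind would look roughly like this (a sketch; the class name is mine, and whether a camera extension is actually allowed to do this, and when to request permission, is exactly what I'm asking):
import AVFoundation

// Sketch of a lazily created capture pipeline: the session is only set up
// (and permission requested) once the stream is actually started, not when
// the extension process launches.
final class HardwareCameraSource: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private var session: AVCaptureSession?
    private let outputQueue = DispatchQueue(label: "camera.output")

    func startIfAuthorized() {
        AVCaptureDevice.requestAccess(for: .video) { [weak self] granted in
            guard granted else { return }
            self?.outputQueue.async { self?.setUpSessionIfNeeded() }
        }
    }

    private func setUpSessionIfNeeded() {
        guard session == nil,
              let device = AVCaptureDevice.default(for: .video),
              let input = try? AVCaptureDeviceInput(device: device) else { return }
        let session = AVCaptureSession()
        if session.canAddInput(input) { session.addInput(input) }
        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: outputQueue)
        if session.canAddOutput(output) { session.addOutput(output) }
        session.startRunning()
        self.session = session
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Forward the sample buffer to the extension's stream here.
    }
}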
Topic: App & System Services
SubTopic: Drivers
Tags: System Extensions, Camera, Core Media, wwdc2022-10022
Core Image has the concept of Region of Interest (ROI) that allows for nice optimizations during processing. For instance, if a filtered image is cropped before rendering, Core Image can tell the filters to only process that cropped region of the image. This means no pixels are processed that would be discarded by the cropping.
Here is an example:
let blurred = ciImage.applyingGaussianBlur(sigma: 5)
let cropped = blurred.cropped(to: CGRect(x: 100, y: 100, width: 200, height: 200))
First, we apply a Gaussian blur filter to the whole image, then we crop it to a smaller rect. The corresponding filter graph looks like this:
Even though the extent of the image is rather large, the ROI of the crop is propagated back to the filter so that it only processes the pixels within the rendered region.
Now to my problem: Core Image can also cache intermediate results of a filter chain. In fact, it does that automatically. This improves performance when, for example, only changing the parameter of a filter in the middle of the chain and rendering again. Then everything before that filter doesn't change, so a cached intermediate result can be used.
CI also has a mechanism for explicitly defining such a caching point by using insertingIntermediate(cache: true). But I noticed that this doesn't play nicely with ROI propagation.
For example, if I change the example above like this:
let blurred = ciImage.applyingGaussianBlur(sigma: 5)
let cached = blurred.insertingIntermediate(cache: true)
let cropped = cached.cropped(to: CGRect(x: 100, y: 100, width: 200, height: 200))
the filter graph looks like this:
As you can see, the blur filter suddenly wants to process the whole image, regardless of the cropping that happens afterward. The inserted cached intermediate always requires the whole input image as its ROI.
I found this a bit confusing. It prevents us from inserting explicit caching points into our pipeline since we also support non-destructive cropping using the abovementioned method. Performance is too low, and memory consumption is too high when processing all those unneeded pixels.
Is there a way to insert an explicit caching point into the pipeline that correctly propagates the ROI?
I'm processing a 4K video with a complex Core Image pipeline that also invokes a neural style transfer Core ML model. This works very well, but sometimes, for very few frames, the model execution fails with the following error messages:
Execution of the command buffer was aborted due to an error during execution. Internal Error (0000000e:Internal Error)
Error: command buffer exited with error status.
The Metal Performance Shaders operations encoded on it may not have completed.
Error:
(null)
Internal Error (0000000e:Internal Error)
<CaptureMTLCommandBuffer: 0x280b95d90> -> <AGXG15FamilyCommandBuffer: 0x108f143c0>
label = <none>
device = <AGXG15Device: 0x106034e00>
name = Apple A16 GPU
commandQueue = <AGXG15FamilyCommandQueue: 0x1206cee40>
label = <none>
device = <AGXG15Device: 0x106034e00>
name = Apple A16 GPU
retainedReferences = 1
[espresso] [Espresso::handle_ex_plan] exception=Espresso exception: "Generic error": Internal Error (0000000e:Internal Error); code=1 status=-1
[coreml] Error computing NN outputs -1
[coreml] Failure in -executePlan:error:.
It's really hard to reproduce since it only happens occasionally. I also didn't find a way to get more details on the Internal Error mentioned in the logs, so I don't know the real reason why it fails.
Any advice would be appreciated!