Reply to Core Image for depth maps & segmentation masks: numeric fidelity issues when rendering CIImage to CVPixelBuffer (looking for Architecture suggestions)
The problem might be this: Core Image uses 16-bit float RGBA as the default working format. That means that whenever it needs an intermediate buffer for the rendering, it will create a 4-channel 16-bit float surface to render into. This also means that your 1-channel unsigned integer values will automatically be mapped to float values in 0.0...1.0. That's probably where you lose precision.

There are a few options to circumvent this:

- You could set the workingFormat context option to .L8 or .R8. However, this means all intermediate buffers will have that format. If you want to mix processing of the segmentation mask with other images, this won't work. If you only want to process the mask separately, you can set up a separate CIContext with this option. Note, however, that most built-in CIFilters assume a floating-point working format and might not perform well with this format.
- You can process your segmentation map with Metal (as you suggested) as part of your CIFilter pipeline using a CIImageProcessorKernel. For the kernel, you can set formatForInput(...) and outputFormat to .R8. This tells CI that it doesn't need to convert the segmentation mask before passing it to your processor kernel. In the process method, you can access the input's Metal texture and perform custom Metal processing with it, rendering into the output's texture (which is then also in R8 format). This way, you won't lose any precision.

I think the second option is the best choice here, as you get the best of both worlds (custom Metal processing + CI integration).

Tip: You can always use CI_PRINT_TREE to check the formats of the intermediate buffers CI uses during rendering.
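A rough sketch of what the second option could look like; the kernel name is hypothetical, and the Metal encoding inside process is a placeholder for your own work:

```swift
import CoreImage
import Metal

// A CIImageProcessorKernel that keeps the mask in R8 end to end,
// so no float conversion (and no precision loss) happens.
final class MaskProcessorKernel: CIImageProcessorKernel {
    // Ask Core Image to hand us the input as single-channel 8-bit...
    override class func formatForInput(at input: Int32) -> CIFormat { .R8 }
    // ...and to allocate the output in the same format.
    override class var outputFormat: CIFormat { .R8 }

    override class func process(with inputs: [CIImageProcessorInput]?,
                                arguments: [String: Any]?,
                                output: CIImageProcessorOutput) throws {
        guard let inputTexture = inputs?.first?.metalTexture,
              let outputTexture = output.metalTexture,
              let commandBuffer = output.metalCommandBuffer else { return }
        // Encode your custom Metal kernel here, reading from inputTexture
        // and writing into outputTexture (both R8).
        _ = (inputTexture, outputTexture, commandBuffer)
    }
}

// Usage inside a filter pipeline:
// let processed = try MaskProcessorKernel.apply(withExtent: mask.extent,
//                                               inputs: [mask],
//                                               arguments: nil)
```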
Topic: Machine Learning & AI SubTopic: General
3w
Reply to BGContinuedProcessingTask GPU access — no iPhone support?
Good question! I haven't seen the new GPU background entitlements. Good to know! My guess is that they don't want to support iPhone here because they prioritize battery life over expensive processing on mobile devices. The iPads, on the other hand, are targeted more toward pro use cases (and have a bigger battery). What we are currently doing in our apps is to prevent the screen from locking during video export (using UIApplication.shared.isIdleTimerDisabled = true). We also pause the processing when the app is backgrounded and resume when it's active again.
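In SwiftUI, that workaround could look roughly like this; VideoExporter is a hypothetical stand-in for your export pipeline:

```swift
import SwiftUI
import UIKit

// Hypothetical export controller; pause/resume wiring depends on your pipeline.
final class VideoExporter: ObservableObject {
    @Published var progress: Double = 0
    func pause() { /* suspend the export session */ }
    func resume() { /* continue where it left off */ }
}

struct ExportView: View {
    @Environment(\.scenePhase) private var scenePhase
    @StateObject private var exporter = VideoExporter()

    var body: some View {
        ProgressView(value: exporter.progress)
            .onAppear {
                // Keep the screen from locking while the export runs.
                UIApplication.shared.isIdleTimerDisabled = true
            }
            .onDisappear {
                UIApplication.shared.isIdleTimerDisabled = false
            }
            .onChange(of: scenePhase) { phase in
                // Pause in the background, resume when active again.
                switch phase {
                case .active: exporter.resume()
                default: exporter.pause()
                }
            }
    }
}
```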
Topic: Graphics & Games SubTopic: Metal
3w
Reply to develop app in Europe using image playground
You can easily test Apple Intelligence features in the EU on an iPad. You just have to set the system and Siri language to English (US) and sign in with a US Apple account under iCloud → Media & Purchases. It even works with sandbox accounts! So if you don’t have a US account, you can easily create a sandbox one in App Store Connect. Also note that Apple Intelligence is available on macOS in Europe without the need for a US Apple account. You just have to set the languages to English.
Jan ’25
Reply to Core Image for depth maps & segmentation masks: numeric fidelity issues when rendering CIImage to CVPixelBuffer (looking for Architecture suggestions)
What is the pixel format of the CVPixelBuffer in question?
Topic: Machine Learning & AI SubTopic: General
Feb ’26
Reply to CIRAWFilter.outputImage first-time cost is huge (~3s), subsequent calls are ~3ms. Any official way to pre-initialize RAW pipeline (without taking a real photo)?
Is the DNG you are using for the warm-up one that was captured on iPhone? I.e., does it have the same dimensions as the ones captured by the user later?
Dec ’25
Reply to CoreImage memory build up on real device but not on simulator
Did you always process the same image? How is your CIContext set up? Do you maybe have a small sample project we can use to test this? Thanks!
Nov ’25
Reply to I am not using push notification service, i am only use local notification in my app how to solve this issue
We are suddenly getting the same warning, and we are pretty sure that neither we nor any library we use calls said API. Are there any other APIs that trigger the warning? Was that list recently extended?
Apr ’25
Reply to Images with unusual color spaces not correctly loaded by Core Image
Also filed as FB17081255, including a sample project. Thanks for looking into this!
Apr ’25
Reply to Broken behavior for TipKit on iOS 18 that blocks the interface
Did you find the root cause and/or a solution for the problem? I can't reproduce the issue myself, but we have users reporting freezes of our apps that might be related to this issue.
Topic: UI Frameworks SubTopic: SwiftUI
Mar ’25
Reply to How to analyse CPU usage with Core Image?
What do you mean by "calling back to the CPU" exactly? The main CPU load of Core Image actually comes from the filter graph optimization CI does before the actual rendering. And unfortunately there is not much you can do to speed this up.
Mar ’25
Reply to Capturing & Processing ProRaw 48MP images is slow
There is a newish API for deferred photo processing, which makes the whole capture process seem much faster. That's probably what the Camera app is doing under the hood. Check out this WWDC session for details.
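A sketch of opting into that API (deferred photo delivery on AVCapturePhotoOutput, iOS 17+); the session setup and delegate are assumed to exist elsewhere:

```swift
import AVFoundation

// Opt in to deferred photo delivery so the heavy ProRAW processing
// happens later, off the critical capture path.
let photoOutput = AVCapturePhotoOutput()
if photoOutput.isAutoDeferredPhotoDeliverySupported {
    photoOutput.isAutoDeferredPhotoDeliveryEnabled = true
}

// In your AVCapturePhotoCaptureDelegate, the system then hands you a
// lightweight proxy instead of the fully processed photo:
//
// func photoOutput(_ output: AVCapturePhotoOutput,
//                  didFinishCapturingDeferredPhotoProxy proxy: AVCaptureDeferredPhotoProxy?,
//                  error: Error?) {
//     // Save the proxy to the photo library; the full-quality image
//     // is finished in the background.
// }
```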
Mar ’25
Reply to Resize Image Playground sheet
It turns out this issue is most noticeable in Mac Catalyst apps that use the Mac idiom. We built a small package that provides a better alternative to the system default Image Playground: BetterImagePlayground. See the screenshots on GitHub for comparison.
Jan ’25
Reply to Apple Intelligence On Europe
More countries will be supported in April. Regardless, Apple Intelligence is only available on iPhone 15 Pro and newer.
Jan ’25
Reply to sourceImageURL in imagePlaygroundSheet isn't optional
There is also an API that has an optional sourceImage instead of the URL: imagePlaygroundSheet(isPresented:concepts:sourceImage:onCompletion:onCancellation:)
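A minimal sketch of using that variant; the concept text and handler bodies are placeholders:

```swift
import SwiftUI
import ImagePlayground

struct PlaygroundButton: View {
    @State private var isPresented = false
    // The sourceImage parameter is optional, so nil is fine here.
    @State private var sourceImage: Image? = nil

    var body: some View {
        Button("Create Image") { isPresented = true }
            .imagePlaygroundSheet(isPresented: $isPresented,
                                  concepts: [.text("a friendly robot")],
                                  sourceImage: sourceImage) { url in
                // Temporary URL of the generated image.
                print("Generated:", url)
            } onCancellation: {
                print("Cancelled")
            }
    }
}
```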
Topic: UI Frameworks SubTopic: SwiftUI
Jan ’25
Reply to Resize Image Playground sheet
Feedback filed: FB16090123
Dec ’24