CIRAWFilter.outputImage first-time cost is huge (~3s), subsequent calls are ~3ms. Any official way to pre-initialize RAW pipeline (without taking a real photo)?

Hi Apple Developer Forums,

I’m developing an iOS camera app that processes RAW captures using Core Image. I’m seeing a large “first use” performance penalty specifically when creating the CIImage from CIRAWFilter.outputImage.

What’s slow (important detail)

I’m measuring the time for:

let rawFilter = CIRAWFilter(imageData: rawData, identifierHint: hint)  // failable initializer, so rawFilter is optional
let ciImage = rawFilter?.outputImage                                   // this access is what I'm timing

This is not CIContext.render(...) / createCGImage(...). It’s just the time to access outputImage (i.e., building the Core Image graph / RAW pipeline setup).
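
For clarity, this is roughly how I bracket the measurement (a minimal sketch; the function name and parameters are just illustrative):

import CoreImage
import QuartzCore

// Minimal timing sketch: measures only graph construction, no CIContext render.
func measureOutputImageBuild(rawData: Data, hint: String?) -> CFTimeInterval {
    let start = CACurrentMediaTime()
    let rawFilter = CIRAWFilter(imageData: rawData, identifierHint: hint)  // failable init
    _ = rawFilter?.outputImage                                             // the call being timed
    return CACurrentMediaTime() - start
}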

Observed behavior

First time accessing CIRAWFilter.outputImage: ~3 seconds

Second time (same app session, similar RAW): ~3 milliseconds

So something heavy is happening only on first use (decoder initialization, pipeline setup, shader/library compilation, caching, etc.).

Using Metal System Trace, I also noticed that during the slow first call there are many “Create MTLLibrary” events, while the second call doesn’t show this pattern.

Warm-up attempts using bundled DNG

I tried to “warm up” early (e.g., on camera screen entry) by loading a bundled DNG and accessing its CIRAWFilter.outputImage, without taking a real photo:

Warm-up with a ~247 KB DNG → first real RAW outputImage cost drops to ~1.42s

Warm-up with a ~25 MB DNG → first real RAW outputImage cost drops to ~843ms

This helps, but it’s still far from the steady-state ~3ms.
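
For reference, the warm-up is roughly this (a minimal sketch; “warmup.dng” is a placeholder name for the bundled file):

import Foundation
import CoreImage

// Called on camera screen entry, before the user can capture.
func warmUpRAWPipeline() {
    // "warmup.dng" stands in for the bundled DNG used only for warm-up.
    guard let url = Bundle.main.url(forResource: "warmup", withExtension: "dng"),
          let rawFilter = CIRAWFilter(imageURL: url),
          let image = rawFilter.outputImage else { return }
    _ = image.extent  // only graph construction; no CIContext render here
}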

Warm-up by capturing a real RAW (works, but concerns)

The only method that fully eliminates the delay is to trigger a real RAW capture programmatically before the user’s first photo, then use that captured rawData to warm up the CIRAWFilter.outputImage path. This brings the first user-facing capture close to the steady-state timing.
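
Concretely, the hidden warm-up capture looks roughly like this (sketch only; it assumes an already configured AVCapturePhotoOutput on a running session, and the delegate then feeds photo.fileDataRepresentation() into the warm-up above):

import AVFoundation

// Sketch: trigger a RAW capture purely for warm-up (session/delegate setup omitted).
func triggerWarmUpRAWCapture(photoOutput: AVCapturePhotoOutput,
                             delegate: AVCapturePhotoCaptureDelegate) {
    guard let rawFormat = photoOutput.availableRawPhotoPixelFormatTypes.first else { return }
    let settings = AVCapturePhotoSettings(rawPixelFormatType: rawFormat)
    photoOutput.capturePhoto(with: settings, delegate: delegate)
}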

However:

  • In some regions, the camera shutter sound cannot be suppressed, so “hidden warm-up capture” is unacceptable UX.

  • I’m also unsure whether triggering a real capture without an explicit user action could raise compliance/privacy concerns, even if the image is immediately discarded and never saved/uploaded.

Questions

  • Is the large first-time cost of CIRAWFilter.outputImage expected (RAW pipeline initialization / shader compilation)?

  • Is there an Apple-recommended way to pre-initialize the Core Image RAW pipeline / Metal resources so the first outputImage is fast, without taking a real photo?

  • Are there any best practices (e.g. CIContext creation timing, prepareRender(...), specific options) that reliably reduce this first-use overhead for CIRAWFilter?
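
For context, this is roughly what I had in mind when mentioning prepareRender(...): an untested sketch, and I don’t know whether it actually exercises the RAW pipeline’s Metal compilation (“warmup.dng” is again a placeholder asset):

import Foundation
import CoreGraphics
import CoreImage
import Metal

// Untested idea: ask Core Image to prepare a render of the warm-up DNG into a
// small Metal texture, hoping this forces the RAW pipeline / shader setup.
func prepareRenderWarmUp(context: CIContext, device: MTLDevice) {
    guard let url = Bundle.main.url(forResource: "warmup", withExtension: "dng"),
          let image = CIRAWFilter(imageURL: url)?.outputImage else { return }

    let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba16Float,
                                                        width: 64, height: 64,
                                                        mipmapped: false)
    desc.usage = [.shaderRead, .shaderWrite]
    guard let texture = device.makeTexture(descriptor: desc) else { return }

    let destination = CIRenderDestination(mtlTexture: texture, commandBuffer: nil)
    try? context.prepareRender(image,
                               from: CGRect(x: 0, y: 0, width: 64, height: 64),
                               to: destination,
                               at: .zero)
}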

Attachments

Figure 1: First RAW capture with no warm-up (~3s outputImage time)

Figure 2: First RAW capture after warm-up with bundled DNG (improved but still hundreds of ms)

Thanks for any guidance or experience sharing!

Is the DNG you are using for the warm-up one that was captured on iPhone? I.e., does it have the same dimensions as the ones captured by the user later?
