Yeah, I've managed to implement the Camera Control gestures for controlling exposure and zoom so far, and they seem to work well. Is there any workaround so that if the device is unlocked I can just open my app, or at least show a loading screen? It's a shame that my users will miss out on this.
I'm not sure that would be possible, as I'm using React Native for the UI. Is there any reason you can't launch the app the same way the Action button's "Open App" action does? That would have been so much quicker to implement, even if my app were already using SwiftUI. I can only work on this project in my spare time, and it feels a bit anti-indie-developer to have it this way.
Yeah, I understand. Thanks for your response. One last question: would it be acceptable to make an extension that just shows an "Unlock to access" button? I assume most of my users have an unlocked phone anyway, in which case it would just open the app normally, I believe?
Thanks. I tried to enable dump-intermediates, but I couldn't get any intermediates to show, or any timings. Here's the output from the Xcode log on startup:
CI_PRINT_TREE options flags:
1 initial graph
2 optimized graph
4 program graph (set)
dump-inputs
dump-intermediates (set)
dump-raw-intermediates (set)
dump-bmtl-intermediates (set)
dump-outputs
dump-timing (set)
context==<name|number>
context!=<name|number>
frame-
<dot|pdf|png>
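For reference, this is the value I have in my scheme's environment variables (Edit Scheme → Run → Arguments → Environment Variables); it's my reconstruction from the "(set)" markers in the log above, so treat the exact string as an assumption:

```shell
# Reconstructed from the "(set)" flags above: graph type 4 (program graph)
# plus the intermediate dumps and timing.
export CI_PRINT_TREE="4 dump-intermediates dump-raw-intermediates dump-bmtl-intermediates dump-timing"
```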
Thank you so much! I have a follow-up question: how do you configure AVCapturePhotoSettings to capture the HDR gain map? I have an iPhone 16 Pro, and its default Camera app produces images with embedded HDR gain maps, so my hardware is capable. However, using the code above, the returned CIImage is nil.
Thank you, that resolved the issue. I can now access the gain map on my iPhone 16 Pro; however, it's still nil on my iPhone 13 (iOS 18.0.1). Images captured via the Camera app on the 13 do have a gain map, though it appears different, with dark colours, as though it's a different format. Is there any documentation on this? Thank you.
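In case it helps anyone who finds this later, the approach that now works for me on the 16 Pro boils down to reading the gain map out of the photo's encoded file data as an auxiliary image. This is a sketch rather than my exact code; the names are standard AVFoundation/Core Image API:

```swift
import AVFoundation
import CoreImage

final class GainMapCapture: NSObject, AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        // The gain map is stored in the encoded file as an auxiliary image,
        // so go through the file representation rather than the pixel buffer.
        guard let data = photo.fileDataRepresentation() else { return }
        // Returns nil when the capture did not embed a gain map.
        let gainMap = CIImage(data: data, options: [.auxiliaryHDRGainMap: true])
        print("gain map:", gainMap as Any)
    }
}
```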
Here are the two gain maps I extracted from photos taken with the default Camera app on the two devices. However, as I explained, I can't seem to access the 13's HDR gain map from an AVCapturePhoto using the above method.
https://gist.github.com/kiding/fa4876ab4ddc797e3f18c71b3c2eeb3a?permalink_comment_id=5398481#gistcomment-5398481
Interesting, thanks for the link. However, I'm not sure I want to defer the processing, as that would lengthen the time from capture to processed result?
I've also just tried enabling this, and when maxPhotoDimensions is set to 48MP it only sends 12MP images to the deferred delegate method, and I'm not sure why!
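For context, here's roughly how I'm opting in to 48MP capture. This is a sketch (where `device` stands for the active AVCaptureDevice); support for the largest dimensions depends on the active format:

```swift
import AVFoundation

func configureFullResolution(output: AVCapturePhotoOutput,
                             device: AVCaptureDevice) -> AVCapturePhotoSettings {
    // The output's ceiling must be raised to the largest dimensions the
    // active format supports (48MP on supported cameras)...
    if let largest = device.activeFormat.supportedMaxPhotoDimensions.last {
        output.maxPhotoDimensions = largest
    }
    let settings = AVCapturePhotoSettings()
    // ...and each capture must request it as well, or it falls back to 12MP.
    settings.maxPhotoDimensions = output.maxPhotoDimensions
    return settings
}
```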
Yes of course, here is the project: https://github.com/alexfoxy/CIPerformanceTest
The image is always the same, and you'll see the CIContext setup in the code above!
Thanks!
Yeah, I did wonder that. Any other tips on measuring memory usage with Core Image? For context, I'm processing 12/24MP images with lots of complex Core Image filter chains. I'm rewriting some as Metal shaders, so I want to see whether I'm improving things!
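In case it's useful to others, this is the sketch I'm using to compare memory before and after running a filter chain. It reads phys_footprint via task_info, which tracks the same number as Xcode's memory gauge (Mach calls, so iOS/macOS only):

```swift
import Darwin

// Returns the process's physical memory footprint in bytes, or nil on failure.
func memoryFootprint() -> UInt64? {
    var info = task_vm_info_data_t()
    var count = mach_msg_type_number_t(
        MemoryLayout<task_vm_info_data_t>.size / MemoryLayout<natural_t>.size)
    let kr = withUnsafeMutablePointer(to: &info) { infoPtr in
        infoPtr.withMemoryRebound(to: integer_t.self, capacity: Int(count)) {
            task_info(mach_task_self_, task_flavor_t(TASK_VM_INFO), $0, &count)
        }
    }
    guard kr == KERN_SUCCESS else { return nil }
    return info.phys_footprint
}
```

Sampling this before and after a render (and again after calling CIContext's clearCaches()) gives a rough per-chain delta.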