
Is a Locked Capture Extension allowed to just "open the app" when the device is unlocked?
Hey, quick question. I noticed that Adobe's new app, Project Indigo, lets you open the app with the Camera Control button. However, when your device is locked it just shows this screen:

Would this normally be approved by the App Store review process? I ask because I would like to do something similar with my camera app. I know this is not the best user experience, but my app's UI is not built in Swift and I don't have the resources to rebuild it. At least this way the user experience would be improved from what it is now, where users cannot launch the app at all. I get many requests per week about this feature and would love to improve the UX for my users, even if it's not the best possible.

Thanks, Alex
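For context, here is the rough shape of what I'd like to build: a LockedCameraCapture extension whose only UI is a button that hands off to the containing app via openApplication(for:). This is just a sketch of the idea, not Adobe's implementation, and the type and view names are my own placeholders.

import SwiftUI
import LockedCameraCapture

// Minimal extension entry point (names are illustrative).
@main
struct MinimalCaptureExtension: LockedCameraCaptureExtension {
    var body: some LockedCameraCaptureUIScene {
        LockedCameraCaptureUIScene { session in
            OpenAppView(session: session)
        }
    }
}

struct OpenAppView: View {
    let session: LockedCameraCaptureSession

    var body: some View {
        Button("Unlock to open the app") {
            Task {
                // Hands off to the containing app; the system requires the device
                // to be unlocked first. The docs also say an extension may be
                // terminated if it has no active camera view, which is exactly
                // the part I'm unsure App Review would accept.
                let activity = NSUserActivity(activityType: NSUserActivityTypeLockedCameraCapture)
                try? await session.openApplication(for: activity)
            }
        }
    }
}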
Replies: 0 · Boosts: 0 · Views: 233 · Activity: 6d
Capturing & Processing ProRaw 48MP images is slow
Hey, I have a camera app that captures a ProRAW photo and then runs a few Core Image filters before saving it to the device as a HEIC. However, I'm finding that capturing at 48MP is rather slow. Testing a minimal pipeline on an iPhone 16 Pro:

- Shutter press => file received in output: 1.2 ~ 1.6s
- CIRawFilter created from the photo's file representation, then rendered to a context without any filters: 0.8 ~ 1s
- Saving to device: ~0.15s

Is this the expected time for capturing and processing? The native camera app seems to save its images within half a second. I'm using QualityPrioritization.balanced and the highest resolution available, which is 48MP. Would using CIRawFilter with the pixelBuffer from the photo output be faster? I tried it but couldn't get it to output an image. Are there any other things I could try to speed this up? Is it possible to capture at 24MP instead? Thanks, Alex
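On the 24MP question, this is the kind of thing I was thinking of trying: pick a lower entry from the format's supportedMaxPhotoDimensions and apply it to the output and per-capture settings. It's only a sketch; I haven't confirmed that a ~24MP dimension is actually offered for ProRAW on the 16 Pro, and the ProRAW-specific settings are omitted.

import AVFoundation

// Pick the largest supported photo dimension at or below ~24MP and apply it
// to both the output and the per-capture settings (iOS 16+ API).
func configureForRoughly24MP(device: AVCaptureDevice, output: AVCapturePhotoOutput) {
    let target = 24_000_000
    let candidates = device.activeFormat.supportedMaxPhotoDimensions
    if let best = candidates
        .filter({ Int($0.width) * Int($0.height) <= target })
        .max(by: { Int($0.width) * Int($0.height) < Int($1.width) * Int($1.height) }) {
        output.maxPhotoDimensions = best
    }
}

func makeSettings(for output: AVCapturePhotoOutput) -> AVCapturePhotoSettings {
    // ProRAW captures would need AVCapturePhotoSettings(rawPixelFormatType:) instead.
    let settings = AVCapturePhotoSettings()
    settings.maxPhotoDimensions = output.maxPhotoDimensions
    settings.photoQualityPrioritization = .balanced
    return settings
}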
Replies: 1 · Boosts: 1 · Views: 320 · Activity: Mar ’25
How to access HDRGainMap from AVCapturePhoto
Hey, I'm building a camera app and I want to use the captured HDRGainMap alongside the photo to do some processing with a CIFilter chain. How can this be done? I can't find any documentation anywhere on this, only on how to access the HDRGainMap from an existing HEIC file, which I have done successfully. For that I'm doing something like the following:

let gainmap = CGImageSourceCopyAuxiliaryDataInfoAtIndex(source, 0, kCGImageAuxiliaryDataTypeHDRGainMap)
let gainDict = NSDictionary(dictionary: gainmap)
let gainData = gainDict[kCGImageAuxiliaryDataInfoData] as? Data
let gainDescription = gainDict[kCGImageAuxiliaryDataInfoDataDescription]
let gainMeta = gainDict[kCGImageAuxiliaryDataInfoMetadata]

However, I'm not sure what the approach is with an AVCapturePhoto output from an AVCaptureDevice. Thanks!
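One avenue I'm considering (a sketch, not a confirmed approach): feed the photo's encoded file data into an in-memory CGImageSource and reuse the same auxiliary-data call. Whether the gain map survives this path for every capture configuration is something I haven't verified.

import AVFoundation
import ImageIO

// In photoOutput(_:didFinishProcessingPhoto:error:), build an in-memory image
// source from the encoded photo and ask it for the HDR gain map, exactly as
// with a HEIC file on disk.
func gainMapInfo(from photo: AVCapturePhoto) -> NSDictionary? {
    guard let data = photo.fileDataRepresentation(),
          let source = CGImageSourceCreateWithData(data as CFData, nil),
          let info = CGImageSourceCopyAuxiliaryDataInfoAtIndex(
              source, 0, kCGImageAuxiliaryDataTypeHDRGainMap) else {
        return nil
    }
    return info as NSDictionary
}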
Replies: 2 · Boosts: 0 · Views: 608 · Activity: Jan ’25
Slow performance decoding large images with Core Image.
I'm building a camera app that does some post-processing after the photo has been taken. With 12MP images the processing speed is pretty good, but larger 24MP images are very slow. I created a very simple example to demonstrate the issue, which loads an image and then renders it to data:

let context = CIContext()
let imageUrl = Bundle.main.url(forResource: "12mp", withExtension: "jpg")!
let imageData = try! Data(contentsOf: imageUrl)
let ciImage = CIImage(data: imageData)!

let start = CFAbsoluteTimeGetCurrent()
let jpegData = context.jpegRepresentation(of: ciImage, colorSpace: context.workingColorSpace!)
print(jpegData?.count)
print("Resize Completed: " + String(CFAbsoluteTimeGetCurrent() - start))

Running this code on an iPhone 16 Pro with different images produces these benchmarks:

- 12MP => 0.03s
- 24MP => 1.22s
- 48MP => 2.98s

I understand that processing time will increase with resolution, but it doesn't seem linear. I have tried setting different CIContext options such as .useSoftwareRenderer: false, but it has made no difference. From profiling the process it looks like the JPEG decoding is the bottleneck. This is for a 48MP image: Is there any way this can be improved?
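One thing I'm experimenting with (a sketch, and only applicable when a smaller working size is acceptable) is letting ImageIO do a downsampled decode before handing the result to Core Image, rather than decoding the full-resolution JPEG through CIImage(data:). It sidesteps the question of why the full-size decode scales so badly, but keeps my pipeline responsive.

import CoreImage
import ImageIO

// Decode the JPEG at a reduced size via ImageIO's thumbnail path, then wrap
// the result in a CIImage. maxPixelSize of 4096 is an arbitrary example value.
func downsampledCIImage(at url: URL, maxPixelSize: Int = 4096) -> CIImage? {
    let options: [CFString: Any] = [
        kCGImageSourceCreateThumbnailFromImageAlways: true,
        kCGImageSourceThumbnailMaxPixelSize: maxPixelSize,
        kCGImageSourceCreateThumbnailWithTransform: true
    ]
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil),
          let cgImage = CGImageSourceCreateThumbnailAtIndex(source, 0, options as CFDictionary) else {
        return nil
    }
    return CIImage(cgImage: cgImage)
}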
Replies: 0 · Boosts: 0 · Views: 562 · Activity: Dec ’24
Launching an app with Camera Control
I've just received my iPhone 16 Pro to develop some of the Camera Control features. I am trying to set up my app to be launched from a button press, and from my research in the documentation this is only possible if I develop a LockedCameraCaptureExtension. Is this correct? My app is written in React Native, so building an extension would require me to re-create the entire UI in Swift, which just isn't possible with my resources. Ideally I could build a simple extension that requires authentication to open the app, but I'm not sure that will work:

"The app extension terminates shortly after launch if it doesn't have an active camera view that uses AVCaptureEventInteraction to handle events from the hardware buttons, or if access to the camera hasn't been requested."

This is a bit frustrating for something as simple as just opening an app. Thanks, Alex
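For completeness, once the app is already in the foreground the hardware button can apparently be handled without an extension via AVCaptureEventInteraction (iOS 17.2+). A minimal sketch of what I mean, with the handler contents left as a placeholder; this does not solve launching the app, which still seems to require the extension:

import AVKit
import UIKit

// Attach an AVCaptureEventInteraction to a view controller so presses of the
// hardware capture / Camera Control button reach the app while it is foregrounded.
final class CameraViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        let interaction = AVCaptureEventInteraction { event in
            if event.phase == .ended {
                // Trigger the capture in the existing (React Native) pipeline here.
            }
        }
        view.addInteraction(interaction)
    }
}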
Replies: 1 · Boosts: 0 · Views: 1k · Activity: Sep ’24
App crashing due to memory pressure on iPhone 13, but works fine on iPhone 12 and iPhone 11
I have a camera app that has some intensive processing. Each photo can require between 300-500MB of memory to process all the CIFilters, depth blur etc. This has been working fine on my older test devices, iPhone 11 & 12, but I had some crash reports from users and I noticed that they were always iPhone 13 / 13 mini users. After purchasing a 13, I can confirm that after taking 2-3 photos sequentially the app crashes due to memory usage.

What I don't understand is that I can take many photos sequentially on the iPhone 11 / 12 and they do not crash. The memory usage is certainly high, but all the images save and the app does not crash. Here's what the memory usage looks like when using the iPhone 11:

All the devices have 4GB of RAM, so why should the iPhone 13 not be able to handle it? One option would be to try and reduce the memory usage of the application, but it's a challenge when processing 12MP images. Here's what the memory debugger looks like, not very useful! Any pointers greatly appreciated! Alex
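The main mitigation I'm trying so far is to reuse a single CIContext and wrap each photo's processing in an autoreleasepool so intermediate buffers are released between captures. This is only a sketch of the general pattern, not my full pipeline:

import CoreImage
import CoreGraphics

// Reusing one CIContext avoids re-creating GPU resources per photo, and the
// autoreleasepool bounds the lifetime of the large intermediate images that
// Core Image and ImageIO create while a single photo is processed.
let sharedContext = CIContext()

func process(_ photos: [CIImage], filters: [CIFilter]) {
    for photo in photos {
        autoreleasepool {
            var image = photo
            for filter in filters {
                filter.setValue(image, forKey: kCIInputImageKey)
                if let output = filter.outputImage {
                    image = output
                }
            }
            // Render with the shared context; the encoded data is released at
            // the end of this pool iteration once it has been saved.
            _ = sharedContext.jpegRepresentation(
                of: image,
                colorSpace: CGColorSpace(name: CGColorSpace.sRGB)!)
        }
    }
}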
Replies: 0 · Boosts: 0 · Views: 490 · Activity: Sep ’24
CIFilter chain failing to render parts of output
I’ve built an iOS camera app that applies many CIFilters to an image captured by the camera. Some of my users have reported that on occasion the images have large parts that are blank, see below: Frustratingly, I can’t reproduce this myself! Does anyone know what could be causing it? Is it a memory issue? I haven’t posted the code as there’s a lot to look over and I’m not sure it would help diagnose it. Thanks for any pointers.
Replies: 1 · Boosts: 0 · Views: 642 · Activity: Aug ’24
Performant alternative to scaling a CIImage / PixelBuffer
Hey, I’m building a camera app where I am applying real-time effects to the viewfinder. One of those effects is a variable blur, so to improve performance I am scaling down the input image using CIFilter.lanczosScaleTransform(). This works fine and runs at 30FPS, but when running the Metal profiler I can see that the scaling transforms use a lot of GPU time, almost as much as the variable blur. Is there a more efficient way to do this? The simplified chain is like this:

1. Scale down viewFinder CVPixelBuffer (CIFilter.lanczosScaleTransform)
2. Scale up depthMap CVPixelBuffer to match viewFinder size (CIFilter.lanczosScaleTransform)
3. Create CIImages from both CVPixelBuffers
4. Apply variable depth blur (CIFilter.maskedVariableBlur)
5. Scale up final image to Metal view size (CIFilter.lanczosScaleTransform)
6. Render CIImage to an MTKView using CIRenderDestination

From some research, I wonder if scaling the CVPixelBuffer using the Accelerate framework would be faster? Also, instead of scaling the final image, perhaps I could offload this to the Metal view? Any pointers greatly appreciated!
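On the Accelerate idea, here is a rough sketch of scaling a pixel buffer on the CPU with vImage, assuming the buffers are 32BGRA (kCVPixelFormatType_32BGRA) and that a correctly sized destination buffer already exists. Whether this actually beats the GPU lanczos pass is something I'd still have to measure.

import Accelerate
import CoreVideo

// Scale a 32BGRA CVPixelBuffer into a pre-allocated destination buffer of the
// target size using vImage. Both buffers must be locked while vImage reads
// and writes their base addresses.
func scale(_ source: CVPixelBuffer, into destination: CVPixelBuffer) {
    CVPixelBufferLockBaseAddress(source, .readOnly)
    CVPixelBufferLockBaseAddress(destination, [])
    defer {
        CVPixelBufferUnlockBaseAddress(source, .readOnly)
        CVPixelBufferUnlockBaseAddress(destination, [])
    }

    var src = vImage_Buffer(
        data: CVPixelBufferGetBaseAddress(source),
        height: vImagePixelCount(CVPixelBufferGetHeight(source)),
        width: vImagePixelCount(CVPixelBufferGetWidth(source)),
        rowBytes: CVPixelBufferGetBytesPerRow(source))

    var dst = vImage_Buffer(
        data: CVPixelBufferGetBaseAddress(destination),
        height: vImagePixelCount(CVPixelBufferGetHeight(destination)),
        width: vImagePixelCount(CVPixelBufferGetWidth(destination)),
        rowBytes: CVPixelBufferGetBytesPerRow(destination))

    // kvImageHighQualityResampling is the closest to lanczos; kvImageNoFlags is faster.
    _ = vImageScale_ARGB8888(&src, &dst, nil, vImage_Flags(kvImageHighQualityResampling))
}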
Replies: 2 · Boosts: 0 · Views: 1k · Activity: Jul ’24
Improving object separation with live depth data
Hey, I'm building a portrait mode into my camera app but I'm having trouble matching the quality of Apple's native camera implementation. I'm streaming the depth data and applying a CIMaskedVariableBlur to the video stream, which works quite well, but the definition of the object in focus looks quite bad in some scenarios. See the comparison below with Apple's UI + depth data. What I don't quite understand is how Apple is able to do such a good cutout around my hand, assuming it has similar depth data to what I am receiving. You can see in the depth image that my hand is essentially the same colour as parts of the background, and this shows in the blur preview, but Apple gets around this. Does anyone have any ideas? Thanks!
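One suspicion I have (unconfirmed) is that for stills Apple leans on the portrait effects matte rather than the raw depth map. A sketch of requesting it at capture time; note the matte is only delivered for still photos of people, not for the live stream, so it wouldn't directly help the preview:

import AVFoundation
import CoreImage

// Enable portrait effects matte delivery on the output and per capture,
// then read the matte back as a CIImage in the delegate callback.
func enableMatte(on output: AVCapturePhotoOutput) -> AVCapturePhotoSettings {
    if output.isDepthDataDeliverySupported {
        // Matte delivery appears to require depth data delivery as well.
        output.isDepthDataDeliveryEnabled = true
    }
    if output.isPortraitEffectsMatteDeliverySupported {
        output.isPortraitEffectsMatteDeliveryEnabled = true
    }
    let settings = AVCapturePhotoSettings()
    settings.isPortraitEffectsMatteDeliveryEnabled = output.isPortraitEffectsMatteDeliveryEnabled
    return settings
}

func matteImage(from photo: AVCapturePhoto) -> CIImage? {
    guard let matte = photo.portraitEffectsMatte else { return nil }
    // The matte is higher resolution and cleaner around edges than the depth map.
    return CIImage(cvPixelBuffer: matte.mattingImage)
}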
Replies: 0 · Boosts: 0 · Views: 610 · Activity: Jun ’24
Documentation for CIDepthBlurEffect "inputShape"
I'm using CIDepthBlurEffect to create a portrait mode effect on a rendered image. The effect is working as expected, however I want to create the "bokeh ball" effect seen in the Photos app. I see that the filter has an "inputShape" input of type NSString, however the documentation does not specify what value this should be. Any pointers or help are greatly appreciated.
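In case it helps anyone else digging, the only probing I've found so far is to ask the filter for its attribute dictionary at runtime and inspect what it reports for inputShape. This only prints whatever metadata the filter chooses to expose, and it may or may not list the accepted values:

import CoreImage

// Dump whatever metadata CIDepthBlurEffect publishes for its inputs.
if let filter = CIFilter(name: "CIDepthBlurEffect") {
    print("Inputs:", filter.inputKeys)
    if let shapeInfo = filter.attributes["inputShape"] {
        print("inputShape attributes:", shapeInfo)
    }
}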
Replies: 0 · Boosts: 1 · Views: 742 · Activity: Jun ’24