
arrowEdge of popoverTip not working anymore on iOS 17.1
In iOS 17.1 (and the 17.2 beta), the arrowEdge parameter of SwiftUI's popoverTip no longer has any effect. The code button.popoverTip(tip, arrowEdge: .bottom) places the popover arrow on the requested bottom edge on iOS 17.0, but not on 17.1 and up (see attached screenshots). I checked permittedArrowDirections of the corresponding UIPopoverPresentationController (via the Memory Graph): it's .down on iOS 17.0 and .any (the default) on 17.1. It seems the arrowEdge parameter of popoverTip is no longer properly propagated to the popover controller.
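For reference, a minimal sketch of the setup (the tip and button content are just placeholders):

```swift
import SwiftUI
import TipKit

// Illustrative tip; only the arrowEdge behavior matters here.
struct SampleTip: Tip {
    var title: Text { Text("Tap here to continue") }
}

struct ArrowEdgeDemo: View {
    private let tip = SampleTip()

    var body: some View {
        Button("Show Tip") { }
            // iOS 17.0: the popover arrow points down (tip shown above the button).
            // iOS 17.1+: the requested edge appears to be ignored and the system
            // picks the placement itself (permittedArrowDirections == .any).
            .popoverTip(tip, arrowEdge: .bottom)
    }
}
```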
Replies: 2 · Boosts: 1 · Views: 1.8k · Jul ’24

IOSurface vs. IOSurfaceRef on Catalyst
I have an IOSurface and I want to turn it into a CIImage. However, the CIImage initializer takes an IOSurfaceRef instead of an IOSurface. On most platforms this is not an issue because the two types are toll-free bridgeable, except on Mac Catalyst, where the bridge fails. I observed the same back in Xcode 13 on macOS, but there I could force-cast the IOSurface to an IOSurfaceRef: let image = CIImage(ioSurface: surface as! IOSurfaceRef). On Catalyst, this cast fails at runtime. I found that unsafeBitCast(surface, to: IOSurfaceRef.self) actually works on Catalyst, but it feels very wrong. Am I missing something? Why aren't the types bridgeable on Catalyst? Ideally, there should also be a CIImage initializer that takes an IOSurface instead of a ref.
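For reference, this is a sketch of the workaround I'm currently using (surface being the IOSurface in question):

```swift
import CoreImage
import IOSurface

// Sketch of the two approaches mentioned above.
func makeCIImage(from surface: IOSurface) -> CIImage {
    // Works on other platforms, but the forced bridge crashes at runtime on Mac Catalyst:
    // let image = CIImage(ioSurface: surface as! IOSurfaceRef)

    // Current workaround on Catalyst; it compiles and runs, but feels wrong:
    let ref = unsafeBitCast(surface, to: IOSurfaceRef.self)
    return CIImage(ioSurface: ref)
}
```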
Replies: 2 · Boosts: 1 · Views: 955 · Jun ’24

Transition from "Designed for iPad" to "Mac Catalyst"
Our apps can currently be installed on Apple Silicon Macs via the iPad-apps-on-Mac feature (“Designed for iPad”). Now we are working on “proper” (universal) Catalyst-based Mac apps that will be available on the Mac App Store. How does the transition work for users who currently have the iPad version installed? Will they automatically be updated to the Mac Catalyst app once it’s available, or do they need to re-install the app from the Mac App Store?
Replies: 1 · Boosts: 1 · Views: 810 · Jul ’24

Decode video frames in lower resolution before processing
We are processing videos with Core Image filters in our apps, using an AVMutableVideoComposition (for playback/preview and export). For older devices, we want to limit the resolution at which the video frames are processed, for performance and memory reasons. Ideally, we would tell AVFoundation to deliver video frames with a defined maximum size into our composition. We thought setting the renderSize property of the composition to the desired size would do that. However, this only changes the size of the output frames, not the size of the source frames that come into the composition's handler block. For example:

    let composition = AVMutableVideoComposition(asset: asset, applyingCIFiltersWithHandler: { request in
        let input = request.sourceImage // <- this still has the video's original size
        // ...
    })
    composition.renderSize = CGSize(width: 1280, height: 720) // for example

So if the user selects a 4K video, our filter chain gets 4K input frames. Sure, we can scale them down inside our pipeline, but this costs resources and especially a lot of memory. It would be much better if AVFoundation decoded the video frames at the desired size before passing them into the composition handler. Is there a way to tell AVFoundation to load smaller video frames?
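For completeness, this is a sketch of the in-pipeline downscaling workaround mentioned above (the target size is just an example). It only reduces the working size after the full-resolution frame has already been decoded, so it doesn't solve the memory problem:

```swift
import AVFoundation
import CoreImage

func makeComposition(for asset: AVAsset) -> AVMutableVideoComposition {
    let targetSize = CGSize(width: 1280, height: 720) // assumed maximum processing size

    let composition = AVMutableVideoComposition(asset: asset, applyingCIFiltersWithHandler: { request in
        let input = request.sourceImage // still decoded at the video's original size
        let scale = min(1.0,
                        targetSize.width / input.extent.width,
                        targetSize.height / input.extent.height)
        let downscaled = input.transformed(by: CGAffineTransform(scaleX: scale, y: scale))

        // ... apply the actual filter chain to `downscaled` here ...

        request.finish(with: downscaled, context: nil)
    })
    composition.renderSize = targetSize
    return composition
}
```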
Replies: 0 · Boosts: 1 · Views: 603 · Nov ’24

Compiling CI kernels at runtime
In the "Explore Core Image kernel improvements" session, David mentioned that it is now possible to compile [[stitchable]] CI kernels at runtime. However, I fail to get it working. The kernel requires the #import of <CoreImage/CoreImage.h> and linking against the CoreImage Metal library. But I don't know how to link against the library when compiling my kernel at runtime. Also, according to the Metal Best Practices Guide, "the #include directive is not supported at runtime for user files." Any guidance on how the runtime compilation works is much appreciated! 🙂
Replies: 3 · Boosts: 0 · Views: 1.6k · Sep ’21

Allow 16-bit RGBA image formats as input/output of MLModels
Starting in iOS 16 and macOS Ventura, OneComponent16Half will be a new scalar image type for Core ML models. Ideally, we would also like to see 16-bit support for RGBA images. As of now, we have to go through an MLMultiArray with Float (Float16 with the update) as its type and copy the data into the desired image buffer. Direct support for 16-bit RGBA predictions in Image format would be ideal for applications requiring high-precision outputs, like models that are trained on EDR image data. This is also useful when integrating Core ML into Core Image pipelines, since CI’s internal image format is 16-bit RGBA by default. When passing that into a Neural Style Transfer model with an (8-bit) RGBA image input/output type, conversions are always necessary (as demonstrated in WWDC2022-10027). If we could modify the models to use 16-bit RGBA images instead, no conversion would be necessary anymore. Thanks for the consideration!
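To illustrate the conversion overhead mentioned above, here is a rough sketch of the round trip we currently need inside a Core Image pipeline. The feature names "image" and "stylizedImage" are placeholders for our model's actual input/output names, and an 8-bit BGRA image input/output on the model is assumed:

```swift
import CoreImage
import CoreML
import CoreVideo

func stylize(_ input: CIImage, with model: MLModel, context: CIContext) throws -> CIImage {
    // Core Image works in 16-bit RGBA internally, but the model only accepts
    // 8-bit images, so we first render into an intermediate 8-bit buffer...
    var buffer: CVPixelBuffer?
    CVPixelBufferCreate(kCFAllocatorDefault,
                        Int(input.extent.width), Int(input.extent.height),
                        kCVPixelFormatType_32BGRA, nil, &buffer)
    guard let inputBuffer = buffer else { throw NSError(domain: "Stylize", code: -1) }
    context.render(input, to: inputBuffer)

    // ...run the prediction on the 8-bit buffer...
    let features = try MLDictionaryFeatureProvider(
        dictionary: ["image": MLFeatureValue(pixelBuffer: inputBuffer)])
    let result = try model.prediction(from: features)

    // ...and wrap the 8-bit result again, losing the extra precision on the way.
    guard let outputBuffer = result.featureValue(for: "stylizedImage")?.imageBufferValue else {
        throw NSError(domain: "Stylize", code: -2)
    }
    return CIImage(cvPixelBuffer: outputBuffer)
}
```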
Replies: 3 · Boosts: 0 · Views: 1.4k · Jun ’22

Core ML model execution sometimes fails under load
I'm processing a 4K video with a complex Core Image pipeline that also invokes a neural style transfer Core ML model. This works very well, but sometimes, for very few frames, the model execution fails with the following error messages:

    Execution of the command buffer was aborted due to an error during execution. Internal Error (0000000e:Internal Error)
    Error: command buffer exited with error status.
    The Metal Performance Shaders operations encoded on it may not have completed.
    Error: (null) Internal Error (0000000e:Internal Error)
    <CaptureMTLCommandBuffer: 0x280b95d90> -> <AGXG15FamilyCommandBuffer: 0x108f143c0>
        label = <none>
        device = <AGXG15Device: 0x106034e00> name = Apple A16 GPU
        commandQueue = <AGXG15FamilyCommandQueue: 0x1206cee40>
            label = <none>
            device = <AGXG15Device: 0x106034e00> name = Apple A16 GPU
        retainedReferences = 1
    [espresso] [Espresso::handle_ex_plan] exception=Espresso exception: "Generic error": Internal Error (0000000e:Internal Error); code=1 status=-1
    [coreml] Error computing NN outputs -1
    [coreml] Failure in -executePlan:error:.

It's really hard to reproduce it since it only happens occasionally. I also didn't find a way to access that Internal Error mentioned, so I don't know the real reason why it fails. Any advice would be appreciated!
Replies: 1 · Boosts: 0 · Views: 1.9k · Oct ’22

CIColorCube sometimes producing no or broken output in macOS 13
With macOS 13, the CIColorCube and CIColorCubeWithColorSpace filters gained the extrapolate property for supporting EDR content. When we set this property, the outputImage of the filter sometimes (~1 in 3 tries) just returns nil. And sometimes it “just” causes artifacts to appear when rendering EDR content (see attached screenshots: input | correct output | broken output). The artifacts sometimes even appear when extrapolate was not set. This was reproduced on Intel-based and M1 Macs. All of the LUT-based filters in our apps are broken in this way, and we could not find a workaround for the issue so far. Does anyone experience the same?
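For reference, a minimal sketch of how we set up the filter (inputImage and lutData are placeholders; lutData contains the packed RGBA float cube values):

```swift
import CoreImage
import CoreImage.CIFilterBuiltins

func applyLUT(to inputImage: CIImage, lutData: Data) -> CIImage? {
    let filter = CIFilter.colorCubeWithColorSpace()
    filter.inputImage = inputImage
    filter.cubeDimension = 64
    filter.cubeData = lutData
    filter.colorSpace = CGColorSpace(name: CGColorSpace.extendedLinearSRGB)
    filter.extrapolate = true // new in macOS 13 for EDR support

    // Intermittently nil (~1 in 3 tries), and sometimes the render shows artifacts.
    return filter.outputImage
}
```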
Replies: 2 · Boosts: 0 · Views: 1.4k · Oct ’22

Issues with new MLE5Engine in Core ML
There seems to be a new MLE5Engine in iOS 17 and macOS 14 that causes issues with our style transfer models:

1. The output is wrong (just gray pixels) and not the same as on iOS 16.
2. There is a large memory leak. The memory consumption increases rapidly with each new frame.

Concerning 2): There are a lot of CVPixelBuffers leaking during prediction. Those buffers somehow have references to themselves and are not released properly. Here is a stack trace of how the buffers are created:

    0 _malloc_zone_malloc_instrumented_or_legacy
    1 _CFRuntimeCreateInstance
    2 CVObject::alloc(unsigned long, _CFAllocator const*, unsigned long, unsigned long)
    3 CVPixelBuffer::alloc(_CFAllocator const*)
    4 CVPixelBufferCreate
    5 +[MLMultiArray(ImageUtils) pixelBufferBGRA8FromMultiArrayCHW:channelOrderIsBGR:error:]
    6 MLE5OutputPixelBufferFeatureValueByCopyingTensor
    7 -[MLE5OutputPortBinder _makeFeatureValueFromPort:featureDescription:error:]
    8 -[MLE5OutputPortBinder _makeFeatureValueAndReturnError:]
    9 __36-[MLE5OutputPortBinder featureValue]_block_invoke
    10 _dispatch_client_callout
    11 _dispatch_lane_barrier_sync_invoke_and_complete
    12 -[MLE5OutputPortBinder featureValue]
    13 -[MLE5OutputPort featureValue]
    14 -[MLE5ExecutionStreamOperation outputFeatures]
    15 -[MLE5Engine _predictionFromFeatures:options:usingStream:operation:error:]
    16 -[MLE5Engine _predictionFromFeatures:options:error:]
    17 -[MLE5Engine predictionFromFeatures:options:error:]
    18 -[MLDelegateModel predictionFromFeatures:options:error:]
    19 StyleModel.prediction(input:options:)

When we manually disable the use of the MLE5Engine, the models run as expected. Is this an issue caused by our model, or is it a bug in Core ML?
Replies: 4 · Boosts: 0 · Views: 2.5k · Oct ’23

VNStatefulRequest in Core Image
The new VNGeneratePersonSegmentationRequest is a stateful request, i.e., it keeps state and improves the segmentation mask generation for subsequent frames. There is also the new CIPersonSegmentationFilter as a convenient way of using the API with Core Image. But since the Vision request is stateful, I was wondering how this is handled by the Core Image filter: Does the filter also keep state between subsequent calls? And how is VNStatefulRequest's requirement that "the request requires the use of CMSampleBuffers with timestamps as input" fulfilled?
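For context, these are the two variants I'm comparing (a sketch; sampleBuffer and frameImage are assumed to come from a video source):

```swift
import Vision
import CoreImage
import CoreImage.CIFilterBuiltins
import CoreMedia

// Variant 1: Vision directly. The request object is stateful, and the timestamped
// CMSampleBuffer satisfies the VNStatefulRequest requirement.
func segmentWithVision(sampleBuffer: CMSampleBuffer,
                       request: VNGeneratePersonSegmentationRequest) throws -> CVPixelBuffer? {
    let handler = VNImageRequestHandler(cmSampleBuffer: sampleBuffer, options: [:])
    try handler.perform([request])
    return request.results?.first?.pixelBuffer
}

// Variant 2: the Core Image convenience filter. Unclear: does it keep the Vision
// request (and thus its state) alive between calls, and where do the required
// timestamps come from?
func segmentWithCoreImage(frameImage: CIImage) -> CIImage? {
    let filter = CIFilter.personSegmentation()
    filter.inputImage = frameImage
    return filter.outputImage
}
```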
Replies: 0 · Boosts: 0 · Views: 890 · Jun ’21

PHPicker: add cloud indicator
Adding to my previous post, it would be great if the PHPicker displayed whether an asset is only available in the cloud and needs to be downloaded first. This would give the user a hint that loading might take longer and might cause network traffic. Right now, it's unclear to the user (and to us developers) whether an asset needs to be downloaded. A small cloud icon would help a lot, I think. (FB9221095) Thanks for considering!
Replies: 1 · Boosts: 0 · Views: 851 · Jun ’21

CIImageProcessorKernel output texture not allowed as render target on macOS
We are implementing a CIImageProcessorKernel that uses an MTLRenderCommandEncoder to perform some mesh-based rendering into the output’s metalTexture. This works on iOS but crashes on macOS, because there the usage of the output texture does not always include renderTarget: sometimes the output texture can be used as a render target, sometimes not. It seems there are both kinds of textures in CI’s internal texture cache, and which one we get depends on the order in which filters are executed. So far we have only observed this on macOS (on different Macs, even on M1 and the macOS 12 beta) but not on iOS (also not on an M1 iPad). We would expect to always be able to use the output texture as a render target so we can use it as a color attachment for the render pass. Is there a way to configure a CIImageProcessorKernel so that it always gets renderTarget output textures? Or do we really need to render into a temporary texture and blit the result into the output texture? That would be a huge waste of memory and time…
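For reference, a simplified sketch of our processor kernel where this happens (the actual mesh encoding is omitted):

```swift
import CoreImage
import Metal

class MeshRenderKernel: CIImageProcessorKernel {
    override class func process(with inputs: [CIImageProcessorInput]?,
                                arguments: [String: Any]?,
                                output: CIImageProcessorOutput) throws {
        guard let commandBuffer = output.metalCommandBuffer,
              let texture = output.metalTexture else { return }

        // On macOS, texture.usage sometimes does not contain .renderTarget,
        // so attaching it as a color attachment makes the render pass fail.
        let descriptor = MTLRenderPassDescriptor()
        descriptor.colorAttachments[0].texture = texture
        descriptor.colorAttachments[0].loadAction = .clear
        descriptor.colorAttachments[0].storeAction = .store

        guard let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: descriptor) else { return }
        // ... set the pipeline state and encode the mesh draw calls here ...
        encoder.endEncoding()
    }
}
```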
Replies: 1 · Boosts: 0 · Views: 746 · Aug ’21

External build configuration for framework target
We have a Filters framework that contains many image processing filters (written in Swift and Metal) and the resources they require (like ML models and static images). But not every app of ours uses all the filters in Filters. Rather, we want to build and bundle only the filters and resources that a given app actually needs. The only way we can think of to achieve this is to create different framework targets in Xcode, one for each app. But that would require the Filters framework project to “know” all of its consumers (apps), which we would rather avoid, especially since the filters live in a separate repository. Is there a way to, for instance, pass some kind of configuration file to the framework that is used at build time to decide which files to build and bundle?
Replies: 0 · Boosts: 0 · Views: 851 · Nov ’21

Support tiling in ML-based CIImageProcessorKernel
I would like to know if there are best practices for integrating Core ML models into a Core Image pipeline, especially when it comes to supporting tiling. We are using a CIImageProcessorKernel to integrate an MLModel-based filtering step into our filter chain. The wrapping CIFilter that actually calls the kernel handles scaling the input image to the size the model input requires. In the roi(forInput:arguments:outputRect:) method, the kernel signals that it always requires the full extent of the input image in order to produce an output (since MLModels don't support tiling). In the process(with:arguments:output:) method, the kernel performs the prediction of the model on the input pixel buffer and then copies the result into the output buffer. This works well until the filter chain gets more and more complex and the input images become larger. At that point, Core Image wants to perform tiling to stay within the memory limits. It can't tile the input image of the kernel since we defined the ROI to be the whole image. However, it still calls the process(…) method multiple times, each time demanding a different tile/region of the output to be rendered. But since the model can't produce only a part of its output, we effectively have to process the whole input image again for each output tile. We already tried caching the result of the model run between consecutive calls to process(…). However, we are unable to tell whether the next call still belongs to the same rendering pass, just for a different tile, or whether it is a different rendering entirely, potentially with a different input image. If we had access to the digest that Core Image computes for an image during processing, we could detect whether the input changed between calls to process(…), but this is not part of CIImageProcessorInput. What is the best practice here to avoid needless re-evaluation of the model? How does Apple handle this in their ML-based filters like CIPersonSegmentation?
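For reference, a simplified sketch of our kernel setup (the "inputExtent" argument key is our own; model handling and pixel copying are omitted):

```swift
import CoreImage
import CoreML

class MLProcessorKernel: CIImageProcessorKernel {
    override class func roi(forInput input: Int32,
                            arguments: [String: Any]?,
                            outputRect: CGRect) -> CGRect {
        // The model can only run on the complete image, so we always request the
        // full input extent (passed in via arguments). This prevents input tiling.
        return (arguments?["inputExtent"] as? CGRect) ?? outputRect
    }

    override class func process(with inputs: [CIImageProcessorInput]?,
                                arguments: [String: Any]?,
                                output: CIImageProcessorOutput) throws {
        // Called once per requested output tile (output.region), even though the
        // ROI above forces the full input. Without a way to recognize that the
        // input is unchanged between calls, the expensive prediction runs each time.
        // ... run the MLModel prediction on inputs?.first?.pixelBuffer and copy
        //     the region requested by output.region into output.pixelBuffer ...
    }
}
```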
Replies: 1 · Boosts: 0 · Views: 1.1k · Mar ’22

Hardware camera access from inside a Camera Extension
While trying to re-create the CIFilterCam demo shown in the WWDC session, I hit a roadblock when trying to access a hardware camera from inside my extension. Can I simply use an AVCaptureSession + AVCaptureDeviceInput + AVCaptureVideoDataOutput to get frames from an actual hardware camera and pass them to the extension's stream? If yes, when should I ask for camera access permissions? It seems the extension code is run as soon as I install the extension, but I never get prompted for access permission. Do I need to set up the capture session lazily? What's the best practice for this use case?
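For reference, this is the kind of capture setup I have in mind (a sketch, untested inside the extension; frameHandler is a placeholder that would forward frames into the extension's stream):

```swift
import AVFoundation

final class CameraSource: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let session = AVCaptureSession()
    private let queue = DispatchQueue(label: "camera.source")
    var frameHandler: ((CMSampleBuffer) -> Void)?

    func start() throws {
        guard let device = AVCaptureDevice.default(for: .video) else { return }
        let input = try AVCaptureDeviceInput(device: device)
        session.addInput(input)

        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: queue)
        session.addOutput(output)
        session.startRunning()
    }

    // Forward each captured frame to the extension's stream.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        frameHandler?(sampleBuffer)
    }
}
```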
Replies: 1 · Boosts: 0 · Views: 1.5k · Jul ’22
