Family Sharing for IAP and subscriptions
One of the announcements from WWDC 2020 was that Family Sharing will be available for IAP and subscriptions: "And in addition to shared family app purchases, the App Store now supports Family Sharing for subscriptions and in-app purchases. This is great for developers who offer content for the whole family to enjoy." How can we enable this, and is it only available in iOS/iPadOS 14?
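For reference, a minimal sketch of how the flag can be checked on the StoreKit side (it assumes SKProduct.isFamilyShareable from iOS 14 is the relevant API; the opt-in itself presumably happens per product in App Store Connect, not in code):

import StoreKit

// Hedged sketch: check whether a loaded product has Family Sharing enabled.
// Assumes the SKProduct.isFamilyShareable property introduced in iOS 14.
final class ProductChecker: NSObject, SKProductsRequestDelegate {
    func check(productID: String) {
        let request = SKProductsRequest(productIdentifiers: [productID])
        request.delegate = self
        request.start()
    }

    func productsRequest(_ request: SKProductsRequest, didReceive response: SKProductsResponse) {
        for product in response.products {
            if #available(iOS 14.0, *) {
                print("\(product.productIdentifier) family shareable: \(product.isFamilyShareable)")
            }
        }
    }
}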
Replies: 11 · Boosts: 0 · Views: 4.7k · Activity: Mar ’21
GenerateAssetSymbols wrongly renaming image assets
Xcode generates symbols for image and color assets now (which is super nice!). We noticed an issue when generating code symbols for our image assets. We have an image named inputContour and one named inputContourColor. Xcode was not able to generate a symbol for inputContourColor and instead produced the following warning: #warning("The \"inputContourColor\" image asset name resolves to the symbol \"inputContour\" which already exists. Try renaming the asset.") It turns out that the generator automatically removes the color suffix when creating the code symbol. For color assets, this probably makes sense (so tealColor would become just teal), but this should not be applied to image resources.
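To illustrate the collision, a hedged sketch of how the generated symbols are used (the .inputContour accessor name is taken from the warning above; UIImage(resource:) assumes the iOS 17 resource symbol generation):

import UIKit

// Sketch: both "inputContour" and "inputContourColor" resolve to the same generated
// symbol name, so only one of them can get an accessor.
let contour = UIImage(resource: .inputContour)        // from the "inputContour" asset
// let contourColor = UIImage(resource: .inputContour) // "inputContourColor" would collide here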
Replies: 10 · Boosts: 1 · Views: 5.6k · Activity: Dec ’23
PHPicker fails to load RAW images
We observed that the PHPicker is unable to load RAW images captured on an iPhone in some scenarios, and it also seems to be related to iCloud. Here is the setup: The PHPickerViewController is configured with preferredAssetRepresentationMode = .current to avoid transcoding. The image is loaded from the item provider like this:

if itemProvider.hasItemConformingToTypeIdentifier(kUTTypeImage as String) {
    itemProvider.loadFileRepresentation(forTypeIdentifier: kUTTypeImage as String) { url, error in
        // work
    }
}

This usually works, also for RAW images. However, when trying to load a RAW image that has just been captured with the iPhone, loading fails with the following errors on the console:

[claims] 43A5D3B2-84CD-488D-B9E4-19F9ED5F39EB grantAccessClaim reply is an error: Error Domain=NSCocoaErrorDomain Code=4097 "Couldn’t communicate with a helper application." UserInfo={NSUnderlyingError=0x2804a8e70 {Error Domain=NSCocoaErrorDomain Code=4097 "connection from pid 19420 on anonymousListener or serviceListener" UserInfo={NSDebugDescription=connection from pid 19420 on anonymousListener or serviceListener}}}

Error copying file type public.image. Error: Error Domain=NSItemProviderErrorDomain Code=-1000 "Cannot load representation of type public.image" UserInfo={NSLocalizedDescription=Cannot load representation of type public.image, NSUnderlyingError=0x280480540 {Error Domain=NSCocoaErrorDomain Code=4097 "Couldn’t communicate with a helper application." UserInfo={NSUnderlyingError=0x2804a8e70 {Error Domain=NSCocoaErrorDomain Code=4097 "connection from pid 19420 on anonymousListener or serviceListener" UserInfo={NSDebugDescription=connection from pid 19420 on anonymousListener or serviceListener}}}}}

We observed that on some devices, loading the image will actually work after a short time (~30 sec), but on others it will always fail. We think it is related to iCloud Photos: On the device that has iCloud Photos sync enabled, the picker is able to load the image right after it was synced to the cloud. On devices that don't sync the image, loading always fails. It seems that the sync process does some processing (?) of the image that later enables the picker to load it successfully, but that's just guessing.

Additional observations:
- This only seems to occur for images taken with the stock Camera app. When using Halide to capture RAW (either ProRAW or RAW), the picker is able to load the image.
- Loading the image as kUTTypeRawImage instead of kUTTypeImage also fails.
- The picker also can't load RAW images that were AirDropped from another device, unless they synced to iCloud first.
- This is reproducible using the "Selecting Photos and Videos in iOS" sample code project.
- We observed this happening in other apps that use the PHPicker, not just ours.

Is this a bug, or is there something that we are missing?
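For completeness, here is a self-contained sketch of the loading code in context (using UTType.image.identifier, the modern equivalent of the deprecated kUTTypeImage; the surrounding function names are ours):

import PhotosUI
import UniformTypeIdentifiers

// Sketch of how we configure the picker and load the file representation.
func makePicker() -> PHPickerViewController {
    var configuration = PHPickerConfiguration()
    configuration.filter = .images
    configuration.preferredAssetRepresentationMode = .current // avoid transcoding
    return PHPickerViewController(configuration: configuration)
}

func load(result: PHPickerResult) {
    let provider = result.itemProvider
    guard provider.hasItemConformingToTypeIdentifier(UTType.image.identifier) else { return }
    provider.loadFileRepresentation(forTypeIdentifier: UTType.image.identifier) { url, error in
        if let error {
            print("Loading failed: \(error)") // this is where the freshly captured RAW images fail for us
        } else if let url {
            // The file at `url` is only valid for the duration of this closure.
            print("Loaded file at \(url)")
        }
    }
}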
Replies: 7 · Boosts: 1 · Views: 2.7k · Activity: Sep ’25
Xcode Cloud + dynamic library package
I'm trying to archive my app via Xcode Cloud and auto-deploy to TestFlight, but it fails during "Prepare Build for App Store Connect" with the following error message:

ITMS-90334: Invalid Code Signature Identifier. The identifier "CoreImageExtensions-dynamic-555549444e09e22796a23eadb2704bf219d5c1fa" in your code signature for "CoreImageExtensions-dynamic" must match its Bundle Identifier "CoreImageExtensions-dynamic"

CoreImageExtensions-dynamic is a dynamic library target of a package that we are using. It seems that at some point a UUID is added to the library's identifier, which messes with code signing. When archiving and uploading the app directly in Xcode, everything works just fine. Any idea why this is happening and how I could fix it?
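For context, this is roughly how such a dynamic library product is declared in a package manifest (a sketch; the actual Package.swift of CoreImageExtensions may look different):

// swift-tools-version:5.7
import PackageDescription

// Sketch of a package exposing both an automatic and a dynamic library product
// from the same target; the "-dynamic" product is the one Xcode Cloud appears
// to re-sign with the UUID-suffixed identifier.
let package = Package(
    name: "CoreImageExtensions",
    products: [
        .library(name: "CoreImageExtensions", targets: ["CoreImageExtensions"]),
        .library(name: "CoreImageExtensions-dynamic", type: .dynamic, targets: ["CoreImageExtensions"]),
    ],
    targets: [
        .target(name: "CoreImageExtensions"),
    ]
)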
Replies: 5 · Boosts: 0 · Views: 3k · Activity: Oct ’22
Issues with new MLE5Engine in Core ML
There seems to be a new MLE5Engine in iOS 17 and macOS 14 that causes issues with our style transfer models:

1. The output is wrong (just gray pixels) and not the same as on iOS 16.
2. There is a large memory leak. The memory consumption is increasing rapidly with each new frame.

Concerning 2): There are a lot of CVPixelBuffers leaking during prediction. Those buffers somehow have references to themselves and are not released properly. Here is a stack trace of how the buffers are created:

0 _malloc_zone_malloc_instrumented_or_legacy
1 _CFRuntimeCreateInstance
2 CVObject::alloc(unsigned long, _CFAllocator const*, unsigned long, unsigned long)
3 CVPixelBuffer::alloc(_CFAllocator const*)
4 CVPixelBufferCreate
5 +[MLMultiArray(ImageUtils) pixelBufferBGRA8FromMultiArrayCHW:channelOrderIsBGR:error:]
6 MLE5OutputPixelBufferFeatureValueByCopyingTensor
7 -[MLE5OutputPortBinder _makeFeatureValueFromPort:featureDescription:error:]
8 -[MLE5OutputPortBinder _makeFeatureValueAndReturnError:]
9 __36-[MLE5OutputPortBinder featureValue]_block_invoke
10 _dispatch_client_callout
11 _dispatch_lane_barrier_sync_invoke_and_complete
12 -[MLE5OutputPortBinder featureValue]
13 -[MLE5OutputPort featureValue]
14 -[MLE5ExecutionStreamOperation outputFeatures]
15 -[MLE5Engine _predictionFromFeatures:options:usingStream:operation:error:]
16 -[MLE5Engine _predictionFromFeatures:options:error:]
17 -[MLE5Engine predictionFromFeatures:options:error:]
18 -[MLDelegateModel predictionFromFeatures:options:error:]
19 StyleModel.prediction(input:options:)

When manually disabling the use of the MLE5Engine, the models run as expected. Is this an issue caused by our model, or is it a bug in Core ML?
Replies: 4 · Boosts: 0 · Views: 2.5k · Activity: Oct ’23
Image Playground not available for "Designed for iPad" apps?
I'm currently trying to add support for Image Playground to our apps. It seems that it's not working in an app that is "Designed for iPad" and runs on a Mac. The modal just shows a spinner, and the following is logged to the console:

Private sandbox for com.apple.GenerativePlaygroundApp.remoteUIExtension : <none>
Private sandbox for com.apple.GenerativePlaygroundApp.remoteUIExtension : <none>
Private sandbox for com.apple.GenerativePlaygroundApp.remoteUIExtension : <none>
Private sandbox for com.apple.GenerativePlaygroundApp.remoteUIExtension : <none>
GP extension could not be loaded: Extension (platform: 2) could not be found (in update)
dealloc Query controller [C32BA176-6A3E-465D-B3C5-0F8D91068B89]

ImagePlaygroundViewController.isAvailable returns true, however. In a "real" Mac Catalyst app, it's working. Just not when the app is actually an iPad app. Is this a bug?
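This is roughly how we check availability and present the controller (a sketch; it assumes the ImagePlaygroundViewController API surface from iOS 18.1 and simplifies the delegate handling):

import ImagePlayground
import UIKit

// Sketch of our presentation path; error handling omitted.
@available(iOS 18.1, *)
final class PlaygroundPresenter: NSObject, ImagePlaygroundViewController.Delegate {
    func present(from presenter: UIViewController) {
        guard ImagePlaygroundViewController.isAvailable else { return } // returns true in our case
        let controller = ImagePlaygroundViewController()
        controller.delegate = self
        presenter.present(controller, animated: true) // only shows a spinner when "Designed for iPad" runs on a Mac
    }

    func imagePlaygroundViewController(_ imagePlaygroundViewController: ImagePlaygroundViewController, didCreateImageAt imageURL: URL) {
        imagePlaygroundViewController.dismiss(animated: true)
    }

    func imagePlaygroundViewControllerDidCancel(_ imagePlaygroundViewController: ImagePlaygroundViewController) {
        imagePlaygroundViewController.dismiss(animated: true)
    }
}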
Replies: 4 · Boosts: 2 · Views: 1.5k · Activity: Jan ’25
Compiling CI kernels at runtime
In the "Explore Core Image kernel improvements" session, David mentioned that it is now possible to compile [[stitchable]] CI kernels at runtime. However, I fail to get it working. The kernel requires the #import of <CoreImage/CoreImage.h> and linking against the CoreImage Metal library. But I don't know how to link against the library when compiling my kernel at runtime. Also, according to the Metal Best Practices Guide, "the #include directive is not supported at runtime for user files." Any guidance on how the runtime compilation works is much appreciated! 🙂
Replies: 3 · Boosts: 0 · Views: 1.6k · Activity: Sep ’21
Allow 16-bit RGBA image formats as input/output of MLModels
Starting in iOS 16 and macOS Ventura, OneComponent16Half will be a new scalar type for images. Ideally, we would also like to use the 16-bit support for RGBA images. As of now, we need to take an indirection through an MLMultiArray with Float (Float16 with the update) as its type and copy the data into the desired image buffer. Direct support for 16-bit RGBA predictions in image format would be ideal for applications requiring high-precision outputs, like models that are trained on EDR image data. This is also useful when integrating Core ML into Core Image pipelines, since CI's internal image format is 16-bit RGBA by default. When passing that into a Neural Style Transfer model with an (8-bit) RGBA image input/output type, conversions are always necessary (as demonstrated in WWDC2022-10027). If we could modify the models to use 16-bit RGBA images instead, no conversion would be necessary anymore. Thanks for the consideration!
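To illustrate the conversion we currently have to do, here is a sketch of the 8-bit round trip in a Core Image pipeline (the function and parameter names are placeholders, not part of any API):

import CoreImage
import CoreVideo

// Sketch of the lossy round trip: CI works in 16-bit RGBA internally, but the model
// only accepts 8-bit BGRA images, so we render down to 8 bit before prediction.
// `modelInputSize` is a placeholder for the model's expected input dimensions.
func makeModelInput(from image: CIImage, context: CIContext, modelInputSize: CGSize) -> CVPixelBuffer? {
    var pixelBuffer: CVPixelBuffer?
    let attributes: [CFString: Any] = [kCVPixelBufferIOSurfacePropertiesKey: [:]]
    CVPixelBufferCreate(kCFAllocatorDefault,
                        Int(modelInputSize.width),
                        Int(modelInputSize.height),
                        kCVPixelFormatType_32BGRA, // 8 bit per channel; EDR values get clipped here
                        attributes as CFDictionary,
                        &pixelBuffer)
    guard let buffer = pixelBuffer else { return nil }
    context.render(image, to: buffer)
    return buffer
}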
Replies: 3 · Boosts: 0 · Views: 1.4k · Activity: Jun ’22
Unable to change Photos permission of iPad app on Mac
Users can run our apps on Macs with Apple Silicon via the "iPad Apps on Mac" feature. The apps use PHPhotoLibrary.requestAuthorization(for: .addOnly, handler: callback) to request write-only access to the user's Photo Library during image export. This works as intended on macOS, but a huge problem arises when the user denies access (by accident or intentionally) and later decides that they want us to add their image to Photos: There is no way to grant this permission again. In System Preferences → Privacy & Security → Photos, the app is just not listed – in fact, none of the "iPad Apps on Mac" apps appear here. Not even tccutil reset all my.bundle.id works; it just reports tccutil: Failed to reset all approval status for my.bundle.id. Uninstalling, restarting the Mac, and reinstalling the app also doesn't work. The system seems to remember the initial decision. Is this an oversight in the integration of those apps with macOS, or are we missing something fundamental here? Is there maybe a way to prompt the user again?
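For reference, a sketch of the request and the status check around it (the Settings deep link is what we would like to offer; whether UIApplication.openSettingsURLString helps when running on the Mac is exactly the open question):

import Photos
import UIKit

// Sketch of our export-time authorization flow.
func ensureAddOnlyAccess(onAuthorized: @escaping () -> Void) {
    switch PHPhotoLibrary.authorizationStatus(for: .addOnly) {
    case .authorized, .limited:
        onAuthorized()
    case .notDetermined:
        PHPhotoLibrary.requestAuthorization(for: .addOnly) { status in
            if status == .authorized || status == .limited {
                DispatchQueue.main.async(execute: onAuthorized)
            }
        }
    default:
        // Denied or restricted: all we can do is point users to the system settings,
        // which on the Mac doesn't list "iPad Apps on Mac" apps under Photos at all.
        if let url = URL(string: UIApplication.openSettingsURLString) {
            UIApplication.shared.open(url)
        }
    }
}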
Replies: 3 · Boosts: 3 · Views: 1.5k · Activity: Sep ’23
SwiftUI: popoverTip prevents other modal views from appearing
I want to show a tip on a button that will open a modal sheet on tap. The state of whether the sheet should be presented is held in an ObservableObject view model:

@MainActor class ViewModel: ObservableObject {
    @Published var showSheet = false
}

struct ContentView: View {
    var tip = PopoverTip()
    @ObservedObject var viewModel = ViewModel()

    var body: some View {
        Button(action: {
            viewModel.showSheet.toggle()
        }, label: {
            Text("Button")
        })
        .popoverTip(tip)
        .sheet(isPresented: $viewModel.showSheet) {
            Text("Sheet")
        }
    }
}

Here is the issue: When the tip is dismissed by tapping outside of it instead of tapping the close button, the tip will always reappear when tapping the button instead of showing the sheet. So effectively there is no way of triggering the actual button action; the tip will always pop up again and prevent the sheet from appearing.

This is only an issue when using an ObservableObject to track the sheet state. When using a @State var showSheet: Bool inside the view itself instead, the sheet is shown as expected when tapping the button.

It seems to be an issue of timing: Attempting to show the sheet somehow causes the view to be re-evaluated, which causes the tip to reappear (since it wasn't dismissed via the close action). And since the tip is presented using modal presentation, the sheet can't be presented anymore. Is this a bug, or is there a simple way to avoid this issue?
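For comparison, this is the @State-based variant that behaves correctly for us (a sketch; the PopoverTip definition here is a stand-in for our actual Tip type):

import SwiftUI
import TipKit

// Sketch of the working variant: tracking the sheet state with @State inside the view.
struct PopoverTip: Tip {
    var title: Text { Text("Tap to open the sheet") }
}

struct WorkingContentView: View {
    var tip = PopoverTip()
    @State private var showSheet = false

    var body: some View {
        Button("Button") {
            showSheet.toggle() // the sheet appears as expected here
        }
        .popoverTip(tip)
        .sheet(isPresented: $showSheet) {
            Text("Sheet")
        }
    }
}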
Replies: 3 · Boosts: 3 · Views: 2.1k · Activity: Oct ’23
Sharing a JPEG via Action or Share Extension fails in Photos on macOS
We have a Share Extension that fails in Photos on macOS when trying to share a JPEG image, for the following reason: From the NSItemProvider that we get from NSExtensionItem.attachments, we try to load the image using loadFileRepresentation(forTypeIdentifier: "public.image", completionHandler: …). This fails for .jpeg images in the library. There seems to be a mismatch between the expected and actual file extension internally. Here is the log:

Error copying file type public.image. Error: Error Domain=NSItemProviderErrorDomain Code=-1000 "Cannot load representation of type public.jpeg" UserInfo={NSLocalizedDescription=Cannot load representation of type public.jpeg, NSUnderlyingError=0x1527c1a80 {Error Domain=NSItemProviderErrorDomain Code=-1 "Cannot copy file at URL file:///Users/frank/Library/Containers/com.apple.Photos/Data/tmp/TemporaryItems/ShareKit-Exports/7CCFA760-AAC9-42B0-812D-68F051ED1543/F912E593-2BE5-4E70-86AB-7657A40657E5/IMG_3517.jpg." UserInfo={NSLocalizedDescription=Cannot copy file at URL file:///Users/frank/Library/Containers/com.apple.Photos/Data/tmp/TemporaryItems/ShareKit-Exports/7CCFA760-AAC9-42B0-812D-68F051ED1543/F912E593-2BE5-4E70-86AB-7657A40657E5/IMG_3517.jpg., NSUnderlyingError=0x152789670 {Error Domain=NSItemProviderErrorDomain Code=-1 "Cannot create a temporary file. Error: Undefined error: 0" UserInfo={NSLocalizedDescription=Cannot create a temporary file. Error: Undefined error: 0}}}}}

In the specified folder, there is an image; however, it's named IMG_3517.jpeg, not IMG_3517.jpg. This seems to be a bug in Photos' item provider implementation. If we use loadObject(ofClass: URL.self, completionHandler: …) instead, we get the correct .jpeg URL in the completion handler.
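A UIKit-flavored sketch of the workaround in the extension's principal view controller (the extension context and attachment handling are the usual Share Extension plumbing, simplified here):

import UIKit

// Sketch of the workaround: load the attachment as a URL instead of a file representation.
class ShareViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        guard let item = extensionContext?.inputItems.first as? NSExtensionItem,
              let provider = item.attachments?.first else { return }

        if provider.canLoadObject(ofClass: URL.self) {
            _ = provider.loadObject(ofClass: URL.self) { url, error in
                // Here we get the correct ...IMG_3517.jpeg URL, while
                // loadFileRepresentation(forTypeIdentifier: "public.image", ...) fails.
                print(url ?? error as Any)
            }
        }
    }
}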
Replies: 3 · Boosts: 1 · Views: 1.3k · Activity: Nov ’25
Allow PHPicker access to original/unadjusted asset
The newish PHPicker is a great way to access the users’ photo library without requiring full access permissions. However, there is currently no way to access the original or unadjusted version of an asset. The preferredAssetRepresentationMode of the PHPickerConfiguration only allows the options automatic, compatible, and current, where current still returns the asset with previous adjustments applied; the option only seems to affect potential asset transcoding. In contrast, when fetching PHAsset data, one can specify the PHImageRequestOptionsVersion unadjusted or original, which give access to the underlying untouched image. It would be great to have these options in the PHPicker interface as well. The alternative would be to load the image through PHAsset via the identifier returned by the picker, but that would require full library access, which I want to avoid. Or is there another way to access the original image without these permissions?
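For reference, the PHAsset-based alternative mentioned above looks roughly like this (a sketch that requires full photo library access, which is exactly what we want to avoid):

import PhotosUI
import Photos

// Sketch of the PHAsset detour for getting the original, unadjusted image data.
func loadOriginalImageData(for result: PHPickerResult, completion: @escaping (Data?) -> Void) {
    guard let identifier = result.assetIdentifier, // requires PHPickerConfiguration(photoLibrary:)
          let asset = PHAsset.fetchAssets(withLocalIdentifiers: [identifier], options: nil).firstObject
    else {
        completion(nil)
        return
    }

    let options = PHImageRequestOptions()
    options.version = .original // or .unadjusted; this is what PHPicker doesn't offer
    options.isNetworkAccessAllowed = true

    PHImageManager.default().requestImageDataAndOrientation(for: asset, options: options) { data, _, _, _ in
        completion(data)
    }
}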
Replies: 2 · Boosts: 0 · Views: 977 · Activity: Oct ’21
EDR support for images
More and more iOS devices can capture content with high/extended dynamic range (HDR/EDR) now, and even more devices have screens that can display that content properly. Apple also gave us developers the means to correctly display and process this EDR content in our apps on macOS and now also on iOS 16. There are a lot of EDR-related sessions from WWDC 2021 and 2022. However, most of them focus on HDR video but not images, even though Camera captures HDR images by default on many devices. Interestingly, those HDR images seem to use a proprietary format that relies on EXIF metadata and an embedded HDR gain map image for displaying the HDR effect in Photos.

Some observations:
- Only Photos will display those metadata-driven HDR images in their proper brightness range. Files, for instance, does not.
- Photos will not display other HDR formats like OpenEXR or HEIC with BT.2100-PQ color space in their proper brightness.
- When using the PHPicker, it will even automatically tone-map the EDR values of OpenEXR images to SDR. The only way to load those images is to request the original image via PHAsset, which requires photo library access.

And here comes my main point: There is no API that enables us developers to load iPhone HDR images (with metadata and gain map) such that image + metadata are decoded into EDR pixel values. That means we cannot display and edit those images in our app the same way Photos does. There are ways to extract and embed the HDR gain maps from/into images using Image I/O APIs, but we don't know the algorithm used to blend the gain map with the image's SDR pixel values to get the EDR result. It would be very helpful to know how decoding and encoding from SDR + gain map to HDR and back works.

Alternatively (or in addition), it would be great if common image loading APIs like Image I/O and Core Image would provide ways to load those images into an EDR image representation (16-bit float linear sRGB with extended values, for example) and to write EDR images into SDR + gain map images so that they are correctly displayed in Photos.

Thanks for your consideration! We really want to support HDR content in our image editing apps, but without the proper APIs, we can only guess how image HDR works on iOS.
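For what it's worth, this is how we currently get at the embedded gain map with Image I/O (a sketch; extracting the data works, turning it into EDR pixel values is the open question):

import ImageIO

// Sketch: reading the auxiliary HDR gain map data from an iPhone HDR photo.
func readGainMapInfo(from url: URL) -> [AnyHashable: Any]? {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil) else { return nil }
    let info = CGImageSourceCopyAuxiliaryDataInfoAtIndex(source,
                                                         0,
                                                         kCGImageAuxiliaryDataTypeHDRGainMap)
    // The dictionary contains the gain map bitmap (kCGImageAuxiliaryDataInfoData),
    // its description, and metadata, but nothing about the blending algorithm.
    return info as? [AnyHashable: Any]
}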
Replies: 2 · Boosts: 8 · Views: 2.3k · Activity: Sep ’22
CIColorCube sometimes producing no or broken output in macOS 13
With macOS 13, the CIColorCube and CIColorCubeWithColorSpace filters gained the extrapolate property for supporting EDR content. When setting this property, we observe that the outputImage of the filter sometimes (~1 in 3 tries) just returns nil. And sometimes it "just" causes artifacts to appear when rendering EDR content (see screenshot below). The artifacts even appear sometimes when extrapolate was not set.

(Screenshots: input | correct output | broken output)

This was reproduced on Intel-based and M1 Macs. All of the LUT-based filters in our apps are broken in this way, and we could not find a workaround for the issue so far. Does anyone experience the same?
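This is essentially how we set up the filter (a sketch; it assumes the extrapolate property as exposed through the CIFilterBuiltins protocol, and cubeData/dimension stand in for our LUT files):

import CoreImage
import CoreImage.CIFilterBuiltins

// Sketch of our LUT filter setup; the extrapolate flag (new in macOS 13)
// is the part that triggers the issue for us.
func applyLUT(to image: CIImage, cubeData: Data, dimension: Float) -> CIImage? {
    let filter = CIFilter.colorCubeWithColorSpace()
    filter.inputImage = image
    filter.cubeData = cubeData
    filter.cubeDimension = dimension
    filter.colorSpace = CGColorSpace(name: CGColorSpace.extendedLinearSRGB)
    filter.extrapolate = true // sometimes yields a nil outputImage or broken EDR rendering
    return filter.outputImage
}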
Replies: 2 · Boosts: 0 · Views: 1.4k · Activity: Oct ’22
Large memory consumption when running Core ML model on A13 GPU
We recently had to change our MLModel's architecture to include custom layers, which means the model can't run on the Neural Engine anymore. After the change, we observed a lot of crashes being reported on A13 devices. It turns out that the memory consumption when running the prediction with the new model on the GPU is much higher than before, when it was running on the Neural Engine. Before, the peak memory load was ~350 MB; now it spikes to over 2 GB, leading to a crash most of the time. This only seems to happen on the A13. When forcing the model to only run on the CPU, the memory consumption is still high, but the same as running the old model on the CPU (~750 MB peak). All tested on iOS 16.1.2. We profiled the process in Instruments and found that there are a lot of memory buffers allocated by Core ML that are not freed after the prediction. The allocation stack trace for those buffers is the following: We ran the same model on a different device and found the same buffers in Instruments, but there they are only 4 KB in size. It seems Core ML is somehow massively over-allocating memory when run on the A13 GPU. So far we limit the model to run on the CPU only for those devices, but this is far from ideal. Is there any other model setting or workaround that we can use to avoid this issue?
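This is how we currently restrict the model to the CPU on affected devices (a sketch; StyleModel and the device check are placeholders for our own model class and detection logic):

import CoreML

// Sketch of the current workaround: force CPU-only prediction on affected (A13) devices.
func loadModel(isAffectedDevice: Bool) throws -> StyleModel {
    let configuration = MLModelConfiguration()
    configuration.computeUnits = isAffectedDevice ? .cpuOnly : .all
    return try StyleModel(configuration: configuration)
}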
Replies: 2 · Boosts: 0 · Views: 1.5k · Activity: Dec ’22