Reply to CIImage.clampedToExtent() doesn't fill some edges
I guess it has to do with the affine transform you apply to the image before you call clampedToExtent(): As you can see, the output of the transform has a non-integer extent. This probably creates a row of transparent pixels at the top of the image, which is then repeated when applying clampedToExtent(). To avoid this, you can calculate the scale factors for your transform separately for x and y to ensure that the resulting pixel size is integral. Alternatively, you can apply the clamping before you apply the affine transform.
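The per-axis scale calculation could look something like this (just a sketch; the function name and targetSize are assumptions, not from the original thread):

```swift
import CoreImage

/// Hypothetical helper: scales `image` toward `targetSize`, rounding the
/// target to whole pixels first and deriving separate x/y factors so the
/// resulting extent has integer dimensions (and no transparent edge row).
func scaledToIntegerExtent(_ image: CIImage, targetSize: CGSize) -> CIImage {
    let extent = image.extent
    // Round the target to whole pixels, then compute independent factors.
    let scaleX = targetSize.width.rounded() / extent.width
    let scaleY = targetSize.height.rounded() / extent.height
    return image.transformed(by: CGAffineTransform(scaleX: scaleX, y: scaleY))
}
```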
Topic: Programming Languages SubTopic: Swift
Dec ’23
Reply to Reset specific tip
I think there is no official API to reset a single tip. As a workaround, you could change the id of the tip. This causes TipKit to treat it as a new tip and show it again.
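A minimal sketch of the id-changing trick, assuming a hypothetical tip and a UserDefaults key of your choosing (the names here are placeholders):

```swift
import TipKit

/// Hypothetical tip whose id incorporates a stored "version" counter.
/// Bumping the counter changes the id, so TipKit treats it as a new tip.
struct FavoriteTip: Tip {
    var id: String {
        let version = UserDefaults.standard.integer(forKey: "favoriteTipVersion")
        return "FavoriteTip-\(version)"
    }
    var title: Text { Text("Mark as Favorite") }
}

/// "Resets" the tip by incrementing the stored version.
func resetFavoriteTip() {
    let defaults = UserDefaults.standard
    let next = defaults.integer(forKey: "favoriteTipVersion") + 1
    defaults.set(next, forKey: "favoriteTipVersion")
}
```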
Topic: App & System Services SubTopic: General
Dec ’23
Reply to Sharing a JPEG via Action or Share Extension fails in Photos on macOS
Filed as FB13531865. Thanks!
Topic: App & System Services SubTopic: General
Jan ’24
Reply to Use CoreImage filters on Vision Pro (visionOS) view
So far, there is no API in visionOS that gives developers access to the live video feed. This is by design, and most likely to protect the user's privacy: While on an iPhone you explicitly consent to sharing your surroundings with an app by pointing your camera at things, you can't really avoid that on the Apple Vision Pro.
Topic: Spatial Computing SubTopic: ARKit
Jan ’24
Reply to CAMetalLayer renders HDR images with a color shift
Did you set wantsExtendedDynamicRangeContent on the CAMetalLayer? What happens when you set the layer's colorSpace to some HDR color space? Also, make sure that you set the CIRenderDestination's colorSpace to the same space as the layer's.
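As a rough sketch of what I mean (the function name, pixel format, and color space choice are assumptions for illustration, not a definitive setup):

```swift
import CoreImage
import Metal
import QuartzCore

/// Hypothetical setup: configure a CAMetalLayer for EDR content and return
/// the color space so the render destination can use the very same one.
func configureForHDR(_ metalLayer: CAMetalLayer) -> CGColorSpace? {
    metalLayer.wantsExtendedDynamicRangeContent = true
    // A float pixel format is needed for values outside 0...1.
    metalLayer.pixelFormat = .rgba16Float
    let hdrSpace = CGColorSpace(name: CGColorSpace.extendedLinearDisplayP3)
    metalLayer.colorSpace = hdrSpace
    return hdrSpace
}

// When rendering, assign the same space to the destination, e.g.:
// let destination = CIRenderDestination(mtlTexture: drawable.texture,
//                                       commandBuffer: commandBuffer)
// destination.colorSpace = configureForHDR(metalLayer)
```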
Topic: UI Frameworks SubTopic: AppKit
Feb ’24
Reply to Lossy option has no effect when exporting PNG to HEIF
I can confirm the issue (tested the Core Image API). Interestingly, the heif10Representation(...) API still works as expected.
Topic: Media Technologies SubTopic: General
Mar ’24
Reply to CIFormat static var
The static properties on CIFormat are now declared as lets in iOS 18 / macOS 15. 👍
Jun ’24
Reply to Apple Intellegence not working
It's not yet available: Apple Intelligence will be available in an upcoming beta. https://developer.apple.com/apple-intelligence/
Jun ’24
Reply to "Sensitive language" errors
Any updates on this issue? I unfortunately still can't ask a question about App Intents and Apple Intelligence. 😕
Jun ’24
Reply to IOSurface vs. IOSurfaceRef on Catalyst
It turns out that the issue only occurs when a CIContext is also initialized in the same file (it doesn't matter where in the file). As soon as I remove the CIContext, the compiler no longer complains about the cast to IOSurfaceRef (it doesn't even need to be a force-cast), and there is no runtime error either.
Topic: Media Technologies SubTopic: General
Jun ’24
Reply to CIImageProcessorKernel using Metal Compute Pipeline error
I'm not sure Core Image is the best choice here. CI might impose limits on the runtime of kernels, and your regression kernel seems very expensive. It's also not intended that you pass the images and the mask as CGImages via the arguments into the kernel. It would be better to convert them to CIImages first and pass them via the inputs parameter; CI would then convert them to Metal textures for you. Unfortunately, Core Image doesn't support texture arrays, so you would need to find a workaround for that. Have you tried running your kernel in a pure Metal compute pipeline? It might be the better choice here. Or does it need to be part of a Core Image pipeline?
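A skeleton of what passing images via inputs could look like (the class name and the processing body are hypothetical; only the API shape is real):

```swift
import CoreImage

/// Sketch: images passed via `inputs` arrive as Metal textures, so there is
/// no need to hand CGImages through `arguments`.
class RegressionKernel: CIImageProcessorKernel {
    override class func process(with inputs: [CIImageProcessorInput]?,
                                arguments: [String: Any]?,
                                output: CIImageProcessorOutput) throws {
        guard let inputTexture = inputs?.first?.metalTexture,
              let maskTexture = inputs?.last?.metalTexture,
              let commandBuffer = output.metalCommandBuffer else { return }
        // Encode your Metal compute pass here, reading from the input
        // textures and writing into output.metalTexture.
        _ = (inputTexture, maskTexture, commandBuffer)
    }
}

// Usage: pass the CIImages as inputs; CI converts them to textures:
// let result = try RegressionKernel.apply(withExtent: image.extent,
//                                         inputs: [image, mask],
//                                         arguments: nil)
```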
Topic: Graphics & Games SubTopic: Metal
Aug ’24
Reply to Symbol missing when building in Xcode cloud with Xcode 16 beta and running on macOS 14.6
Same here. 🖐️
Sep ’24
Reply to Symbol missing when building in Xcode cloud with Xcode 16 beta and running on macOS 14.6
We found a workaround for this issue by replacing NSDecimalRound with the following helper (works when coming from a Double):

extension Double {
    /// Helper for rounding a number to a fixed number of decimal places (`scale`).
    ///
    /// This is a replacement for `NSDecimalRound`, which causes issues in release builds
    /// with the Xcode 16 RC.
    func roundedDecimal(scale: Int = 0, rule: FloatingPointRoundingRule = .toNearestOrEven) -> Decimal {
        let significand = Decimal((self * pow(10, Double(scale))).rounded(rule))
        return Decimal(sign: self.sign, exponent: -scale, significand: significand)
    }
}
Sep ’24
Reply to Core Image deadlock on Sequoia
Objective-C or Swift? You can check out my MTKView subclass for an example of how to render Core Image output with Metal. I hope that helps.
Topic: Graphics & Games SubTopic: General
Oct ’24
Reply to Core Image Tiling and ROI
I recommend checking out the old Core Image Programming Guide on how to supply an ROI function. Basically, you are given a rect in the target image (that your filter should produce) and are asked what part of the input image your filter needs to produce it. For small images, this is usually not relevant because Core Image processes the whole image in one go. For large images, however, CI applies tiling, i.e., it processes slices of the image in sequence and stitches them together in the end. For this, the ROI is very important. In your mirroring example, the first tile might be the left side of the image and the second tile the right side. When your ROI is asked what part of the input is needed to produce the left side of the result, you need to return the right side of the input image because it's mirrored along the x-axis, and vice versa. So you basically have to apply the same x-mirroring trick you use when sampling to mirror the rect in your ROI callback.
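The mirrored ROI could be sketched like this (the function name is hypothetical; `kernel` and `image` stand in for your own kernel and input):

```swift
import CoreImage

/// Sketch: apply an x-mirroring kernel with an ROI callback that maps a
/// requested output rect to its mirror image in the input.
func mirroredOutput(from kernel: CIKernel, image: CIImage) -> CIImage? {
    let extent = image.extent
    return kernel.apply(extent: extent,
                        roiCallback: { _, rect in
        // Reflect the rect along the extent's vertical center line:
        // a rect touching the left edge of the output needs pixels from
        // the right edge of the input, and vice versa.
        CGRect(x: extent.minX + (extent.maxX - rect.maxX),
               y: rect.minY,
               width: rect.width,
               height: rect.height)
    },
                        arguments: [image])
}
```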
Topic: Graphics & Games SubTopic: General
Oct ’24