I am extracting a JPEG2000 (JP2) facial image from an NFC passport chip (ISO/IEC 19794-5) and attempting to create a UIImage from it.
On iOS 16, the following code works fine:
import ImageIO
import UIKit
func getUIImage(from imageData: [UInt8]) -> UIImage? {
    let data = Data(imageData)
    guard let imageSource = CGImageSourceCreateWithData(data as CFData, nil),
          let cgImage = CGImageSourceCreateImageAtIndex(imageSource, 0, nil) else {
        print("Failed to decode JP2 image!")
        return nil
    }
    return UIImage(cgImage: cgImage)
}
However, on iOS 18, this fails with errors like:
initialize:1415: *** invalid JPEG2000 file ***
makeImagePlus:3752: *** ERROR: 'JP2 ' - failed to create image [-50]
CGImageSourceCreateImageAtIndex: *** ERROR: failed to create image [-59]
Questions:
Did Apple remove or modify JPEG2000 support in iOS 18?
Is there an official workaround for decoding JPEG2000 on iOS 18?
Should I use Vision/Metal/Core Image instead?
Is there a recommended way to convert JPEG2000 to JPEG/PNG before creating a UIImage?
Are there any Apple-provided APIs that maintain backward compatibility for JPEG2000 decoding?
Additional Info:
The UInt8 array has a valid JPEG2000 header (0x00 0x00 0x00 0x0C 0x6A 0x50 ...).
The image works on iOS 16 but fails on iOS 18.
Tested on iPhone running iOS 18.0 beta.
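One diagnostic I plan to add (just a sketch; it only reports what ImageIO thinks the payload is, which should help distinguish a removed decoder from a rejected container):

import Foundation
import ImageIO

// Diagnostic only: report what ImageIO thinks the payload is.
// Assumes `imageData` is the same byte array read from the chip.
func logJP2Diagnostics(_ imageData: [UInt8]) {
    let data = Data(imageData) as CFData
    guard let source = CGImageSourceCreateWithData(data, nil) else {
        print("ImageIO refused to create an image source at all")
        return
    }
    print("UTI reported by ImageIO:", CGImageSourceGetType(source) as String? ?? "none")
    print("Source status:", CGImageSourceGetStatus(source).rawValue)
    if let properties = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) {
        print("Per-image properties:", properties)
    } else {
        print("No per-image properties; the decode seems to fail before the headers are parsed")
    }
}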
Any insights on how to handle JPEG2000 decoding in iOS 18 would be greatly appreciated! 🚀
Hi everyone,
I’m running into an issue with RealityKit when trying to animate BlendShapes (ShapeKeys) while a skeletal animation is playing. The model is a rigged character in .usdz format with both predefined skeletal animations and BlendShapes (exported from Blender).
The problem: when I play any animation using entity.playAnimation(...), the BlendShapes stop responding. Calling setBlendShapes(...) still logs that weights are being updated, but the visual changes are not visible.
The exact same blend shape animation works perfectly when no animation is playing.
In SceneKit the same model works as expected: shape keys animate during animation playback, but not in RealityKit. As soon as an animation starts, the shape keys stop animating.
Here’s the test project on GitHub that demonstrates the issue clearly:
https://github.com/IAMTHEBURT/RealityKitWitnBlendShapesSample
The goal is to play facial expressions (like blinking or talking) while a body animation (like waving) is playing.
Is this a known limitation in RealityKit? Or is there a recommended way to combine skeletal animations with real-time BlendShape updates?
Thanks in advance for any insights.
I mean… I want to use defaults rather than launching apps via open with the saved environment variables.
This is pretty easy on iOS and other platforms, so what about on macOS?
I’m trying to play an Apple Immersive video in the .aivu format using VideoPlayerComponent using the official documentation found here:
https://developer.apple.com/documentation/RealityKit/VideoPlayerComponent
Here is a simplified version of the code I'm running in another application:
import SwiftUI
import RealityKit
import AVFoundation

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            let player = AVPlayer(url: Bundle.main.url(forResource: "Apple_Immersive_Video_Beach", withExtension: "aivu")!)
            let videoEntity = Entity()
            var videoPlayerComponent = VideoPlayerComponent(avPlayer: player)
            videoPlayerComponent.desiredImmersiveViewingMode = .full
            videoPlayerComponent.desiredViewingMode = .stereo
            player.play()
            videoEntity.components.set(videoPlayerComponent)
            content.add(videoEntity)
        }
    }
}
Full code is here:
https://github.com/tomkrikorian/AIVU-VideoPlayerComponentIssueSample
But the video does not play in my project even though the file is correct (it can be played in Apple Immersive Video Utility), and I'm getting this error when the app crashes:
App VideoPlayer+Component Caption: onComponentDidUpdate Media Type is invalid
Domain=SpatialAudioServicesErrorDomain Code=2020631397 "xpc error" UserInfo={NSLocalizedDescription=xpc error}
CA_UISoundClient.cpp:436 Got error -4 attempting to SetIntendedSpatialAudioExperience
[0x101257490|InputElement #0|Initialize] Number of channels = 0 in AudioChannelLayout does not match number of channels = 2 in stream format.
The video I'm using is the official sample, which can be found at the link below. I also tried several different files shot by my clients and the same errors are displayed, so the issue is definitely not the files but something on the RealityKit side of things:
https://developer.apple.com/documentation/immersivemediasupport/authoring-apple-immersive-video
Steps to reproduce the issue:
- Open the AIVUPlayerSample project and run it. Look at the logs.
- All code can be found in ImmersiveView.swift.
- The sample file is included in the project.
Expected results:
If I followed the documentation and samples provided, I should see my video played in full immersive mode inside my ImmersiveSpace.
Am I doing something wrong in the code? I'm basically following the documentation here.
Feedback ticket: FB19971306
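One extra check I'm considering (a diagnostic sketch, assuming the same bundled .aivu file): observing AVPlayerItem.status to see whether AVFoundation itself accepts the asset before VideoPlayerComponent gets involved.

import AVFoundation

// Observes AVPlayerItem.status and logs whether the item becomes playable or fails.
final class ItemStatusLogger {
    private var observation: NSKeyValueObservation?

    func attach(to item: AVPlayerItem) {
        observation = item.observe(\.status, options: [.initial, .new]) { item, _ in
            switch item.status {
            case .readyToPlay:
                print("AVPlayerItem is ready to play")
            case .failed:
                print("AVPlayerItem failed:", item.error ?? "unknown error")
            default:
                print("AVPlayerItem status not determined yet")
            }
        }
    }
}

Attaching it right after creating the player (e.g. if let item = player.currentItem { logger.attach(to: item) }) should show whether the failure happens at the AVFoundation level or only once the component consumes the item.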
For an app of mine I use CGSetDisplayTransferByTable to adjust the gamma table of the device. Since macOS Tahoe, these modifications are silently ignored. The display's actual gamma curve remains unchanged despite the API reporting successful completion.
I filed an FB for it a few weeks ago and would love to figure out what could be causing this.
FB18559786
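Here is a minimal repro sketch (assumes the main display): build a slightly darkened identity ramp and apply it. The call reports success, but on Tahoe the screen no longer changes.

import CoreGraphics

// Builds a linear ramp scaled to 80% brightness and applies it to the main display.
// CGGammaValue is Float; 0 == .success from CGSetDisplayTransferByTable.
func applyDarkerGammaRamp() {
    let display = CGMainDisplayID()
    let tableSize = 256
    var red = [CGGammaValue](repeating: 0, count: tableSize)
    var green = red
    var blue = red
    for i in 0..<tableSize {
        let value = CGGammaValue(i) / CGGammaValue(tableSize - 1) * 0.8
        red[i] = value
        green[i] = value
        blue[i] = value
    }
    let result = CGSetDisplayTransferByTable(display, UInt32(tableSize), &red, &green, &blue)
    print("CGSetDisplayTransferByTable returned \(result.rawValue)")
}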
Hi, following the recent deprecation of SceneKit, I'm trying to move a couple of my SceneKit projects to RealityKit.
One thing I can't seem to find is how to change the content scale factor when using a RealityView in SwiftUI. It was really easy to do in SceneKit with just a SCNView property, and it seems that it's also possible when using ARView, but I can't find a way to do it with a RealityView. Maybe it's a SwiftUI limitation?
If I create a bitmap image and then try to get ready to draw into it, like so:
NSBitmapImageRep* newRep = [[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes: nullptr
                  pixelsWide: 128
                  pixelsHigh: 128
               bitsPerSample: 8
             samplesPerPixel: 4
                    hasAlpha: YES
                    isPlanar: NO
              colorSpaceName: NSDeviceRGBColorSpace
                bitmapFormat: NSBitmapFormatAlphaNonpremultiplied |
                              NSBitmapFormatThirtyTwoBitBigEndian
                 bytesPerRow: 4 * 128
                bitsPerPixel: 32];
[NSGraphicsContext setCurrentContext:
    [NSGraphicsContext graphicsContextWithBitmapImageRep: newRep]];
then the log shows this error:
CGBitmapContextCreate: unsupported parameter combination:
RGB
8 bits/component, integer
512 bytes/row
kCGImageAlphaLast
kCGImageByteOrderDefault
kCGImagePixelFormatPacked
Valid parameters for RGB color space model are:
16 bits per pixel, 5 bits per component, kCGImageAlphaNoneSkipFirst
32 bits per pixel, 8 bits per component, kCGImageAlphaNoneSkipFirst
32 bits per pixel, 8 bits per component, kCGImageAlphaNoneSkipLast
32 bits per pixel, 8 bits per component, kCGImageAlphaPremultipliedFirst
32 bits per pixel, 8 bits per component, kCGImageAlphaPremultipliedLast
32 bits per pixel, 10 bits per component, kCGImageAlphaNone|kCGImagePixelFormatRGBCIF10|kCGImageByteOrder16Little
64 bits per pixel, 16 bits per component, kCGImageAlphaPremultipliedLast
64 bits per pixel, 16 bits per component, kCGImageAlphaNoneSkipLast
64 bits per pixel, 16 bits per component, kCGImageAlphaPremultipliedLast|kCGBitmapFloatComponents|kCGImageByteOrder16Little
64 bits per pixel, 16 bits per component, kCGImageAlphaNoneSkipLast|kCGBitmapFloatComponents|kCGImageByteOrder16Little
128 bits per pixel, 32 bits per component, kCGImageAlphaPremultipliedLast|kCGBitmapFloatComponents
128 bits per pixel, 32 bits per component, kCGImageAlphaNoneSkipLast|kCGBitmapFloatComponents
See Quartz 2D Programming Guide (available online) for more information.
If I don't use NSBitmapFormatAlphaNonpremultiplied as part of the format, I don't get the error message. My question is, why does the constant NSBitmapFormatAlphaNonpremultiplied exist if you can't use it like this?
If you're wondering why I wanted to do this: I want to extract the RGBA pixel data from an image, which might have non-premultiplied alpha. And elsewhere online, I saw advice that if you want to look at the pixels of an image, draw it into a bitmap whose format you know and look at those pixels. And I don't want the process of drawing to premultiply my alpha.
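The workaround I've sketched in the meantime (in Swift here; it doesn't answer why the constant exists): draw into a premultiplied context, which CGBitmapContextCreate does accept for 8-bit RGB, and then divide the alpha back out per pixel. It's lossy wherever alpha is 0, and the helper name is mine.

import CoreGraphics

// Draws `srcImage` into a premultiplied RGBA8 context, then un-premultiplies in place.
func nonPremultipliedRGBA(from srcImage: CGImage, width: Int, height: Int) -> [UInt8]? {
    var pixels = [UInt8](repeating: 0, count: width * height * 4)
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let drawn: Bool = pixels.withUnsafeMutableBytes { buffer in
        guard let context = CGContext(data: buffer.baseAddress,
                                      width: width, height: height,
                                      bitsPerComponent: 8, bytesPerRow: width * 4,
                                      space: colorSpace,
                                      bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        else { return false }
        context.draw(srcImage, in: CGRect(x: 0, y: 0, width: width, height: height))
        return true
    }
    guard drawn else { return nil }
    // Undo the premultiplication: color = premultipliedColor * 255 / alpha.
    for i in stride(from: 0, to: pixels.count, by: 4) {
        let alpha = Int(pixels[i + 3])
        if alpha != 0 && alpha != 255 {
            pixels[i + 0] = UInt8(min(255, Int(pixels[i + 0]) * 255 / alpha))
            pixels[i + 1] = UInt8(min(255, Int(pixels[i + 1]) * 255 / alpha))
            pixels[i + 2] = UInt8(min(255, Int(pixels[i + 2]) * 255 / alpha))
        }
    }
    return pixels
}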
Hello, I'm tracking down a bug where useResource doesn't seem to apply proper synchronization when a resource is produced by a render pass and then consumed by a compute pass, but when I use an MTLFence to signal and wait between the render and compute encoders, the artifact goes away.
The resource is created with MTLHazardTrackingModeTracked and useResource is called on the compute encoder after the render pass. Metal API Validation doesn't report any warnings/errors.
Am I misunderstanding the difference between the two APIs? I dug through the Metal documentation and it looks like useResource should handle synchronization given that the resource has MTLHazardTrackingModeTracked, but on the other hand, MTLFence should be used to ensure proper synchronization between command encoders. Can someone clarify the difference between the two APIs and when to use each?
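For concreteness, this is the fence-based ordering that makes the artifact disappear for me (a sketch; pipeline setup and the actual draws/dispatches are elided):

import Metal

// `texture` is written by the render pass and read by the compute pass.
func encodeFrame(device: MTLDevice,
                 commandBuffer: MTLCommandBuffer,
                 renderPassDescriptor: MTLRenderPassDescriptor,
                 computePipeline: MTLComputePipelineState,
                 texture: MTLTexture) {
    let fence = device.makeFence()!

    let render = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptor)!
    // ... draws that write into `texture` ...
    render.updateFence(fence, after: .fragment)   // signal after fragment-stage writes complete
    render.endEncoding()

    let compute = commandBuffer.makeComputeCommandEncoder()!
    compute.waitForFence(fence)                   // block the compute pass until the fence is signaled
    compute.setComputePipelineState(computePipeline)
    compute.useResource(texture, usage: .read)    // residency declaration as before
    compute.setTexture(texture, index: 0)
    // ... dispatchThreadgroups ...
    compute.endEncoding()
}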
Hello!
Brand new to the Apple developer community, so, hello everyone! I'm a game developer; we just launched our first game on PC and I'm looking to port it to iOS. Time is something I'm kind of short on, and I hear it takes some jumping through hoops to get the know-how to port something to mobile. Any good sites you'd recommend for finding programmers to port your game? It's fairly simple - just a visual novel. Any and all suggestions welcome!
All the best!
Elijah
I use SCNScene's write(to:) to export a scene to a ".usdz" file, then reopen the ".usdz" with SCNScene(url:), and all the nodes' colors fade.
If I export and load only once, the fading is not obvious, but after repeating the round trip 4 or 5 times it is obvious that the colors are inconsistent with the originals and have become much lighter.
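Here is a round-trip sketch of what I'm doing (a hypothetical helper; the color shift becomes obvious after a few iterations):

import Foundation
import SceneKit

// Exports the scene to .usdz and reloads it `times` times in a row.
func roundTrip(scene: SCNScene, times: Int) throws -> SCNScene {
    var current = scene
    for i in 0..<times {
        let url = FileManager.default.temporaryDirectory
            .appendingPathComponent("roundtrip-\(i).usdz")
        guard current.write(to: url, options: nil, delegate: nil, progressHandler: nil) else {
            throw NSError(domain: "usdz-export", code: 1)
        }
        current = try SCNScene(url: url, options: nil)
    }
    return current
}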
I'm developing a prototype Vision Pro app and would like to render a 3D scene made in Reality Composer Pro on an image anchor in a RealityView, but I've had no luck so far making it work and need some guidance to move on.
I have the image file stored in the assets like below:
And below is the source code:
import SwiftUI
import RealityKit
import RealityKitContent

struct AnchorView: View {
    @State var imageEntity: Entity = {
        let anchorEntity = AnchorEntity(.image(group: "AR Resources", name: "reanchor"))
        return anchorEntity
    }()

    var body: some View {
        RealityView { content in
            do {
                // Add the initial RealityKit content
                let scene = try await Entity(named: "Scene", in: realityKitContentBundle)
                imageEntity.addChild(scene)
                content.add(imageEntity)
            } catch {
                print("Error occurs when adding reality view content: \(error)")
            }
        }
    }
}
I am rewriting an unfinished SceneKit project as RealityKit (NonAR). As far as I can see, RealityKit is missing basic fog functionality?
Fog was simple & easy to implement in SceneKit (fogStartDistance / fogEndDistance / fogDensityExponent / fogColor). Are there any plans to implement something like this in RealityKit?
Are there any simple workarounds?
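For reference, this is the SceneKit setup I'm trying to reproduce (iOS flavor shown; on macOS fogColor takes an NSColor):

import SceneKit
import UIKit

let scene = SCNScene()
scene.fogStartDistance = 10      // fog begins 10 units from the camera
scene.fogEndDistance = 60        // fully fogged at 60 units
scene.fogDensityExponent = 2     // quadratic falloff
scene.fogColor = UIColor.gray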
Now that SceneKit has been marked as soft deprecated, is there a planned date or timeframe when it will be completely removed from iOS? I’m concerned about how long my existing SceneKit-based game will continue to work, especially as an indie developer without the resources for a quick rewrite to RealityKit.
I’m trying to create a CGContext that matches my screen using the following code
let context = CGContext(
    data: pixelBuffer,
    width: pixelWidth,        // 3456
    height: pixelHeight,      // 2234
    bitsPerComponent: 10,     // PixelEncoding = "--RRRRRRRRRRGGGGGGGGGGBBBBBBBBBB"
    bytesPerRow: bytesPerRow, // 13824
    space: CGColorSpace(name: CGColorSpace.extendedSRGB)!,
    bitmapInfo: CGImageAlphaInfo.none.rawValue
        | CGImagePixelFormatInfo.RGBCIF10.rawValue
        | CGImageByteOrderInfo.order16Little.rawValue
)
But it fails with an error
CGBitmapContextCreate: unsupported parameter combination:
RGB
10 bits/component, integer
13824 bytes/row
kCGImageAlphaNone
kCGImageByteOrder16Little
kCGImagePixelFormatRGBCIF10
Valid parameters for RGB color space model are:
16 bits per pixel, 5 bits per component, kCGImageAlphaNoneSkipFirst
32 bits per pixel, 8 bits per component, kCGImageAlphaNoneSkipFirst
32 bits per pixel, 8 bits per component, kCGImageAlphaNoneSkipLast
32 bits per pixel, 8 bits per component, kCGImageAlphaPremultipliedFirst
32 bits per pixel, 8 bits per component, kCGImageAlphaPremultipliedLast
32 bits per pixel, 10 bits per component, kCGImageAlphaNone|kCGImagePixelFormatRGBCIF10|kCGImageByteOrder16Little
64 bits per pixel, 16 bits per component, kCGImageAlphaPremultipliedLast
64 bits per pixel, 16 bits per component, kCGImageAlphaNoneSkipLast
64 bits per pixel, 16 bits per component, kCGImageAlphaPremultipliedLast|kCGBitmapFloatComponents|kCGImageByteOrder16Little
64 bits per pixel, 16 bits per component, kCGImageAlphaNoneSkipLast|kCGBitmapFloatComponents|kCGImageByteOrder16Little
128 bits per pixel, 32 bits per component, kCGImageAlphaPremultipliedLast|kCGBitmapFloatComponents
128 bits per pixel, 32 bits per component, kCGImageAlphaNoneSkipLast|kCGBitmapFloatComponents
See Quartz 2D Programming Guide (available online) for more information.
Why is it unsupported if it matches the 6th option? (32 bits per pixel, 10 bits per component, kCGImageAlphaNone|kCGImagePixelFormatRGBCIF10|kCGImageByteOrder16Little)
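To narrow it down, I'm going to try the same pixel-format flags against a few different RGB color spaces and see which combinations CGContext actually accepts on this machine. The guess that the color space (extendedSRGB) is the variable here is mine; nothing confirmed.

import CoreGraphics

let bitmapInfo = CGImageAlphaInfo.none.rawValue
    | CGImagePixelFormatInfo.RGBCIF10.rawValue
    | CGImageByteOrderInfo.order16Little.rawValue

// Let CoreGraphics allocate the buffer (data: nil, bytesPerRow: 0) so that only the
// format / color-space combination is being tested.
for name in [CGColorSpace.sRGB, CGColorSpace.extendedSRGB, CGColorSpace.displayP3] {
    let space = CGColorSpace(name: name)!
    let context = CGContext(data: nil, width: 3456, height: 2234,
                            bitsPerComponent: 10, bytesPerRow: 0,
                            space: space, bitmapInfo: bitmapInfo)
    print(name, context == nil ? "rejected" : "accepted")
}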
I am using Unreal Engine 5.6 on a MacBook Pro with an M3 chip and macOS 15.5. I've installed Xcode and accepted the license, but Unreal is not detecting the latest Metal Shader Standard (Metal v3.0); the maximum version Unreal sees is Metal v2.4, even though the hardware and OS should support Metal 3.0. I've also run sudo xcode-select -s /Applications/Xcode.app and accepted the license via Terminal. Is there anything in Xcode settings, SDK availability, or system permissions that could be preventing access to Metal 3.0 features?
In Xcode version 16.2 (16C5032a) I created:
One [ScenekitApp] [File/New/Project/iOS/Game/Game Technology: SceneKit].
Three custom Frameworks [File/New/Project/iOS/Framework] [ASCSource.framework, ASCCommon.framework and ASCCoreData.framework].
In the four projects the [Minimum Deployments=iOS 15.0], [Swift version=6.0] and [BUILD_LIBRARY_FOR_DISTRIBUTION=Yes].
I directly installed the [Zip.framework] in [ASCSource.framework] without [pod init/pod install] and in the [General tab] [Frameworks, Libraries and Embedded Content] [Zip.framework Embed&Sign].
I installed the three frameworks [ASCSource.framework, ASCCommon.framework and ASCCoreData.framework] in [ScenekitApp] and everything works perfectly in Xcode version 16.2(16C5032a).
I updated Xcode version 16.2(16C5032a) to Xcode version 16.3(16E140) and some errors in the SDKs were indicated.
[Failed to build module 'ASCSource'; this SDK is not supported by the compiler (the SDK is built with 'Apple Swift version 6.0.3 effective-5.10 (swiftlang-6.0.3.1.10 clang-1600.0.30.1)', while this compiler is 'Apple Swift version 6.1 effective-5.10 (swiftlang-6.1.0.110.21 clang-1700.0.13.3)'). Please select a toolchain which matches the SDK].
[Failed to build module 'Zip'; this SDK is not supported by the compiler (the SDK is built with 'Apple Swift version 6.0.3 effective-5.10 (swiftlang-6.0.3.1.10 clang-1600.0.30.1)', while this compiler is 'Apple Swift version 6.1 effective-5.10 (swiftlang-6.1.0.110.21 clang-1700.0.13.3)'). Please select a toolchain which matches the SDK].
If I recompile all the frameworks in Xcode version 16.3 (16E140) it works, but I don't think that is a good solution.
I found some discussions in links like the following without success.
https://github.com/firebase/firebase-ios-sdk/issues/13727
https://stackoverflow.com/questions/70556401/swift-version-conflict-this-sdk-is-not-supported-by-the-compiler-please-select
Any help is welcome.
Thanks everyone!!!
I'm trying to get a video material to work on an imported 3D asset, which is a USDC file. There's actually an example of this in a WWDC video from Apple (you can see it running on the flag of an airplane), but there's no sample code for it, and there are no other examples on the internet. Does anybody know how to do this?
You can look at 10:34 in this video:
https://developer.apple.com/documentation/realitykit/videomaterial
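For context, this is the kind of setup I've been attempting based on the VideoMaterial docs (a sketch; "Flag" and "flag_video.mp4" are placeholders for whatever the mesh inside the USDC and the movie file are actually called):

import Foundation
import RealityKit
import AVFoundation

// Finds a named model entity inside the imported USDC and swaps its material for a video material.
func applyVideoMaterial(to root: Entity) {
    guard let url = Bundle.main.url(forResource: "flag_video", withExtension: "mp4"),
          let flag = root.findEntity(named: "Flag") as? ModelEntity else { return }
    let player = AVPlayer(url: url)
    let material = VideoMaterial(avPlayer: player)
    flag.model?.materials = [material]   // replaces the material that came with the asset
    player.play()
}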
Our app uses Metal for image processing. We have found that if our app (and its possible intensive image processing) is started quickly after user is logged in, then calls to Metal may be hanging/stuck for a good while.
Example: it can take 1-2 minutes for something that usually takes 3-5 seconds! Metal threads are just hanging in a memmove...
In Activity Monitor we can see that a lot is happening right after login, but why the Metal calls block for so long is unknown to us...
The workaround is to wait a minute before we start our app and start intensive image processing using Metal. But hard to explain this workaround to end-users...
It doesn't happen on all computers, but it's fairly easy to reproduce on some.
We are using macOS 15.3.1. M1/M3 Max.
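Here is the timing probe we use to quantify the stall (a sketch; whether an empty command buffer alone reproduces the hang is an assumption on our side, since the real hangs show up during image-processing work):

import Metal
import QuartzCore

// Commits an empty command buffer and measures the round-trip time.
func measureMetalRoundTrip() {
    guard let device = MTLCreateSystemDefaultDevice(),
          let queue = device.makeCommandQueue(),
          let buffer = queue.makeCommandBuffer() else { return }
    let start = CACurrentMediaTime()
    buffer.commit()
    buffer.waitUntilCompleted()
    print("Empty command buffer round trip: \(CACurrentMediaTime() - start) s")
}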
Any good ideas for how to proceed with this problem and possible reach out to Apple engineers?
Thanks! :)
Hi, I am trying to detect if all the screens are in fullscreen mode.
The current approach is to get all windows' information from CGWindowListCopyWindowInfo and then compare the frame and coordinate with the frame of NSScreen.screens.
However, there is a problem: the y position of the window seems to be relative to its screen. Since it is not an absolute position, I cannot compare it with the coordinates of the screen.
Does anyone know if there is other information I can use? Or is there a way to retrieve the absolute position or the screen ID from the CGWindow dictionary?
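Here is the coordinate conversion I think is needed (a sketch; the assumption that kCGWindowBounds is in a global space with a top-left origin while NSScreen.frame uses Cocoa's bottom-left origin is mine, and the exact-equality check against screen frames is naive):

import AppKit
import CoreGraphics

// Flips a CGWindow bounds rect into Cocoa's bottom-left-origin coordinate space.
func cocoaRect(fromCGWindowBounds bounds: CGRect) -> CGRect {
    // screens[0] is the screen with the menu bar; its height defines the flip line.
    let primaryHeight = NSScreen.screens.first?.frame.height ?? 0
    var rect = bounds
    rect.origin.y = primaryHeight - bounds.origin.y - bounds.height
    return rect
}

func isAnyWindowFullScreen() -> Bool {
    guard let info = CGWindowListCopyWindowInfo([.optionOnScreenOnly, .excludeDesktopElements],
                                                kCGNullWindowID) as? [[String: Any]] else { return false }
    let screenFrames = NSScreen.screens.map(\.frame)
    for entry in info {
        guard let boundsDict = entry[kCGWindowBounds as String] as? [String: Any],
              let windowBounds = CGRect(dictionaryRepresentation: boundsDict as CFDictionary) else { continue }
        let converted = cocoaRect(fromCGWindowBounds: windowBounds)
        if screenFrames.contains(where: { $0 == converted }) { return true }
    }
    return false
}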
Hello,
Thank you for attending today’s Metal & game technologies group lab at WWDC25!
We were delighted to answer many questions from developers and energized by the community engagement.
We hope you enjoyed it and welcome your feedback.
We invite you to carry on the conversation here, particularly if your question appeared in Slido and we were unable to answer it during the lab.
If your question received feedback, let us know if you need clarification.
You may want to ask your question again in a different lab, e.g. the visionOS lab tomorrow.
(We realize that this can be confusing when frameworks interoperate)
We have a lot to learn from each other so let’s get to Q&A and make the best of WWDC25! 😃
Looking forward to your questions posted in new threads.