Hi everyone,
I'm not an experienced developer. I'm interested in the low-latency related APIs in UIUpdateLink, but I failed to write even a minimal demo that works.
UIUpdateInfo.isImmediatePresentationExpected is always false here, so my understanding must be wrong somewhere, but I have no idea where, which is why I'm asking for help. I'd appreciate suggestions of any kind.
Here's my (failed) demo, which tracks the first finger's touches and draws a shape at that location:
import UIKit

class ContentUIView: UIView {
    // MARK: - About UIUpdateLink and drawing
    required init?(coder: NSCoder) {
        super.init(coder: coder)
        initializeUpdateLink()
    }

    override init(frame: CGRect) {
        super.init(frame: frame)
        initializeUpdateLink()
    }

    private func initializeUpdateLink() {
        self.updateLink = UIUpdateLink(view: self)
        self.updateLink.addAction(to: .beforeCADisplayLinkDispatch,
                                  target: self,
                                  selector: #selector(update))
        self.updateLink.wantsImmediatePresentation = true
        self.updateLink.isEnabled = true
    }

    @objc func update(updateLink: UIUpdateLink,
                      updateInfo: UIUpdateInfo) {
        print(updateInfo.isImmediatePresentationExpected) // FIXME: Why always false?
        CATransaction.begin()
        defer { CATransaction.commit() }
        layer.setNeedsDisplay()
        layer.displayIfNeeded()
    }

    override func draw(_ rect: CGRect) {
        // FIXME: Any way to support opacity?
        guard let context = UIGraphicsGetCurrentContext() else { return }
        context.clear(rect)
        guard let lastTouch = self.lastTouch else { return }
        let location = lastTouch.location(in: self)
        let circleBounds = CGRect(x: location.x - 16, y: location.y - 16, width: 32, height: 32)
        context.setFillColor(.init(red: 1/2, green: 1/2, blue: 1/2, alpha: 1))
        context.addLines(between: [])
        context.fillEllipse(in: circleBounds)
    }

    // MARK: - Touch input
    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        super.touchesBegan(touches, with: event)
        guard lastTouch == nil else { return }
        lastTouch = touches.first
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        super.touchesEnded(touches, with: event)
        guard let lastTouch, touches.contains(lastTouch) else { return }
        self.lastTouch = nil
    }

    override func touchesCancelled(_ touches: Set<UITouch>, with event: UIEvent?) {
        self.touchesEnded(touches, with: event)
    }

    private var lastTouch: UITouch?
    private var updateLink: UIUpdateLink!
}

#Preview { ContentUIView() }
Anyway, I'm not trying to find alternative APIs, and I'd also be glad to learn what this API simply can't do.
Graphics and Games
Build captivating gaming experiences for Apple platforms.
Posts under the Graphics and Games tag
I'm experiencing a specific issue: on any of the macOS 26 Tahoe betas, with Low Power Mode enabled and Vsync used in fullscreen, my application's framerate gets limited to a hard 30 fps. I have not experienced this on any older OS; for example, Low Power Mode on macOS 13.6 Ventura with Vsync in fullscreen lets my application run at the full 60 fps without issues.
Is this a bug or a change in behavior of Low Power Mode on Tahoe?
My application is 3D, runs at 60 fps, and is sensitive to tearing, so I need Vsync, and the app is mostly used in fullscreen. Low Power Mode is the default on many Macs, so the default experience on Tahoe is currently a halved 30 fps. There also seem to be inconsistencies in which machines this happens on, but older OSes are always fine.
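For reference (this is not the poster's code), vsync in a macOS Metal app is typically controlled through CAMetalLayer's displaySyncEnabled flag; a minimal sketch, assuming a layer-backed NSView named metalView in a fullscreen window:

import AppKit
import Metal
import QuartzCore

// A minimal sketch, assuming `metalView` is an NSView in a fullscreen window.
func configureVsync(on metalView: NSView, device: MTLDevice) {
    let metalLayer = CAMetalLayer()
    metalLayer.device = device
    metalLayer.pixelFormat = .bgra8Unorm
    metalLayer.displaySyncEnabled = true   // vsync: presents are tied to the display refresh
    metalView.layer = metalLayer
    metalView.wantsLayer = true
}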
I am using Apple's original Lightning Digital AV Adapter (the Lightning-to-HDMI dongle) to connect my iPhone to an external display via an HDMI cable.
I need to synchronize rendering with the external display's refresh rate, so I create a new CADisplayLink tied to the external display's UIScreen: UIScreen.screens[externalDisplayIdx].displayLink(withTarget:, selector:).
The callback is being called regularly, but with increasing delay relative to the CADisplayLink.timestamp, so the next time the callback is called, I have less and less time to draw the next frame (see the snippet below).
Assuming 60 FPS, the value of secondsTillDeadline starts at an arbitrary value in the range of approx -0.0001 to 0.0166667, and then it slowly decreases towards zero (and for a brief period it goes into small negative numbers). Once it reaches zero, it flips back to 0.0166667 and continues to decrease again. This cycle repeats indefinitely.
Changing the external display's resolution (the UIScreen's mode) or the CADisplayLink's preferredFrameRateRange to a lower FPS does not seem to have any effect on the temporal drifting (even the rate of change seems to be the same).
When I create a new CADisplayLink for the iPhone's main screen, the value of secondsTillDeadline is stable, it does not drift and it is very close to 0.0166667, as expected.
Is this drift caused by the external monitor or by Apple's Lightning-to-HDMI dongle ...or is the problem somewhere else?
Can the drifting be stopped?
func onDisplayLinkUpdate(displayLink: CADisplayLink) {
    // Gradually decreases from 0.01667 to -0.0001, then flips back to 0.01667 and continues to decrease
    let secondsTillDeadline = displayLink.targetTimestamp - CACurrentMediaTime()
}
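In case it helps others reproduce this, a minimal sketch of the setup described above, feeding the callback shown in the snippet (externalDisplayIdx and the 60 Hz range are placeholder values):

import UIKit
import QuartzCore

// A sketch of the external-display setup described in the post; `externalDisplayIdx`
// and the 60 Hz frame rate range are placeholders.
final class ExternalDisplayDriver: NSObject {
    private var displayLink: CADisplayLink?

    func start(externalDisplayIdx: Int) {
        let screen = UIScreen.screens[externalDisplayIdx]
        let link = screen.displayLink(withTarget: self, selector: #selector(onDisplayLinkUpdate))
        link?.preferredFrameRateRange = CAFrameRateRange(minimum: 60, maximum: 60, preferred: 60)
        link?.add(to: .main, forMode: .common)
        displayLink = link
    }

    @objc func onDisplayLinkUpdate(displayLink: CADisplayLink) {
        // Same measurement as above: time left until the frame must be submitted.
        let secondsTillDeadline = displayLink.targetTimestamp - CACurrentMediaTime()
        print(secondsTillDeadline)
    }
}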
On an iPad running iPadOS 26 beta 4, when tapping the Game Center Access Point, the overlay doesn’t show the configured achievements, leaderboards or challenges.
I should specify this is an in-development app and the achievements and leaderboards are in the "Not Live" state; however, they do show up in the Access Point UI on other devices running iOS 18.
Anyone else having this issue? If so, how should I test achievements and leaderboards while iOS 26 beta is out?
The UI looks like this on iPadOS 26:
So I'm testing a micro-app that is contained in an IPFS folder. I'm using a web3 website that is used to view NFTs and their IPFS files. The app has gyro controls, which are enabled through a confirmation gesture.
In iOS 18.5, when I press the "Request Permission" button I get the popup asking to allow the app to access motion and orientation data. In iOS 26, pressing the button does nothing. Keep in mind that this only happens through the website, which uses iframes; when I load the IPFS file from a direct link, the popup appears with no issue.
I think this might be because iOS 26 uses WebGPU, or it might just be a bug, since iOS 26 is still in beta.
I've had no issue referencing image files in my .swift files, but they cause crashes when used in my .sks files. When I set a sprite's texture to an image in the scene editor's inspector, then at runtime, when that sprite is loaded, I get the error: "Cannot get value with size 16. The type encoded as {CGRect={CGPoint=dd}{CGSize=dd}} is expected to be 32 bytes." From my research it has something to do with Apple switching from 32-bit to 64-bit machines. From ChatGPT: "SpriteKit under the hood uses NSKeyedUnarchiver to load your .sks file. That unarchiver decodes each archived property by reading a fixed-size blob of bytes and mapping it into a C struct. In your case it ran into a mismatch." I am writing my code on a 64-bit machine and using 64-bit simulators and physical devices, so there isn't a clear cause of the mismatch. My scenes play fine in Xcode 16's preview window and my code builds; it just crashes at runtime.
When I don't use image-textured assets in the .sks file it works fine; it loads animated labels and plain color squares. I've been able to work around this for static things like a sprite with a background texture by writing code like the following in a normal, non-scene Swift file:
if let scene = SKScene(fileNamed: "GameScene2") {
    let bg = SKSpriteNode(imageNamed: "YourBackgroundImage")
    bg.position = CGPoint(x: scene.frame.midX, y: scene.frame.midY)
    bg.zPosition = -1
    scene.addChild(bg)
}
The issue now is that I want to make a particle emitter and other non-static sprites, but my understanding of their properties isn't deep enough to create them without the editor. Also, when I set an SKTexture in a Swift file, that causes the same runtime crash with the 16/32 error. Could you help me figure out how to fix the bug so I can use the editor again? Otherwise, could you help me write a workaround like the one I use for background images? I have a feeling the answer is writing my own NSKeyedUnarchiver, but I don't know how to make sure it's called instead of the default one. I've already tried cleaning my project multiple times and deleting and re-adding sprite nodes. Thank you.
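For what it's worth, one possible workaround in the same spirit as the background-image code above is to build the emitter entirely in code so the .sks archive is never involved (though if creating an SKTexture in code really does reproduce the crash, this may hit the same wall). A rough sketch, where "spark" is a hypothetical asset-catalog texture name and every number is a placeholder to tune:

import SpriteKit

// A sketch of a programmatic emitter; "spark" and all numeric values here are
// placeholders, not values from the original .sks file.
func makeSparkEmitter(in scene: SKScene) -> SKEmitterNode {
    let emitter = SKEmitterNode()
    emitter.particleTexture = SKTexture(imageNamed: "spark")
    emitter.particleBirthRate = 80
    emitter.particleLifetime = 1.5
    emitter.particleSpeed = 120
    emitter.particleSpeedRange = 40
    emitter.emissionAngleRange = .pi * 2   // emit in all directions
    emitter.particleAlphaSpeed = -0.7      // fade out over the particle's lifetime
    emitter.particleScale = 0.3
    emitter.particleScaleSpeed = -0.1
    emitter.particleColor = .orange
    emitter.particleColorBlendFactor = 1
    emitter.targetNode = scene             // leave the trail in scene coordinates
    return emitter
}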
Hey all — I’ve been building out my first set of Game Center Achievements for a game I’m working on, and I’ve run into something odd with the image quality.
The specs say to upload icons at 512x512 or 1024x1024@2x. I’ve been uploading 1024x1024 PNGs (without explicitly naming them “@2x” since there’s only one upload slot), assuming that Game Center would just handle the scaling automatically — kind of like how a lot of things are getting more streamlined across platforms lately.
But in testing, the icons are showing up a bit blurry, especially in the Game Center interface. It’s not horrible, but it’s definitely softer than I expected — more like low-res than Retina.
All my test devices (outside the Simulator) are running iOS 26, so I’m also wondering if this might be a beta-related display bug?
Has anyone else run into this? Curious if I’m missing a best practice here, or if I really do need to ensure I’m uploading it with the @2x suffix, or maybe something else entirely?
Thanks!
Hello Apple Developers, I'm here to write down my experience with the iOS 26 beta.
First off, I'd like to say that I partly like and partly dislike the new Liquid Glass UI/UX design in some parts of iOS, such as in most third-party apps like Uber, Lyrith, MJ, the Access Link mobile app, DoorDash, VLC, and the Apple Music app, just to name a few.
Since I installed the beta I have run into a few bugs. I've already submitted them through the Feedback app on my iPhone, but I'm going to write them here as well. I'm not looking for troubleshooting or tech support; I'm just sharing my experience with the Apple community and the Apple development/engineering teams to help fine-tune things before release.
Please note that I am a user with a vision impairment, so please be respectful about my writing, grammar, and spelling.
So here I go. The first bug I ran into, on the first day, was while listening to music in the Apple Music app: when scrolling down or up quickly, the app freezes for a split second and then continues as normal.
The second bug I ran into during music playback was my Crossfade setting not working on some tracks. I'm not sure if this is due to BPM alignment or AI algorithm integration within the software itself, but for me it breaks the listening experience I have when I enjoy music.
My suggestion: move the AutoMix and Crossfade settings into the Apple Music app itself and give the user more control over how long or short they want the crossfade or AutoMix to be at the end of each track. The crossfade option is currently capped at 12 seconds; increase this to 30 seconds or more if possible, or add a BPM option so AutoMix can mix in the next track by BPM. For example, if my rock track is at 148 BPM, the next track should be pop, K-pop, or rap synced to the same or a similar BPM around 148. My next suggestion for the Apple Music app is a seamless-track mode (this could be incorporated into the AutoMix feature); it would help with tracks that end abruptly. Some MP3 tracks added from outside the Apple Music app seem to break during playback.
The third bug I ran into is with the Glass UI for Control Center. As I stated before, I am vision impaired, and with the clear glass overlapping the underlying UI it is hard for me to tell which icons I am looking at, aside from the volume and brightness bars. Please make this a darker theme and make the icons brighter, or add names underneath the icons, or dim the UI underneath Control Center, or use white backgrounds with black arrows/icons in every Apple app that has this Glass UI, because this is driving me nuts.
The fourth bug I ran into is with the lock screen and restart/reboot. Oh boy, where do I begin with this one? Let's start with notifications: I don't know who thought it was a good idea to have a clear, bright UI overlapping the notifications. This is very annoying for iMessage texting, because my custom wallpaper blends into the white background, which makes it even worse.
My suggestion for this is very simple: darken the background on the lock screen a bit more so the text is more readable, or enlarge the notification bars (this is for users like myself who use Dark Mode).
My fifth bug involves my backups/restores getting corrupted, which is self-explanatory. When I tried to downgrade back to version 18.5/18.6, nothing happened, so please fix this or make it a bit easier for users to back up and restore their devices. For now I have to wait until tomorrow morning (Friday) to factory reset my phone.
In conclusion: since beta users, developers, and engineers are still testing, please take a look at my suggestions and try to bring not all, but at least some of them, into the public release.
Thank You!
Update: I would like to downgrade from the iOS 26 developer beta back to iOS 18.5.
I'm an experienced SceneKit developer and I want to begin work on a new project using RealityKit, so I appreciated the timely WWDC 2025 session "Bring your SceneKit project to RealityKit".
However, now I am finding that:
Blender does not properly support exporting armatures in usdc files, and usdc is really the only file format that should be used for creating 3D assets for RealityKit.
The option of exporting from Blender to fbx or some other intermediate format, and then converting that to usdc, is a challenge.
Apple's Reality Converter App, which supposedly can support importing and converting fbx files to usdc, is no longer available from Apple's website. And an older copy of it I found at the Kodeco website requires Rosetta on Apple Silicon. As well, this older copy does not in fact import fbx or anything else - I find it doesn't work at all.
Apple's Reality Composer Pro, at least as far as I can tell, only supports importing usdc - it is not a file conversion tool.
Alternatively, I am under the impression that Maya supports producing usdc files with armatures, but Maya costs over $2000 per year and I am skilled with Blender, so I believe strongly that I should be able to continue with Blender. Maya's expense and skillset simply shouldn't be a requirement for building RealityKit applications.
What are my options then, if any, to produce assets with armatures and armature based animations using Blender, and then bring them into RealityKit?
Imagine a native macOS app that acts as a "launcher" for a Java game.** For example, the "launcher" app might use the Swift Process API or a similar method to run the java command-line tool (let's assume the user has installed Java themselves) to run the game.
I have seen How to Enable Game Mode. If the native launcher app's Info.plist has the following keys set:
LSApplicationCategoryType set to public.app-category.games
LSSupportsGameMode set to true (for macOS 26+)
GCSupportsGameMode set to true
The launcher itself can cause Game Mode to activate if the launcher is fullscreened. However, if the launcher spawns a Java process that opens a window, and then that Java window is fullscreened, Game Mode doesn't seem to activate. In this case activating Game Mode for the launcher itself is unnecessary, but you'd expect Game Mode to activate when the actual game in the Java window is fullscreened.
Is there a way to get Game Mode to activate in the latter case?
** The concrete case I'm thinking of is a third-party Minecraft Java Edition launcher, but the issue can also be demonstrated in a sample project (FB13786152). It seems like the official Minecraft launcher is able to do this, though it's not clear how. (Is its bundle identifier hardcoded in the OS to allow for this? Changing a sample app's bundle identifier to be the same as the official Minecraft launcher gets the behavior I want, but obviously this is not a practical solution.)
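For context, a minimal sketch of the kind of launcher described above (the Java path and jar name are placeholders, not values from the sample project):

import Foundation

// A sketch of the launcher's child-process setup; "/usr/bin/java" and "game.jar"
// are placeholder values.
func launchJavaGame() throws {
    let process = Process()
    process.executableURL = URL(fileURLWithPath: "/usr/bin/java")
    process.arguments = ["-jar", "game.jar"]
    try process.run()
    // The Java process then opens its own window; fullscreening *that* window
    // is what doesn't trigger Game Mode, per the report above.
}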
Hello,
Thank you for attending today’s Metal & game technologies group lab at WWDC25!
We were delighted to answer many questions from developers and energized by the community engagement.
We hope you enjoyed it and welcome your feedback.
We invite you to carry on the conversation here, particularly if your question appeared in Slido and we were unable to answer it during the lab.
If your question received feedback, let us know if you need clarification.
You may want to ask your question again in a different lab, e.g., the visionOS lab tomorrow.
(We realize that this can be confusing when frameworks interoperate)
We have a lot to learn from each other so let’s get to Q&A and make the best of WWDC25! 😃
Looking forward to your questions posted in new threads.
A few users have recently reported no longer being able to capture point clouds using our app, specifically on iPhone 15 Pro devices. We recently found an in-house device that exhibits this behavior and found that the confidenceMap contains only low confidence values, regardless of the environment being captured. Our app uses a higher confidence threshold; setting the threshold to a lower value produces noisy results as expected, so that is a non-viable option.
Other LiDAR based apps have been tested with this device and the results are the same. No points, or noisy point clouds in apps that allow a lower confidence threshold setting. On devices that exhibit this behavior the "Displaying a point cloud using scene depth" Apple sample app can be used to visualize the issue.
First reports of this new behavior occurred as early as iOS 18.4.
Looking for recommendations on which team(s) at Apple to reach out to with these findings since the behavior manifests on only a small sample of devices.
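For anyone trying to reproduce the diagnosis, a rough sketch of how a frame's confidence distribution can be inspected (this assumes an ARSession already running with .sceneDepth frame semantics; it is not the app's actual code):

import ARKit

// Returns the fraction of pixels in the frame's confidence map that are .high.
// A diagnostic sketch only; assumes sceneDepth frame semantics are enabled.
func highConfidenceRatio(of frame: ARFrame) -> Float? {
    guard let confidenceMap = frame.sceneDepth?.confidenceMap else { return nil }
    CVPixelBufferLockBaseAddress(confidenceMap, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(confidenceMap, .readOnly) }

    let width = CVPixelBufferGetWidth(confidenceMap)
    let height = CVPixelBufferGetHeight(confidenceMap)
    let bytesPerRow = CVPixelBufferGetBytesPerRow(confidenceMap)
    guard let base = CVPixelBufferGetBaseAddress(confidenceMap) else { return nil }

    var high = 0
    for y in 0..<height {
        let row = base.advanced(by: y * bytesPerRow).assumingMemoryBound(to: UInt8.self)
        for x in 0..<width where row[x] == UInt8(ARConfidenceLevel.high.rawValue) {
            high += 1
        }
    }
    return Float(high) / Float(width * height)
}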
In my Reality Composer Pro workflow for Vision Pro development, I’m using xcrun realitytool image to pre-compress textures into .ktx format, typically using ASTC block compression. These textures are used for cubemaps and environment assets.
I’ve noticed that regardless of the image content—whether it’s a highly detailed photo or a completely black image—once compressed with the same ASTC block size (e.g., ASTC_8x8), the resulting .ktx file size is nearly identical. There appears to be no content-aware logic that adapts the compression ratio to the actual texture complexity.
In contrast, Unreal Engine behaves differently: even when all cubemap faces are imported at the same resolution as DDS textures, the engine performs content-aware compression during packaging:
Low-complexity images are compressed more aggressively
The final packaged file size varies based on content complexity
Since Reality Composer Pro requires textures to be pre-compressed as .ktx, there’s no opportunity for runtime optimization or per-image compression adjustment.
Just wondering: is there any recommended way to implement content-aware compression for .ktx textures in Reality Composer Pro?
Or any best practices to optimize .ktx sizes based on image complexity?
Thanks!
I am trying to build some projects. Please check out my project.
Hi,
When analyzing our game in Instruments, I've always been confused by the two items "Drawable Present" and "Drawable Presented" in the GPU track. The timing of "Drawable Present" seems to be when the CPU calls commandBuffer.present(drawable), rather than when the actual encoding completes on the GPU. Also, what does "Drawable Presented" specifically mean? In our case, when a CPU stall occurs, the vsync interval appears to change in the next frame, and a surface that has already been rendered is not displayed. Why is this happening?
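In case it helps the investigation, a small sketch (not tied to any particular engine) that logs both moments for a frame: the CPU-side call that schedules the present, and the time the drawable was actually shown, via the presented handler:

import Metal
import QuartzCore

// A sketch for correlating "present scheduled" with "actually presented".
func presentAndLog(_ drawable: CAMetalDrawable, with commandBuffer: MTLCommandBuffer) {
    let scheduled = CACurrentMediaTime()
    drawable.addPresentedHandler { presented in
        // presentedTime is the moment the drawable reached the display (0 if it never did).
        print("scheduled at \(scheduled), presented at \(presented.presentedTime)")
    }
    commandBuffer.present(drawable)
    commandBuffer.commit()
}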
I am trying to move from AppKit to SwiftUI. As a learning project I am building a cellular-automata-style project based on Paterson's Worms; I am trying something similar to the EA game Worms? for the Commodore 64. There is a video on YouTube of the game running, but I'm not allowed to link it here.
The problem I have is that the animation is driven by a ruleset. When the automaton hits a configuration that is not in the ruleset, it is supposed to stop and ask the user. For each step the model returns either the next move, or nil to indicate the user needs to make a choice, which is then sent back to the model to be added to the ruleset.
My current approach, and I might be on the wrong path, is a ZStack where the bottom level is the grid, the middle level is the established worm segments, and the top level is either the animation of the next worm segment or the chooser for the user to pick the segment. I've only implemented the animation of the next worm segment. The idea is that when the model adds a segment, it is first animated at the top level and then displayed by the middle level; then the top level animates the next segment. I was animating the trim on the segment to draw the line.
If the current move is nil, the middle level draws the segment. If the current move has a value, the animation draws it and then, on completion, sets the current move to nil so that the bottom level draws it.
The problem I ran into was resetting the animation to draw the next segment. I've tried two approaches. In one, the completion resets the animation boolean variable, but then I need a manual step to start the next stage of the animation. The other uses the completion to set the next step, but then the animation doesn't run for that step and the display is always a step behind. I'm not sure how to both update the move and reset the animation at the same time.
I have uploaded a simplified version without the full grid and with a simplified model to GitHub (https://github.com/thomasrdean/AnimationTest). Is there any other way to reset the animation than the completion, so I can use the completion to retrieve the next step from the model?
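Not a definitive answer, but one pattern that may help is to perform the non-animated reset and the animated change in separate transactions, and only advance the model in the completion. A minimal sketch (iOS 17+ for the withAnimation completion), where the model call is a hypothetical stand-in and all values are placeholders:

import SwiftUI

// A sketch of "reset without animation, then animate, then advance the model".
struct SegmentAnimator: View {
    @State private var progress: CGFloat = 0
    @State private var currentMove: CGPoint? = CGPoint(x: 80, y: 80)

    var body: some View {
        Path { path in
            path.move(to: .zero)
            if let move = currentMove { path.addLine(to: move) }
        }
        .trim(from: 0, to: progress)
        .stroke(.blue, lineWidth: 3)
        .onAppear { animateSegment() }
    }

    private func animateSegment() {
        // 1. Reset the trim without animating it.
        var t = Transaction()
        t.disablesAnimations = true
        withTransaction(t) { progress = 0 }

        // 2. Start the draw animation on the next runloop turn so the reset and the
        //    animated change land in different transactions.
        DispatchQueue.main.async {
            withAnimation(.linear(duration: 0.5)) {
                progress = 1
            } completion: {
                // 3. Settle the segment, then ask the model for the next move here,
                //    set `currentMove`, and call animateSegment() again.
                currentMove = nil
            }
        }
    }
}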
Hi,
It seems MSL is missing support for a clock() shader instruction, which is available in other graphics APIs such as Vulkan or OpenGL.
It would be useful for counting the cost, in clock cycles, of some code inside a shader with much finer granularity than launching a micro-kernel with the same instructions and measuring the cycle cost from the CPU.
It would also be useful for MoltenVK to be able to support the corresponding extensions.
Thanks.
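For contrast, the coarser CPU-side measurement mentioned above looks roughly like this in Swift (pipeline setup omitted; the dispatch size is a placeholder):

import Metal

// Times a whole command buffer on the GPU timeline, rather than individual
// instructions inside the shader; `pipeline` and the dispatch size are placeholders.
func timeKernel(pipeline: MTLComputePipelineState, queue: MTLCommandQueue) {
    guard let commandBuffer = queue.makeCommandBuffer(),
          let encoder = commandBuffer.makeComputeCommandEncoder() else { return }
    encoder.setComputePipelineState(pipeline)
    encoder.dispatchThreadgroups(MTLSize(width: 1, height: 1, depth: 1),
                                 threadsPerThreadgroup: MTLSize(width: 64, height: 1, depth: 1))
    encoder.endEncoding()
    commandBuffer.addCompletedHandler { cb in
        // gpuStartTime/gpuEndTime bound the whole buffer, not individual instructions.
        print("GPU time: \((cb.gpuEndTime - cb.gpuStartTime) * 1000) ms")
    }
    commandBuffer.commit()
}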
What is Game Mode?
Game Mode optimizes your gaming experience by giving your game the highest-priority access to the CPU and GPU and lowering their usage by background tasks. It also doubles the Bluetooth sampling rate, which reduces input latency and audio latency for wireless accessories like game controllers and AirPods.
See Use Game Mode on Mac
See Port advanced games to Apple platforms
How can I enable Game Mode in my game?
Add the Supports Game Mode property (GCSupportsGameMode) to your game's Info.plist and set it to true
Correctly identify your game's Application Category with LSApplicationCategoryType (also in the Info.plist)
Note:
Enabling Game Mode makes your game eligible but is not a guarantee; the OS decides at runtime whether it is OK to enable Game Mode
An app that enables Game Mode but isn’t a game will be rejected by App Review.
How can I disable Game Mode?
Set GCSupportsGameMode to false.
Note: On Mac, Game Mode is automatically disabled if the game isn't running full screen.
I can't create any breakpoints in Xcode after upgrading to macOS 15.4.
macOS: Version 15.4 (24E248)
visionOS Simulator: 2.3
Xcode: Version 16.2 (16C5032a)
My app works well without any breakpoints.
But if I create any breakpoint it shows me this:
Couldn't find the Objective-C runtime library in loaded images.
Message from debugger: The LLDB RPC server has crashed. You may need to manually terminate your process. The crash log is located in ~/Library/Logs/DiagnosticReports and has a prefix 'lldb-rpc-server'. Please file a bug and attach the most recent crash log.
I am developing a visionOS app. I am now very interested in Metal and Compositor Services, but I have not explored them in depth. I know that Metal gives a higher degree of control. I am wondering whether using Compositor Services provides fewer AR capabilities than RealityKit (such as scene reconstruction and understanding, hover effects, etc.).