Hello,
I have noticed a performance drop on SpriteKit-based projects running on iOS 26.0 (23A341).
Below is a SpriteKit scene used to test framerate on different devices:
import SpriteKit
import SwiftUI

class BareboneScene: SKScene {
    override func didMove(to view: SKView) {
        size = view.bounds.size
        anchorPoint = CGPoint(x: 0.5, y: 0.5)
        backgroundColor = .darkGray

        let roundedSquare = SKShapeNode(rectOf: CGSize(width: 150, height: 75), cornerRadius: 12)
        roundedSquare.fillColor = .systemRed
        roundedSquare.strokeColor = .black
        roundedSquare.lineWidth = 3
        addChild(roundedSquare)

        let action = SKAction.rotate(byAngle: .pi, duration: 1)
        roundedSquare.run(.repeatForever(action))
    }
}

struct BareboneSceneView: View {
    var body: some View {
        SpriteView(
            scene: BareboneScene(),
            debugOptions: [.showsFPS]
        )
        .ignoresSafeArea()
    }
}

#Preview {
    BareboneSceneView()
}
The scene is very simple, yet the framerate drops to ~40 fps, as shown by the Metal HUD. Tested on:
iPhone 13, iOS 26.0: framerate drops to ~40 fps. It sometimes runs at near 60 fps, but if the screen is touched repeatedly, the framerate drops back to 40-50 fps.
iPhone 11 Pro, iOS 26.0: ~40 fps.
iPad 9th gen, iOS 18.6.2: 60 fps, no issues.
See screenshots attached. These numbers were observed by me and members of our beloved SpriteKit Discord server.
Thank you for your attention.
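One variation that may be worth testing, in case this is frame-pacing related: SpriteView accepts a preferredFramesPerSecond parameter, so the frame rate can be pinned explicitly. A minimal sketch of the same view:

// Same view, with the frame rate pinned to 60.
SpriteView(
    scene: BareboneScene(),
    preferredFramesPerSecond: 60,
    debugOptions: [.showsFPS]
)
.ignoresSafeArea()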
I'm working on an application for viewing AMF models on macOS, using RealityKit. AMF supports several different ways to color models, including per-vertex color (where the color of a triangle is interpolated from vertex to vertex) as well as per-face color (where the color of the triangle is the same across the entire face).
I'm trying to figure out how to support those color models using a RealityKit mesh. Apple's documentation (https://developer.apple.com/documentation/realitykit/modifying-realitykit-rendering-using-custom-materials) talks about per-vertex colors, but I haven't found a way to create a mesh that includes per-vertex colors, other than using a texture map (which might be the correct solution).
Can someone give me some pointers?
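In case it helps frame the question, this is the texture-map workaround I'm considering for the per-face case: bake one texel per face into a small palette image, duplicate vertices per face, and point each face's UVs at its texel. A minimal sketch (all names illustrative, and I'm not sure this is the intended approach):

import RealityKit
import CoreGraphics

// Per-face colors via a 1 x N palette texture (N = face count).
// `positions` holds three entries per face (vertices already de-shared).
func makePerFaceColorEntity(positions: [SIMD3<Float>], faceColors: [CGColor]) throws -> ModelEntity {
    let faceCount = faceColors.count
    let context = CGContext(data: nil, width: faceCount, height: 1,
                            bitsPerComponent: 8, bytesPerRow: 0,
                            space: CGColorSpaceCreateDeviceRGB(),
                            bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)!
    for (i, color) in faceColors.enumerated() {
        context.setFillColor(color)
        context.fill(CGRect(x: i, y: 0, width: 1, height: 1))
    }
    let texture = try TextureResource.generate(from: context.makeImage()!,
                                               options: .init(semantic: .color))

    // Every vertex of face i samples the center of texel i.
    var uvs = [SIMD2<Float>]()
    for face in 0..<faceCount {
        let u = (Float(face) + 0.5) / Float(faceCount)
        uvs.append(contentsOf: repeatElement(SIMD2(u, 0.5), count: 3))
    }

    var descriptor = MeshDescriptor(name: "perFaceColor")
    descriptor.positions = MeshBuffer(positions)
    descriptor.textureCoordinates = MeshBuffer(uvs)
    descriptor.primitives = .triangles(Array(0..<UInt32(positions.count)))

    var material = UnlitMaterial()
    material.color = .init(tint: .white, texture: .init(texture))
    return ModelEntity(mesh: try MeshResource.generate(from: [descriptor]), materials: [material])
}

This obviously can't interpolate for the smooth per-vertex case, which is why I'm wondering whether a custom vertex attribute (e.g. via LowLevelMesh) is the intended route.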
I just upgraded macOS, Xcode, and the Simulator to the newest beta, version 26.
I then found two issues when building my app with Xcode 26 and running it on the iOS 26 simulator.
1. The Game Center access point no longer shows up in the app. This is how it has been configured in the past, and it still works on the 18.4 simulator:
func authenticatePlayer() {
    GKAccessPoint.shared.location = .topTrailing
    self.localPlayer.authenticateHandler = { viewController, error in
        if let viewController = viewController {
            // can present Game Center login screen
        } else if self.localPlayer.isAuthenticated {
            // game can be started
        } else {
            // user didn't log in, continue the game without Game Center
        }
    }
}
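For completeness, the access point is also activated once authentication succeeds, roughly like this (a minimal sketch, not the full code):

// The access point only appears once it is activated; setting location alone isn't enough.
if self.localPlayer.isAuthenticated {
    GKAccessPoint.shared.isActive = true
}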
2. After a game ends, the leaderboard won't load. This is how it has been implemented in the past; it still works on the 18.4 simulator:
struct GameCenterView: UIViewControllerRepresentable {
    @Environment(\.presentationMode) var presentationMode
    ...

    func makeUIViewController(context: Context) -> GKGameCenterViewController {
        let viewController = GKGameCenterViewController(
            leaderboardID: getLeaderBoardID(with: leaderBoardGameMode),
            playerScope: .global,
            timeScope: .allTime
        )
        viewController.gameCenterDelegate = context.coordinator
        return viewController
    }

    func updateUIViewController(_ uiViewController: GKGameCenterViewController, context: Context) {}

    func makeCoordinator() -> Coordinator {
        Coordinator(self)
    }

    class Coordinator: NSObject, GKGameCenterControllerDelegate {
        let parent: GameCenterView

        init(_ parent: GameCenterView) {
            self.parent = parent
        }

        func gameCenterViewControllerDidFinish(_ gameCenterViewController: GKGameCenterViewController) {
            parent.presentationMode.wrappedValue.dismiss()
        }
    }
}
Hello, I am trying to capture a screen recording (output.mp4) using ScreenCaptureKit, along with the mouse positions during the recording (mouse.json). The recording and the mouse positions (tracked from mouse-movement events only) need to be perfectly synced in order to add effects in post editing.
I started off by calling await stream?.startCapture() and starting my mouse-tracking function right after:
try await captureEngine.startCapture(configuration: config, filter: filter, recordingOutput: recordingOutput)
let captureStartTime = Date()
mouseTracker?.startTracking(with: captureStartTime)
But every time I tested, there was a clear inconsistency between the recorded video and the recorded mouse positions.
The only thing I want is to know when exactly the recording "actually" starts, so that I can start the mouse capture at that same moment. I tried using the delegates for this, but haven't been able to set them up properly.
import Foundation
import AVFAudio
import ScreenCaptureKit
import OSLog
import Combine

class CaptureEngine: NSObject, @unchecked Sendable {
    private let logger = Logger()
    private(set) var stream: SCStream?
    private var streamOutput: CaptureEngineStreamOutput?
    private var recordingOutput: SCRecordingOutput?
    private let videoSampleBufferQueue = DispatchQueue(label: "com.francestudio.phia.VideoSampleBufferQueue")
    private let audioSampleBufferQueue = DispatchQueue(label: "com.francestudio.phia.AudioSampleBufferQueue")
    private let micSampleBufferQueue = DispatchQueue(label: "com.francestudio.phia.MicSampleBufferQueue")

    func startCapture(configuration: SCStreamConfiguration, filter: SCContentFilter, recordingOutput: SCRecordingOutput) async throws {
        // Create the stream output delegate.
        let streamOutput = CaptureEngineStreamOutput()
        self.streamOutput = streamOutput
        do {
            stream = SCStream(filter: filter, configuration: configuration, delegate: streamOutput)
            try stream?.addStreamOutput(streamOutput, type: .screen, sampleHandlerQueue: videoSampleBufferQueue)
            try stream?.addStreamOutput(streamOutput, type: .audio, sampleHandlerQueue: audioSampleBufferQueue)
            try stream?.addStreamOutput(streamOutput, type: .microphone, sampleHandlerQueue: micSampleBufferQueue)
            self.recordingOutput = recordingOutput
            recordingOutput.delegate = self // this is the line that fails to compile (see error below)
            try stream?.addRecordingOutput(recordingOutput)
            try await stream?.startCapture()
        } catch {
            logger.error("Failed to start capture: \(error.localizedDescription)")
            throw error
        }
    }

    func stopCapture() async throws {
        do {
            try await stream?.stopCapture()
        } catch {
            logger.error("Failed to stop capture: \(error.localizedDescription)")
            throw error
        }
    }

    func update(configuration: SCStreamConfiguration, filter: SCContentFilter) async {
        do {
            try await stream?.updateConfiguration(configuration)
            try await stream?.updateContentFilter(filter)
        } catch {
            logger.error("Failed to update the stream session: \(String(describing: error))")
        }
    }

    func stopRecordingOutputForStream(_ recordingOutput: SCRecordingOutput) throws {
        try self.stream?.removeRecordingOutput(recordingOutput)
    }
}

// MARK: - SCRecordingOutputDelegate
extension CaptureEngine: SCRecordingOutputDelegate {
    func recordingOutputDidStartRecording(_ recordingOutput: SCRecordingOutput) {
        let startTime = Date()
        logger.info("Recording output did start recording \(startTime)")
    }

    func recordingOutputDidFinishRecording(_ recordingOutput: SCRecordingOutput) {
        logger.info("Recording output did finish recording")
    }

    func recordingOutput(_ recordingOutput: SCRecordingOutput, didFailWithError error: any Error) {
        logger.error("Recording output failed with error: \(error.localizedDescription)")
    }
}

private class CaptureEngineStreamOutput: NSObject, SCStreamOutput, SCStreamDelegate {
    private let logger = Logger()

    override init() {
        super.init()
    }

    func stream(_ stream: SCStream, didOutputSampleBuffer sampleBuffer: CMSampleBuffer, of outputType: SCStreamOutputType) {
        guard sampleBuffer.isValid else { return }
        switch outputType {
        case .screen:
            break
        case .audio:
            break
        case .microphone:
            break
        @unknown default:
            logger.error("Encountered unknown stream output type: \(String(describing: outputType))")
        }
    }

    func stream(_ stream: SCStream, didStopWithError error: Error) {
        logger.error("Stream stopped with error: \(error.localizedDescription)")
    }
}
I am getting the error:
Value of type 'SCRecordingOutput' has no member 'delegate'
even though I am targeting macOS 15+ (macOS 26, actually) and macOS only.
What is the best way to achieve the desired result? Is there another or better way to do it?
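From what I can tell, SCRecordingOutput takes its delegate in the initializer rather than exposing a settable property, which would explain the compile error. A minimal sketch of what I believe the intended setup is (recordingConfiguration being an SCRecordingOutputConfiguration):

// Pass the delegate at construction; there is no settable `delegate` property.
let recordingOutput = SCRecordingOutput(configuration: recordingConfiguration, delegate: self)
try stream?.addRecordingOutput(recordingOutput)

And for the sync itself, a possibly more robust anchor than any delegate callback is the presentation timestamp of the first video sample buffer, which, as far as I know, is taken against the host time clock. A sketch of the idea (call this from the .screen case for the first buffer; this is an assumption on my part, not verified):

import CoreMedia

// Sketch: wall-clock time at which a frame was captured, assuming
// ScreenCaptureKit stamps sample buffers against the host time clock.
func wallClockDate(of sampleBuffer: CMSampleBuffer) -> Date {
    let pts = sampleBuffer.presentationTimeStamp
    let hostNow = CMClockGetTime(CMClockGetHostTimeClock())
    let age = CMTimeSubtract(hostNow, pts).seconds  // seconds since the frame was captured
    return Date().addingTimeInterval(-age)
}

Anchoring the mouse.json timestamps to that first-frame date should remove the startCapture()-to-first-frame latency from the offset.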
To reproduce:
Same SIM card with 4G, same testing location, connected to the same server; debugging the game in Xcode and viewing retransmissions and the average round-trip time in the network profiler.
iPhone 17, 4G off, Wi-Fi on: all of the above indicators are acceptable.
iPhone 17, 4G on, Wi-Fi off: heavy retransmission and a very high average round-trip time.
iPhone 14-16, 4G on, Wi-Fi off: all of the above indicators are acceptable.
App:
Unity3D project
.NET Framework 4.0
C# Socket
Other:
Many developers on Chinese forums have reported this issue.
I typically read an extended gamepad via capture() and get all of its state, but PSVR2 controllers seem to report nothing, so the sticks and other buttons don't do anything in a built app. They do register as left/right controllers. This is on visionOS 26, Xcode 26, etc.
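For reference, the polling I mean looks roughly like this (a minimal sketch; controller is any connected GCController):

import GameController

// Poll a frozen snapshot of the controller state each frame.
func poll(_ controller: GCController) {
    let snapshot = controller.capture()
    guard let pad = snapshot.extendedGamepad else { return }
    print(pad.leftThumbstick.xAxis.value,
          pad.leftThumbstick.yAxis.value,
          pad.buttonA.isPressed)
}

With a PSVR2 controller connected, these values never change for me.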
They work correctly in the main icon view, although they don't honor inverted vertical and horizontal scrolling. Both of the default scroll directions just feel wrong: when I move left I want to scroll left, not right, and the same for up/down.
Hi,
Since iOS 26 introduced the new Games app, I’ve noticed a problem when using a Nintendo Switch Pro Controller in wired USB-C mode, and also with third-party controllers that emulate it (like the GameSir X5 Lite).
In the Games app interface, only the L/R buttons respond, but the D-Pad and analog sticks don’t work at all. Once inside actual games, the controller works fine — the issue only affects the Games app UI.
What I’ve tested so far:
Xbox / PlayStation controllers → work fine in both wired and Bluetooth, including inside the Games app.
Switch Pro Controller (Bluetooth) → works fine, including in the Games app.
Switch Pro Controller (wired) → same issue as the X5 Lite, D-Pad and sticks don’t work in the Games app.
This makes it hard to use the new Games app launcher with these controllers, even though they work perfectly once a game is launched.
My question: is this an iOS bug (Apple needs to add proper support for wired Switch Pro controllers in the Games app), or something that Nintendo / GameSir would need to address?
Thanks in advance to anyone who can confirm this or provide more info.
I am rewriting an unfinished SceneKit project in RealityKit (non-AR). As far as I can see, RealityKit is missing basic fog functionality.
Fog was simple and easy to implement in SceneKit (fogStartDistance / fogEndDistance / fogDensityExponent / fogColor). Are there any plans to implement something like this in RealityKit?
Are there any simple workarounds?
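The closest I've gotten to a workaround is a crude per-entity approximation rather than real per-pixel fog: fading each model out by camera distance every frame. A minimal sketch, assuming a non-AR ARView and OpacityComponent (which I believe requires macOS 15 / iOS 18); names are illustrative:

import RealityKit
import simd

// Rough stand-in for SceneKit's fogStartDistance/fogEndDistance: fade
// entities toward invisible as they get farther from the camera.
func applyFakeFog(in arView: ARView, to entities: [Entity], start: Float = 10, end: Float = 40) {
    let cameraPosition = arView.cameraTransform.translation
    for entity in entities {
        let distance = simd_distance(entity.position(relativeTo: nil), cameraPosition)
        let fog = min(max((distance - start) / (end - start), 0), 1)  // 0 = clear, 1 = fully fogged
        entity.components.set(OpacityComponent(opacity: 1 - fog))
    }
}

Run from a SceneEvents.Update subscription. It only fades whole entities against the background color rather than blending per pixel toward a fog color, so a CustomMaterial surface shader is probably the more faithful workaround.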
The title is self-explanatory: I wasn't able to find CAMetalDisplayLink in the most recent metal-cpp release (metal-cpp_macOS15_iOS18-beta). Are there any plans to include it in the next release?
I'm updating our app to support Metal 4, but the Metal 4 types don't seem to be recognized when targeting the simulator. Is it known whether Metal 4 will be supported there in the near future, or am I setting the app up wrong?
I'm trying to use MTLBinaryArchive. I collected a BinaryArchive from one device and used metal-tt to translate it for all supported iPhone devices, ranging from iPhone 7 Plus to iPhone 16.
However, this BinaryArchive is quite large, around 1.5GB uncompressed, and about 500MB compressed in the IPA. I'm wondering how to address the size issue.
I watched the WWDC 2022 video, which mentioned that the operating system or the app-installation process would handle compatibility. Does that compatibility extend across different GPU chips? I tried installing an IPA with a BinaryArchive collected only on an iPhone 12 onto an iPhone 13, but the BinaryArchive didn't take effect.
I also saw that Apple supports App Thinning. However, it seems that resources in the Asset Catalog cannot be accessed via URL, and creating an MTLBinaryArchive requires a URL. Is it possible to distribute an MTLBinaryArchive through App Thinning?
The WWDC 2022 video also mentioned using the -Os optimization flag to reduce size. Is there an estimate of how much that would shrink the archive? Are there any ways to address the BinaryArchive size issue without impacting performance?
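For context, this is roughly how the archive gets loaded on-device, which is why a file URL is required (minimal sketch; the resource name is illustrative):

import Foundation
import Metal

// Load a harvested binary archive and attach it to a pipeline descriptor.
let device = MTLCreateSystemDefaultDevice()!
let archiveDescriptor = MTLBinaryArchiveDescriptor()
archiveDescriptor.url = Bundle.main.url(forResource: "pipelines", withExtension: "metallib")  // illustrative
let archive = try device.makeBinaryArchive(descriptor: archiveDescriptor)

let pipelineDescriptor = MTLRenderPipelineDescriptor()
// ... set vertex/fragment functions as usual ...
pipelineDescriptor.binaryArchives = [archive]  // a hit here skips runtime shader compilation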
I want to use SwiftUI and RealityView to get AR scene understanding data (ARMeshAnchor) on iOS devices with LiDAR. The only way we can do that is by using ARSession (unless there is another way).
However in previous iOS 18 builds there was this function:
https://developer.apple.com/documentation/realitykit/spatialtrackingsession/run(_:session:arconfiguration:)
, which worked with SpatialTrackingSession and a custom ARSession together. This function has since been removed from the RealityKit framework in the latest iOS and Xcode, but it is still in the documentation.
I also wanted to get ARFaceAnchor data, which I still cannot get without ARSession; the closest I can get is by using:
let target = AnchoringComponent.Target.face
let anchoringComponent = AnchoringComponent(target, trackingMode: .predicted)
entity = Entity()
entity!.components.set(anchoringComponent)
But I still can't find a way to get the current frame (ARFrame) or the anchors ([ARAnchor]) in the view.
Alternatively, if I use this function: https://developer.apple.com/documentation/realitykit/spatialtrackingsession/run(_:) and start the ARSession separately, the session callbacks (didUpdate and didAdd) only run for a few frames before getting interrupted.
And if I completely remove the SpatialTrackingSession configuration and just run the ARSession, there is still a valid tracked entity for the AnchoringComponent.Target.face component, provided the ARSession's configuration is an ARWorldTrackingConfiguration with face tracking enabled, and I still get updated facial data each frame. But the ARSession's didUpdate and didAdd functions don't get called past the first few frames.
Interestingly, if I switch the RealityViewCameraContent.RealityViewCamera to .virtual, I get ARMeshAnchor and ARFaceAnchor data, but no camera feed (as expected). This happens with or without the SpatialTrackingSession configuration.
My overarching question is: what is the proper way to access ARMeshAnchors and other ARAnchors created by the system, and to track them live, while also using SwiftUI?
GitHub Repo with sample project can be found here: https://github.com/bpate75/RealityViewTesting
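For reference, the SpatialTrackingSession setup I'm describing is along these lines (a minimal sketch of the still-available run(_:) overload on iOS 18+, with .world as an example capability):

import RealityKit

func startTracking() async {
    let session = SpatialTrackingSession()
    let configuration = SpatialTrackingSession.Configuration(tracking: [.world])
    // My understanding is that the return value lists any capabilities that could not be enabled.
    _ = await session.run(configuration)
}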
Hello
If you add a ModelEntity to a world inside a portal, the drawing of the model is occluded properly at the portal bounds.
However, the invisible shapes of the InputTargetComponent and CollisionComponent are not occluded. They are able to cross the portal, and if you have gestures on your ModelEntity, you can trigger them in areas outside the portal bounds. This happens even if the ModelEntity has no PortalCrossingComponent.
Apple, please bring back SceneKit.
I can't create any breakpoints in Xcode after upgrading to macOS 15.4.
macOS: Version 15.4 (24E248)
visionOS Simulator: 2.3
Xcode: Version 16.2 (16C5032a)
My app works fine without any breakpoints.
But if I create any breakpoint, it shows me this:
Couldn't find the Objective-C runtime library in loaded images.
Message from debugger: The LLDB RPC server has crashed. You may need to manually terminate your process. The crash log is located in ~/Library/Logs/DiagnosticReports and has a prefix 'lldb-rpc-server'. Please file a bug and attach the most recent crash log.
Hello,
I am trying to use the subdivision mesh rendering option.
I can see it working in Reality Composer Pro:
But not when loading the asset and displaying it in the Simulator:
Using this code:
import SwiftUI
import RealityKit
import RealityKitContent

struct AirspaceView: View {
    // MARK: - VIEW BODY
    var body: some View {
        RealityView { content in
            if let a = try? await Entity(named: "Models/Test/Test.usdc", in: realityKitContentBundle) {
                content.add(a)
            }
        }
    }
}
Any ideas why?
For an app of mine I use CGSetDisplayTransferByTable to adjust the gamma table of the device. Since macOS Tahoe, these modifications are silently ignored. The display's actual gamma curve remains unchanged despite the API reporting successful completion.
I filed an FB for it a few weeks ago, and would love to figure out what could be causing this.
FB18559786
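For reference, a minimal repro of the kind of call that's affected (a slightly brightened linear ramp; values are illustrative):

import CoreGraphics

let display = CGMainDisplayID()
let tableSize = 256
var table = [CGGammaValue]()
for i in 0..<tableSize {
    table.append(min(1.0, CGGammaValue(i) / CGGammaValue(tableSize - 1) * 1.1))
}
// Reports .success on Tahoe, but the display's gamma curve no longer changes.
let result = CGSetDisplayTransferByTable(display, UInt32(tableSize), table, table, table)
print(result == .success)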
Dear Apple, there is still no support for WebGPU via WebView. Do you have a roadmap for when you plan to support it? That would be very helpful for releasing our new game.
When running my game in the Unity Editor on the Windows platform, I get an error:
DllNotFoundException: GameKitWrapper assembly:<unknown assembly> type:<unknown type> member:(null)
Apple.GameKit.DefaultNSErrorHandler.Init () (at ./Library/PackageCache/com.apple.unityplugin.gamekit@0abcad546f73/Source/DefaultHandlers.cs:35)
This is because the GameKitWrapper dynamically linked library is not available on the Windows platform.
Besides, the "Apple Build Settings" are declared under UNITY_EDITOR_OSX and so are also not available on the Windows platform.
Has anyone managed to solve this?
This started happening after updating to Xcode 14.3:
[default] CGSWindowShmemCreateWithPort failed on port 0