Delve into the world of graphics and game development. Discuss creating stunning visuals, optimizing game mechanics, and sharing resources for game developers.

Posts under Graphics & Games topic (all subtopics)

Request low-latency streaming for iOS/iPadOS
Just found out this key is available for visionOS: https://developer.apple.com/documentation/bundleresources/entitlements/com.apple.developer.low-latency-streaming It seems to keep video streaming from being interrupted by AWDL, and our community needs it badly for self-hosted game streaming (PC to iPhone / iPad). Related apps: Moonlight / VoidLink / SteamLink. Can we expect this on iOS/iPadOS 26, or even iOS/iPadOS 18?
1 reply · 3 boosts · 373 views · Activity: Nov ’25
VRAM not freeing in Elite Dangerous
So I've been trying out GPTK with Elite Dangerous Horizons, and from what I can tell, VRAM usage keeps going up until it goes over the limit, at which point the FPS drops to 1-3 and the game crashes. From the Performance HUD I can see that when using GPTK, VRAM usage just keeps climbing, and I never saw it drop down at all. I did some limited testing, and from that I think I can conclude that it is probably not a VRAM leak, but that the game might be caching it: I noticed that if I went back to an area I'd already visited, VRAM usage did not increase. So either there is something wrong with the freeing of VRAM, or GPTK might not be reporting the right amount of VRAM available to use, which would explain why it keeps allocating VRAM until it runs out of memory and the game crashes. Just to test, I tried running the game with the DXVK+MoltenVK combo, and that works just fine: VRAM is freed when it's no longer used. Is this a known issue in some games?
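Not a fix, but for anyone who wants to compare what Metal itself reports as the VRAM budget versus what is currently allocated, here is a minimal sketch. Note that it only reports allocations made by the process running it (it cannot see inside the GPTK-translated game), so it is mainly useful for checking that the working-set limit Metal advertises on your Mac looks sane; the 10-second polling interval is arbitrary.

import Foundation
import Metal

// Sketch: print Metal's VRAM budget hint and the current allocation for this process.
guard let device = MTLCreateSystemDefaultDevice() else { fatalError("No Metal device") }
print("Recommended max working set:", device.recommendedMaxWorkingSetSize / 1_048_576, "MB")

Timer.scheduledTimer(withTimeInterval: 10, repeats: true) { _ in
    print("Currently allocated:", device.currentAllocatedSize / 1_048_576, "MB")
}
RunLoop.main.run()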
12 replies · 3 boosts · 943 views · Activity: Apr ’25
Showing a MTLTexture on an Entity in RealityKit
Is there any standard way of efficiently showing a MTLTexture on a RealityKit Entity? I can't find anything proper on how to, for example, generate a LowLevelTexture out of a MTLTexture. The closest match was this two-year-old thread. In the old SceneKit app, we would just do:

guard let material = someNode.geometry?.materials.first else { return }
material.diffuse.contents = mtlTexture

Our flow is as follows (for visualizing the currently detected object): camera stream -> CoreML segmentation -> send the relevant part of the MLShapedArray tensor to a Metal compute shader that returns a MTLTexture -> show the resulting texture on a 3D object to the user.
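One workable (if not maximally efficient) path is sketched below: copy the MTLTexture to a CGImage on the CPU, generate a TextureResource from it, and assign it to an UnlitMaterial. This round-trips through Core Image every frame, so treat it as a stopgap compared to a true LowLevelTexture/DrawableQueue route; the function and parameter names are placeholders, and the image may need a vertical flip depending on your texture origin.

import RealityKit
import CoreImage
import Metal

// Sketch: CPU round-trip from a MTLTexture to a RealityKit material.
// Assumes `mtlTexture` is a readable color texture produced by the compute pass.
func apply(_ mtlTexture: MTLTexture, to entity: ModelEntity) throws {
    let ciContext = CIContext()
    guard let ciImage = CIImage(mtlTexture: mtlTexture, options: nil),
          let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) else { return }

    // Upload as a RealityKit texture and show it unlit so the segmentation colors are unchanged.
    let textureResource = try TextureResource.generate(from: cgImage,
                                                       options: .init(semantic: .color))
    var material = UnlitMaterial()
    material.color = .init(tint: .white, texture: .init(textureResource))
    entity.model?.materials = [material]
}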
5 replies · 0 boosts · 1.1k views · Activity: Sep ’25
Per-vertex color in a custom RealityKit mesh? (macOS)
I'm working on an application for viewing AMF models on macOS, using RealityKit. AMF supports several different ways to color models, including per-vertex color (where the color of a triangle is interpolated from vertex to vertex) as well as per-face color (where the color of the triangle is the same across the entire face). I'm trying to figure out how to support those color models using a RealityKit mesh. Apple's documentation (https://developer.apple.com/documentation/realitykit/modifying-realitykit-rendering-using-custom-materials) talks about per-vertex colors, but I haven't found a way to create a mesh that includes per-vertex colors, other than using a texture map (which might be the correct solution). Can someone give me some pointers?
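If the texture-map route turns out to be the practical answer, here is one sketch of it: give every vertex a UV coordinate that indexes into a small palette texture, so "per-vertex color" becomes a texture lookup. This is a workaround under assumptions (not a native RealityKit vertex-color channel), the names are placeholders, and the result only approximates true barycentric interpolation; exact per-face color would want one texel per face and nearest filtering.

import RealityKit

// Sketch: fake per-vertex color by pointing each vertex's UV at its own texel
// in a small palette texture (here a 3x1 palette, one texel per vertex, built elsewhere).
func makeColoredTriangle(palette: TextureResource) throws -> ModelEntity {
    var descriptor = MeshDescriptor(name: "coloredTriangle")
    descriptor.positions = MeshBuffer([SIMD3<Float>(0, 0, 0),
                                       SIMD3<Float>(1, 0, 0),
                                       SIMD3<Float>(0, 1, 0)])
    // Sample the centers of texels 0, 1 and 2 of the 3x1 palette.
    descriptor.textureCoordinates = MeshBuffer([SIMD2<Float>(1.0/6.0, 0.5),
                                                SIMD2<Float>(3.0/6.0, 0.5),
                                                SIMD2<Float>(5.0/6.0, 0.5)])
    descriptor.primitives = .triangles([0, 1, 2])

    var material = UnlitMaterial()
    material.color = .init(tint: .white, texture: .init(palette))
    let mesh = try MeshResource.generate(from: [descriptor])
    return ModelEntity(mesh: mesh, materials: [material])
}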
6 replies · 2 boosts · 1.9k views · Activity: Nov ’25
Metal debug log in Swift Package
My goal is to print a debug message from a shader. I'm following the guide, which says to set the -fmetal-enable-logging Metal compiler flag and the following environment variables:

MTL_LOG_LEVEL=MTLLogLevelDebug
MTL_LOG_BUFFER_SIZE=2048
MTL_LOG_TO_STDERR=1

However, there's an issue with the guide: it only covers Xcode project setup, whereas I'm working on a Swift package. It has a Metal-only target that's included in the main target like this:

targets: [
    // A separate target for shaders.
    .target(
        name: "MetalShaders",
        resources: [
            .process("Metal")
        ],
        plugins: [
            // https://github.com/schwa/MetalCompilerPlugin
            .plugin(name: "MetalCompilerPlugin", package: "MetalCompilerPlugin")
        ]
    ),
    // Main target
    .target(
        name: "MegApp",
        dependencies: ["MetalShaders"]
    ),
    .testTarget(
        name: "MegAppTests",
        dependencies: [
            "MegApp",
            "MetalShaders",
        ]
    )
]

So to apply the compiler flag I use MetalCompilerPlugin, which emits debug.metallib; it also allows defining a DEBUG macro for shaders. This code compiles:

#ifdef DEBUG
logger.log_error("Hello There!");
os_log_default.log_debug("Hello thread: %d", gid); // this proves that the code executes
result.flag = true;
#endif

The environment variables are set via the .xctestplan and validated to work with ProcessInfo. However, nothing is printed to the Xcode console or to the Console app. In an attempt to fix it I tried setting up an MTLLogState, but makeLogState(descriptor:) fails with an error:

if #available(iOS 18.0, *) {
    let logDescriptor = MTLLogStateDescriptor()
    logDescriptor.level = .debug
    logDescriptor.bufferSize = 2048
    // Error Domain=MTLLogStateErrorDomain Code=2 "Cannot create residency set for MTLLogState: (null)" UserInfo={NSLocalizedDescription=Cannot create residency set for MTLLogState: (null)}
    let logState = try! device.makeLogState(descriptor: logDescriptor)
    commandBufferDescriptor.logState = logState
}

Some LLMs suggested that this is connected to the Simulator, and indeed, I run the tests on the Simulator. However, the tests don't want to run on an iPhone... I found a solution by running them on My Mac (Mac Catalyst). Surprisingly, the descriptor log works there, even without MTLLogState. But the Simulator behaviour seems like a bug...
1 reply · 1 boost · 559 views · Activity: 3w
Struggles with attaching a ModelEntity to the skeleton joints of another ModelEntity
In SceneKit, when creating an .scn file from a rigged model, the framework created an SCNNode for each bone/joint, so you could add and remove child nodes directly to and from joints, and, like any other SCNNode, you could access the world position and world orientation of each joint. The analog would be for joints to be accessible as child entities of a ModelEntity in RealityKit. I am unable to proceed with migrating my project from SceneKit because of this, as there does not seem to be a way to even access the true world position of a joint with the current jointNames/jointTransforms paradigm. The translation information from the given transforms is insufficient to determine the location of a joint at any given time, and other approaches, like creating a GeometricPin for the given joint name and attaching it to another entity, do not seem to work. Something as convenient as attaching an item to the hand of a rigged model was trivial in SceneKit and now feels impossible in RealityKit. I am not the first person to notice this, and I'm feeling demoralized about proceeding with RealityKit with such a critical piece of functionality blocked: https://stackoverflow.com/questions/76726241/how-do-i-attach-an-entity-to-a-skeletons-joint-in-realitykit Will this be addressed in some way?
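For what it's worth, one workaround people reach for (sketched below, untested here) is to rebuild a joint's model-space matrix by composing the local jointTransforms up the parent chain encoded in jointNames, which are slash-separated paths. This assumes the paths really do encode the hierarchy and the transforms compose in that order; treat it as an assumption, not documented behavior.

import Foundation
import RealityKit
import simd

// Sketch: recover a joint's transform in the model's own space by walking up the
// joint hierarchy; multiply the result by the entity's world matrix to place an attachment.
func modelSpaceJointMatrix(named jointPath: String, in model: ModelEntity) -> float4x4? {
    guard let index = model.jointNames.firstIndex(of: jointPath) else { return nil }
    var matrix = model.jointTransforms[index].matrix
    var path = jointPath
    // jointNames entries look like "root/hips/spine/...", so the parent is the path
    // with its last component removed.
    while let slash = path.range(of: "/", options: .backwards) {
        path = String(path[..<slash.lowerBound])
        if let parentIndex = model.jointNames.firstIndex(of: path) {
            matrix = model.jointTransforms[parentIndex].matrix * matrix
        }
    }
    return matrix
}

// Usage sketch (hypothetical joint path):
// let world = model.transformMatrix(relativeTo: nil) * modelSpaceJointMatrix(named: "root/hips/arm_R/hand_R", in: model)!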
5 replies · 2 boosts · 786 views · Activity: Jul ’25
PingFang.ttc font file is missing in iOS 18.0
I'm an iOS developer, and I've been testing our app on the iOS 18.0 beta. I noticed a problem with font rendering, and after troubleshooting I found that it's caused by the removal of the PingFang.ttc font in 18.0. I would like to ask the reason for removing this font file, and which font should be used to display Chinese in the future? My test device is an iPhone 11 Pro and the system version is iOS 18.0 (22A5297). I have also tested beta 1 and it has the same issue. In previous versions of the system, the PingFang font was located at /System/Library/Fonts/LanguageSupport/PingFang.ttc. But in iOS 18.0, the font file in this directory has become Kohinoor.ttc, and I've tested that this font can't display Chinese either. I traversed the following system font directories and could not find the PingFang.ttc font file:

/System/Library/Fonts/AppFonts
/System/Library/Fonts/Core
/System/Library/Fonts/CoreAddition
/System/Library/Fonts/CoreUI
/System/Library/Fonts/LanguageSupport
/System/Library/Fonts/UnicodeSupport
/System/Library/Fonts/Watch

Looking for answers, thanks for the help!
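A quick runtime check (a sketch, not an official recommendation) is to ask UIKit whether the PingFang faces still resolve by PostScript name, independent of where the .ttc file lives on disk, and to rely on the system-chosen font for app text rather than a specific font file:

import UIKit

// Sketch: do the PingFang faces still resolve by name on this OS build?
let faceNames = ["PingFangSC-Regular", "PingFangTC-Regular", "PingFangHK-Regular"]
for name in faceNames {
    print(name, UIFont(name: name, size: 17) != nil ? "available" : "missing")
}

// Letting the system pick the face per language avoids depending on PingFang.ttc directly.
let label = UILabel()
label.font = UIFont.preferredFont(forTextStyle: .body)
label.text = "中文测试"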
9 replies · 2 boosts · 6.2k views · Activity: Apr ’25
PhotogrammetrySession fails with internal errors 4011 / 4012 when using iOS Object Capture (Area Mode) images
Hi all, I’m running into an issue when trying to reconstruct a 3D model using PhotogrammetrySession on macOS from a set of images captured via the iOS Object Capture sample app, specifically in Area mode. When I attempt to create the model from these images (using the raw Images/ folder exported directly from the capture session), I get the following errors:

ERROR cv3dapi.pg: Internal error codes (2): 4011 4012
WARN cv3dapi.pg: Internal warning codes (1): 4507
Output error with code = -15
requestError: CoreOC.PhotogrammetrySession.Error.processError

I use the "Images" directory exported directly from Object Capture with my iPhone 12 Pro Max (which has LiDAR) set to "area mode" in the Object Capture app. Here is example HEIC image metadata from the sequence:

heif-info Images/00044.869568833.HEIC
MIME type: image/heic
main brand: heic
compatible brands: mif1, MiHE, MiPr, miaf, MiHB, heic
image: 3024x4032 (id=49), primary
  tiles: 6x8, tile size: 512x512
  colorspace: YCbCr, 4:2:0
  bit depth: 8
thumbnail: 240x320
color profile: nclx
alpha channel: no
depth channel: yes
  size: 192x256
  bits per pixel: 8
  z-near: 1.173828
  z-far: 2.552734
  d-min: undefined
  d-max: undefined
  representation: uniform Z
metadata:
  Exif: 960 bytes
  uri /tag:apple.com,2023:ObjectCapture#CameraTrackingState: 4 bytes
  uri /tag:apple.com,2023:ObjectCapture#CameraCalibrationData: 1015 bytes
  uri /tag:apple.com,2023:ObjectCapture#ObjectTransform: 48 bytes
  uri /tag:apple.com,2023:ObjectCapture#ObjectBoundingBox: 48 bytes
  uri /tag:apple.com,2023:ObjectCapture#RawFeaturePoints: 832 bytes
  uri /tag:apple.com,2023:ObjectCapture#PointCloudData: 23984 bytes
  uri /tag:apple.com,2023:ObjectCapture#BundleVersion: 5 bytes
  uri /tag:apple.com,2023:ObjectCapture#SegmentID: 4 bytes
  uri /tag:apple.com,2024:ObjectCapture#SessionUUID: 36 bytes
  uri /tag:apple.com,2024:ObjectCapture#CaptureMode: 4 bytes
  uri /tag:apple.com,2023:ObjectCapture#Feedback: 4 bytes
  uri /tag:apple.com,2023:ObjectCapture#WideToDepthCameraTransform: 48 bytes
  uri /tag:apple.com,2023:ObjectCapture#TemporalDepthPointClouds: 864026 bytes
transformations:
  angle (ccw): 270
region annotations: none
properties:
  camera intrinsic matrix:
    focal length: 2813.695557; 2813.695557
    principal point: 1522.338502; 2002.843018
    skew: 0.000000
  camera extrinsic matrix:
    rotation matrix:
      -0.695  0.344 -0.632
       0.007 -0.875 -0.483
      -0.719 -0.340  0.606

Questions:
• What do internal error codes 4011 and 4012 refer to?
• Is there something specific about Area mode captures that requires preprocessing before they’re compatible with PhotogrammetrySession?
• Has anyone successfully reconstructed a model from an Area mode session using the stock Apple tools?

NOTE: I can provide the folder of images for debugging if that would help!
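For reference, a stripped-down macOS reconstruction loop (a sketch; the folder and output paths are placeholders) that can be pointed at the same Images/ directory to rule out anything app-specific:

import Foundation
import RealityKit

// Sketch: minimal PhotogrammetrySession run over an Object Capture image folder.
func reconstruct() async throws {
    let inputFolder = URL(fileURLWithPath: "/path/to/Images", isDirectory: true)
    let outputURL = URL(fileURLWithPath: "/path/to/model.usdz")

    var configuration = PhotogrammetrySession.Configuration()
    configuration.featureSensitivity = .normal

    let session = try PhotogrammetrySession(input: inputFolder, configuration: configuration)
    try session.process(requests: [.modelFile(url: outputURL, detail: .medium)])

    for try await output in session.outputs {
        switch output {
        case .requestError(let request, let error):
            print("Request \(request) failed: \(error)")   // the 4011/4012 case surfaces here
        case .requestComplete(let request, let result):
            print("Request \(request) finished: \(result)")
        case .processingComplete:
            return
        default:
            break
        }
    }
}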
1 reply · 2 boosts · 968 views · Activity: Jul ’25
Is migrating from ARView to RealityView recommended?
We're using RealityKit to create a science education AR app for iOS, iPadOS, and visionOS. In the WWDC25 session video "Bring your SceneKit project to RealityKit" https://developer.apple.com/videos/play/wwdc2025/288 at 8:15, it's explained that when using RealityKit, RealityView should be used in all cases, whereas in the past, SceneKit required SCNView, SceneView, or ARSCNView, depending on an app's requirements. Because the initial development of our app on iOS predates iOS 18's RealityView, our app currently uses ARView to render RealityKit AR content on iOS and iPadOS. Is it recommended that we migrate to RealityView, or can we safely continue using our existing ARView implementation? We'd prefer to avoid unnecessary development cost. If migrating from ARView to RealityView is recommended, what specific benefits should we expect from this transition? Thank you.
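Not an answer on the recommendation itself, but for scoping the work, this is roughly what the iOS 18+ RealityView path looks like. It is a sketch only: the .spatialTracking camera mode is my recollection of the cross-platform RealityView API, so treat the exact names as assumptions, and ARView remains available either way.

import SwiftUI
import RealityKit

// Sketch: RealityView rendering RealityKit AR content on iOS/iPadOS 18+.
struct ARContentView: View {
    var body: some View {
        RealityView { content in
            content.camera = .spatialTracking   // AR passthrough camera (assumed name)
            let sphere = ModelEntity(mesh: .generateSphere(radius: 0.05),
                                     materials: [SimpleMaterial(color: .blue, isMetallic: false)])
            sphere.position = [0, 0, -0.5]
            content.add(sphere)
        }
        .ignoresSafeArea()
    }
}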
1 reply · 2 boosts · 233 views · Activity: Jun ’25
GameKit not working as expected in iOS 26.
I just upgraded my macOS, Xcode, and Simulator to the newest beta, version 26, and I found two issues when building my app with Xcode 26 and running it on the iOS 26 simulator.

1. The Game Center access point no longer shows up in the app. This is how it has been configured in the past, and it still works on simulator 18.4:

func authenticatePlayer() {
    GKAccessPoint.shared.location = .topTrailing
    self.localPlayer.authenticateHandler = { viewController, error in
        if let viewController = viewController {
            // can present Game Center login screen
        } else if self.localPlayer.isAuthenticated {
            // game can be started
        } else {
            // user didn't log in, continue the game without game center
        }
    }
}

2. After the game ends, the leaderboard won't load. This is how it's been implemented in the past, and it's still working on simulator 18.4:

struct GameCenterView: UIViewControllerRepresentable {
    @Environment(\.presentationMode) var presentationMode
    ...
    func makeUIViewController(context: Context) -> GKGameCenterViewController {
        let viewController = GKGameCenterViewController(
            leaderboardID: getLeaderBoardID(with: leaderBoardGameMode),
            playerScope: .global,
            timeScope: .allTime
        )
        viewController.gameCenterDelegate = context.coordinator
        return viewController
    }

    func updateUIViewController(_ uiViewController: GKGameCenterViewController, context: Context) {}

    func makeCoordinator() -> Coordinator { Coordinator(self) }

    class Coordinator: NSObject, GKGameCenterControllerDelegate {
        let parent: GameCenterView

        init(_ parent: GameCenterView) {
            self.parent = parent
        }

        func gameCenterViewControllerDidFinish(_ gameCenterViewController: GKGameCenterViewController) {
            parent.presentationMode.wrappedValue.dismiss()
        }
    }
}
5 replies · 2 boosts · 432 views · Activity: Sep ’25
SpriteKit framerate drop on iOS 26.0
Hello, I have noticed a performance drop in SpriteKit-based projects running on iOS 26.0 (23A341). Below is a SpriteKit scene used to test the framerate on different devices:

import SpriteKit
import SwiftUI

class BareboneScene: SKScene {
    override func didMove(to view: SKView) {
        size = view.bounds.size
        anchorPoint = CGPoint(x: 0.5, y: 0.5)
        backgroundColor = .darkGray

        let roundedSquare = SKShapeNode(rectOf: CGSize(width: 150, height: 75), cornerRadius: 12)
        roundedSquare.fillColor = .systemRed
        roundedSquare.strokeColor = .black
        roundedSquare.lineWidth = 3
        addChild(roundedSquare)

        let action = SKAction.rotate(byAngle: .pi, duration: 1)
        roundedSquare.run(.repeatForever(action))
    }
}

struct BareboneSceneView: View {
    var body: some View {
        SpriteView(
            scene: BareboneScene(),
            debugOptions: [.showsFPS]
        )
        .ignoresSafeArea()
    }
}

#Preview {
    BareboneSceneView()
}

The scene is very simple, yet the framerate drops to ~40 fps, as shown by the Metal HUD. Tested on:

iPhone 13, iOS 26.0: framerate drops to 40 fps. Sometimes it runs at near 60 fps, but if the screen is touched repeatedly, the framerate drops to 40-50 fps again.
iPhone 11 Pro, iOS 26.0: ~40 fps.
iPad 9th Gen, iOS 18.6.2: 60 fps, no issues.

See screenshots attached. These numbers were observed by me and members of our beloved SpriteKit Discord server. Thank you for your attention.
13 replies · 6 boosts · 2.6k views · Activity: 2w
RealityKit captureHighResolutionFrame from session is broken on iOS 26?
A bit of background on what our app is doing: we have a RealityKit ARView session running, and during this period we place objects in RealityKit. At some point the user can "take a photo", and we use session.captureHighResolutionFrame to capture a frame. We then use the captured frame and frame.camera.projectPoint to project our objects back to 2D.

The issue we found is that on devices running iOS 26, for the first photo the user takes, the first frame received from session.captureHighResolutionFrame gives an incorrect CGPoint from frame.camera.projectPoint. If the user takes a second photo with the same camera position, the second frame received from session.captureHighResolutionFrame gives a correct CGPoint from frame.camera.projectPoint.

I noticed a difference between the first and subsequent frames that I believe corresponds with the issue: the yaw value of the camera (frame.camera.eulerAngles.y) on the first frame is not correct (inconsistent with any subsequent frame).

I also created a small example app, following the Building an Immersive Experience with RealityKit example. The issue exists in this app on iOS 26, while iOS 18.* has consistent values between the first and subsequent captured frames.

Note: the yaw value seems to differ more if we start the session in portrait but take the photo in landscape.

Example result for 3 captured frames:

Frame captured with yaw: 1.4855177402496338
Frame captured with yaw: -0.08803760260343552
Frame captured with yaw: -0.08179682493209839

Example code:

class CustomARView: ARView, ARSessionDelegate {
    required init(frame: CGRect) {
        super.init(frame: frame)
    }

    required init?(coder decoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    func setup() {
        let singleTap = UITapGestureRecognizer(target: self, action: #selector(handleTap))
        addGestureRecognizer(singleTap)
    }

    @objc func handleTap(_ gestureRecognizer: UIGestureRecognizer) {
        Task {
            do {
                let frame = try await session.captureHighResolutionFrame()
                print("Frame captured with yaw: \(Double(frame.camera.eulerAngles.y))")
            } catch {
            }
        }
    }
}

struct CustomARViewUIViewRepresentable: UIViewRepresentable {
    func makeUIView(context: Context) -> some UIView {
        let arView = CustomARView(frame: .zero)
        arView.setup()
        return arView
    }

    func updateUIView(_ uiView: UIViewType, context: Context) { }
}

struct ContentView: View {
    var body: some View {
        CustomARViewUIViewRepresentable()
            .frame(maxWidth: .infinity, maxHeight: .infinity)
            .ignoresSafeArea()
    }
}
3 replies · 1 boost · 607 views · Activity: Sep ’25
How to attach SwiftUI Views to entities on non-visionOS platforms?
What is the recommended way to attach SwiftUI views to RealityKit entities on macOS, iOS, etc.? All the APIs seem to be visionOS-only:

https://developer.apple.com/documentation/realitykit/realityviewattachments
https://developer.apple.com/documentation/realitykit/viewattachmentcomponent
https://developer.apple.com/documentation/realitykit/presentationcomponent
https://developer.apple.com/documentation/realitykit/imagepresentationcomponent

My only idea is to do it "manually" with a ZStack and RealityView somehow? I submitted this as feedback since it seemed like an oversight: FB18034856.
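One sketch of that manual route (an assumption/workaround, not an official attachment API): project the entity's world position into view coordinates each frame and position an overlaid SwiftUI view there. The helper below uses ARView on iOS; the names are placeholders.

import RealityKit
import UIKit

// Sketch: where to place a SwiftUI overlay for a given entity. Call this from a
// SceneEvents.Update subscription and feed the result into a ZStack overlay's
// .position(_:) via some observable state.
func overlayPosition(for entity: Entity, in arView: ARView) -> CGPoint? {
    arView.project(entity.position(relativeTo: nil))
}

// Per-frame hook (sketch):
// updateSubscription = arView.scene.subscribe(to: SceneEvents.Update.self) { _ in
//     overlayCenter = overlayPosition(for: targetEntity, in: arView)
// }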
0 replies · 2 boosts · 130 views · Activity: Jun ’25
Roblox freezing on macOS 26.1 Beta
After updating to macOS 26.1 I encountered an issue where Roblox tends to freeze quite often, for 10-60 seconds at most. This is really annoying, as I play the game a lot. My theory is that it is something like a driver issue with Metal. I have reinstalled macOS, reinstalled the game, and manually lowered the performance settings, but nothing is working. Wondering if you could help, when it will be fixed, and whether others are having the same issue. Many thanks, William.
4 replies · 2 boosts · 816 views · Activity: Oct ’25
Spatial Scene API for iOS Apps
As part of the WWDC25 Keynote, a technology was announced that can present 2D images as 3D spatial scenes. The announcement is backed by a press release: "...developers can use the Spatial Scene API to make their app experience even more immersive. Zillow is taking advantage of the API for their Zillow Immersive app, allowing users to see images of homes and apartments with the rich depth and dimension that spatial scenes offer." The feature also appears in the Photos app on iOS 26 Developer Beta 1: tapping "Spatial Scene" on any photo opens a view of that photo with a parallax effect. I've searched the WWDC sessions and the new documentation and have come up short, so I'm reaching out here for help. Is there any documentation for the Spatial Scene API? Or any guidance on how to implement spatial scenes in iOS?
1 reply · 2 boosts · 336 views · Activity: Jun ’25
GameKit Access Point causes camera background image in ARKit to be black in iOS 18 beta only
I have an AR game using ARKit with SceneKit that works just fine in iOS 17. In the iOS 18 betas, the AR background image shows black instead of showing the real world. As a result there's no tracking, and obviously the whole game is useless.

I narrowed the issue down to showing the Game Center access point. My app has ViewController 1 (VC1) showing the main menu, and that's where I want to show the GC access point. From there you open VC2, which shows a list of levels. Selecting any level opens VC3, which has the ARScene. Following is the code I use to start Game Center in VC1:

GKLocalPlayer.local.authenticateHandler = { gcAuthVC, error in
    let isGameCenterReady = (gcAuthVC == nil) && (error == nil)
    if let viewController = gcAuthVC {
        self.present(viewController, animated: true, completion: nil)
    }
    if error != nil {
        print(error?.localizedDescription ?? "")
    }
    if isGameCenterReady {
        GKAccessPoint.shared.location = .topLeading
        GKAccessPoint.shared.showHighlights = true
        GKAccessPoint.shared.isActive = true
    }
}

When switching to VC2 I run GKAccessPoint.shared.isActive = false so that the access point will no longer show in any of the following VCs. I tried running it in VC1, VC2, and again in VC3 - it doesn't change anything. Once I reach VC3, the background is black.

If in VC1 I don't run GKAccessPoint.shared.isActive = true, so the access point is never activated, the behavior is as follows: if I wait until the Game Center login animation completes and closes on its own, and then proceed to VC2 and VC3, the camera image shows correctly. If I quickly move to VC2 before the Game Center login animation has completed, so my code closes it by setting active = false, and then I continue to VC3, I see the black background problem. So it does look like activating the access point and then deactivating it causes the issue. BTW, if I activate the access point and leave it on in all VCs, the same black background issue persists.

Other than that, when I'm in VC3 with the black background and I switch to another app (so my game moves to the background), when it returns to the foreground the camera suddenly shows the real world correctly! I tried to manually reset the AR session by pausing and restarting it, but that didn't change anything. Also, when I check with the debugger, it looks like when the app comes back to the foreground it doesn't run the session start code either. But something does seem to reset itself, so I wonder what that is. Maybe I could trigger the same thing manually in my code?

I repeat that everything works just fine in iOS 17 and below. This problem only started with the iOS 18 beta (currently on beta 5, but it started in some of the previous betas as well). So could this be a bug in iOS 18?

As a workaround I could check the iOS version and, if it's iOS 18, not activate the access point, hoping that the user will not jump to VC2 too quickly, and show my own button which opens Game Center. But I'd rather give users the full experience with their own avatar and the highlights showing up. Plus, certainly some users will move quickly to VC2 and that will be an awful experience. Any help would be greatly appreciated. Thanks!
11 replies · 2 boosts · 1.1k views · Activity: Mar ’25
# [CRITICAL] Metal RHI Memory Leak - Resource exhaustion vulnerability (CWE-400) - Bug Report
[CRITICAL] Metal API Memory Leak - Heap Memory Never Released to OS (CWE-400)

Security Classification

This issue constitutes a resource exhaustion vulnerability (CWE-400):

Aspect       | Details
Type         | Uncontrolled Resource Consumption
CWE          | CWE-400
Vector       | Local (any Metal application)
Impact       | System instability, denial of service
User Control | None - no mitigation available
Recovery     | Requires application restart

Summary

Metal heap allocations are never released back to macOS, even when the memory is entirely unused. This causes continuous, unbounded memory growth until system instability or crash. The issue affects any application using Metal API heap allocation. This was discovered in Unreal Engine 5, but it reproduces in a completely blank UE5 project with zero application code - confirming this is Metal framework behavior, not application-level.

Environment

OS: macOS Tahoe 26.2
Hardware: Apple Silicon M4 Max (also reproduced on M1, M2, M3)
API: Metal

Reproduction Steps

1. Run any Metal application that allocates and deallocates GPU buffers via Metal heaps
2. Open Activity Monitor and observe the application's memory usage
3. Let the application run idle (no user interaction required)
4. Observe memory growing continuously at ~1-2 MB per second
5. Memory never plateaus or stabilizes
6. Eventually the system becomes unstable

For testing: any Unreal Engine 5.4+ project on macOS will reproduce this. Even a blank project with no gameplay code exhibits the leak. (Tested on UE 5.7.1)

Observed Behavior

Memory Analysis

Using Unreal's memreport -full command, two reports taken 86 seconds apart:

Metric            | Report 1 (183s) | Report 2 (269s) | Delta
Process Physical  | 4373.64 MB      | 4463.39 MB      | +89.75 MB
Metal Heap Buffer | 7168 MB         | 8192 MB         | +1024 MB
Unused Heap       | 3453 MB         | 4477 MB         | +1024 MB
Object Count      | 73,840          | 73,840          | 0 (no change)

Key Finding

The Metal heap grew by exactly 1 GB while "Unused Heap" also grew by 1 GB. This demonstrates:

- Metal is allocating new heap blocks in ~1 GB increments
- Previously allocated heap memory becomes "unused" but is never released
- The unused memory accumulates indefinitely
- No application-level objects are leaking (the count remains constant)

Memory Growth Pattern

- Continuous growth while idle (no user interaction)
- Growth rate: approximately 1-2 MB per second
- No plateau or stabilization occurs
- Metal allocates new 1 GB heap blocks rather than reusing freed space
- Eventually leads to system instability and crash

What is NOT Causing This

We verified the following are NOT the source:

- Application objects - object count remains constant
- Application code - a blank project with no code reproduces the issue
- Texture streaming - disabling texture streaming had no effect
- CPU garbage collection - running GC has no effect (this is GPU memory)

Mitigations Attempted (None Worked)

setPurgeableState
Setting resources to purgeable state before release:
[buffer setPurgeableState:MTLPurgeableStateEmpty];
Result: Metal ignores this hint and does not reclaim heap memory.

Avoiding Heap Pooling
Forcing individual buffer allocations instead of heap-based pooling.
Result: Leak persists - Metal still manages the underlying allocations.

Aggressive Buffer Compaction
Attempting to compact/defragment buffers within heaps every frame.
Result: Only moves data between existing heaps. Does NOT release heaps back to the OS.

Reducing Pool Sizes
Minimizing all buffer pool sizes to force more frequent reuse.
Result: Slightly slows the leak rate but does not stop it.
Root Cause Analysis

How Metal Heap Allocation Appears to Work

1. Metal allocates GPU heap blocks in large chunks (~1 GB observed)
2. The application requests buffers from these heaps
3. When the application releases buffers, the memory becomes "unused" within the heap
4. Metal does NOT release heap blocks back to macOS, even when entirely unused
5. When fragmentation prevents reuse, Metal allocates new heap blocks
6. Result: continuous memory growth with no upper bound

The Core Problem

There appears to be no Metal API to force heap memory release. The only way to reclaim this memory is to destroy the Metal device entirely, which requires restarting the application.

Expected Behavior

Metal should:

- Release unused heaps - when a heap block is entirely unused, release it back to macOS
- Respect purgeable hints - honor setPurgeableState calls from applications
- Compact allocations - defragment heap allocations to reduce fragmentation
- Provide control APIs - allow applications to request heap compaction or release
- Enforce limits - have a configurable maximum heap memory consumption

Security Implications

- Local Denial of Service - any Metal application can exhaust system memory, causing instability affecting all running applications
- Memory Pressure Attack - forces other applications to swap to disk, degrading system-wide performance
- No Upper Bound - memory consumption continues until system failure
- Unmitigable - end users have no way to prevent or limit the leak
- Affects All Metal Apps - any application using Metal heaps is potentially affected

Impact

- Applications become unstable after extended use
- System-wide performance degrades as memory pressure increases
- Users must periodically restart applications
- Developers cannot work around this at the application level
- Long-running applications (games, creative tools, servers) are particularly affected

Request

- Investigate Metal heap memory management behavior
- Implement heap release when blocks become entirely unused
- Honor setPurgeableState hints from applications
- Consider providing an API for applications to request heap compaction
- Document any intended behavior or workarounds

Additional Notes

This issue has been observed across multiple Unreal Engine versions (5.4, 5.7) and multiple Apple Silicon generations (M1 through M4). The behavior is consistent and reproducible. The Unreal Engine team has implemented various CVars to attempt mitigation (rhi.Metal.HeapBufferBytesToCompact, rhi.Metal.ResourcePurgeInPool, etc.) but none successfully address the issue, because the root cause is at the Metal framework level.

Tested: January 2026
Platform: macOS Tahoe 26.2, Apple Silicon (M1/M2/M3/M4)
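For anyone who wants to poke at the described behavior outside Unreal, here is a minimal, self-contained Swift sketch (not the UE5 repro; heap size, buffer size, and loop counts are arbitrary) that allocates heap-backed buffers, drops them, hints purgeability, and prints what Metal reports before and after:

import Metal

// Sketch: create a private heap, allocate and release buffers from it, then see whether
// the device-level allocation figure drops once everything has been released.
guard let device = MTLCreateSystemDefaultDevice() else { fatalError("No Metal device") }

let heapDescriptor = MTLHeapDescriptor()
heapDescriptor.size = 256 * 1024 * 1024            // 256 MB heap
heapDescriptor.storageMode = .private
guard let heap = device.makeHeap(descriptor: heapDescriptor) else { fatalError("Heap creation failed") }

print("Device allocation before:", device.currentAllocatedSize / 1_048_576, "MB")

do {
    var buffers: [MTLBuffer] = []
    for _ in 0..<16 {
        if let buffer = heap.makeBuffer(length: 8 * 1024 * 1024, options: .storageModePrivate) {
            buffers.append(buffer)
        }
    }
    buffers.removeAll()                            // drop every buffer reference
}

_ = heap.setPurgeableState(.empty)                 // hint that the heap contents are disposable
print("Heap used:", heap.usedSize / 1_048_576, "MB of", heap.currentAllocatedSize / 1_048_576, "MB allocated")
print("Device allocation after:", device.currentAllocatedSize / 1_048_576, "MB")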
5 replies · 2 boosts · 907 views · Activity: 2w
Are there complete code examples available for “Combine Metal 4 machine learning and graphics”?
Hello, I recently watched the WWDC2025 session titled "Combine Metal 4 machine learning and graphics" (https://developer.apple.com/videos/play/wwdc2025/262/), and I'm very excited about the new Metal 4 features that integrate machine learning with graphics, such as neural ambient occlusion, shader-based ML inference, and the use of MTLTensor and MTL4MachineLearningCommandEncoder. While the session includes helpful code snippets and a compelling debug demo (e.g., the neural ambient occlusion example), the implementation details are not fully shown, and I haven't been able to find a complete, runnable sample project that demonstrates end-to-end integration of ML and rendering in Metal 4. Would Apple be able to provide a full, working example, such as an Xcode project, that shows how to:

• Export a model to an .mlpackage,
• Convert it to an .mtlpackage,
• Use MTL4MachineLearningCommandEncoder alongside render passes,
• Or embed small neural networks directly in shaders using Shader ML?

Having such a sample would greatly help developers like me adopt these powerful new capabilities correctly and efficiently. Thank you very much for your time and support! Best regards,
4 replies · 2 boosts · 932 views · Activity: Nov ’25
GestureComponent does not support DragGesture
The following code using the new GestureComponent demonstrates an inconsistency: the tap gesture prints output, but the drag gesture does not. I already checked this post, which points to this seemingly outdated sample code; I assume that example is deprecated in favour of the now built-in version of GestureComponent. Nonetheless, there are no compiler warnings or errors; it just fails silently. TapGesture, LongPressGesture, MagnifyGesture, and RotateGesture all work, so this feels like an oversight.

RealityView { content in
    let testEntity = ModelEntity(mesh: .generateBox(size: .init(x: 1, y: 1, z: 1)))
    testEntity.position = SIMD3<Float>(0, 0, -1)
    testEntity.components.set(InputTargetComponent())
    testEntity.components.set(CollisionComponent(
        shapes: [.generateBox(size: .init(x: 1, y: 1, z: 1))]
    ))

    let testGesture = TapGesture()
        .onEnded { value in
            print("Tapped")
        }
    testEntity.components.set(GestureComponent(testGesture))

    let dragGesture = DragGesture()
        .onEnded { value in
            print("Dragged")
        }
    testEntity.components.set(GestureComponent(dragGesture))

    content.add(testEntity)
}
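Until GestureComponent handles DragGesture, one possible workaround (a sketch, not a confirmed fix) is to attach the drag gesture to the RealityView itself and target it at entities; it works against the same InputTargetComponent/CollisionComponent setup:

import SwiftUI
import RealityKit

// Sketch: drive the drag from the view instead of GestureComponent.
struct DraggableBoxView: View {
    var body: some View {
        RealityView { content in
            let box = ModelEntity(mesh: .generateBox(size: .init(x: 1, y: 1, z: 1)))
            box.position = SIMD3<Float>(0, 0, -1)
            box.components.set(InputTargetComponent())
            box.components.set(CollisionComponent(
                shapes: [.generateBox(size: .init(x: 1, y: 1, z: 1))]
            ))
            content.add(box)
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    print("Dragging \(value.entity.name)")
                }
                .onEnded { _ in
                    print("Dragged")
                }
        )
    }
}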
3 replies · 1 boost · 409 views · Activity: Jul ’25