Delve into the world of graphics and game development. Discuss creating stunning visuals and optimizing game mechanics, and share resources for game developers.

Posts under the Graphics & Games topic
Xcode 26.x Metal 4 classes do not compile
Hi, I am using Xcode 26.x, but my Metal 4 classes are not compiling. I downloaded the sample code from Apple's website - https://developer.apple.com/documentation/Metal/processing-a-texture-in-a-compute-function - and I am getting errors like "Cannot find protocol declaration for 'MTL4CommandQueue'". I have hit a deadline, so any recommendations are very welcome. I have downloaded the Metal toolchain. When I run the following commands in the terminal - xcodebuild -showComponent metalToolchain ; xcrun -f metal ; xcrun metal --version - I get the following response:

Asset Path: /System/Library/AssetsV2/com_apple_MobileAsset_MetalToolchain/86fbaf7b114a899754307896c0bfd52ffbf4fded.asset/AssetData
Build Version: 17A321
Status: installed
Toolchain Identifier: com.apple.dt.toolchain.Metal.32023
Toolchain Search Path: /Users/private/Library/Developer/DVTDownloads/MetalToolchain/mounts/86fbaf7b114a899754307896c0bfd52ffbf4fded

/Users/private/Library/Developer/DVTDownloads/MetalToolchain/mounts/86fbaf7b114a899754307896c0bfd52ffbf4fded/Metal.xctoolchain/usr/bin/metal

Apple metal version 32023.830 (metalfe-32023.830.2)
Target: air64-apple-darwin24.6.0
Thread model: posix
InstalledDir: /Users/private/Library/Developer/DVTDownloads/MetalToolchain/mounts/86fbaf7b114a899754307896c0bfd52ffbf4fded/Metal.xctoolchain/usr/metal/current/bin
Replies: 1 · Boosts: 0 · Views: 560 · Activity: 3w
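The error above usually means the Metal 4 headers are not visible to the compiler, which points at the base SDK or deployment target rather than the Metal toolchain itself. A minimal sketch (not from the post; the availability versions are an assumption) that only compiles when the Metal 4 declarations are in scope:

import Metal

// Assumption: Metal 4 protocols such as MTL4CommandQueue are declared only in
// the iOS 26 / macOS 26 SDKs. If this function compiles, the headers are
// visible; "Cannot find protocol declaration" typically means the project's
// base SDK or deployment target predates them.
@available(iOS 26.0, macOS 26.0, *)
func describe(_ queue: any MTL4CommandQueue) {
    print("Metal 4 command queue type:", type(of: queue))
}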
How to configure RealityKit entities for animations on a modular character?
I am currently using RealityKit (with a perspective camera) to render a character in my SwiftUI app. The character has customization such as clothing items and hair, and all objects are properly weighted to the rig. The model is set up in Blender like so: groups of objects that will be swapped (e.g., Shoes -> Shoes objects) and an armature. I then export it to usdc with all objects active. This is the resulting entity hierarchy, viewed in Reality Composer Pro: My problem is that when I export with the Armature modifier applied to the objects, so that animations get exported, the ModelComponent gets flattened to the armature, and swapping entities is no longer as simple as removing the entity with the corresponding name. What's the best practice here? Should animation be exported separately and then applied to the skeleton? If so, how is that achieved? I'm not really sure how to proceed here.
Replies: 1 · Boosts: 0 · Views: 72 · Activity: May ’25
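One common pattern for modular characters is to export the animation clips separately and retarget them onto the character's skeleton at runtime. A hedged sketch (the file name "CharacterAnimations.usdc" is hypothetical, and retargeting assumes the clip and the character share the same joint names):

import RealityKit

// Load a separately exported clip and play it on the rigged character.
// Swapped clothing entities remain children of the same skeleton, so they
// follow the animation once it is playing.
@MainActor
func playClip(on character: Entity) async throws {
    let clipSource = try await Entity(named: "CharacterAnimations.usdc")
    guard let clip = clipSource.availableAnimations.first else { return }
    _ = character.playAnimation(clip.repeat(), transitionDuration: 0.25, startsPaused: false)
}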
macOS Catalina 10.15.7: CoreGraphics.framework symbols not found
I recently needed to develop an application that obtains the window list, which requires Screen Recording permission. Apple's official documentation mentions using the two functions CGPreflightScreenCaptureAccess and CGRequestScreenCaptureAccess to request permission, and these functions are stated to be available since version 10.15. However, when I used these two functions on a device running macOS 10.15.7, I encountered the errors shown in the attached screenshot. I used the nm tool to inspect the symbols in CoreGraphics.framework and found that these two functions were not present. Could you help me understand why this is happening?
Replies: 0 · Boosts: 0 · Views: 87 · Activity: May ’25
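The nm result suggests the two functions, although declared for 10.15 in the headers, are simply absent from the CoreGraphics binary on that 10.15.7 installation. A sketch of a defensive workaround (the nil fallback behavior is an assumption): resolve the symbol at runtime with dlsym instead of linking it directly, so the app still launches where the symbol is missing.

import CoreGraphics
import Darwin

// Resolve CGPreflightScreenCaptureAccess at runtime instead of linking it
// directly, so a build against a newer SDK still runs on a 10.15 system whose
// CoreGraphics binary does not export the symbol.
private typealias PreflightFn = @convention(c) () -> Bool

func canPreflightScreenCapture() -> Bool? {
    // RTLD_DEFAULT on Darwin is the pseudo-handle with bit pattern -2.
    let rtldDefault = UnsafeMutableRawPointer(bitPattern: -2)
    guard let sym = dlsym(rtldDefault, "CGPreflightScreenCaptureAccess") else {
        return nil // symbol not exported by this OS build
    }
    let preflight = unsafeBitCast(sym, to: PreflightFn.self)
    return preflight()
}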
MetalFX for Unity 2022.3.62f3?
Hi, I’m testing Unity’s Spaceship HDRP demo on iPhone 17 Pro Max and iPad Pro M4 (iOS 26.1). Everything renders correctly, and my custom MetalFX Spatial plugin initializes successfully — it briefly reports active scaling (e.g. 1434×660 → 2868×1320 at 50% scaling), then reverts to native rendering a few frames later.

Setup:
Xcode 16.1 (targeting iOS 18)
Unity 2022.3.62f3 (HDRP)
Metal backend
Dynamic Resolution enabled in HDRP assets and cameras

Relevant Xcode console excerpt:
[MetalFXPlugin] MetalFX_Enable(True) called.
[SpaceshipOptions] MetalFX enabled with HDRP dynamic resolution integration.
[SpaceshipOptions] Disabled TAA for MetalFX Spatial.
[SpaceshipOptions] Created runtime RenderTexture: 1434x660
[MetalFX] Spatial scaler created (1434x660 → 2868x1320).
[MetalFX] Processed frame with scaler.
[MetalFXPlugin] Sent RenderTexture (1434x660) to MetalFX. Output target 2868x1320.
[SpaceshipOptions] MetalFX target set: 1434x660
[SpaceshipOptions] Camera targetTexture cleared after MetalFX handoff.

It looks like HDRP clears the camera’s target texture right after MetalFX submits the frame, which causes it to revert to native rendering. Is there a recommended way to persist or rebind the MetalFX output texture when using HDRP on iOS? Unity doesn’t appear to support MetalFX in the Editor either. Thanks!
Replies: 0 · Boosts: 0 · Views: 134 · Activity: Nov ’25
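On the native side, one way to keep the upscaled image from being discarded is to let the plugin own both the spatial scaler and a long-lived output texture, and blit that texture to the drawable itself rather than handing it back to HDRP's camera target. A rough sketch of the MetalFX setup (the sizes and pixel format mirror the log above; the plugin-owned-output approach is an assumption, not a Unity-documented integration):

import Metal
import MetalFX

// Create a spatial scaler once, plus a persistent output texture owned by the
// plugin, so the upscaled result survives even if the engine clears its own
// camera target after the handoff.
func makeSpatialScaler(device: MTLDevice) -> (MTLFXSpatialScaler, MTLTexture)? {
    let desc = MTLFXSpatialScalerDescriptor()
    desc.inputWidth = 1434
    desc.inputHeight = 660
    desc.outputWidth = 2868
    desc.outputHeight = 1320
    desc.colorTextureFormat = .bgra8Unorm
    desc.outputTextureFormat = .bgra8Unorm
    guard let scaler = desc.makeSpatialScaler(device: device) else { return nil }

    let outDesc = MTLTextureDescriptor.texture2DDescriptor(
        pixelFormat: .bgra8Unorm, width: 2868, height: 1320, mipmapped: false)
    outDesc.usage = [.renderTarget, .shaderRead]
    outDesc.storageMode = .private
    guard let output = device.makeTexture(descriptor: outDesc) else { return nil }
    return (scaler, output)
}

// Per frame: bind the engine-provided color texture, encode the upscale, then
// blit `output` to the drawable from the plugin instead of relying on HDRP.
func encodeUpscale(scaler: MTLFXSpatialScaler, input: MTLTexture,
                   output: MTLTexture, commandBuffer: MTLCommandBuffer) {
    scaler.colorTexture = input
    scaler.outputTexture = output
    scaler.encode(commandBuffer: commandBuffer)
}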
How to use CharacterControllerComponent.
I am trying to implement a CharacterControllerComponent using the following URL. https://developer.apple.com/documentation/realitykit/charactercontrollercomponent I have written sample code, but PhysicsSimulationEvents.WillSimulate is not executed and nothing happens.

import SwiftUI
import RealityKit
import RealityKitContent

struct ImmersiveView: View {
    let gravity: SIMD3<Float> = [0, -50, 0]
    let jumpSpeed: Float = 10

    enum PlayerInput {
        case none, jump
    }

    @State private var testCharacter: Entity = Entity()
    @State private var myPlayerInput = PlayerInput.none

    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(immersiveContentEntity)
                testCharacter = immersiveContentEntity.findEntity(named: "Capsule")!
                testCharacter.components.set(CharacterControllerComponent())

                let _ = content.subscribe(to: PhysicsSimulationEvents.WillSimulate.self, on: testCharacter) { event in
                    print("subscribe run")
                    let deltaTime: Float = Float(event.deltaTime)
                    var velocity: SIMD3<Float> = .zero
                    var isOnGround: Bool = false

                    // RealityKit automatically adds `CharacterControllerStateComponent` after moving the character for the first time.
                    if let ccState = testCharacter.components[CharacterControllerStateComponent.self] {
                        velocity = ccState.velocity
                        isOnGround = ccState.isOnGround
                    }

                    if !isOnGround {
                        // Gravity is a force, so you need to accumulate it for each frame.
                        velocity += gravity * deltaTime
                    } else if myPlayerInput == .jump {
                        // Set the character's velocity directly to launch it in the air when the player jumps.
                        velocity.y = jumpSpeed
                    }

                    testCharacter.moveCharacter(by: velocity * deltaTime, deltaTime: deltaTime, relativeTo: nil) { event in
                        print("playerEntity collided with \(event.hitEntity.name)")
                    }
                }
            }
        }
    }
}

The scene is loaded from RCP. It is simple, just a capsule on a pedestal. Do I need separate code to drive testCharacter from this state?
Replies: 0 · Boosts: 0 · Views: 139 · Activity: May ’25
CGContextAddArc not respecting clipping rects for open paths
macOS 15.2, MBP M1, built-in display. The following code, which draws a clockwise arc to a CGLayer, produces a line outside the bounds of my clipping region:

CGContextBeginPath(m_s.outContext);
CGContextAddArc(m_s.outContext, leftX + radius, topY - radius, radius, -startRads, -endRads, 1);
CGContextSetRGBStrokeColor(m_s.outContext, col.fRed(), col.fGreen(), col.fBlue(), col.fAlpha());
CGContextSetLineWidth(m_s.outContext, width);
CGContextStrokePath(m_s.outContext);

Drawing other shapes such as rects or ellipses doesn't cause a problem. I can work around the issue by bringing the path back to the start of the arc:

CGContextBeginPath(m_s.outContext);
CGContextAddArc(m_s.outContext, leftX + radius, topY - radius, radius, -startRads, -endRads, 1);
// add a second arc back to the start
CGContextAddArc(m_s.outContext, leftX + radius, topY - radius, radius, -endRads, -startRads, 0);
CGContextSetRGBStrokeColor(m_s.outContext, col.fRed(), col.fGreen(), col.fBlue(), col.fAlpha());
CGContextSetLineWidth(m_s.outContext, width);
CGContextStrokePath(m_s.outContext);

But this does appear to be a bug.
Replies: 1 · Boosts: 0 · Views: 526 · Activity: Jan ’25
Animations for streaming
We have a macOS app (not yet released, but in use by ourselves) that provides scoreboards for streaming sport events. Today it is expected that there are nice animations for goals, etc. We stream using NDI, which requires a CVPixelBuffer for each frame. We currently create these animations using CABasicAnimation, CAAnimation, and CAKeyframeAnimation, and we use ScreenCaptureKit to generate the frames. This works fine at 25/30 fps as long as the window where our animations are performed is visible, but this is not how it should be. We have a smaller window as the main app window and control display, performing the animations at reduced size, while the streaming animations need to be in HD format and later maybe 4K. When using an offscreen window, the animations are not calculated; we get about 1 frame per second. So we actually have to connect an external display to the MacBook and open the large windows there — an ugly solution. Are we using a completely wrong approach? Or is there a way to tell macOS to perform the animations even though the window is offscreen? If it cannot work that way, what is an alternative?
Replies: 0 · Boosts: 0 · Views: 137 · Activity: May ’25
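One alternative to capturing a visible window is to drive Core Animation offscreen yourself and render the layer tree into a Metal texture with CARenderer, then convert that texture into the CVPixelBuffer the NDI sender needs. A rough sketch (the 1920×1080 size is an assumption, the texture-to-CVPixelBuffer copy is left out, and frame timing is driven by whatever clock you pass in):

import QuartzCore
import Metal

// Render a CALayer tree into a Metal texture without any visible window.
final class OffscreenLayerRenderer {
    private let renderer: CARenderer
    let texture: MTLTexture

    init?(device: MTLDevice, layer: CALayer, width: Int = 1920, height: Int = 1080) {
        let desc = MTLTextureDescriptor.texture2DDescriptor(
            pixelFormat: .bgra8Unorm, width: width, height: height, mipmapped: false)
        desc.usage = [.renderTarget, .shaderRead]
        guard let tex = device.makeTexture(descriptor: desc) else { return nil }
        texture = tex
        renderer = CARenderer(mtlTexture: tex, options: nil)
        renderer.layer = layer
        renderer.bounds = CGRect(x: 0, y: 0, width: width, height: height)
    }

    // Call at the stream's frame rate (e.g. from a display link or timer).
    func renderFrame(at time: CFTimeInterval) {
        renderer.beginFrame(atTime: time, timeStamp: nil)
        renderer.addUpdate(renderer.bounds)
        renderer.render()
        renderer.endFrame()
    }
}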
Why does game mode not get triggered for my App?
I think I have really tried everything, and I did everything according to the official documentation to support Game Mode on iOS and iPadOS, but no matter what I do it just doesn't get triggered. Funny enough, it works during development when I install the app via Xcode, but as soon as it is live on the store and I install it from there, Game Mode doesn't get triggered anymore. What I have at the moment:

I have added (even though it is deprecated)
<key>GCSupportsGameMode</key>
<true/>

I have set (but it seems to be supported only on macOS)
<key>LSApplicationCategoryType</key>
<string>public.app-category.games</string>

I have added
<key>LSSupportsGameMode</key>
<true/>

It just doesn't work. Is there anything else that needs to be done? Shouldn't the LSSupportsGameMode flag normally be enough? The reason this is so annoying is that my app is a real-time streaming app, and I want to profit from minimized background activity for smoother gameplay and more consistent frame rates, as mentioned in the documentation.
Replies: 1 · Boosts: 0 · Views: 442 · Activity: 3w
SCNTechnique clearColor Always Shows sceneBackground When Passes Share Depth Buffer
Problem Description
I'm encountering an issue with SCNTechnique where the clearColor setting is being ignored when multiple passes share the same depth buffer. The clear color always appears as the scene background, regardless of what value I set. The minimal project for reproducing the issue: https://www.dropbox.com/scl/fi/30mx06xunh75wgl3t4sbd/SCNTechniqueCustomSymbols.zip?rlkey=yuehjtk7xh2pmdbetv2r8t2lx&st=b9uobpkp&dl=0

Problem Details
In my SCNTechnique configuration, I have two passes that need to share the same depth buffer for proper occlusion handling:

"passes": [
    "box1_pass": [
        "draw": "DRAW_SCENE",
        "includeCategoryMask": 1,
        "colorStates": [
            "clear": true,
            "clearColor": "0 0 0 0" // Expecting transparent black
        ],
        "depthStates": [
            "clear": true,
            "enableWrite": true
        ],
        "outputs": [
            "depth": "box1_depth",
            "color": "box1_color"
        ],
    ],
    "box2_pass": [
        "draw": "DRAW_SCENE",
        "includeCategoryMask": 2,
        "colorStates": [
            "clear": true,
            "clearColor": "0 0 0 0" // Also expecting transparent black
        ],
        "depthStates": [
            "clear": false,
            "enableWrite": false
        ],
        "outputs": [
            "depth": "box1_depth", // Sharing the same depth buffer
            "color": "box2_color",
        ],
    ],
    "final_quad": [
        "draw": "DRAW_QUAD",
        "metalVertexShader": "myVertexShader",
        "metalFragmentShader": "myFragmentShader",
        "inputs": [
            "box1_color": "box1_color",
            "box2_color": "box2_color",
        ],
        "outputs": [
            "color": "COLOR"
        ]
    ]
]

And the Metal shader used to display box1_color and box2_color in a split view:

fragment half4 myFragmentShader(VertexOut in [[stage_in]],
                                texture2d<half, access::sample> box1_color [[texture(0)]],
                                texture2d<half, access::sample> box2_color [[texture(1)]]) {
    half4 color1 = box1_color.sample(s, in.texcoord);
    half4 color2 = box2_color.sample(s, in.texcoord);
    if (in.texcoord.x < 0.5) {
        return color1;
    }
    return color2;
};

Expected Behavior
Both passes should clear their color targets to transparent black (0, 0, 0, 0). The depth buffer should be shared between passes for proper occlusion.

Actual Behavior
Both box1_color and box2_color targets contain the scene background instead of being cleared to transparent (see attached image). This happens even when I explicitly set clearColor: "0 0 0 0" for both passes. Setting scene.background.contents = UIColor.clear makes the clearColor work as expected, but I need to keep the scene background for other purposes.

What I've Tried
Setting different clearColor values - all are ignored when sharing the depth buffer.
Using DRAW_NODE instead of DRAW_SCENE - didn't solve the issue.
Creating a separate pass to capture the background - the background still appears in the other passes.
Various combinations of clear flags and render orders.

Environment
iOS/macOS, running with "My Mac (Designed for iPad)", Xcode 16.2

Question
Is this a known limitation of SceneKit when passes share a depth buffer? Is there a workaround to achieve truly transparent clear colors while maintaining a shared depth buffer for occlusion testing? The core issue seems to be that SceneKit automatically renders the scene background in every DRAW_SCENE pass when a shared depth buffer is detected, overriding any clearColor settings. Any insights or workarounds would be greatly appreciated. Thank you!
Replies: 0 · Boosts: 0 · Views: 147 · Activity: Jun ’25
Metal recommendedMaxWorkingSetSize vs actual RAM on iPhone (LLM load fails)
Context
I’m deploying large language models on iPhone using llama.cpp. A new iPhone Air (12 GB RAM) reports a Metal MTLDevice.recommendedMaxWorkingSetSize of 8,192 MB, and my attempt to load Llama-2-13B Q4_K (~7.32 GB weights) fails during model initialization.

Environment
Device: iPhone Air (12 GB RAM)
iOS: 26
Xcode: 26.0.1
Build: llama.cpp with the Metal backend enabled
App runs on device (not Simulator)

What I’m seeing
MTLCreateSystemDefaultDevice().recommendedMaxWorkingSetSize == 8192 MiB
Loading Llama-2-13B Q4_K (7.32 GB) fails to complete. Logs indicate memory pressure / allocation issues consistent with the 8 GB working-set guidance.
Smaller models (e.g., 7B/8B with similar quantization) load and run (8B Q4_K gives around 9 tokens/second decoding speed).

Questions
Is 8,192 MB an expected recommendedMaxWorkingSetSize on a 12 GB iPhone? What values should I expect on other 2025 devices, including iPhone 17 (8 GB RAM) and iPhone 17 Pro (12 GB RAM)?
Is it strictly enforced for Metal allocations (heaps/buffers), or advisory for best performance/eviction behavior?
Can a process practically exceed it for long-lived buffers without immediate Jetsam risk?
Any guidance for LLM scenarios near the limit?
Replies: 0 · Boosts: 0 · Views: 415 · Activity: Oct ’25
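For correlating load failures with the reported budget, the relevant MTLDevice properties can be logged at startup. A small sketch (assuming recommendedMaxWorkingSetSize is exposed in the iOS SDK in use, as the post's on-device reading suggests; whether the value is a hard cap or advisory is exactly the open question, so the code only reports it):

import Metal

// Print the device's working-set guidance and current Metal allocation total
// so model-loading failures can be compared against the reported budget.
func logMetalMemoryBudget() {
    guard let device = MTLCreateSystemDefaultDevice() else { return }
    let mib = { (bytes: UInt64) in Double(bytes) / (1024 * 1024) }
    print("recommendedMaxWorkingSetSize:", mib(device.recommendedMaxWorkingSetSize), "MiB")
    print("currentAllocatedSize:", mib(UInt64(device.currentAllocatedSize)), "MiB")
    print("hasUnifiedMemory:", device.hasUnifiedMemory)
}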
ARView [.showStatistics] doesn't work on Xcode Canvas
Hi, I can't see RealityKit statistics on the Xcode Canvas using:
arView.debugOptions = [.showStatistics]
The statistics only show on a physical device, not the Xcode live canvas with #Preview. Testing in Xcode 26.0.1 (17A400) on Tahoe 26.0.1 (25A362). Use case: I'm using RealityKit as a non-AR 3D engine, and the Xcode Canvas is useful for live iteration. Is this expected behavior? How can I see FPS on the Xcode Canvas? SKView, for example, shows all debug options on both the Xcode Canvas and physical devices.
Replies: 0 · Boosts: 0 · Views: 374 · Activity: Oct ’25
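Until the statistics overlay works in the Canvas, one workaround is to derive a frame rate from RealityKit's per-frame update event, which fires anywhere the scene actually ticks. A minimal sketch (the smoothing factor and print cadence are arbitrary choices):

import RealityKit
import Combine

// Subscribe to RealityKit's per-frame update and compute an approximate FPS
// from the reported deltaTime, without relying on the .showStatistics overlay.
final class FPSMonitor {
    private var subscription: Cancellable?
    private var smoothedFPS: Double = 0

    func attach(to arView: ARView) {
        subscription = arView.scene.subscribe(to: SceneEvents.Update.self) { [weak self] event in
            guard let self, event.deltaTime > 0 else { return }
            let fps = 1.0 / event.deltaTime
            // Exponential smoothing so the readout doesn't jitter every frame.
            self.smoothedFPS = self.smoothedFPS * 0.9 + fps * 0.1
            print(String(format: "~%.1f FPS", self.smoothedFPS))
        }
    }
}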
Validation error when I try to upload a game to TestFlight
Hello everyone, and thank you for your time! I have an issue with uploading several icons to TestFlight. I'm using GameMaker Studio as my engine for the game. This is the error I'm getting, even when I try to use the old icons for the game that worked in the past. I tried to transform the icons using this site, https://makeappicon.com/, but I still get the same validation error. Can you help me fix the issue? Thank you so much!
Replies: 2 · Boosts: 0 · Views: 560 · Activity: Oct ’25
Texture Definitions for MPSSVGF Denoise
I am trying to use the SVGF denoiser to denoise my ray-traced shadows (and later other textures as well). I do get a smoothed image, but with wonky denoising. I need the depth-normal textures and motion textures for the SVGF and assume that these are badly filled in my case. However, neither the above-linked documentation nor the WWDC19 video tells me how they should be defined. I am looking for answers to:
Is depth in the red or the alpha channel of the depth-normal texture?
Are the normals in screen space?
Is depth linear? Is it distance or the z coordinate in view space? Or even logarithmically scaled or something else?
Are the motion vectors supposed to be in pixels per frame? What is the orientation of the axes? Is y up or down?
Are there other restrictions on the formats?
Also, the linked code did not help me (I have not found any SVGF in it so far; also, all the code is in Objective-C++, not Swift, but that's a different topic). So how should I fill these textures? Can someone point me to documentation where these kinds of questions are answered?
Replies: 0 · Boosts: 0 · Views: 558 · Activity: Dec ’24
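While the exact channel conventions remain the open question, the texture allocation itself is straightforward. A sketch of plausible formats (the packed layout described in the comments is an assumption modeled on common SVGF setups, not a confirmed answer to the questions above):

import Metal

// Allocate candidate auxiliary textures for the SVGF denoiser. Assumptions:
// a four-channel half-float texture so a normal (xyz) plus depth (w) can be
// packed together, and a two-channel half-float texture for 2D motion
// vectors. Verify the layout against the MPS sample before relying on it.
func makeDenoiserAuxTextures(device: MTLDevice, width: Int, height: Int)
    -> (depthNormal: MTLTexture, motion: MTLTexture)? {
    let depthNormalDesc = MTLTextureDescriptor.texture2DDescriptor(
        pixelFormat: .rgba16Float, width: width, height: height, mipmapped: false)
    depthNormalDesc.usage = [.shaderRead, .shaderWrite]
    depthNormalDesc.storageMode = .private

    let motionDesc = MTLTextureDescriptor.texture2DDescriptor(
        pixelFormat: .rg16Float, width: width, height: height, mipmapped: false)
    motionDesc.usage = [.shaderRead, .shaderWrite]
    motionDesc.storageMode = .private

    guard let depthNormal = device.makeTexture(descriptor: depthNormalDesc),
          let motion = device.makeTexture(descriptor: motionDesc) else { return nil }
    return (depthNormal, motion)
}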
ScreenCaptureKit recording output is corrupted when captureMicrophone is true
Hello everyone, I'm working on a screen recording app using ScreenCaptureKit and I've hit a strange issue. My app records the screen to an .mp4 file, and everything works perfectly as long as captureMicrophone is false — in that case I get a valid, playable .mp4 file. However, as soon as I try to enable the microphone by setting streamConfig.captureMicrophone = true, the recording seems to work, but the final .mp4 file is corrupted and cannot be played by QuickTime or any other player. This happens whether capturesAudio (app audio) is on or off. I've already added the "Privacy - Microphone Usage Description" (NSMicrophoneUsageDescription) to my Info.plist, so I don't think it's a permissions problem. I have my logic split into a ScreenRecorder class that manages state and a CaptureEngine that handles the SCStream.

Here is how I'm configuring my SCStream:

ScreenRecorder.swift

// This is my main SCStreamConfiguration
private var streamConfiguration: SCStreamConfiguration {
    var streamConfig = SCStreamConfiguration()
    // ... other HDR/preset config ...

    // These are the problem properties
    streamConfig.capturesAudio = isAudioCaptureEnabled
    streamConfig.captureMicrophone = isMicCaptureEnabled // breaks it if true
    streamConfig.excludesCurrentProcessAudio = false
    streamConfig.showsCursor = false

    if let region = selectedRegion, let display = currentDisplay {
        // My region/frame logic (works fine)
        let regionWidth = Int(region.frame.width)
        let regionHeight = Int(region.frame.height)
        streamConfig.width = regionWidth * scaleFactor
        streamConfig.height = regionHeight * scaleFactor
        // ... (sourceRect logic) ...
    }

    streamConfig.pixelFormat = kCVPixelFormatType_32BGRA
    streamConfig.colorSpaceName = CGColorSpace.sRGB
    streamConfig.minimumFrameInterval = CMTime(value: 1, timescale: 60)
    return streamConfig
}

And here is how I'm setting up the SCRecordingOutput that writes the file:

ScreenRecorder.swift

private func initRecordingOutput(for region: ScreenPickerManager.SelectedRegion) throws {
    let screeRecordingOutputURL = try RecordingWorkspace.createScreenRecordingVideoFile(
        in: workspaceURL,
        sessionIndex: sessionIndex
    )
    let recordingConfiguration = SCRecordingOutputConfiguration()
    recordingConfiguration.outputURL = screeRecordingOutputURL
    recordingConfiguration.outputFileType = .mp4
    recordingConfiguration.videoCodecType = .hevc
    let recordingOutput = SCRecordingOutput(configuration: recordingConfiguration, delegate: self)
    self.recordingOutput = recordingOutput
}

Finally, my CaptureEngine adds these to the SCStream:

CaptureEngine.swift

class CaptureEngine: NSObject, @unchecked Sendable {
    private(set) var stream: SCStream?
    private var streamOutput: CaptureEngineStreamOutput?
    // ... (dispatch queues) ...

    func startCapture(configuration: SCStreamConfiguration, filter: SCContentFilter, recordingOutput: SCRecordingOutput) async throws {
        let streamOutput = CaptureEngineStreamOutput()
        self.streamOutput = streamOutput
        do {
            stream = SCStream(filter: filter, configuration: configuration, delegate: streamOutput)

            // Add outputs for raw buffers (not used for file recording)
            try stream?.addStreamOutput(streamOutput, type: .screen, sampleHandlerQueue: videoSampleBufferQueue)
            try stream?.addStreamOutput(streamOutput, type: .audio, sampleHandlerQueue: audioSampleBufferQueue)
            try stream?.addStreamOutput(streamOutput, type: .microphone, sampleHandlerQueue: micSampleBufferQueue)

            // Add the file recording output
            try stream?.addRecordingOutput(recordingOutput)

            try await stream?.startCapture()
        } catch {
            logger.error("Failed to start capture: \(error.localizedDescription)")
            throw error
        }
    }

    // ... (stopCapture, etc.) ...
}

When captureMicrophone is false, I get a perfect .mp4 video that plays everywhere; when it is true, I get a corrupted video that doesn't play at all.
Replies: 0 · Boosts: 0 · Views: 273 · Activity: Nov ’25
Building Game Porting Toolkit on Sequoia
Like many folks here, I've recently attempted to build Apple's Game Porting Toolkit on my machine and ran into compiler errors, but instead of going the usual route of downloading the prebuilt package (kindly provided by GCenX), I decided to see if I could force it to build (since it was obviously buildable at some point). Below is the list of things I had to do to make it work. Disclaimer: there are several dirty hacks I had to attempt to force the system to do what I needed. Use at your own risk. Don't forget to run all brew commands from a Rosetta prompt:
arch -x86_64 zsh

Install openssl
This one is easy. Just run
brew tap rbenv/tap
brew install rbenv/tap/openssl@1.1

Install Command Line Tools 15.1
This specific version is required since newer versions come with a linker that is not compatible with the custom compiler (game-porting-toolkit-compiler) that GPTK uses. However, by default the 15.1 tools won't install on Sequoia, since the installer complains that the macOS version is too new. Obviously, Apple has their reasons not to allow this, but all we need is a compiler, which should be mostly indifferent to the OS version. To trick the installer, we need to change the OS requirement of the installer package. You can do it in four easy steps:
1. Copy Command Line Tools.pkg from the mounted Command_Line_Tools_for_Xcode_15.1.dmg to some other directory.
2. Expand the installer package: pkgutil --expand "Command Line Tools.pkg" CLT
   (You might be prompted to install Command Line Tools when you call pkgutil; just install any version.)
3. Go to the newly created CLT folder and edit the Distribution file (it may appear as an executable, but it's just XML). You want to change allowed-os-versions to something greater than 15. Removing this section altogether might also work.
4. When done, re-wrap the package: pkgutil --flatten CLT "Command Line Tools 2.pkg"
Congratulations, now you should be able to install the 15.1 tools on your OS! If you had to install newer Command Line Tools for pkgutil, delete them before installing 15.1:
sudo rm -rf /Library/Developer/CommandLineTools

The next step is to make Homebrew accept the outdated 15.1 tools, as by default it will complain that they're outdated or corrupted. To shut it up, open /usr/local/Homebrew/Library/Homebrew/extend/os/Mac/diagnostic.rb and remove references to check_if_supported_sdk_available from a couple of fatal build check collections. Note: by default, Homebrew will auto-update on any invocation, which will overwrite any changes you've made to its internals. To disable this behavior, before running any brew commands in the terminal, run
export HOMEBREW_NO_AUTO_UPDATE=1
After these manipulations, Homebrew might still complain about outdated Command Line Tools, but it won't be a fatal error anymore.

Finally, we need to downgrade MinGW to 11.0.1, since the latest version spits out compiler errors when compiling Wine. Unfortunately, Homebrew does a bad job of tracking MinGW versions, so there is no automatic way to do it. Instead, you have to manually download and install the old MinGW 11.0.1 formula from the Homebrew git repository. I used the commit from Sep 16, 2023: https://github.com/Homebrew/homebrew-core/blob/b95f4f9491394af667943bd92b081046ba3406f2/Formula/m/mingw-w64.rb
Download the file above, save it in your current working directory, and then run
brew install ./mingw-w64.rb
If you had a newer version of MinGW already installed from previous build attempts, you can unlink it before installing the one above:
brew unlink mingw-w64

With Command Line Tools 15.1 and MinGW 11.0.1 you should now be able to build GPTK without errors:
brew -v install apple/apple/game-porting-toolkit

In the end, the steps above worked for me, although more things could break in the future. I'm leaving the instructions here just to show that it's still possible to build GPTK manually instead of relying on third parties, but with all the hoops I had to jump through, I can't really recommend it.
Replies: 1 · Boosts: 0 · Views: 711 · Activity: Mar ’25