Delve into the world of graphics and game development. Discuss creating stunning visuals and optimizing game mechanics, and share resources for game developers.

Posts under the Graphics & Games topic

Threadgroup memory for fragment shader
Hello, I am trying to get threadgroup memory access in a fragment shader. In essence, I would like all the fragments in a tile to bitwise-OR some value: my idea was to use simd_or across the SIMD group, then have thread 0 of each SIMD group atomically OR the value into threadgroup memory, and finally have the very first thread of the tile write the value out to a texture with write access. Now, I can declare the threadgroup memory argument on the fragment function all right. MTLRenderCommandEncoder has a setThreadgroupMemoryLength call, which I am using as follows:

[renderEncoder setThreadgroupMemoryLength:16 offset:0 atIndex:0];

Unfortunately, all I am getting is the following runtime assertion:

-[MTLDebugRenderCommandEncoder setThreadgroupMemoryLength:offset:atIndex:]:3487: failed assertion `Set Threadgroup Memory Length Validation offset + length(16) must be <= threadgroupMemoryLength(0).`

What am I doing wrong? How can I get threadgroup memory in a fragment shader? I know I could use tile shading and a compute function, but the point here is that I would really like to stay with fragment shaders. I will be grateful for any help.
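A minimal Swift sketch of the tile-shading route the post mentions as a fallback, assuming an Apple-silicon (TBDR) device; the tile function name "tileReduce" is hypothetical:

import Metal

func makeTilePipeline(device: MTLDevice, library: MTLLibrary) throws -> MTLRenderPipelineState {
    let desc = MTLTileRenderPipelineDescriptor()
    desc.tileFunction = library.makeFunction(name: "tileReduce")! // hypothetical MSL tile function
    desc.colorAttachments[0].pixelFormat = .bgra8Unorm
    desc.rasterSampleCount = 1
    desc.threadgroupSizeMatchesTileSize = true
    return try device.makeRenderPipelineState(tileDescriptor: desc, options: [], reflection: nil)
}

// In the render pass, after the fragment work for the tile:
// encoder.setRenderPipelineState(tilePipeline)
// encoder.setThreadgroupMemoryLength(16, offset: 0, index: 0) // backed by the tile function's threadgroup argument
// encoder.dispatchThreadsPerTile(MTLSize(width: 16, height: 16, depth: 1))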
Replies: 1 · Boosts: 0 · Views: 99 · Latest activity: Apr ’25

The App Store purchase button disappears when another window approaches
Since macOS 15.3.2, we have observed that when another window is moved near the App Store's install button, the button disappears. We have attached a related video to the Feedback Assistant submission here: https://feedbackassistant.apple.com/feedback/20444423. Our application overlays a transparent watermark window on top of system windows, which causes the install button in the App Store to be hidden when a user attempts to install an application. Could you advise on how to avoid this issue?
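For context, a minimal AppKit sketch of the kind of transparent overlay window described above; the level and collection-behavior choices are illustrative assumptions, not the poster's actual configuration:

import AppKit

func makeWatermarkOverlay(frame: NSRect) -> NSWindow {
    let overlay = NSWindow(contentRect: frame,
                           styleMask: .borderless,
                           backing: .buffered,
                           defer: false)
    overlay.isOpaque = false
    overlay.backgroundColor = .clear
    overlay.level = .floating              // keeps the watermark above normal windows
    overlay.ignoresMouseEvents = true      // clicks pass through to the window underneath
    overlay.collectionBehavior = [.canJoinAllSpaces, .fullScreenAuxiliary]
    return overlay
}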
Replies: 0 · Boosts: 0 · Views: 154 · Latest activity: Nov ’25

Game Center achievements “Global Players” percent is 0%
Hello, I’m the developer of the “stepsquad” app. Our app uses the Game Center achievement feature, but we’ve been encountering a problem: “Global Players” always shows 0%, even though there are friends who have already earned these achievements. Initially, I thought it might be because the app was newly launched, but it’s now been over two months since release and it’s still showing 0%. If anyone has any insight into this issue, please leave a comment.
Replies: 1 · Boosts: 0 · Views: 570 · Latest activity: Jan ’25

GKLocalPlayer.isUnderAge always returns true on Macs with Intel chips
Hello, I'm working on a game that features online multiplayer. The game is developed using Unity and the Apple Unity plug-ins. The "isUnderAge" property restricts the online multiplayer feature. Everything works as expected on all platforms (Mac, iPhone, iPad, Apple TV, and Vision Pro) except on Macs equipped with an Intel chip. Using the same iCloud and Game Center account, with no restrictions enabled, "isUnderAge" returns false as expected, but on a Mac equipped with an Intel chip it returns true. Is there any restriction or compatibility issue with those chips? Is there a workaround? Thanks.
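A minimal sketch of the equivalent check in native GameKit (the post goes through the Apple Unity plug-ins, so this is only an approximation); the flag is only meaningful after authentication:

import GameKit

func checkUnderageFlag() {
    GKLocalPlayer.local.authenticateHandler = { _, error in
        guard GKLocalPlayer.local.isAuthenticated else {
            print("Not authenticated: \(String(describing: error))")
            return
        }
        // The post reports this coming back true on Intel Macs even with
        // no restrictions on the account.
        print("isUnderage:", GKLocalPlayer.local.isUnderage)
    }
}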
Replies: 9 · Boosts: 0 · Views: 883 · Latest activity: Nov ’25

Issues with installing Game Porting Toolkit 2.1
I am trying to install Game Porting Toolkit 2.1 according to the README file provided with the toolkit. When I run the following command:

WINEPREFIX=~/my-game-prefix `brew --prefix game-porting-toolkit`/bin/wine64 winecfg

I get an error message:

zsh: no such file or directory: /usr/local/opt/game-porting-toolkit/bin/wine64

I don't know how to resolve this. When I type the command `which brew`, I get the path /usr/local/bin/brew. What am I doing wrong?
Replies: 1 · Boosts: 0 · Views: 282 · Latest activity: Apr ’25

RealityKit, Attachments - not working
The simplest RealityView { content, attachments in ... } causes “Contextual closure expects 1 argument but 2 were used in closure body”. I have checked every example and I cannot understand why I get this error, regardless of the content. Note: I have added Attachment(id: "test") to the attachments closure and get “Attachment not in scope”. I have imported both RealityKit and SwiftUI.
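One likely cause, offered as an assumption: the two-parameter (content, attachments) closure exists only on the RealityView initializer that also takes a trailing attachments: builder, which is visionOS API; on a target where that initializer is unavailable, both errors above would follow. A minimal sketch of the two-closure form, assuming visionOS:

import SwiftUI
import RealityKit

struct AttachmentExample: View {
    var body: some View {
        RealityView { content, attachments in
            // Pull the SwiftUI attachment into the scene as an entity.
            if let label = attachments.entity(for: "test") {
                content.add(label)
            }
        } attachments: {
            Attachment(id: "test") {
                Text("Hello")
            }
        }
    }
}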
Replies: 2 · Boosts: 0 · Views: 189 · Latest activity: 1w

OCR does not work
Hi, I'm working on a very simple app that tries to read a coordinates card and paste the data into different fields. The card's layout is columns 1-10, rows A-J, and a two-digit number in each cell. In my app, I have a field for each of those cells (A1, A2...). I want the OCR to read the card and paste the info, but I just can't. I have two problems. First, the camera won't close; it stays open until I press the SAVE button (this is not good, because a user could take 3, 4, 5... pictures of the same card, possibly with different results, and then which is the good one?). Second, after I press save, I can see the OCR kind of works (the console prints all the data read), but the info is not pasted at all. Any idea? I know it's hard to tell what's wrong, but I've tried ChatGPT and whatever it suggests just doesn't work. This is the code from the scan view:

import SwiftUI
import Vision
import VisionKit

struct ScanCardView: UIViewControllerRepresentable {
    @Binding var scannedCoordinates: [String: String]
    var useLettersForColumns: Bool
    var numberOfColumns: Int
    var numberOfRows: Int
    @Environment(\.presentationMode) var presentationMode

    func makeUIViewController(context: Context) -> VNDocumentCameraViewController {
        let scannerVC = VNDocumentCameraViewController()
        scannerVC.delegate = context.coordinator
        return scannerVC
    }

    func updateUIViewController(_ uiViewController: VNDocumentCameraViewController, context: Context) {}

    func makeCoordinator() -> Coordinator {
        return Coordinator(self)
    }

    class Coordinator: NSObject, VNDocumentCameraViewControllerDelegate {
        let parent: ScanCardView

        init(_ parent: ScanCardView) {
            self.parent = parent
        }

        func documentCameraViewController(_ controller: VNDocumentCameraViewController, didFinishWith scan: VNDocumentCameraScan) {
            print("Scan complete, processing image...")
            guard scan.pageCount > 0, let image = scan.imageOfPage(at: 0).cgImage else {
                print("Could not get the image from the scan.")
                controller.dismiss(animated: true, completion: nil)
                return
            }
            recognizeText(from: image)
            DispatchQueue.main.async {
                print("Finishing OCR process and closing the camera.")
                controller.dismiss(animated: true, completion: nil)
            }
        }

        func documentCameraViewControllerDidCancel(_ controller: VNDocumentCameraViewController) {
            print("Scan cancelled by the user.")
            controller.dismiss(animated: true, completion: nil)
        }

        func documentCameraViewController(_ controller: VNDocumentCameraViewController, didFailWithError error: Error) {
            print("Scan error: \(error.localizedDescription)")
            controller.dismiss(animated: true, completion: nil)
        }

        private func recognizeText(from image: CGImage) {
            let request = VNRecognizeTextRequest { (request, error) in
                guard let observations = request.results as? [VNRecognizedTextObservation], error == nil else {
                    print("Text recognition error: \(String(describing: error?.localizedDescription))")
                    DispatchQueue.main.async {
                        self.parent.presentationMode.wrappedValue.dismiss()
                    }
                    return
                }
                let recognizedStrings = observations.compactMap { observation in
                    observation.topCandidates(1).first?.string
                }
                print("Recognized text: \(recognizedStrings)")
                let filteredCoordinates = self.filterValidCoordinates(from: recognizedStrings)
                DispatchQueue.main.async {
                    print("Coordinates detected after filtering: \(filteredCoordinates)")
                    self.parent.scannedCoordinates = filteredCoordinates
                }
            }
            request.recognitionLevel = .accurate
            let handler = VNImageRequestHandler(cgImage: image, options: [:])
            DispatchQueue.global(qos: .userInitiated).async {
                do {
                    try handler.perform([request])
                    print("OCR finished and data processed.")
                } catch {
                    print("OCR request error: \(error.localizedDescription)")
                }
            }
        }

        private func filterValidCoordinates(from strings: [String]) -> [String: String] {
            var result: [String: String] = [:]
            print("Text before filtering: \(strings)")
            for string in strings {
                let trimmedString = string.replacingOccurrences(of: " ", with: "")
                if parent.useLettersForColumns {
                    let pattern = "^[A-J]\\d{1,2}$" // Letters A-J followed by 1 or 2 digits
                    if trimmedString.range(of: pattern, options: .regularExpression) != nil {
                        print("Valid coordinate detected (letters): \(trimmedString)")
                        result[trimmedString] = "Value" // Test assignment
                    }
                } else {
                    let pattern = "^[1-9]\\d{0,1}$" // Numbers only, 1 to 99
                    if trimmedString.range(of: pattern, options: .regularExpression) != nil {
                        print("Valid coordinate detected (numbers): \(trimmedString)")
                        result[trimmedString] = "Value"
                    }
                }
            }
            print("Final coordinates after filtering: \(result)")
            return result
        }
    }
}
Replies: 0 · Boosts: 0 · Views: 555 · Latest activity: Jan ’25

RealityKit - Full 3D experience
I have a question, I guess more for the Apple team: why are there no totally 3D experiences for the Vision Pro lineup? I know they have given us tools to bring Unity 3D games to iPhone, and I guess you can also build them in RealityKit. But why, at this moment, are 3D games limited to just iPad and iPhone, and why can't that come to Vision Pro? Just to explain: when I say a totally 3D game, I mean games like Gorn. The Vision Pro is definitely powerful enough, but it just feels limited to tabletop games and AR games. Is this something Apple is thinking about implementing?
Replies: 0 · Boosts: 0 · Views: 522 · Latest activity: Oct ’25

How does GameKit offline leaderboard submission work?
I do not understand how offline leaderboard submission is supposed to work in GameKit. The documentation briefly states that offline submission is supported, but how is that even possible when you first have to fetch a leaderboard object in order to call its submitScore function? How can I get the leaderboard object in the first place while offline? Can anyone enlighten me on how this works, or point me to some relevant documentation?
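Worth noting, as a possible partial answer: GKLeaderboard also has a class-level submit that takes leaderboard IDs directly, so no fetched leaderboard object is needed. Whether this is the path GameKit uses for offline queuing is not confirmed here; the sketch below only shows that submission by ID avoids the fetch. The ID is a placeholder:

import GameKit

func submit(score: Int) {
    GKLeaderboard.submitScore(score,
                              context: 0,
                              player: GKLocalPlayer.local,
                              leaderboardIDs: ["my.leaderboard.id"]) { error in // placeholder ID
        if let error { print("Submit failed: \(error)") }
    }
}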
Replies: 0 · Boosts: 0 · Views: 30 · Latest activity: 11h

ParticleEmitterComponent Position Offset Issue After iOS 26.1 Update – Seeking Solutions & Workarounds
Problem summary: After upgrading to iOS 26.1 and 26.2, I'm experiencing a particle positioning bug in RealityKit where ParticleEmitterComponent particles render at an incorrect offset relative to their parent entity. This behavior does not occur on iOS 18.6.2 or earlier versions, suggesting a regression introduced in the newer OS builds.

Environment: iOS 26.1 & iOS 26.2, RealityKit, Xcode 16.2 (16C5032a).

Expected: particles should render at the position of the entity to which the ParticleEmitterComponent is attached, matching the behavior on iOS 18.6.2 and earlier. Actual: particles appear away from their parent entity, creating a visual misalignment that breaks the intended AR experience.

Steps to reproduce:
1. Create or open an AR application with RealityKit that uses particle components.
2. Attach a ParticleEmitterComponent to an entity via a custom system.
3. Run the application on iOS 26.1 or iOS 26.2.
4. Observe that particles render at an offset position away from the entity.

Minimal code example. Custom component & system:

struct SparkleComponent4: Component {}

class SparkleSystem4: System {
    static let query = EntityQuery(where: .has(SparkleComponent4.self))

    required init(scene: Scene) {}

    func update(context: SceneUpdateContext) {
        for entity in context.scene.performQuery(Self.query) {
            // Only add once
            if entity.components.has(ParticleEmitterComponent.self) { continue }
            var newEmitter = ParticleEmitterComponent()
            newEmitter.mainEmitter.color = .constant(.single(.red))
            entity.components.set(newEmitter)
        }
    }
}

AR setup:

let material = SimpleMaterial(color: .gray, roughness: 0.15, isMetallic: true)
let model = Entity()
model.components.set(ModelComponent(mesh: boxMesh, materials: [material]))
model.components.set(SparkleComponent4())
model.position = [0, 0.05, 0]
model.name = "MyCube"

let anchor = AnchorEntity(.plane(.horizontal, classification: .any, minimumBounds: [0.2, 0.2]))
anchor.addChild(model)
arView.scene.addAnchor(anchor)

Questions for the community:
1. Has anyone else encountered this particle positioning issue after updating to iOS 26.1/26.2?
2. Are there known workarounds or configuration changes to ParticleEmitterComponent that restore correct positioning?
3. Is this a confirmed bug, or could there be a change in coordinate-system handling or transform inheritance that I'm missing?

I've already submitted this issue via Feedback Assistant (FB21346746).
Replies: 0 · Boosts: 0 · Views: 261 · Latest activity: 2d

Support for clock() shader instruction in MSL similar to VK_KHR_shader_clock instructions
Hi, it seems MSL is missing support for a clock() shader instruction, which is available in other graphics APIs such as Vulkan (VK_KHR_shader_clock) or OpenGL. It would be useful for counting the cost, in clock cycles, of some code inside a shader with much finer granularity than launching a micro-kernel with the same instructions and measuring the cycle cost from the CPU. It would also be useful for MoltenVK to be able to support that extension. Thanks.
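For reference, a minimal sketch of the coarse CPU-side measurement the post contrasts with a per-instruction clock(): it only resolves whole-command-buffer cost.

import Metal

func timeCommandBuffer(_ commandBuffer: MTLCommandBuffer) {
    commandBuffer.addCompletedHandler { cb in
        // gpuStartTime/gpuEndTime bracket the entire buffer, not
        // individual instructions inside a shader.
        let ms = (cb.gpuEndTime - cb.gpuStartTime) * 1000
        print("GPU time: \(ms) ms")
    }
    commandBuffer.commit()
}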
Replies: 1 · Boosts: 0 · Views: 120 · Latest activity: Apr ’25

Metal and Swift PM
I have run into an issue where I am trying to use atomic_float in a Swift package, but I cannot get things to compile because it appears that the Swift Package Manager doesn't support Metal 3 (atomic_float is Metal 3 functionality). Is there any way around this? I am using // swift-tools-version: 6.1, and my Metal code includes:

#include <metal_stdlib>
#include <metal_geometric>
#include <metal_math>
#include <metal_atomic>

using namespace metal;

kernel void test(device atomic_float* imageBuffer [[buffer(1)]],
                 uint id [[thread_position_in_grid]]) {
}

But I get an error on the definition of atomic_float. Any help, or more importantly, a pointer to where I could have found information about this limitation, would be appreciated. -RadBobby
Replies: 0 · Boosts: 0 · Views: 101 · Latest activity: Apr ’25

Metal 4: When is it OK to dealloc an MTLBuffer's memory
I have something like the following drawing code in an MTKView (see bottom). I am finding it difficult to figure out when the Swift-land resources used in making the MTLBuffer(s) can be released. Below, for example, is it OK if args goes out of scope (or is otherwise deallocated) at point 1, 2, or 3? Or perhaps even earlier, as soon as argsBuffer has been created? I have been reading through various articles, such as "Setting resource storage modes", "Choosing a resource storage mode for Apple GPUs", and "Copying data to a private resource", but it's a lot to absorb and I haven't really been able to find an authoritative description of the required lifetime of the resources in CPU land. I should mention that this is Metal 4 code. In previous versions of Metal, MTLCommandBuffer had the ability to add a completion handler to be called after the GPU finished running the commands in the buffer, but in Metal 4 there is no such thing (if it were even needed for this purpose). Any advice and/or pointers to the definitive literature will be appreciated.

guard let argsBuffer = device.makeBuffer(bytes: &args, ...
argumentTable.setAddress(argsBuffer.gpuAddress, ...
encoder.setArgumentTable(argumentTable, stages: .vertex)
// encode drawing
renderEncoder.draw...
...
encoder.endEncoding()                 // 1
commandBuffer.endCommandBuffer()      // 2
commandQueue.waitForDrawable(drawable)
commandQueue.commit([commandBuffer])  // 3
commandQueue.signalDrawable(drawable)
drawable.present()
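Two points that may help, offered against the standard pre-Metal 4 documentation rather than as a Metal 4 answer: makeBuffer(bytes:length:options:) copies the source data at creation, so args itself can go out of scope as soon as the buffer exists; and the classic way to bound the buffer's own lifetime is the completion handler the post mentions. A minimal sketch of that older pattern (whether Metal 4 offers an equivalent hook is exactly the open question):

import Metal

func encodeWithArgs(device: MTLDevice, commandBuffer: MTLCommandBuffer) {
    var args = SIMD4<Float>(repeating: 0)
    // makeBuffer(bytes:) copies `args` into the new buffer, so the Swift
    // variable may be released right after this call returns.
    let argsBuffer = device.makeBuffer(bytes: &args,
                                       length: MemoryLayout<SIMD4<Float>>.stride,
                                       options: .storageModeShared)!
    // ... encode draws that read argsBuffer ...

    // Pre-Metal 4: the capture keeps argsBuffer alive until the GPU is done.
    commandBuffer.addCompletedHandler { _ in
        _ = argsBuffer
    }
    commandBuffer.commit()
}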
Replies: 0 · Boosts: 0 · Views: 52 · Latest activity: 1w

Query GPU metrics
Hello! I'm a developer working on a plugin for the Elgato Stream Deck called GPU Metrics. The plugin currently only works on Windows, but I'd like to bring it to macOS. However, based on forum posts I've read (and Stack Overflow), there isn't a very clear path to querying GPU metrics like usage, temperature, used GPU memory, and power consumption. There are some tools out there that do similar things, but I wanted to see what the recommendation from Apple's engineering team would be for getting this data via a public API. Requirements: access to GPU utilization, temperature, memory usage, and power usage; a C/C++-based API for querying the metrics, so I can expose the data to JavaScript via a Node addon; no need for compatibility with Intel-based Macs, as Apple silicon is fine for now. Plugin GitHub. Thank you! Noah
Replies: 0 · Boosts: 0 · Views: 124 · Latest activity: May ’25

iOS Metal delays one Vsync period before the frame is actually displayed on screen
View layout: in a view controller, add a label view, view A with a subview of the same size (MTKView A), and view B with a subview of the same size (MTKView B).

Refresh rates: the label view refreshes at 60 fps (driven by CADisplayLink); MTKView A and B refresh at 15 fps.

MTKView implementation details: the corresponding CAMetalLayer's maximumDrawableCount is set to 2 (double buffering). The scheduling mechanism is modified so that drawing is not driven by the internal loop but done manually; the draw call is triggered immediately upon receiving a frame:

self.metalView.enableSetNeedsDisplay = NO;
self.metalView.paused = YES;

A new high-priority queue is created for drawing, instead of handling it on the main queue.

Latency tracking: the GPU completion time T1 is observed through the addCompletedHandler callback of the command buffer, and the presentation time T2 of the frame is observed through the addPresentedHandler callback of the MTKView's currentDrawable. Testing shows that T2 - T1 > 16.6 ms (the Vsync period at 60 Hz). This means that after the GPU rendering in the MTKView is finished, the frame is not actually displayed at the next Vsync but only at the Vsync after that. I believe there is an extra 16.6 ms of latency here, which I want to eliminate by adjusting the rendering mechanism.

Observation from Instruments: the surface presentation aligns with the above test results. After the Metal encoder finishes, the surface in Display switches only after the next-next Vsync. See the image in the link for details.

Questions:
1. From a beginner's understanding, after the MTKView's GPU rendering is finished, the next Vsync should make the frame visible, but this is not what is observed. Does a subview MTKView need to wait for another Vsync cycle to be drawn to the actual display buffer?
2. The label updates its text at 60 fps, so the entire interface should be displayed at 60 fps. Is the content of the MTKView not synchronized when the display happens?

Reasoning behind some of the MTKView code details: changing from the default triple buffering to double buffering helps reduce the latency introduced by rendering. Manual triggering of the draw method is used instead of MTKView's own scheduling because the latter is driven by CADisplayLink: if a frame arrives within a Vsync window, it must wait for the next Vsync window before the draw is triggered, which introduces waiting latency.
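A minimal sketch of the T1/T2 instrumentation described above, assuming a standard MTKView draw: gpuEndTime gives GPU completion and presentedTime the on-glass time.

import MetalKit

func measurePresentLatency(commandBuffer: MTLCommandBuffer, drawable: CAMetalDrawable) {
    commandBuffer.addCompletedHandler { cb in
        print("T1 (GPU end): \(cb.gpuEndTime)")       // GPU finished rendering
    }
    drawable.addPresentedHandler { d in
        print("T2 (presented): \(d.presentedTime)")   // frame actually on screen
    }
    commandBuffer.present(drawable)
    commandBuffer.commit()
}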
Replies: 3 · Boosts: 0 · Views: 491 · Latest activity: 1w

Achievement Banners Not Always Showing
When running on my iPhone SE 3 under iOS 18.4.1, achievement banners show as expected. With the same code running on my iPad Air 2 under iOS 15.8.4, achievement banners do not show, but the achievements are accepted (as shown in the GameCenterViewController). The banners also don't show in the simulator on an iPhone 16 Pro Max under iOS 18.2 or an iPhone SE 3 under iOS 18.3; I haven't tried others. (Note that I clear the achievements on each run during testing so that I can reproduce this.)
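For comparison, a minimal sketch of banner-enabled reporting; the identifier is a placeholder, not the poster's code:

import GameKit

func reportTestAchievement() {
    let achievement = GKAchievement(identifier: "my.achievement.id") // placeholder ID
    achievement.percentComplete = 100
    achievement.showsCompletionBanner = true // ask Game Center to show the banner
    GKAchievement.report([achievement]) { error in
        if let error { print("Report failed: \(error)") }
    }
}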
Replies: 0 · Boosts: 0 · Views: 78 · Latest activity: May ’25

Metal 4 Argument Tables
I am puzzled by the setAddress(_:attributeStride:index:) method of MTL4ArgumentTable. Can anyone please explain what the attributeStride parameter is for? The doc says it is "the stride between attributes in the buffer", but why? Who uses this, and for what? On the shader side, the stride is determined by the C++ type (MSL being C++-based), as far as I know. What am I missing here? Thanks!
Replies: 0 · Boosts: 0 · Views: 448 · Latest activity: 3w

Game Center Achievement "Global Players" Always Showing 0% Issue
Hello, I’m the developer of the “StepSquad” app. Our app uses the Game Center achievement feature, but we’ve been encountering a problem: the “Global Players” metric always shows 0%, even though there are friends who have already earned these achievements. Initially, I thought it might be because the app was newly launched, but it’s now been over two months since release and it’s still showing 0%. If anyone has any insight into this issue, please leave a comment.
Replies: 0 · Boosts: 0 · Views: 556 · Latest activity: Jan ’25