Delve into the world of graphics and game development. Discuss creating stunning visuals and optimizing game mechanics, and share resources for game developers.

All subtopics
Posts under Graphics & Games topic (each entry below lists its replies, boosts, views, and latest activity)

Metal texture allocated size versus actual image data size
Hello. The iOS app I'm working on has a very tight memory budget, and I was looking at ways to reduce our texture memory usage. However, I noticed that comparing ASTC8x8 to ASTC12x12, there is no difference in allocated memory for most of our textures, despite ASTC12x12 having less than half the bits per pixel of 8x8. The difference between the two only becomes apparent for textures 1024x1024 and larger, and even then the actual texture data is sometimes only 60% of the allocation size. I understand there must be some alignment and padding going on, but this seems extreme. For an example scene in my app with ASTC12x12 used for most textures, there is over a 100 MB difference between the ASTC size on disk and the size when loaded, so I would love to recover even a portion of that memory. Here is some test code, with measurements I've taken on an iPhone 11:

```objc
for (int i = 0; i < 11; i++) {
    MTLTextureDescriptor *texDesc = [[MTLTextureDescriptor alloc] init];
    texDesc.pixelFormat = MTLPixelFormatASTC_12x12_LDR;
    int dim = 12;
    int n = 2 << i;
    int mips = i + 1;
    texDesc.width = n;
    texDesc.height = n;
    texDesc.mipmapLevelCount = mips;
    texDesc.resourceOptions = MTLResourceStorageModeShared;
    texDesc.usage = MTLTextureUsageShaderRead;

    // Calculate the equivalent ASTC texture size
    int blocks = 0;
    if (mips == 1) {
        blocks = n/dim + (n%dim > 0 ? 1 : 0);
        blocks *= blocks;
    } else {
        for (int j = 0; j < mips; j++) {
            int a = 2 << j;
            int cur = a/dim + (a%dim > 0 ? 1 : 0);
            blocks += cur*cur;
        }
    }

    auto tex = [objCObj newTextureWithDescriptor:texDesc];
    printf("%dx%d, mips %d, Astc: %d, Metal: %d\n", n, n, mips, blocks*16, (int)tex.allocatedSize);
}
```

MTLPixelFormatASTC_12x12_LDR:

```
128x128,   mips 7,  Astc: 2768,   Metal: 6016
256x256,   mips 8,  Astc: 10512,  Metal: 32768
512x512,   mips 9,  Astc: 40096,  Metal: 98304
1024x1024, mips 10, Astc: 158432, Metal: 262144
128x128,   mips 1,  Astc: 1936,   Metal: 4096
256x256,   mips 1,  Astc: 7744,   Metal: 16384
512x512,   mips 1,  Astc: 29584,  Metal: 65536
1024x1024, mips 1,  Astc: 118336, Metal: 147456
```

MTLPixelFormatASTC_8x8_LDR:

```
128x128,   mips 7,  Astc: 5488,   Metal: 6016
256x256,   mips 8,  Astc: 21872,  Metal: 32768
512x512,   mips 9,  Astc: 87408,  Metal: 98304
1024x1024, mips 10, Astc: 349552, Metal: 360448
128x128,   mips 1,  Astc: 4096,   Metal: 4096
256x256,   mips 1,  Astc: 16384,  Metal: 16384
512x512,   mips 1,  Astc: 65536,  Metal: 65536
1024x1024, mips 1,  Astc: 262144, Metal: 262144
```

I also tried using MTLHeaps (placement and automatic) hoping they might be better, but saw nearly the same numbers. Is there any way to have Metal allocate these textures in a more compact way to save memory?
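One way to quantify how much of the gap is alignment rounding is to ask the device for the size and alignment it would use when suballocating this texture from a heap; the Objective-C test above could make the same query via -heapTextureSizeAndAlignWithDescriptor:. A minimal Swift sketch (this only measures the rounding, it does not avoid it):

```swift
import Metal

let device = MTLCreateSystemDefaultDevice()!

let desc = MTLTextureDescriptor()
desc.pixelFormat = .astc_12x12_ldr
desc.width = 1024
desc.height = 1024
desc.mipmapLevelCount = 10
desc.storageMode = .shared
desc.usage = .shaderRead

// Size and alignment Metal would use for this texture inside a heap.
// Comparing sizeAndAlign.size against the raw ASTC payload separates
// per-level padding from page-granularity rounding.
let sizeAndAlign = device.heapTextureSizeAndAlign(descriptor: desc)
print("heap size: \(sizeAndAlign.size), align: \(sizeAndAlign.align)")
```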
Replies: 8 · Boosts: 0 · Views: 2.8k · Mar ’25
Diagnose data access latency
The code is pretty simple:

```metal
kernel void naive(
    constant RunParams *param [[ buffer(0) ]],
    const device float *A     [[ buffer(1) ]], // [N, K]
    device float *output      [[ buffer(2) ]],
    uint2 gid                 [[ thread_position_in_grid ]])
{
    float val = 0.0f; // accumulator (declaration omitted in the original snippet)
    uint a_ptr = gid.x * param->K;
    for (uint i = 0; i < param->K; i++, a_ptr++) {
        val += A[a_ptr];
    }
    output[gid.x] = val; // output index assumed; its declaration was omitted in the original snippet
}
```

With `uint a_ptr = gid.x * param->K`, the code gets 150 GFLOPS; with `uint a_ptr = gid.y * param->K`, it gets 860 GFLOPS. Here `param->K = 256` and threads per group are [16, 16]. I'd like to understand why the performance is so different, and how I can profile/diagnose this to help with further optimization.
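A likely explanation: threads with consecutive gid.x values run in the same SIMD-group, so `gid.x * param->K` makes adjacent lanes start K floats apart (strided, poorly coalesced loads), whereas `gid.y * param->K` gives every lane in a row the same base address, which the memory system can service far more efficiently. One way to confirm is a GPU trace in Xcode's Metal debugger and its memory counters; a Swift sketch of a programmatic capture (the output path is a placeholder) follows:

```swift
import Metal

// Capture one dispatch to a .gputrace file, then open it in Xcode
// to inspect occupancy and memory-bandwidth counters.
func captureOneDispatch(device: MTLDevice, run: () -> Void) throws {
    let manager = MTLCaptureManager.shared()
    let desc = MTLCaptureDescriptor()
    desc.captureObject = device
    desc.destination = .gpuTraceDocument
    desc.outputURL = URL(fileURLWithPath: "/tmp/naive.gputrace") // placeholder path

    try manager.startCapture(with: desc)
    run() // encode and commit the kernel dispatch here
    manager.stopCapture()
}
```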
Replies: 0 · Boosts: 0 · Views: 91 · Apr ’25
How to apply the same SystemImage to both mainEmitter and spawnedEmitter without clipping in ParticleEmitterComponent?
Hi everyone, I’m currently learning about ParticleEmitterComponent and exploring the sample app provided in the "Simulating particles in your visionOS app" documentation. In the sample app, when I set the EmitterPreset to fireworks from the settings panel on the left side of the window and choose SystemImage, I noticed two issues:

- The image applied to mainEmitter appears clipped or cropped.
- The image on spawnedEmitter does not update to the selected SystemImage.

What I want to achieve:

- Apply the same SystemImage to both mainEmitter and spawnedEmitter so that it displays correctly without clipping.
- Remove the animation that changes the size of spawnedEmitter over time and keep it at a constant size.

Could someone explain which properties should be adjusted to achieve this behavior? Any guidance or examples would be greatly appreciated. Thanks in advance!
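Not an authoritative answer, but a sketch of the direction that has worked in similar setups: assign the same texture to both emitters' image property and neutralize the size-over-life animation via sizeMultiplierAtEndOfLifespan. Property names here are from memory and should be checked against the current RealityKit headers; the asset name is hypothetical:

```swift
import RealityKit

// Sketch: assumes a visionOS target and an existing `entity`.
var emitter = ParticleEmitterComponent.Presets.fireworks

// Use one texture for both the main and the spawned emitter.
let texture = try await TextureResource(named: "sparkle") // hypothetical asset name
emitter.mainEmitter.image = texture
emitter.spawnedEmitter?.image = texture

// Keep particle size constant over each particle's lifetime.
emitter.mainEmitter.sizeMultiplierAtEndOfLifespan = 1.0
emitter.spawnedEmitter?.sizeMultiplierAtEndOfLifespan = 1.0

entity.components.set(emitter)
```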
Replies: 0 · Boosts: 0 · Views: 474 · Sep ’25
Animations for streaming
We have a macOS app (not yet released, but in use by ourselves) that provides scoreboards for streaming sport events. Today it is expected that there are nice animations for goals, etc. We stream using NDI, which requires a CVPixelBuffer for each frame. We currently create these animations using CABasicAnimation, CAAnimation, and CAKeyframeAnimation, and we use ScreenCaptureKit to generate the frames. This works fine at 25/30 fps, as long as the window our animations are performed in is visible. But this is not how it should be: we want a smaller window as the main app window and control display, performing the animations at reduced size, while the streaming animations need to be in HD format and later maybe 4K. When using an offscreen window, the animations are not calculated; we get about one frame per second. So we currently have to connect an external display to the MacBook and open the large window there, which is an ugly workaround. Are we using a completely wrong approach? Is there a way to tell macOS to perform the animations even though the window is offscreen? And if it cannot work that way, what is an alternative?
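One alternative worth evaluating: render the animating CALayer tree offscreen yourself with CARenderer, which evaluates Core Animation timing into a Metal texture without a visible window. A minimal sketch, untested in this exact pipeline; the texture-to-CVPixelBuffer copy for NDI is left out, and `scoreboardLayer` is a hypothetical root layer:

```swift
import QuartzCore
import Metal

// Render one frame of the layer tree at the given media time.
func renderFrame(renderer: CARenderer, at time: CFTimeInterval) {
    renderer.beginFrame(atTime: time, timeStamp: nil)
    renderer.addUpdate(renderer.bounds) // mark the whole frame dirty
    renderer.render()
    renderer.endFrame()
}

// Setup: a Metal texture as the offscreen render target.
let device = MTLCreateSystemDefaultDevice()!
let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .bgra8Unorm,
                                                    width: 1920, height: 1080,
                                                    mipmapped: false)
desc.usage = [.renderTarget, .shaderRead]
let texture = device.makeTexture(descriptor: desc)!

let renderer = CARenderer(mtlTexture: texture, options: nil)
renderer.layer = scoreboardLayer // hypothetical root layer carrying the animations
renderer.bounds = CGRect(x: 0, y: 0, width: 1920, height: 1080)

// Drive this at 25/30 fps from a timer or CVDisplayLink, then copy
// `texture` into a CVPixelBuffer for NDI.
renderFrame(renderer: renderer, at: CACurrentMediaTime())
```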
Replies: 0 · Boosts: 0 · Views: 141 · May ’25
Issue with Non-Consumable In-App Purchase in Unity (iOS Sandbox Environment)
Question: I'm encountering an issue with in-app purchases (IAP) in Unity, specifically for a non-consumable product in the iOS sandbox environment. Below are the details.

Environment:
- Unity version: 2022.3.55f1
- Unity In-App Purchasing version: v4.12.2
- Device: iPhone 15, iOS 18.1.1
- Connection: Wi-Fi
- iOS Settings: In-App Purchases set to "Allowed" initially

Problem behavior:
1. I attempt to purchase a non-consumable item for the first time. The payment completes successfully after entering the password.
2. I background the game app and navigate to iOS Settings to set In-App Purchases to "Don't Allow."
3. After returning to the game and either closing or killing the app, I try to purchase the same non-consumable item again. I checked canMakePayments() through the Apple configuration, and the app correctly detected that I could not make purchases due to the restriction.
4. I navigate back to Settings and set In-App Purchases to "Allow."
5. Upon returning to the game, I try purchasing the non-consumable item again. A pop-up appears saying, "You’ve already purchased this. Would you like to get it again for free?"

The issue: will it deduct money the second time, and why does the system allow the user to purchase the same non-consumable item multiple times after purchasing it once? Is this the expected behavior for Unity In-App Purchasing, or is there something I might be missing in handling non-consumable purchases in this scenario?

Additional information: I’ve confirmed that In-App Purchases are set to "Allowed" before attempting the purchase again. I understand that non-consumable products should not be purchasable more than once, so I’m unsure why the system offers to let the user purchase it again. I appreciate any insights into whether this is expected behavior or whether I need to adjust how I handle the purchase flow.
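For what it's worth, that alert is the App Store's standard re-download prompt for an already-owned non-consumable: the store recognizes the prior purchase and does not charge again. On the native side, the usual pattern is to check the entitlement before showing a buy button at all. A StoreKit 2 sketch in Swift (outside Unity; the product ID is a placeholder):

```swift
import StoreKit

/// Returns true if the non-consumable has already been bought,
/// so the UI can show "Owned"/"Restore" instead of "Buy".
func isOwned(productID: String) async -> Bool {
    guard let result = await Transaction.currentEntitlement(for: productID) else {
        return false
    }
    switch result {
    case .verified(let transaction):
        return transaction.revocationDate == nil
    case .unverified:
        return false
    }
}

// Usage (placeholder product ID):
// let owned = await isOwned(productID: "com.example.game.premium")
```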
Replies: 1 · Boosts: 0 · Views: 460 · Apr ’25
Morphing Animation Not Playing in an Xcode VisionOS App
I have a 3D model with a morphing animation that works correctly in Blender. I exported this model as a USDZ file and tried to display it in an Xcode-developed visionOS app, but the morphing animation does not play.

What I have tried:
- The morphing animation works correctly in Blender.
- After exporting to USDZ, the morphing animation does not play in the Xcode app.
- Linear motion animations (such as object movement) work fine.

Behavior in Reality Converter:
- GLB files do not display.
- USDZ files load, but morphing animations do not play.

What I want to know:
- Is there a way to play morphing animations in an Xcode-developed app?
- Does RealityKit support morphing animations?
- If RealityKit does not support morphing animations, what alternative methods can be used to play them?

I am looking for a way to use the existing animations without recreating them.

Additional information: I have both the Blender file (where the animations work) and the USDZ file (where the animations do not play). I am developing a visionOS app using Xcode. Any advice or solutions would be greatly appreciated. Thank you in advance!
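For reference: USD blend-shape (morph) tracks are not played back automatically the way transform animations are; newer RealityKit releases expose blend shapes through BlendShapeWeightsComponent, whose weights you drive yourself. A heavily hedged sketch, assuming the visionOS 2-era API and that the USDZ's blend shapes survive import; verify the exact type and property names against the current headers:

```swift
import RealityKit

// Attach a blend-shape weights component to an imported model,
// assuming its mesh carries blend-shape (morph target) data.
func enableBlendShapes(on modelEntity: ModelEntity) {
    guard let mesh = modelEntity.model?.mesh else { return }
    let mapping = BlendShapeWeightsMapping(meshResource: mesh)
    modelEntity.components.set(BlendShapeWeightsComponent(weightsMapping: mapping))
}

// Drive one morph target's weight manually (e.g. from a timer or animation loop).
func setFirstWeight(_ value: Float, on modelEntity: ModelEntity) {
    guard var component = modelEntity.components[BlendShapeWeightsComponent.self] else { return }
    var weights = component.weightSet[0].weights // weights for the first sub-mesh
    weights[0] = value
    component.weightSet[0].weights = weights
    modelEntity.components.set(component)
}
```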
Replies: 2 · Boosts: 0 · Views: 520 · Feb ’25
HidHide on macOS
I was wondering whether there's a method on macOS to have my application hide a HID device, such as a game controller, and instead have the receiving game/application see my app's virtual controller. Is this possible via DriverKit or some other form of kernel-level coding? On Windows we have a tool known as HidHide that hides a game controller from all other applications. Is it possible to implement such behavior in an app, or is that system-level?
Replies: 1 · Boosts: 0 · Views: 568 · Dec ’25
Game Porting Toolkit installation fails: CMake compatibility error in game-porting-toolkit-compiler
I'm encountering a build failure when trying to install the Game Porting Toolkit via Homebrew. The installation fails during the game-porting-toolkit-compiler dependency build phase with a CMake compatibility error.

Error message:

```
CMake Error at CMakeLists.txt:3 (cmake_minimum_required):
  Compatibility with CMake < 3.5 has been removed from CMake.

  Update the VERSION argument <min> value.  Or, use the <min>...<max> syntax
  to tell CMake that the project requires at least <min> but has been updated
  to work with policies introduced by <max> or earlier.

  Or, add -DCMAKE_POLICY_VERSION_MINIMUM=3.5 to try configuring anyway.

-- Configuring incomplete, errors occurred!
```

Environment:
- macOS: 15.6.1 (Sequoia)
- Homebrew: 5.0.1
- CMake: 3.20.2
- Architecture: x86_64 (via Rosetta)
- Formula: apple/apple/game-porting-toolkit-compiler v0.1
- Source: crossover-sources-22.1.1.tar.gz

Steps to reproduce:
1. Install x86_64 Homebrew for Rosetta compatibility.
2. Run: arch -x86_64 /usr/local/bin/brew install apple/apple/game-porting-toolkit
3. The build fails during dependency installation.

Root cause: the LLVM/Clang sources included in crossover-sources-22.1.1.tar.gz contain a CMakeLists.txt that declares a minimum CMake version lower than 3.5, and recent CMake releases have removed backward compatibility with such declarations.

Potential solutions:
- Update the Homebrew formula to patch the CMakeLists.txt with cmake_minimum_required(VERSION 3.5) or higher.
- Update to newer CrossOver sources with updated CMake requirements.
- Add the -DCMAKE_POLICY_VERSION_MINIMUM=3.5 flag to the CMake build command in the formula.

Is this a known issue? Are there plans to update the formula or the source package to resolve this compatibility problem? Any guidance on a workaround would be appreciated.

Full log available at: /Users/kentarovadney/Library/Logs/Homebrew/game-porting-toolkit-compiler/02.cmake.log

Thanks for any assistance!
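A workaround sketch, unverified in this exact setup: pre-seed the policy minimum before the formula's CMake configure step runs. Note that Homebrew may scrub environment variables inside its build sandbox, in which case editing the formula's cmake arguments is the reliable route:

```sh
# Option A: recent CMake honors this environment variable
# (it may be scrubbed by Homebrew's build sandbox, so this can be a no-op).
export CMAKE_POLICY_VERSION_MINIMUM=3.5
arch -x86_64 /usr/local/bin/brew install apple/apple/game-porting-toolkit

# Option B: edit the formula and append the flag to its cmake invocation:
#   arch -x86_64 /usr/local/bin/brew edit apple/apple/game-porting-toolkit-compiler
#   ...then add -DCMAKE_POLICY_VERSION_MINIMUM=3.5 to the cmake args...
```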
Replies: 1 · Boosts: 0 · Views: 934 · Nov ’25
Broadcast Upload Extension
I am trying to use Broadcast upload extension but Broadcast picker starts countdown and stops (swiftUI). Steps i followed. added BroadcastUploadExtension as target same app group for for main app and extension added packages using SPM i seems the extension functions are not getting triggered, i check using UIScreen.main.isCaptured also which always comes as false. i tried Using Logs which never Appeared.
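Two things worth checking, hedged since the project setup isn't visible here: a picker countdown that stops immediately often means the extension process failed to launch (for example a crash on startup or a misconfigured NSExtension entry in the extension's Info.plist), and extension logs do not appear in the main app's Xcode console. A minimal handler instrumented with os.Logger, viewable in Console.app (the subsystem string is a placeholder):

```swift
import ReplayKit
import CoreMedia
import os

// Minimal broadcast upload extension handler; filter Console.app on the
// subsystem below to confirm the extension process launches at all.
class SampleHandler: RPBroadcastSampleHandler {
    private let log = Logger(subsystem: "com.example.broadcast", category: "extension") // placeholder

    override func broadcastStarted(withSetupInfo setupInfo: [String: NSObject]?) {
        log.info("broadcastStarted")
    }

    override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer,
                                      with sampleBufferType: RPSampleBufferType) {
        if sampleBufferType == .video {
            log.debug("got video sample")
        }
    }

    override func broadcastFinished() {
        log.info("broadcastFinished")
    }
}
```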
Replies: 1 · Boosts: 0 · Views: 623 · Mar ’25
macOS Catalina 10.15.7: CoreGraphics.framework missing symbols
I recently needed to develop an application that obtains the window list, which requires Screen Recording permission. Apple's documentation mentions using the two functions CGPreflightScreenCaptureAccess and CGRequestScreenCaptureAccess to request this permission, and states that they have been available since 10.15. However, when I used these two functions on a device running macOS 10.15.7, I encountered the errors shown in the attached screenshot. I used the nm tool to inspect the symbols in CoreGraphics.framework and found that these two functions were not present. Could you help me understand why this is happening?
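In case the symbols really did ship later than the headers' availability markup suggests, a defensive pattern is to resolve them at runtime and fall back when absent. A Swift sketch (RTLD_DEFAULT is spelled out manually because the macro isn't imported into Swift); this is a guard, not a statement about where the symbols actually first shipped:

```swift
import Darwin

typealias CGBoolFn = @convention(c) () -> Bool

// Resolve a C symbol from the default search scope at runtime.
func resolve(_ name: String) -> CGBoolFn? {
    let rtldDefault = UnsafeMutableRawPointer(bitPattern: -2) // RTLD_DEFAULT
    guard let sym = dlsym(rtldDefault, name) else { return nil }
    return unsafeBitCast(sym, to: CGBoolFn.self)
}

if let preflight = resolve("CGPreflightScreenCaptureAccess") {
    print("screen capture access granted:", preflight())
} else {
    // Symbol absent on this OS build; fall back to older behavior.
    print("CGPreflightScreenCaptureAccess not available here")
}
```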
Replies: 0 · Boosts: 0 · Views: 93 · May ’25
Implementing Scalable Order-Independent Transparency (OIT) in Metal
Hi, Apple’s documentation on Order-Independent Transparency (OIT) describes an approach using image blocks, where an array of size 4 is allocated per fragment to store depth and color in a tile shading compute pass. However, when increasing the scene’s depth complexity by adding more overlapping quads, the OIT implementation fails due to the fixed array size. Is there a way to dynamically allocate storage for fragments based on actual depth complexity encountered during rasterization, rather than using a fixed-size array? Specifically, can an adaptive array of fragments be maintained and sorted by depth, where the size grows as needed instead of being limited to 4 entries? Any insights or alternative approaches would be greatly appreciated. Thank you!
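One commonly used alternative when tile memory can't hold the worst case: build per-pixel linked lists of fragments in device memory during rasterization, then sort and composite them in a separate resolve pass. The list length tracks actual depth complexity, bounded only by the node buffer you allocate. A Metal sketch of the accumulation pass; buffer indices, the struct layout, and the overflow policy are assumptions, not Apple's sample:

```metal
#include <metal_stdlib>
using namespace metal;

struct FragmentNode {
    float4 color;
    float  depth;
    uint   next;    // index of the next node, or 0xFFFFFFFF for end-of-list
};

struct VSOut {
    float4 position [[position]];
    float4 color;
};

// Accumulation pass: append each transparent fragment to its pixel's list.
// A later resolve pass walks each list, sorts by depth, and blends.
fragment void oitAccumulate(VSOut in                         [[stage_in]],
                            device atomic_uint  *nodeCounter [[buffer(0)]],
                            device FragmentNode *nodes       [[buffer(1)]],
                            device atomic_uint  *listHeads   [[buffer(2)]], // one per pixel
                            constant uint2      &fbSize      [[buffer(3)]])
{
    uint index = atomic_fetch_add_explicit(nodeCounter, 1, memory_order_relaxed);
    uint capacity = fbSize.x * fbSize.y * 8;  // assumed budget: 8 layers/pixel average
    if (index >= capacity) { return; }        // drop fragments past the budget

    uint pixel = uint(in.position.y) * fbSize.x + uint(in.position.x);
    nodes[index].color = in.color;
    nodes[index].depth = in.position.z;
    // Publish this node as the new head, chaining to the previous head.
    uint prev = atomic_exchange_explicit(&listHeads[pixel], index,
                                         memory_order_relaxed);
    nodes[index].next = prev;
}
```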
Replies: 1 · Boosts: 0 · Views: 569 · Mar ’25
Core Image recipe for QR code icon image
Create the QR code:

```objc
CIFilter<CIQRCodeGenerator> *f = CIFilter.QRCodeGeneratorFilter;
f.message = [@"Message" dataUsingEncoding:NSASCIIStringEncoding];
f.correctionLevel = @"Q"; // increase level
CIImage *qrcode = f.outputImage;
```

Overlay the icon:

```objc
CIImage *icon = [CIImage imageWithContentsOfURL:url];
CGAffineTransform t = CGAffineTransformMakeTranslation(
    (CGRectGetWidth(qrcode.extent)  - CGRectGetWidth(icon.extent))  / 2.0,
    (CGRectGetHeight(qrcode.extent) - CGRectGetHeight(icon.extent)) / 2.0);
icon = [icon imageByApplyingTransform:t];
qrcode = [icon imageByCompositingOverImage:qrcode];
```

Round off the corners (`radius` is the corner radius in pixels, defined elsewhere):

```objc
static dispatch_once_t onceToken;
static CIWarpKernel *k;
dispatch_once(&onceToken, ^{
    k = [CIWarpKernel kernelWithFunctionName:@"bend_corners"
                        fromMetalLibraryData:metalLibData()
                                       error:nil];
});

qrcode = [k applyWithExtent:qrcode.extent
                roiCallback:^CGRect(int i, CGRect r) {
                    return CGRectInset(r, -radius, -radius);
                }
                 inputImage:qrcode
                  arguments:@[[CIVector vectorWithCGRect:qrcode.extent], @(radius)]];
```

…and this code for the kernel should go in a separate .ci.metal source file:

```metal
#include <metal_stdlib>
using namespace metal;
#include <CoreImage/CoreImage.h>

extern "C" { namespace coreimage {

float2 bend_corners(float4 extent, float s, destination dest)
{
    float2 p, dc = dest.coord();
    float ratio = 1.0;

    // Round lower-left corner
    p = float2(extent.x + s, extent.y + s);
    if (dc.x < p.x && dc.y < p.y) {
        float2 d = abs(dc - p);
        ratio = min(d.x, d.y) / max(d.x, d.y);
        ratio = sqrt(1.0 + ratio * ratio);
        return (dc - p) * ratio + p;
    }
    // Round lower-right corner
    p = float2(extent.x + extent.z - s, extent.y + s);
    if (dc.x > p.x && dc.y < p.y) {
        float2 d = abs(dc - p);
        ratio = min(d.x, d.y) / max(d.x, d.y);
        ratio = sqrt(1.0 + ratio * ratio);
        return (dc - p) * ratio + p;
    }
    // Round upper-left corner
    p = float2(extent.x + s, extent.y + extent.w - s);
    if (dc.x < p.x && dc.y > p.y) {
        float2 d = abs(dc - p);
        ratio = min(d.x, d.y) / max(d.x, d.y);
        ratio = sqrt(1.0 + ratio * ratio);
        return (dc - p) * ratio + p;
    }
    // Round upper-right corner
    p = float2(extent.x + extent.z - s, extent.y + extent.w - s);
    if (dc.x > p.x && dc.y > p.y) {
        float2 d = abs(dc - p);
        ratio = min(d.x, d.y) / max(d.x, d.y);
        ratio = sqrt(1.0 + ratio * ratio);
        return (dc - p) * ratio + p;
    }
    return dc;
}

}}
```
Replies: 0 · Boosts: 0 · Views: 107 · Mar ’25
Xcode Metal Trace
The code is downloaded from Apple's official Metal 4 sample (https://developer.apple.com/documentation/metal/drawing-a-triangle-with-metal-4?language=objc). Enable Metal GPU trace in the macOS scheme and trace a frame in Xcode. Xcode may crash with a segmentation fault in the app, inside some 'GTTrace' function, when the trace button is clicked. When replaying a .gputrace file, Xcode may crash, throw an internal error, or report an XPC error. The example code using the old Metal renderer can be traced without any problem, and everything works fine.

Test environment: Xcode Version 26.2 (17C52), macOS 26.2 (25C56), M1 Pro 16 GB (A2442).
Replies: 2 · Boosts: 0 · Views: 405 · 5d
2 high scores vanished
In my game 854159268 (com.1791entertainment.qugame), the top 2 entries in my quMostRecent3 leaderboard have vanished. They were there yesterday. I know these players have played today, as I see their scores on other leaderboards. Any ideas how to get these back? These 2 players (me and my tester) are both testing via TestFlight; I'm not sure if that changes things.
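One way to check whether the entries are gone server-side or merely not rendering in the Game Center UI is to query the leaderboard directly with GameKit. A sketch assuming an authenticated GKLocalPlayer:

```swift
import GameKit

// Dump the top entries of the leaderboard to see whether the
// missing scores still exist on the server.
func dumpTopEntries() async throws {
    let boards = try await GKLeaderboard.loadLeaderboards(IDs: ["quMostRecent3"])
    guard let board = boards.first else { return }
    let (_, entries, total) = try await board.loadEntries(for: .global,
                                                          timeScope: .allTime,
                                                          range: NSRange(location: 1, length: 10))
    print("total entries: \(total)")
    for entry in entries {
        print(entry.rank, entry.player.displayName, entry.formattedScore, entry.date)
    }
}
```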
Replies: 1 · Boosts: 0 · Views: 407 · Jan ’26
PhotogrammetrySession crashes after update from iOS 18 to iOS 26
After updating iPad/iPhone devices from iOS 18 to iOS 26, PhotogrammetrySession intermittently crashes during photogrammetry processing. The same workflow was stable on iOS 18 with no code changes to the app.

Environment:
- OS versions: works on iOS 18, crashes on iOS 26
- Device: iPad/iPhone (reproducible across devices)
- Source images: ~170-200 JPG files at 2160 x 3840 resolution

Reproduction: the crash occurs consistently on the second or third sequential run of the photogrammetry session with the same image set. The first run typically succeeds.

Crash details: Xcode shows an uncaught exception during image processing:

```
terminating due to uncaught exception of type std::bad_alloc: std::bad_alloc
VTPixelTransferSession 420f sid 269 (2160.00 x 3840.00) [0.00 0.00 2160 3840] rowbytes( 2160, 2160 ) Color( (null), 0x0, (null), (null), ITU_R_601_4 ) => 24 sid 19 (2160.00 x 3840.00) [0.00 0.00 2160 3840] rowbytes( 6528 ) Color( 0x0, (null), (null), (null) )
```

This appears to be a memory allocation failure in VTPixelTransferSession during color space conversion. Has anyone else experienced similar crashes with CorePhotogrammetry on iOS 26, or found workarounds?
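Since the pattern (first run succeeds, later runs die with bad_alloc) points at memory accumulating across runs, one mitigation to try is ensuring each session and its output stream are fully torn down before the next run starts. A sketch under that assumption:

```swift
import RealityKit

// Run each reconstruction in its own scope so the session (and its
// internal buffers) can be released before the next run begins.
func runOnce(images: URL, output: URL) async throws {
    var config = PhotogrammetrySession.Configuration()
    config.featureSensitivity = .normal

    let session = try PhotogrammetrySession(input: images, configuration: config)
    try session.process(requests: [.modelFile(url: output)])

    for try await event in session.outputs {
        switch event {
        case .processingComplete:
            return // session goes out of scope and can deallocate here
        case .requestError(_, let error):
            throw error
        default:
            break
        }
    }
}
```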
Replies: 0 · Boosts: 0 · Views: 301 · Dec ’25
VisionOS VideoMaterial on 3D Mesh
I'm trying to get a video material to work on an imported 3D asset, a USDC file. There's an example in a WWDC video from Apple: you can see it running on the flag of the airplane at 10:34. But there's no sample code for it, and I couldn't find other examples on the internet. Does anybody know how to do this? https://developer.apple.com/documentation/realitykit/videomaterial
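A sketch of the usual RealityKit approach: build a VideoMaterial from an AVPlayer and swap it into the imported model's ModelComponent. The child-entity name and the single-material assumption are guesses about your asset's structure:

```swift
import RealityKit
import AVFoundation

// Replace an imported entity's material with a video material.
func applyVideo(to root: Entity, videoURL: URL) {
    let player = AVPlayer(url: videoURL)
    let material = VideoMaterial(avPlayer: player)

    // Find the mesh-bearing child; "Flag" is a hypothetical node name
    // from the USDC hierarchy.
    if let flag = root.findEntity(named: "Flag"),
       var model = flag.components[ModelComponent.self] {
        model.materials = [material] // assumes a single-material mesh
        flag.components.set(model)
    }
    player.play()
}
```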
Replies: 2 · Boosts: 0 · Views: 980 · Dec ’25
Score range of ImageAestheticsScoresObservation in Vision framework
Hi everyone, I'm using the Vision framework’s ImageAestheticsScoresObservation class (https://developer.apple.com/documentation/vision/imageaestheticsscoresobservation). I noticed that the overallScore returned sometimes gives negative values. Could someone confirm whether the expected range of the score is from -1.0 to 1.0? The documentation doesn’t explicitly state the possible score range, so I’d appreciate any clarification or insights. Thanks in advance!
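For anyone reproducing this, a minimal request looks roughly like the following (the newer Swift-only Vision API; exact availability should be checked against the docs). Negative values do show up in practice, which would be consistent with a -1 to 1 range, though I haven't seen that range documented explicitly either:

```swift
import Vision

// Compute an aesthetics score for an image file on disk.
func aestheticsScore(for url: URL) async throws -> Float {
    let request = CalculateImageAestheticsScoresRequest()
    let observation = try await request.perform(on: url)
    // observation.isUtility flags utility images (screenshots, receipts, etc.)
    return observation.overallScore
}
```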
Replies: 0 · Boosts: 0 · Views: 114 · Apr ’25