Hi everyone,
I'm facing an issue where, on iOS 26, PDFKit's removeAnnotation(_:) method doesn't remove annotations. This code worked on previous versions (iOS 17 and 18) but suddenly stopped working on iOS 26.
Has anyone faced this issue?
guard let document = await pdfView.document else { return }
for pageIndex in 0..<document.pageCount {
    guard let page = document.page(at: pageIndex) else { continue }
    let annotations = page.annotations
    for annotation in annotations {
        page.removeAnnotation(annotation)
    }
}
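For what it's worth, here's the diagnostic I'm trying next (a sketch, to separate a model-update failure from a stale redraw; the setNeedsDisplay call is just a forced refresh, not a known fix):

// Check whether the annotation model actually updated, then force the
// view to redraw in case only the on-screen drawing is stale.
for annotation in page.annotations {
    page.removeAnnotation(annotation)
}
print("annotations left: \(page.annotations.count)") // expect 0 if the model updated
pdfView.setNeedsDisplay(pdfView.bounds)              // force a redraw of the view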
Hello everyone,
I'm working on a visionOS application using RealityKit and am encountering a common coordinate system challenge when integrating 3D models created in Blender.
My goal is to display and dynamically update the Transform (position, rotation, scale) of models created in Blender within RealityKit.
The issue arises because Blender's default coordinate system is Z-up, and while exporting to USD/USDZ, I don't have a reliable "Y-up" export option that correctly reorients the model and its transform data for RealityKit's Y-up convention. This means I'm essentially exporting models with their "up" direction along the Z-axis.
When I load these Z-up exported models into RealityKit, they are often oriented incorrectly. To then programmatically update their Transform (e.g., move them, rotate them based on game logic, or apply physics), I need to ensure that the Transform values I set align with RealityKit's Y-up system, even though the original model data was authored in a Z-up context.
My questions are:
What is the recommended transformation process (e.g., using simd_quatf or simd_float4x4) to convert a Transform that was conceptually defined in a Z-up coordinate system to RealityKit's Y-up coordinate system? Specifically, when I have a Transform (or its translation, rotation, scale components) from a Z-up context, how should I apply this to a RealityKit Entity so it appears and behaves correctly in a Y-up world?
Are there any existing convenience APIs or helper functions within RealityKit, simd, or other Apple frameworks that simplify this Z-up to Y-up Transform conversion process? Or is a manual application of a transformation quaternion (e.g., simd_quatf(angle: -.pi / 2, axis: [1, 0, 0])) the standard approach?
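For concreteness, here is the manual conversion I currently have in mind (a sketch; the -90° rotation about X is the quaternion mentioned above, and convertToYUp with its inputs are my own hypothetical names, not RealityKit API):

import RealityKit
import simd

// Rotating -90° about X maps +Z (Blender's up) onto +Y (RealityKit's up).
let zUpToYUp = simd_quatf(angle: -.pi / 2, axis: [1, 0, 0])

// Hypothetical helper: re-express a transform authored in a Z-up convention
// as a RealityKit Transform in the Y-up convention, by composing the basis
// rotation in front of the authored transform.
func convertToYUp(zUpTranslation: SIMD3<Float>,
                  zUpRotation: simd_quatf,
                  scale: SIMD3<Float> = .one) -> Transform {
    var transform = Transform()
    transform.translation = zUpToYUp.act(zUpTranslation) // rotate the position into Y-up
    transform.rotation = zUpToYUp * zUpRotation          // authored rotation, then the basis fix-up
    transform.scale = scale                              // scale passes through unchanged
    return transform
}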
Any guidance, code examples, or best practices from those who have faced similar challenges would be greatly appreciated!
Thank you.
Topic: Graphics & Games | SubTopic: RealityKit | Tags: Reality Composer, RealityKit, Reality Composer Pro, visionOS
Hi, I have attempted to find a fix for my issue via online documentation and one phone support call (not code-level support), to no avail. I could continue to try various things, but I would like to see if someone else has encountered this issue and found a fix.
Background: My game app is live on the App Store and has one classic leaderboard. I am now getting ready to submit an update, which also entails adding a new recurring leaderboard. I added the leaderboard in App Store Connect; however, I have NOT uploaded my new build yet. I have also not added my leaderboards (currently live and not live) to any set.
When I try to submit scores to the new non-live leaderboard using
GKLeaderboard.submitScore(_:context:player:leaderboardIDs:completionHandler:)
it works (gives me no error).
When I try to load the scores from the new non-live leaderboard using
GKLeaderboard.loadLeaderboards(IDs:completionHandler:)
loadEntries(for:timeScope:range:completionHandler:)
it fails with the error: "leaderboardID not found".
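For reference, this is roughly my failing load path (a sketch; "new_leaderboard_id" is a placeholder for the real leaderboard ID):

import GameKit

GKLeaderboard.loadLeaderboards(IDs: ["new_leaderboard_id"]) { leaderboards, error in
    if let error {
        print("Load failed: \(error.localizedDescription)") // "leaderboardID not found"
        return
    }
    // Never reached for the new leaderboard.
    leaderboards?.first?.loadEntries(for: .global,
                                     timeScope: .allTime,
                                     range: NSRange(location: 1, length: 10)) { localEntry, entries, totalCount, error in
        // Inspect entries here.
    }
}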
I could try (and will):
- uploading the new build to App Store Connect and associating the new leaderboard with it before testing again
- associating each leaderboard with a set
Is there anything else that I should be aware of?
Thanks in advance
Hi everyone,
I'm building a native iOS app using Unreal Engine 5.6 with Firebase for authentication and Firestore. The app uses a MetaHuman avatar and is meant to run as a standalone UE app on iPhone.
I'm using this Firebase wrapper:
👉 https://pandoa.github.io/FirebaseFeatures/
I've followed all the steps, including:
Adding GoogleService-Info.plist to the Xcode project and ensuring it’s in the correct target
Calling FIRApp.configure() in AppDelegate
Verifying the plist is bundled correctly
However, the app crashes on launch, and Firebase does not initialize properly.
Crash log shows:
[FirebaseCore][I-COR000005] No app has been configured yet.
Setup details:
Unreal Engine: 5.6 (source build, macOS)
iOS Deployment: 17.5
MetaHuman character packaged correctly and app launches fine without Firebase
Has anyone here managed to get Firebase working inside a native Unreal Engine iOS app with this setup? I'd love to hear if there’s something I’m missing — maybe something with initialization timing or module loading?
Thanks so much in advance 🙏
Topic: Graphics & Games | SubTopic: General
I'm a newbie at Vulkan and Xcode.
I have my project on github https://github.com/flocela/OrangeSpider/
Whenever I run, two windows open instead of only one.
I added testing, which means I have an OrangeSpider.xctestplan in the OrangeSpider/TestsOrangeSpider/ folder.
This is my first time adding testing to an Xcode project, so I think this may be where the problem is.
I also get this error message:
ViewBridge to RemoteViewService Terminated: Error Domain=com.apple.ViewBridge Code=18 "(null)" UserInfo={com.apple.ViewBridge.error.hint=this process disconnected remote view controller -- benign unless unexpected, com.apple.ViewBridge.error.description=NSViewBridgeErrorCanceled}
Topic: Graphics & Games | SubTopic: Metal
Hello, I have some confusion regarding MTLResidencySet, specifically about the requestResidency() function: how often should we call it?
I have a captureOutput(_:didOutput:from:) method that is triggered at 60 or 120 fps. Inside this method, I am calling the following code every frame:
computeResidencySet.removeAllAllocations()
computeResidencySet.addAllocation(TextureA)
computeResidencySet.addAllocation(TextureB)
computeResidencySet.addAllocation(TextureC)
computeResidencySet.commit()
computeResidencySet.requestResidency() // Should we call it every frame?
Please keep in mind that TextureA, TextureB, and TextureC are unique for each call (new instances are provided on every frame).
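For context, the alternative I'm weighing is attaching the set to the command queue once, so the queue keeps the set's allocations resident for committed work (a sketch; commandQueue and the lowercase texture names are placeholders from my own code):

// Setup, once: the queue ensures residency of the set's allocations for
// all command buffers committed to it.
commandQueue.addResidencySet(computeResidencySet)

// Per frame: swap in this frame's textures and commit the changes.
computeResidencySet.removeAllAllocations()
computeResidencySet.addAllocation(textureA)
computeResidencySet.addAllocation(textureB)
computeResidencySet.addAllocation(textureC)
computeResidencySet.commit()
// Is an explicit requestResidency() still needed every frame here,
// or is it only an immediate-residency hint?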
Hi fellow devs, I have a quick question: is it possible to have virtual controllers on Mac? For instance, can my app exclusively manage the controller and feed its output into the Game Controller framework, creating a virtual controller to allow for features such as controller emulation, haptic control, and others?
I am currently developing a mobile and server-side application using the new ObjectCaptureSession on iOS and PhotogrammetrySession on macOS.
I have two questions regarding the newly updated APIs.
From the WWDC23 session "Meet Object Capture for iOS", I know that the Object Capture API uses point cloud data captured from the iPhone's LiDAR sensor. I want to know how to take the point cloud data captured by ObjectCaptureSession on iPhone and use it to create 3D models with PhotogrammetrySession on macOS.
From the WWDC21 example code, I know that PhotogrammetrySession utilizes the depth map from captured photos by embedding it into the HEIC images, and uses that data to create a 3D asset on macOS. I would like to know whether point cloud data is also embedded into the images for use during 3D reconstruction and, if not, how else the point cloud data is passed in for reconstruction.
Another question: I know that point cloud data can be returned as a result of a PhotogrammetrySession.Request. I would like to know if this point cloud data is the same set of data captured during ObjectCaptureSession from WWDC23 that is used to create ObjectCapturePointCloudView.
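For context, this is roughly how I drive the reconstruction on macOS (a sketch; imagesURL and outputURL are placeholders for my transferred captures and the output location):

import Foundation
import RealityKit

// Request both the reconstructed model and the point-cloud result, so I can
// compare the latter with what ObjectCaptureSession showed on device.
func reconstruct(imagesURL: URL, outputURL: URL) async throws {
    let session = try PhotogrammetrySession(input: imagesURL)
    try session.process(requests: [
        .modelFile(url: outputURL), // reconstructed USDZ
        .pointCloud                 // point-cloud result
    ])
    for try await output in session.outputs {
        if case let .requestComplete(request, result) = output {
            print("Finished \(request): \(result)")
        }
    }
}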
Thank you to everyone for the help in advance. It's a real pleasure to be developing with all the updates to RealityKit and the Object Capture API.
Hello everyone,
I must have missed something, but why isn't there a depthAttachmentPixelFormat on the new Metal 4 MTL4RenderPipelineDescriptor, unlike the old MTLRenderPipelineDescriptor?
So how do you set the depth pixel format?
Thanks in advance!
Hello there,
I'm having trouble matching what I see in the SceneKit editor and the output of the resulting scene in an SCNView.
For a glitter effect, I have set a high value on the diffuse intensity, which looks fine in the editor, but when running the game the colors are much darker. To see if the intensity value is merely capped, I set the same multiplier on the hat below, but there it is blown out, which looks to me like some grading is going on.
I have tried switching on HDR rendering, but that didn't make a difference.
I tried disabling linear rendering, and that simply made everything darker still, which I expected.
Does someone have an idea what else this could be? What rendering is the scenekit editor using and how can I match it?
Interestingly when I take a screenshot of the editor window for this post, the image is also blown out... what is going on? :)
Thanks so much for any pointers,
Seb
During regular use, RealityKit generates an excessive amount of internal logging that is not actionable by third party developers. When developing an iOS RealityKit/ARKit app, this makes the Xcode console challenging to use for regular work.
(FB19173812)
See screenshots below.
Xcode does have an option for filtering out logging from specific SDKs, but enabling this feature to suppress the logging of RealityKit and related SDKs like PHASE is something developers have to do dozens of times each day. After a year of developing a RealityKit app, this process becomes frustrating.
If SDKs like Foundation, UIKit, and SwiftUI generated as much logging as RealityKit and related SDKs, Xcode's console would be unusable.
Is there any way to disable the logging of RealityKit and PHASE permanently?
Thank you for any help you provide.
Hello!
I'm developing a GPU (shader) language, where I aim to target multiple backends with a common frontend. I wanted to avoid having to round-trip through the Metal Shading Language and instead go straight to IR, just as I do with SPIR-V, in order to have a fast and efficient compilation process.
I've been looking for a reference page where I can read about Metal's IR; as far as I'm aware one exists, but I can't seem to find it anywhere.
Furthermore, if such a reference is available, is there also a toolkit where I can run validation on the output IR, and perhaps even run optimizations, much like spv-tools for SPIR-V?
Any help would be appreciated!
Thanks,
Gustav
My app is being rejected and all I'm being told is that it is spam.
I've tried improving various aspects of the game, but I just receive the same copy and paste rejection message each time.
I have no idea if I'm moving in the right direction or what part of my game needs to be changed or improved. Is there a game quality benchmark document or some kind of resource I can use to better understand why my game is being rejected and how to bring it to a level that meets Apple's standards?
I'm trying to build an MDLMesh and then add normals:
let mdlMesh = MDLMesh.newBox(withDimensions: SIMD3<Float>(1, 1, 1),
                             segments: SIMD3<UInt32>(2, 2, 2),
                             geometryType: MDLGeometryType.triangles,
                             inwardNormals: false,
                             allocator: allocator)
mdlMesh.addNormals(withAttributeNamed: MDLVertexAttributeNormal, creaseThreshold: 0)
When I render the mesh, some normals are (0,0,0). I don't know if the problem is in the mesh, or in the conversion to MTKMesh. Is there a way to examine an MDLMesh with the geometry viewer?
When I look at the variable values for my mdlMesh I get this:
Not too useful. I don't know how to track down the normals.
What's the best way to find out where the normals are getting broken?
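In the meantime, this is the kind of CPU-side dump I'm attempting (a sketch, assuming the normal attribute can be fetched as packed .float3 triples):

import ModelIO

// Walk the generated normals on the CPU and look for (0,0,0) entries.
if let attr = mdlMesh.vertexAttributeData(forAttributeNamed: MDLVertexAttributeNormal,
                                          as: .float3) {
    for i in 0..<mdlMesh.vertexCount {
        let p = attr.dataStart.advanced(by: i * attr.stride)
                    .assumingMemoryBound(to: Float.self)
        print("normal[\(i)] = (\(p[0]), \(p[1]), \(p[2]))") // zero vectors are the broken ones
    }
}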
I'm implementing optimized matmul on metal: https://github.com/crynux-ai/metal-matmul/blob/main/metal/1_shared_mem.metal
I notice that performance differs significantly depending on the threadgroup memory length set via
[computeEncoder setThreadgroupMemoryLength:atIndex:]
All other lines are exactly the same; the only difference is this parameter.
Matmul performance is roughly 250 GFLOPS if I set 32768 (the max bytes allowed on this M1 Max),
but 400 GFLOPS if I set 8192.
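In Swift spelling, the single line I'm varying looks like this (a sketch; the threadgroup memory binding index 0 is an assumption):

computeEncoder.setThreadgroupMemoryLength(32_768, index: 0) // ~250 GFLOPS on this M1 Max
// vs.
computeEncoder.setThreadgroupMemoryLength(8_192, index: 0)  // ~400 GFLOPS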
Why does this happen? How can I optimize it?
Topic: Graphics & Games | SubTopic: Metal
Now that SceneKit has been marked as soft deprecated, is there a planned date or timeframe when it will be completely removed from iOS? I’m concerned about how long my existing SceneKit-based game will continue to work, especially as an indie developer without the resources for a quick rewrite to RealityKit.
Hello, we are working on an iOS game project, and as it progresses the project grows larger and larger. Because we use other game dependencies and libraries, "larger and larger" here refers to the whole project; the source files Xcode itself compiles are not many. Now we seem to have hit a bottleneck: when I add new files, or add functions to existing files to implement a new feature, the Xcode build gets stuck at "Indexing | Initializing datastore" forever and cannot produce a final build.
macOS 15.1, Xcode 16.2
Can you suggest any solutions to this problem?
Also submitted Feedback ID #FB18432749
If I compile a compute kernel with a call to texture.read(), it fails with the following error: "Error Domain=AGXMetalG13X Code=3 "Encountered unlowered function call to air.get_read_sampler" UserInfo={NSLocalizedDescription=Encountered unlowered function call to air.get_read_sampler}."
This error occurs on both macOS and iOS 26 Beta 5, but not when running on a simulator or in a playground. It does not occur on a macOS Sequoia VM. It occurs whether I use the old Metal 3 or the new Metal 4 compilation method.
A workaround would be to use a sampler, but according to the feature tables, all platforms support reading from textures of all formats.
Below is a minimal example which produces the error:
let device = MTLCreateSystemDefaultDevice()!
let library = device.makeDefaultLibrary()!
let computeFunction = library.makeFunction(name: "compute_test")!
do {
    let pipeline = try device.makeComputePipelineState(function: computeFunction)
    debugPrint(pipeline)
} catch {
    debugPrint("Metal 3 failed with error:\n\(error)")
}
#include <metal_stdlib>
using namespace metal;

kernel void compute_test(uint2 gid [[thread_position_in_grid]],
                         texture2d<float, access::read> in [[texture(0)]],
                         texture2d<float, access::write> out [[texture(1)]]) {
    out.write(in.read(gid), gid);
}
I filed feedback FB19530049.
Hello!
We are working on a real-time 2-player online game targeting multiple Apple devices.
The following issue only occurs on tvOS:
When selecting matchmaking to connect with another random player, the native Game Center interface opens and begins the matchmaking process.
Almost immediately after clicking "start", the following log appears in the console, and the matchmaking screen remains indefinitely without completing:
Timeout while starting matching with request: <GKMatchRequestInternal 0x30d62f690> {
defaultNumberOfPlayers : 0
isLateJoin : 0
localPlayerID : U:bea182d69b85f0839e3958742fbc4609
matchType : 0
maxPlayers : 2
minPlayers : 2
playerAttributes : 4294967295
playerGroup : 1
preloadedMatch : 0
recipientPlayerIDs : <__NSArrayM 0x3034ed5c0> {}
recipients : <__NSArrayM 0x3034ee280> {}
restrictToAutomatch : 0
version : 1
archivedSharePlayInviteeTokensFromProgrammaticInvite, inviteMessage, localizableInviteMessage, messagesBasedRecipients, properties, queueName, recipientProperties, rid, sessionToken : (null)
} . Error: (null)
However, as shown in the code snippet below, the task does not complete when the log appears. But when we manually cancel the matchmaking process, the "User cancel" log is correctly triggered.
var gkMatchRequest = GKMatchRequest.Init();
gkMatchRequest.MinPlayers = 2;
gkMatchRequest.MaxPlayers = 2;
var matchRequestTask = GKMatchmakerViewController.Request(gkMatchRequest);
matchRequestTask.ContinueWith(t => { Debug.LogException(t.Exception); }, TaskContinuationOptions.OnlyOnFaulted);
matchRequestTask.ContinueWith(t => { Debug.Log("User cancel"); }, TaskContinuationOptions.OnlyOnCanceled);
matchRequestTask.ContinueWith(t => { Debug.Log("Success"); }, TaskContinuationOptions.OnlyOnRanToCompletion);
We have tested this on multiple Apple TV and network types (Wi-Fi, 5G, Ethernet), but we consistently encounter this bug along with the same log message.
Could you please help us understand or resolve this issue?
Thank you.
Hi there,
Is it possible to customize the Metal Performance HUD on Apple TV, similar to how it can be done on iPhone & iPad?
I'd like to see things like Compiled Shaders for my apps on tvOS.