Reply to Tests on Xcode Cloud with Apps importing CoreML
Thank you, I've created FB20529610 and attached the *.xctestproducts.zip build artifact; I hope that allows you to reproduce the issue. @DTS Engineer, when you say "Be sure to include links to failing builds in the report", you mean the URL shown when opening the build via App Store Connect, right? (Like https://appstoreconnect.apple.com/teams/{teamid}/apps/{appid}/ci/builds/{buildid}/summary ) Because when looking at the build in Xcode, there is no share button or copy-URL option, is there?
Oct ’25
Reply to Xcode Cloud builds don't work with *.usdz files in a RealityComposer package
Steps to reproduce:

1. Create the default Xcode ARKit RealityKit template app.
2. Open Reality Composer Pro, create a new project, and choose a folder somewhere inside the app created above.
3. Add RealityKitContent as a dependency under Frameworks, Libraries, and Embedded Content.
4. Drag the example USDZ into the Reality Composer Pro project, and optionally add it to the base scene.
5. Set up a basic Xcode Cloud build.

The result is the error above. Additionally, it always takes more than 2 minutes to fail. If I remove the *.usdz file, the build works (so it's not that the Metal toolchain isn't installed or that realitytool isn't working).

Here's the example file; I shared it via Dropbox since I can't attach USDZs here. It's just the monkey head from Blender, exported as USDZ and crushed using usdcrush: https://www.dropbox.com/scl/fi/f7b7dsyspby1zabk26932/monkey_crushed.usdz?rlkey=feilrul7gx6naxkbun7oaezv0&st=8afj5ewt&dl=0
Sep ’25
Reply to Showing a MTLTexture on an Entity in RealityKit
Mhmm, in my first simple test I tried:

```swift
@MainActor
private static func generateTexture(width: Int, height: Int) throws -> LowLevelTexture {
    return try LowLevelTexture(descriptor: .init(pixelFormat: .rgba8Unorm_srgb,
                                                 width: width,
                                                 height: height,
                                                 depth: 1,
                                                 mipmapLevelCount: 1,
                                                 textureUsage: [.shaderWrite, .shaderRead]))
}

@MainActor
init(textureSize: SIMD2<Int>) async throws {
    lowLevelTexture = try Self.generateTexture(width: textureSize.x, height: textureSize.y)
    let textureResource = try await TextureResource(from: lowLevelTexture)

    var descriptor = UnlitMaterial.Program.Descriptor()
    descriptor.blendMode = .add
    let program = await UnlitMaterial.Program(descriptor: descriptor)
    material = UnlitMaterial(program: program)
    material.color = .init(texture: .init(textureResource))
    material.opacityThreshold = 0.0 // Enable transparency
    material.blending = .transparent(opacity: 1.0)
}

@MainActor
mutating func setTextureSize(_ textureSize: SIMD2<Int>) throws {
    lowLevelTexture = try Self.generateTexture(width: textureSize.x, height: textureSize.y)
    let textureResource = try TextureResource(from: lowLevelTexture)
    material.color = .init(texture: .init(textureResource))
}

mutating func blitMTLTextureIntoLowLevelTexture(_ mtlTexture: MTLTexture) {
    let size = self.textureSize
    guard mtlTexture.width == size.x, mtlTexture.height == size.y else {
        Logger.ar.error("MTLTexture size \(mtlTexture.width)x\(mtlTexture.height) does not match LowLevelTexture size \(size.x)x\(size.y)")
        return
    }
    MetalHelper.blitTextures(from: mtlTexture, to: lowLevelTexture)
}
```

And then the blit method:

```swift
static func blitTextures(from inTexture: MTLTexture, to lowLevelTexture: LowLevelTexture) {
    guard let commandQueue = sharedCommandQueue else {
        Logger.ml.error("Failed to get command queue")
        return
    }
    guard let commandBuffer = commandQueue.makeCommandBuffer() else {
        Logger.ml.error("Failed to create command buffer")
        return
    }
    guard let blitEncoder = commandBuffer.makeBlitCommandEncoder() else {
        Logger.ml.error("Failed to create blit encoder")
        return
    }
    commandBuffer.enqueue()
    defer {
        blitEncoder.endEncoding()
        commandBuffer.commit()
    }
    let outTexture: MTLTexture = lowLevelTexture.replace(using: commandBuffer)
    blitEncoder.copy(from: inTexture, to: outTexture)
}
```

This compiles and runs without error, but I only see a pink mesh.
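
For context, this is roughly how the material ends up on a mesh in my test. A minimal sketch with assumed names (DynamicTextureMaterial stands in for the struct above that owns material and lowLevelTexture):

```swift
import RealityKit

// Sketch with assumed names: apply the UnlitMaterial built above to a simple plane.
@MainActor
func makeTexturedPlane(using dynamicMaterial: DynamicTextureMaterial) -> ModelEntity {
    let mesh = MeshResource.generatePlane(width: 0.4, height: 0.3)
    return ModelEntity(mesh: mesh, materials: [dynamicMaterial.material])
}

// Each time a new MTLTexture arrives from my pipeline, I copy it into the
// LowLevelTexture backing the material:
//     dynamicMaterial.blitMTLTextureIntoLowLevelTexture(newTexture)
```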
Topic: Graphics & Games SubTopic: RealityKit
Sep ’25
Reply to Rendering scene in RealityView to an Image
Oh, so this is for a new app where we use CoreML and machine vision to detect animals in the AR scene and show details about them. I just realized that when I convert our existing SceneKit-based production Glasses-Try-On app to RealityKit, I'll face the same problem: in our glasses app, letting the user take a screenshot of themselves with the virtual glasses on is a core feature, and for that we need the 3D scene plus the camera background (without the UI).
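
For reference, in the SceneKit version this capture is basically a one-liner, because ARSCNView inherits SCNView's snapshot(), which composites the rendered scene over the camera feed without any overlaid UI. A simplified sketch of what we do today (names assumed):

```swift
import ARKit
import SceneKit
import UIKit

// Current SceneKit-based glasses app (simplified): scene + camera background, no UI.
func captureTryOnPhoto(from sceneView: ARSCNView) -> UIImage {
    return sceneView.snapshot()
}
```

On the RealityKit side, a UIKit ARView at least offers snapshot(saveToHDR:completion:), but as far as I can tell there is no equivalent exposed for a SwiftUI RealityView, which is what I'm after.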
Topic: Spatial Computing SubTopic: ARKit
Sep ’25
Reply to How to get multiple animations into USDZ
Thank you for the clarification, I rewatched that WWDC session again. What I ended up doing is:

- Export the main model with textures, UVs, etc. together with the main idle animation.
- Export each additional animation as a USDZ containing only the mesh and that animation.
- Import everything into Reality Composer Pro: in the scene editor, add the main model, then click (+) in the Animation Library and import the other animations from the other USDZs.
- In code, load the scene and take the entity out of the scene, not directly out of the RealityKit package.

This will probably inflate the app's download size slightly, since the bare mesh is about 300 KB and ends up duplicated in the app bundle. But from a workflow perspective, if the designer changes a single animation, I can re-import it without the risk of getting the start/end wrong.

PS: I filed a feedback in Feedback Assistant, because I feel like both options are sub-optimal.
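
For anyone finding this later, the in-code part looks roughly like this. A minimal sketch with assumed scene, entity, and clip names ("Scene", "MainModel", "Walk"); as far as I understand, the clips imported via (+) into RCP's Animation Library are exposed through AnimationLibraryComponent:

```swift
import RealityKit
import RealityKitContent

// Sketch with assumed names: load the RCP scene, take the model out of it,
// and play a named clip from its Animation Library.
@MainActor
func loadAnimatedModel() async throws -> Entity {
    let scene = try await Entity(named: "Scene", in: realityKitContentBundle)
    guard let model = scene.findEntity(named: "MainModel") else { return scene }

    if let library = model.components[AnimationLibraryComponent.self],
       let walk = library.animations["Walk"] {
        model.playAnimation(walk.repeat())
    }
    return model
}
```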
Aug ’25
Reply to SpeechTranscriber/SpeechAnalyzer being relatively slow compared to FoundationModel and TTS
Ah, nice, let's see. The KPI I'm interested in is the time between the last audio above the noise floor and the final transcript (i.e. between the user stopping speaking and the transcription being ready to trigger actions).

Baseline, without prepareToAnalyze: n = 11, avg = 2.2 s, var = 0.75
With prepareToAnalyze: n = 11, avg = 1.45 s, var = 1.305 (the delay varied greatly, between 0.05 s and 3 s)

So yeah, based on this small sample, preparing did seem to decrease the delay.
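
For what it's worth, this is roughly how I take those measurements; a simplified sketch with assumed names, since I already observe the input level for the noise-floor check anyway:

```swift
import Foundation

// Sketch (assumed names): timestamp the last buffer above the noise floor and
// compute the delta when the final transcript arrives.
final class TranscriptLatencyMeter {
    private var lastSpeechTime: Date?
    private(set) var samples: [TimeInterval] = []

    /// Called from my existing level observer whenever a buffer is above the noise floor.
    func audioAboveNoiseFloor() {
        lastSpeechTime = Date()
    }

    /// Called when the final (non-volatile) SpeechTranscriber result arrives.
    func finalTranscriptArrived() {
        guard let lastSpeechTime else { return }
        samples.append(Date().timeIntervalSince(lastSpeechTime))
    }

    var average: TimeInterval {
        samples.isEmpty ? 0 : samples.reduce(0, +) / Double(samples.count)
    }
}
```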
Topic: Media Technologies SubTopic: Audio
Aug ’25
Reply to [26] audioTimeRange would still be interesting for .volatileResults in SpeechTranscriber
> Consider using the SpeechDetector module in conjunction with SpeechTranscriber. SpeechDetector performs a similar voice activity detection function and integrates with SpeechTranscriber.

Thank you, so I've been using SpeechDetector like this for a while:

```swift
let detector = SpeechDetector(detectionOptions: SpeechDetector.DetectionOptions(sensitivityLevel: .medium),
                              reportResults: true)
if analyzer == nil {
    analyzer = SpeechAnalyzer(modules: [detector, transcriber],
                              options: SpeechAnalyzer.Options(priority: .high, modelRetention: .processLifetime))
}
self.analyzerFormat = await SpeechAnalyzer.bestAvailableAudioFormat(compatibleWith: [transcriber])
(inputSequence, inputBuilder) = AsyncStream<AnalyzerInput>.makeStream()

Task {
    for try await result in detector.results {
        print("result: \(result.description)")
    }
}

recognizerTask = Task {
    // ..
```

but I have never seen any of those "result:" lines in the logs. Is there any API where SpeechDetector would tell my app when it thinks the speech is over? The docs say:

> This module asks "is there speech?" and provides you with the ability to gate transcription by the presence of voices, saving power otherwise used by attempting to transcribe what is likely to be silence.

but this seems to happen behind the scenes, without giving my app direct feedback.

At the moment, I keep observing the input volume, and once it has been below my estimated noise floor for about 1 second I stop the recording (roughly as in the sketch below). I do this so I can trigger the next event programmatically without cutting off the user's speech mid-sentence. The app's user flow does not involve a "start"/"stop" recording button, so I need to end recordings automatically to create a seamless flow.
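
The volume-based workaround mentioned above looks roughly like this; a simplified sketch with assumed names and thresholds (in the real app the noise floor is estimated at runtime):

```swift
import AVFoundation

// Sketch (assumed values): stop the recording once the input level has stayed
// below the estimated noise floor for about one second.
final class SilenceStopper {
    private let noiseFloorDB: Float = -45.0        // assumption; estimated per session in the real app
    private let requiredSilence: TimeInterval = 1.0
    private var silenceStart: Date?

    /// Feed every captured buffer; returns true when the recording should end.
    func shouldStop(after buffer: AVAudioPCMBuffer) -> Bool {
        guard let samples = buffer.floatChannelData?[0], buffer.frameLength > 0 else { return false }
        let n = Int(buffer.frameLength)
        var sum: Float = 0
        for i in 0..<n { sum += samples[i] * samples[i] }
        let rms = (sum / Float(n)).squareRoot()
        let level = 20 * log10(max(rms, .leastNonzeroMagnitude))

        if level >= noiseFloorDB {
            silenceStart = nil                     // still above the floor: reset the timer
            return false
        }
        if silenceStart == nil { silenceStart = Date() }
        return Date().timeIntervalSince(silenceStart!) >= requiredSilence
    }
}
```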
Topic: Media Technologies SubTopic: Audio
Aug ’25