
Crash in `outlined init with copy of` when run in Release Mode
Hey there,

When I run the following 50 lines of code in Release mode, or turn on Optimization in Build Settings (Swift Compiler - Code Generation), I get the crash below. Anyone any idea why that happens? (Xcode 13.4.1; happens on device as well as in the simulator, on iOS 15.5 and 15.6.)

Example project: https://github.com/Bersaelor/ResourceCrashMinimalDemo

```
#0 0x000000010265dd58 in assignWithCopy for Resource ()
#1 0x000000010265d73c in outlined init with copy of Resource<VoidPayload, String> ()
#2 0x000000010265d5dc in specialized Resource<>.init(url:method:query:authToken:headers:) [inlined] at /Users/konradfeiler/Source/ResourceCrashMinimalDemo/ResourceCrashMinimalDemo/ContentView.swift:51
#3 0x000000010265d584 in specialized ContentView.crash() at /Users/konradfeiler/Source/ResourceCrashMinimalDemo/ResourceCrashMinimalDemo/ContentView.swift:18
```

Code needed:

```swift
import SwiftUI

struct ContentView: View {
    var body: some View {
        Button(action: { crash() }, label: { Text("Create Resource") })
    }

    /// crashes in `outlined init with copy of Resource<VoidPayload, String>`
    func crash() {
        let testURL = URL(string: "https://www.google.com")!
        let r = Resource<VoidPayload, String>(url: testURL, method: .get, authToken: nil)
        print("r: \(r)")
    }
}

struct VoidPayload {}

enum HTTPMethod<Payload> {
    case get
    case post(Payload)
    case patch(Payload)
}

struct Resource<Payload, Response> {
    let url: URL
    let method: HTTPMethod<Payload>
    let query: [(String, String)]
    let authToken: String?
    let parse: (Data) throws -> Response
}

extension Resource where Response: Decodable {
    init(
        url: URL,
        method: HTTPMethod<Payload>,
        query: [(String, String)] = [],
        authToken: String?,
        headers: [String: String] = [:]
    ) {
        self.url = url
        self.method = method
        self.query = query
        self.authToken = authToken
        self.parse = {
            return try JSONDecoder().decode(Response.self, from: $0)
        }
    }
}
```
Replies: 2 · Boosts: 2 · Views: 3.8k · Aug ’22
ScrollViewReader scrollTo ignores withAnimation-Duration
I tried animating the scrollTo() like so, as described in the docs (https://developer.apple.com/documentation/swiftui/scrollviewreader):

```swift
withAnimation {
    scrollProxy.scrollTo(index, anchor: .center)
}
```

The result is the same as if I do

```swift
withAnimation(Animation.easeIn(duration: 20)) {
    scrollProxy.scrollTo(progress.currentIndex, anchor: .center)
}
```

I tried this using the example from the ScrollViewReader docs, with the result that up and down scrolling have exactly the same animation.

```swift
struct ScrollingView: View {
    @Namespace var topID
    @Namespace var bottomID

    var body: some View {
        ScrollViewReader { proxy in
            ScrollView {
                Button("Scroll to Bottom") {
                    withAnimation {
                        proxy.scrollTo(bottomID)
                    }
                }
                .id(topID)

                VStack(spacing: 0) {
                    ForEach(0..<100) { i in
                        color(fraction: Double(i) / 100)
                            .frame(height: 32)
                    }
                }

                Button("Top") {
                    withAnimation(Animation.linear(duration: 20)) {
                        proxy.scrollTo(topID)
                    }
                }
                .id(bottomID)
            }
        }
    }

    func color(fraction: Double) -> Color {
        Color(red: fraction, green: 1 - fraction, blue: 0.5)
    }
}

struct ScrollingView_Previews: PreviewProvider {
    static var previews: some View {
        ScrollingView()
    }
}
```
Replies: 12 · Boosts: 7 · Views: 5.1k · May ’24
How to get multiple animations into USDZ
Most models are only available as glb or fbx, so I usually re-export them to usdz using Blender. When I import them into Reality Composer Pro, mesh, textures etc. look great, but in the Animation Library subsection all I can see is one default subtree animation. In Blender I can see all available animations and play them individually. The default subtree animation just plays the default idle animation; in fact, when I open the nonlinear animation view in Blender and select a different animation as the default, the exported usdz shows the newly selected animation as the default subtree animation. I can see that in the Apple sample apps, models can have multiple animations in their Animation Library. I'm using the latest Blender 4.5, and as far as I can tell its usdz exporter should be working properly. What am I missing?
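One quick way to check what actually made it into the export, independent of Reality Composer Pro's UI, is to load the entity in RealityKit and print its animation library. A minimal sketch (`listAnimations` is my own helper; loading the model is assumed to happen elsewhere):

```swift
import RealityKit

// Print every animation RealityKit sees on a loaded entity. With the Blender
// export described above, this would list a single "default subtree animation"
// instead of the full set visible in Blender's NLA editor.
func listAnimations(in entity: Entity) {
    for animation in entity.availableAnimations {
        print("animation:", animation.name ?? "<unnamed>")
    }
}
```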
Replies: 3 · Boosts: 1 · Views: 819 · Oct ’25
SwiftUI Preview fails with "RemoteHumanReadableError: Could not connect to agent"
I added a basic Hello World SwiftUI view to an existing UIKit project, yet I cannot get the preview to work. The usual error is:

```
MessageSendFailure: Message send failure for send render message to agent
==================================
| RemoteHumanReadableError: Could not connect to agent
|
| Bootstrap timeout after 8.0s waiting for connection from 'Identity(pid: 30286, sceneIdentifier: Optional("XcodePreviews-30286-133-static"))' on service com.apple.dt.uv.agent-preview-service
```

Neither this nor the generated report is very helpful. I also created a new Xcode project to see if the same view works in a fresh test project, which it does. If my project compiles without warnings and errors, but the SwiftUI preview fails, what options do I have left? (My deployment target is iOS 14; my Xcode is a fresh Xcode 13.0.)
Replies: 1 · Boosts: 0 · Views: 1.1k · Oct ’21
Showing a MTLTexture on an Entity in RealityKit
Is there any standard way of efficiently showing a MTLTexture on a RealityKit Entity? I can't find anything proper on how to, for example, generate a LowLevelTexture out of a MTLTexture. The closest match was this two-year-old thread. In the old SceneKit app, we would just do

```swift
guard let material = someNode.geometry?.materials.first else { return }
material.diffuse.contents = mtlTexture
```

Our flow is as follows (for visualizing the currently detected object): camera stream -> CoreML segmentation -> send the relevant part of the MLShapedArray tensor to a MTLComputeShader that returns a MTLTexture -> show the resulting texture on a 3D object to the user.
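For reference, one route that appears intended for exactly this on iOS 15+ is `TextureResource.DrawableQueue`, which hands out drawables whose `.texture` is a MTLTexture you can blit into each frame. A sketch under stated assumptions: `makePlaceholderImage` is a hypothetical helper returning a CGImage, and the compute shader's output texture plus a MTLCommandQueue are assumed to exist; treat the exact descriptor fields as something to verify against the docs.

```swift
import Metal
import RealityKit

// Sketch: stream MTLTextures into a RealityKit material via a DrawableQueue.
func makeStreamedTexture(width: Int, height: Int) throws -> (TextureResource, TextureResource.DrawableQueue) {
    let descriptor = TextureResource.DrawableQueue.Descriptor(
        pixelFormat: .bgra8Unorm,
        width: width,
        height: height,
        usage: [.shaderRead, .renderTarget],
        mipmapsMode: .none
    )
    let queue = try TextureResource.DrawableQueue(descriptor)

    // Start from any placeholder resource, then route it to the queue.
    // makePlaceholderImage(width:height:) is a hypothetical CGImage factory.
    let resource = try TextureResource.generate(
        from: makePlaceholderImage(width: width, height: height),
        options: .init(semantic: .color)
    )
    resource.replace(withDrawables: queue)
    return (resource, queue)
}

// Per frame: copy the compute shader's output into the next drawable.
func present(_ mtlTexture: MTLTexture,
             on queue: TextureResource.DrawableQueue,
             using commandQueue: MTLCommandQueue) throws {
    let drawable = try queue.nextDrawable()
    guard let commandBuffer = commandQueue.makeCommandBuffer(),
          let blit = commandBuffer.makeBlitCommandEncoder() else { return }
    blit.copy(from: mtlTexture, to: drawable.texture)
    blit.endEncoding()
    commandBuffer.commit()
    drawable.present()
}
```

The returned TextureResource can then be assigned as, e.g., the color texture of an UnlitMaterial on the entity.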
Replies: 5 · Boosts: 0 · Views: 1.1k · Sep ’25
RealityView doesn't free up memory after disappearing
Basically, take just the Xcode 26 AR App template, with the ContentView as the detail end of a NavigationStack. Opening the app, it uses < 20 MB of memory. Tapping on Open AR, memory usage goes up to ~700 MB for the AR scene. Tapping back, memory stays at ~700 MB. Checking with Debug Memory Graph, I can still see all the RealityKit classes in memory, like ARView, ARRenderView, ARSessionManager. Here's the sample app to illustrate the issue. PS: To keep memory pressure on the system low, there should be a way of freeing all the memory AR uses, for apps that only occasionally show AR scenes.
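For comparison: with a UIKit-hosted ARView it is at least possible to tear AR down by hand when the screen disappears. A sketch (`releaseARResources` is my own helper, and even this does not guarantee the OS returns every byte immediately); RealityView exposes no equivalent handle, which is part of the problem being reported:

```swift
import RealityKit
import UIKit

// Manual teardown for an ARView-based screen (hypothetical helper).
func releaseARResources(_ arView: ARView) {
    arView.session.pause()            // stop the ARSession's camera/tracking work
    arView.scene.anchors.removeAll()  // drop all scene content
    arView.removeFromSuperview()      // detach so the view itself can deallocate
}
```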
Replies: 0 · Boosts: 1 · Views: 189 · Sep ’25
Xcode Cloud builds don't work with *.usdz files in a RealityComposer package
In sessions like Compose interactive 3D content in Reality Composer Pro, RealityKit engineers recommend working with Reality Composer Pro to create RealityKit packages to embed in our RealityKit Xcode projects. Comparing the workflow to Unity/Unreal, I can see the reasoning, since it is nice to prepare scenes/materials/assets visually. But when we also want to run an Xcode Cloud CI/CD pipeline, this seems to come into conflict: after adding a basic *.usdz to the RealityKitContent.rkassets folder, every build we run on Xcode Cloud fails with:

```
Compile Reality Asset RealityKitContent.rkassets
❌ realitytool requires Metal for this operation and it is not available in this build environment
```

I have also found a related forum post here, but it was specifically about compiling a *.skybox.
Replies: 4 · Boosts: 1 · Views: 622 · Sep ’25
Anchoring a Prim to the face doesn't work
Aloha Quick Lookers,

I'm using usdzconvert preview 0.64 to create *.usdz files, which I then edit in ASCII *.usda format and repackage as *.usdz. This way I was able to fix the scale (the original gltf files are usually in m=1, while usdzconvert 0.64 always sets the result to m=0.01). Now I was trying to follow the docs to anchor my Glasses prim to the user's face, and whatever I try, it will only ever place it on my table's surface. If I import my *.usdz file into Reality Composer and export it to usdz, it does get anchored to the face correctly. The Reality Composer export really doesn't look so different from my manually edited usdz (it just wraps the geometry in another layer, I assume because of the import/export through Reality Composer). What am I doing wrong?

Here's the Reality Composer generated usda:

```
#usda 1.0
(
    autoPlay = false
    customLayerData = {
        string creator = "com.apple.RCFoundation Version 1.5 (171.5)"
        string identifier = "9AAF5C5D-68AB-4034-8037-9BBE6848D8E5"
    }
    defaultPrim = "Root"
    metersPerUnit = 1
    timeCodesPerSecond = 60
    upAxis = "Y"
)

def Xform "Root"
{
    def Scope "Scenes" (
        kind = "sceneLibrary"
    )
    {
        def Xform "Scene" (
            customData = {
                bool preliminary_collidesWithEnvironment = 0
                string sceneName = "Scene"
            }
            sceneName = "Scene"
        )
        {
            token preliminary:anchoring:type = "face"
            quatf xformOp:orient = (0.70710677, 0.70710677, 0, 0)
            double3 xformOp:scale = (1, 1, 1)
            double3 xformOp:translate = (0, 0, 0)
            uniform token[] xformOpOrder = ["xformOp:translate", "xformOp:orient", "xformOp:scale"]
            // ...
```

and here is my own, minimal, usdz with the same face-anchoring token:

```
#usda 1.0
(
    autoPlay = false
    customLayerData = {
        string creator = "usdzconvert preview 0.64"
    }
    defaultPrim = "lupetto_local"
    metersPerUnit = 1
    timeCodesPerSecond = 60
    upAxis = "Y"
)

def Xform "lupetto_local" (
    assetInfo = {
        string name = "lupetto_local"
    }
    kind = "component"
)
{
    def Scope "Geom"
    {
        def Xform "Glasses"
        {
            token preliminary:anchoring:type = "face"
            double3 xformOp:translate = (0, 0.01799999736249447, 0.04600000008940697)
            uniform token[] xformOpOrder = ["xformOp:translate"]
            // ...
```

I'd add the full files, but they are 2 MB in size and the max file size is 200 KB. I could create a minimal example file with less geometry in case the above code is not enough.
Replies: 1 · Boosts: 0 · Views: 1.4k · Jun ’21
Opening a new terminal window or tab is extremely slow
Working with an M1 MacBook Air, macOS 12.4. Anytime I open a new terminal window or just a new tab, it takes a really long time until I can type. I have commented out my entire ~/.zshrc, and when I run

```shell
for i in $(seq 1 10); do /usr/bin/time $SHELL -i -c exit; done
```

directly in an open terminal window, it says it finished in 0.1s. So it must be something macOS is doing before zsh even starts. PS: In Activity Monitor I can only see a spike in kernel_task CPU usage when opening a new terminal.
Replies: 1 · Boosts: 0 · Views: 1.9k · Jun ’22
[26] audioTimeRange would still be interesting for .volatileResults in SpeechTranscriber
Experimenting with the new SpeechTranscriber, if I do:

```swift
let transcriber = SpeechTranscriber(
    locale: locale,
    transcriptionOptions: [],
    reportingOptions: [.volatileResults],
    attributeOptions: [.audioTimeRange]
)
```

only the final result has audio time ranges, not the volatile results. Is this a performance consideration? If there is no performance problem, it would be nice to have the option to also get speech time ranges for volatile responses. I'm not presenting the volatile text in the UI at all; I was just trying to keep statistics about the non-speech and speech noise levels, so I can determine when the noise level falls under the noise floor for a while. The goal was to finalize the recording automatically when the noise level indicates that the user has finished speaking.
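The level statistics described above can be kept per audio buffer without any transcriber attributes at all. A minimal sketch; `levelInDBFS` and `SilenceDetector` are my own names, not part of the Speech framework:

```swift
import Foundation

// Hypothetical helper: level of a buffer of normalized samples, in dBFS.
func levelInDBFS(_ samples: [Double]) -> Double {
    guard !samples.isEmpty else { return -.infinity }
    let meanSquare = samples.reduce(0) { $0 + $1 * $1 } / Double(samples.count)
    let rms = meanSquare.squareRoot()
    // Clamp silence to a practical floor instead of log10(0) = -inf.
    return rms > 0 ? 20 * log10(rms) : -120
}

// Hypothetical helper: "quiet for N consecutive buffers" end-of-speech detector.
struct SilenceDetector {
    let noiseFloorDB: Double   // e.g. -50 dBFS
    let buffersNeeded: Int     // how long the quiet must last
    private(set) var quietCount = 0

    // Feed one buffer's level; true once silence has lasted long enough.
    mutating func feed(levelDB: Double) -> Bool {
        quietCount = levelDB < noiseFloorDB ? quietCount + 1 : 0
        return quietCount >= buffersNeeded
    }
}
```

A constant amplitude of 0.5 comes out around -6 dBFS; once enough consecutive buffers fall under the floor, the recording could be finalized automatically, which is the behavior the post is after.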
Replies: 6 · Boosts: 0 · Views: 756 · Nov ’25
SpeechTranscriber/SpeechAnalyzer being relatively slow compared to FoundationModel and TTS
So, I've been wondering how fast an offline STT -> ML prompt -> TTS roundtrip would be. Interestingly, in many tests, the SpeechTranscriber (STT) takes the bulk of the time, compared to generating a FoundationModel response and creating the audio using TTS. E.g.:

```
InteractionStatistics:
- listeningStarted:             21:24:23 4480 2423
- timeTillFirstAboveNoiseFloor: 01.794
- timeTillLastNoiseAboveFloor:  02.383
- timeTillFirstSpeechDetected:  02.399
- timeTillTranscriptFinalized:  04.510
- timeTillFirstMLModelResponse: 04.938
- timeTillMLModelResponse:      05.379
- timeTillTTSStarted:           04.962
- timeTillTTSFinished:          11.016
- speechLength:                 06.054
- timeToResponse:               02.578
- transcript: This is a test.
- mlModelResponse: Sure! I'm ready to help with your test. What do you need help with?
```

Here, between my audio input ending and the text-to-speech starting to play (using AVSpeechUtterance), the total response time was 2.5s. Of that time, it took the SpeechAnalyzer 2.1s to finalize the transcript; FoundationModel only took 0.4s to respond (and TTS started playing nearly instantly). I'm already using reportingOptions: [.volatileResults, .fastResults], so it's probably as fast as possible right now? I'm just surprised the STT takes so much longer than the other parts (they're all CoreML-based, aren't they?)
Replies: 2 · Boosts: 0 · Views: 624 · Aug ’25
FromToByAnimation triggers availableAnimations not the single bone animation
So, I was trying to animate a single bone using FromToByAnimation, but when I start the animation, the model instead plays the full-body animation stored in availableAnimations. If I don't run testAnimation, nothing happens. If I run testAnimation, I see the same animation as if I had called entity.playAnimation(entity.availableAnimations[0], ...). Here's the full code I use to animate a single bone:

```swift
func testAnimation() {
    guard let jawAnim = jawAnimation(mouthOpen: 0.4) else {
        print("Failed to create jawAnim")
        return
    }
    guard let creature, let animResource = try? AnimationResource.generate(with: jawAnim) else { return }
    let controller = creature.playAnimation(animResource, transitionDuration: 0.02, startsPaused: false)
    print("controller: \(controller)")
}

func jawAnimation(mouthOpen: Float) -> FromToByAnimation<JointTransforms>? {
    guard let basePose else { return nil }
    guard let index = basePose.jointNames.firstIndex(of: jawBoneName) else {
        print("Target joint \(self.jawBoneName) not found in default pose joint names")
        return nil
    }

    let fromTransforms = basePose.jointTransforms
    let baseJawTransform = fromTransforms[index]
    let maxAngle: Float = 40
    let angle: Float = maxAngle * mouthOpen * (.pi / 180)
    let extraRot = simd_quatf(angle: angle, axis: simd_float3(x: 0, y: 0, z: 1))

    var toTransforms = basePose.jointTransforms
    toTransforms[index] = Transform(
        scale: baseJawTransform.scale * 2,
        rotation: baseJawTransform.rotation * extraRot,
        translation: baseJawTransform.translation
    )

    let fromToBy = FromToByAnimation<JointTransforms>(
        jointNames: basePose.jointNames,
        name: "jaw-anim",
        from: fromTransforms,
        to: toTransforms,
        duration: 0.1,
        bindTarget: .jointTransforms,
        repeatMode: .none
    )
    return fromToBy
}
```

PS: I can confirm that I can set this bone to a specific position if I use

```swift
guard let index = newPose.jointNames.firstIndex(of: boneName) ...
let baseTransform = basePose.jointTransforms[index]
newPose.jointTransforms[index] = Transform(
    scale: baseTransform.scale,
    rotation: baseTransform.rotation * extraRot,
    translation: baseTransform.translation
)
skeletalComponent.poses.default = newPose
creatureMeshEntity.components.set(skeletalComponent)
```

This works for manually setting the bone position, so the jawBoneName and the joint transformation can't be that wrong.
Replies: 1 · Boosts: 0 · Views: 317 · Aug ’25
Rendering scene in RealityView to an Image
Is there any way to render a RealityView to an Image/UIImage, like we used to be able to do with SCNView.snapshot()? ImageRenderer doesn't work, because it renders a SwiftUI view hierarchy, and I need the currently presented RealityView with camera background and 3D scene content, the way the user sees it. I tried UIHostingController and UIGraphicsImageRenderer, like

```swift
extension View {
    func snapshot() -> UIImage {
        let controller = UIHostingController(rootView: self)
        let view = controller.view

        let targetSize = controller.view.intrinsicContentSize
        view?.bounds = CGRect(origin: .zero, size: targetSize)
        view?.backgroundColor = .clear

        let renderer = UIGraphicsImageRenderer(size: targetSize)
        return renderer.image { _ in
            view?.drawHierarchy(in: view!.bounds, afterScreenUpdates: true)
        }
    }
}
```

but that leads to the app freezing and logging an infinite loop of [CAMetalLayer nextDrawable] returning nil because allocation failed. The same thing happens when I try

```swift
return renderer.image { ctx in
    view.layer.render(in: ctx.cgContext)
}
```

Now that SceneKit is deprecated, I didn't want to start a new app using deprecated APIs.
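For what it's worth, ARView (RealityKit's UIKit view, not deprecated) does have a dedicated snapshot API that captures the rendered frame including the camera background. A sketch, under the assumption that the app can host or reach an ARView rather than a pure RealityView (which is the open question here):

```swift
import RealityKit
import UIKit

// Capture the currently rendered AR frame as a UIImage.
// saveToHDR: false requests a standard-dynamic-range image.
func captureSnapshot(of arView: ARView, completion: @escaping (UIImage?) -> Void) {
    arView.snapshot(saveToHDR: false) { image in
        completion(image)
    }
}
```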
Replies: 3 · Boosts: 0 · Views: 1.2k · Sep ’25
Sometimes stream via AVPlayer plays Audio but Video is just grey
In our app, we're streaming short HLS streams to local AVPlayers. In the view we have an AVPlayerLayer connected to the AVPlayer we start, and usually we hear audio and see video. Sometimes, instead of video, we'll see a grey screen while still being able to hear the audio. The player layer shows a shade of grey that is not a color coming from our app; if, for testing purposes, we don't connect the player to the player layer, the grey is not there, so it's definitely coming from the AVPlayerLayer. If this is somehow related to an HLS stream becoming corrupted, what are the APIs in AVFoundation we could use to debug this? So far, when this happens, we see no peculiar catch blocks or errors thrown in our debug logs.
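One place to start digging, sketched below: AVPlayerItem keeps per-item error and access logs for HLS playback and posts notifications when new entries arrive, so problems can surface even when no Swift error is thrown (`attachDiagnostics` is my own helper name):

```swift
import AVFoundation

// Hook up HLS diagnostics for a player item.
func attachDiagnostics(to item: AVPlayerItem) {
    // Fired whenever the item appends a new HLS error log entry.
    NotificationCenter.default.addObserver(
        forName: .AVPlayerItemNewErrorLogEntry, object: item, queue: .main
    ) { _ in
        if let event = item.errorLog()?.events.last {
            print("HLS error:", event.errorStatusCode, event.errorComment ?? "")
        }
    }
    // Fired when playback fails outright before reaching the end.
    NotificationCenter.default.addObserver(
        forName: .AVPlayerItemFailedToPlayToEndTime, object: item, queue: .main
    ) { note in
        print("Failed to play to end:",
              note.userInfo?[AVPlayerItemFailedToPlayToEndTimeErrorKey] ?? "unknown")
    }
}
```

Inspecting `item.accessLog()` alongside the error log can also show segment/bitrate stalls that never produce a thrown error.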
Replies: 0 · Boosts: 0 · Views: 1.2k · Mar ’21
ARCoachingOverlayView replacement for RealityView
I thought the ARCoachingOverlayView was a nice touch, so each app's ARKit coaching was recognizable, and I used it in my ARView/ARSCNView based apps. Now, with RealityView, is there any replacement planned? Or should we just use UIViewRepresentable and wrap ARCoachingOverlayView?
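The UIViewRepresentable wrapping mentioned above might look like this sketch. It assumes an ARSession is available to hand to the overlay; with RealityView on iOS there is no public session accessor, which is exactly the gap being asked about:

```swift
import ARKit
import SwiftUI

// SwiftUI wrapper around ARKit's coaching overlay (assumes a reachable ARSession).
struct CoachingOverlay: UIViewRepresentable {
    let session: ARSession
    let goal: ARCoachingOverlayView.Goal   // e.g. .horizontalPlane

    func makeUIView(context: Context) -> ARCoachingOverlayView {
        let overlay = ARCoachingOverlayView()
        overlay.session = session
        overlay.goal = goal
        overlay.activatesAutomatically = true
        return overlay
    }

    func updateUIView(_ uiView: ARCoachingOverlayView, context: Context) {}
}
```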
Replies: 1 · Boosts: 0 · Views: 533 · Sep ’25