Reply to BlendShapes don’t animate while playing animation in RealityKit
The docs on AnimationGroup say: "If two animations on the same property overlap durations at runtime, the one that the framework processes second overwrites the first." Does that mean I'll have to adjust the animation in the USDZ using Blender so that it doesn't use the jaw or neck joints? Then I should be able to animate the jaw/neck simultaneously with the idle animation from my asset, using AnimationGroup?
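If that's right, I imagine the combination would look roughly like this — a sketch from memory of the newer RealityKit API (`jawClip` is a hypothetical clip that only animates the jaw, and the initializer details should be verified against the docs):

```swift
import RealityKit

// Sketch: group a body clip and a jaw clip that drive disjoint joints,
// so neither overwrites the other at runtime.
let bodyClip = animations[index]      // idle/walk clip from the USDZ
let jawClip  = jawAnimationResource   // hypothetical: animates only the jaw

let group = AnimationGroup(group: [bodyClip.definition, jawClip.definition])
let grouped = try AnimationResource.generate(with: group)
entity.playAnimation(grouped, transitionDuration: 0.2, startsPaused: false)
```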
Topic: Graphics & Games SubTopic: RealityKit Tags:
Jul ’25
Reply to BlendShapes don’t animate while playing animation in RealityKit
The goal is to play facial expressions (like blinking or talking) while a body animation (like waving) is playing. I'm actually working on something similar and wondering the same thing. My model is imported from a USDZ with a list of animations (walk, idle, etc.), e.g.:

```swift
entity.playAnimation(animations[index], transitionDuration: 0.2, startsPaused: false)
```

I can manipulate joints like the neck or jaw programmatically to adjust the model, by doing:

```swift
// input variable mouthOpen: Float
let target = "Root_M/.../Jaw_M"
var newPose = basePose
guard let index = newPose.jointNames.firstIndex(of: target) else { return }
let baseTransform = basePose.jointTransforms[index]
let maxAngle: Float = 40
let angle: Float = maxAngle * mouthOpen * (.pi / 180)
let extraRot = simd_quatf(angle: angle, axis: simd_float3(x: 0, y: 0, z: 1))
newPose.jointTransforms[index] = Transform(
    scale: baseTransform.scale,
    rotation: baseTransform.rotation * extraRot,
    translation: baseTransform.translation
)
skeletalComponent.poses.default = newPose
creatureMeshEntity.components.set(skeletalComponent)
```

I also plan on making the head look at a specific point by manually setting the neck or eye joint rotations. The problem is that playing an animation via entity.playAnimation() overwrites the joint transforms and so blocks the programmatic rotation of joints. Playing a character's walk/idle animation while making them look at a specific spot is a pretty common use case, isn't it?
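As a side note on the angle math: here's a tiny pure-Swift helper (no RealityKit; the input clamping is my addition, not part of the original snippet) that maps a normalized mouth-open value to the jaw rotation in radians:

```swift
import Foundation

// Map a normalized mouth-open value (0...1) to a jaw rotation in radians,
// clamping out-of-range input. 40 degrees matches maxAngle above.
func jawAngle(mouthOpen: Float, maxAngleDegrees: Float = 40) -> Float {
    let clamped = min(max(mouthOpen, 0), 1)
    return maxAngleDegrees * clamped * (.pi / 180)
}

print(jawAngle(mouthOpen: 0.5))  // half open: ≈ 0.349 rad (20°)
```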
Topic: Graphics & Games SubTopic: RealityKit Tags:
Jul ’25
Reply to There's wrong with speech detector ios26
Thank you Greg! @DTS Engineer When I use it with the retroactive protocol conformance, it seems to work, but I never see any results (for reportResults: true). When I try:

```swift
let detector = SpeechDetector(
    detectionOptions: SpeechDetector.DetectionOptions(sensitivityLevel: .medium),
    reportResults: true
)
if analyzer == nil {
    analyzer = SpeechAnalyzer(
        modules: [detector, transcriber],
        options: SpeechAnalyzer.Options(priority: .high, modelRetention: .processLifetime)
    )
}
Task {
    for try await result in detector.results {
        print("result: \(result.description)")
    }
}
```

I never see any of the result prints in the log, while the transcription works fine. Is detector.results supposed to be used like that, and if so, does it show any response for others?
Topic: Media Technologies SubTopic: Audio Tags:
Jul ’25
Reply to There's wrong with speech detector ios26
SpeechAnalysisModule doesn't exist; the SpeechAnalyzer init parameter type is called SpeechModule. Doing

```swift
let modules: [any SpeechModule] = [detector, transcriber]
```

also doesn't work, since it's obviously Cannot convert value of type 'SpeechDetector' to expected element type 'any SpeechModule'. This compiles and runs:

```swift
let detector = SpeechDetector(
    detectionOptions: SpeechDetector.DetectionOptions(sensitivityLevel: .medium),
    reportResults: true
)
let modules: [any SpeechModule] = [detector as! (any SpeechModule), transcriber]
let analyzer = SpeechAnalyzer(
    modules: modules,
    options: SpeechAnalyzer.Options(priority: .high, modelRetention: .processLifetime)
)
```

but honestly, I see no difference with or without the detector. Actually testing the results (with the same detector and modules setup) via

```swift
Task {
    for try await result in detector.results {
        print("result: \(result.description)")
    }
}
```

also doesn't yield any log lines, so I think that while force-casting it to SpeechModule doesn't make the app crash, the detector is just ignored.
Topic: Media Technologies SubTopic: Audio Tags:
Jul ’25
Reply to Model Guardrails Too Restrictive?
I had a similar experience in Beta 3: even questions like "What is the capital of France?" were hitting guardrails. I tried the same question with a number of real countries and it was always guardrailed. Then I tried with Gondor and Westeros, and for those fictional countries the model sent a response. I'm assuming mentioning real country names must have triggered guardrails against political topics. As of Beta 4, my test questions for capitals work for both real and fictional countries.
Jul ’25
Reply to Glass material in USDZ
I think it would help to specify more clearly which shader system you are using. ARKit is only the framework that matches your camera to the 3D scene, i.e. it handles the odometry etc. The materials are rendered by the 3D engine you use; technically you can combine ARKit with shaders from RealityKit, SceneKit, or MetalKit. Here's a nice writeup on their differences on Stack Overflow.
Topic: Spatial Computing SubTopic: ARKit Tags:
Dec ’22
Reply to Safari, JavaScript and WebGL extremely bad performance?
We're seeing the same issue in all our WebGL applications. It definitely worked in April 2021; since then, without changing any code, Safari performance went from 60 fps to 2 fps. Example scene built in Unity (with a camera texture, since camera input is important for our AR applications): https://test.looc.io/forest/index.html
Topic: Safari & Web SubTopic: General Tags:
Feb ’22
Reply to Anchoring a Prim to the face doesn't work
So it seems just putting the line `token preliminary:anchoring:type = "face"` in was not enough. What worked was writing a Python script, importing `from pxr import Usd, Sdf, UsdGeom, Kind`, and then creating a scene hierarchy of /Root/Scenes/Scene/Children/MyModel where the scene prim gets the anchor token. The USD classes reference (https://graphics.pixar.com/usd/docs/api/class_usd_stage.html) helped a little, but it was way more complicated than I expected it to be.
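For reference, the hierarchy described above might look roughly like this in .usda form — a hedged sketch reconstructed from the description, where the prim types and the reference syntax for `MyModel` are my assumptions, not verified output of the script:

```
#usda 1.0
(
    defaultPrim = "Root"
)

def Xform "Root"
{
    def Xform "Scenes"
    {
        def Xform "Scene"
        {
            token preliminary:anchoring:type = "face"

            def Xform "Children"
            {
                def "MyModel" (
                    prepend references = @MyModel.usdz@
                )
                {
                }
            }
        }
    }
}
```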
Topic: App & System Services SubTopic: General Tags:
Jun ’21