Hi, I called it a "perspective problem", but I'm not quite sure what it is. I have a tag that I track with the built-in camera. I calculate its pose, then use the extrinsics and device anchor to calculate where to place an entity with a model.
When I place an entity that overlaps a physical object and start to look at it from different angles, the virtual object begins to move. Initially I thought something was wrong with my calculations, or that image distortion closer to the camera edges was affecting tag detection. To check, I calculated the position only once and displayed the entity there; the physical tracked object is not moving. Now, when I move my head so the object is more to the left or right in my field of view, the virtual object becomes misaligned to the left or right. It feels like a parallax effect, but the distances from me to the entity and to the physical object are exactly the same.
Is that expected, because of some passthrough correction magic? And if so, can I somehow correct for it, so the entity always overlaps the object? I'm currently on v26 beta 5.
I also don't quite understand the camera extrinsics, because it seems I need to flip them around X by 180 degrees to make deviceAnchor * extrinsics.inverse * tag work (shouldn't they be in the same coordinates as everything else in RealityKit?).
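For reference, this is the transform chain I'm describing, as a minimal sketch (variable names are illustrative, and treating extrinsics as the camera-from-device transform is an assumption):

import RealityKit
import simd

// Minimal sketch of the chain above; `extrinsics` is assumed to be camera-from-device.
// flipX is the 180° rotation about X mentioned above: camera conventions are typically
// +Y down / +Z forward, while RealityKit uses +Y up / -Z forward.
let flipX = Transform(rotation: simd_quatf(angle: .pi, axis: [1, 0, 0])).matrix

func tagInWorld(deviceAnchorTransform: simd_float4x4,   // deviceAnchor.originFromAnchorTransform
                extrinsics: simd_float4x4,
                tagInCamera: simd_float4x4) -> simd_float4x4 {
    // world ← device ← camera ← tag
    deviceAnchorTransform * extrinsics.inverse * flipX * tagInCamera
}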
Hi,
I am creating an ECS (entity component system). With this ECS I will need to register several DragGestures.
Question: Is it possible to define DragGestures in ECS? If yes, how do we do that? If not, what is the best way to do that?
Question: Is there a "gesture" method that takes an array of gestures as a parameter?
I am interested in any information that can help me, if possible with an example of code.
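One pattern that might fit, as a minimal sketch (assuming a RealityView-based setup; the component name is illustrative, not an established API): attach a single DragGesture to the view and target it at entities that carry a marker component, rather than defining gestures inside a System.

import SwiftUI
import RealityKit

// Marker component for entities that should respond to the drag (illustrative name).
// Remember to call DraggableComponent.registerComponent() once at app launch.
struct DraggableComponent: Component {}

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            // Add entities here; give draggable ones a DraggableComponent.
        }
        .gesture(
            DragGesture()
                .targetedToEntity(where: .has(DraggableComponent.self))
                .onChanged { value in
                    guard let parent = value.entity.parent else { return }
                    // Convert the gesture location into the entity's parent space.
                    value.entity.position = value.convert(value.location3D, from: .local, to: parent)
                }
        )
    }
}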
Regards
Tof
Hello,
I am developing a visionOS application and am interested in obtaining detailed data about users' hands through ARKit, including but not limited to transforms and rotation angles. I have reviewed the Happy Beam sample, but it appears to only cover how to recognize specific gestures.
Could you please advise on how to obtain the Transform and rotation angle of the user’s hand?
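For reference, a minimal sketch of reading per-hand and per-joint transforms via ARKit's HandTrackingProvider (assuming an immersive space and hand-tracking authorization; names like `session` are illustrative):

import ARKit
import simd

let session = ARKitSession()
let handTracking = HandTrackingProvider()

func trackHands() async throws {
    try await session.run([handTracking])
    for await update in handTracking.anchorUpdates {
        let anchor = update.anchor                                  // HandAnchor
        // 4x4 transform of the hand anchor (wrist) in world space.
        let handTransform = anchor.originFromAnchorTransform
        // Per-joint transforms are relative to the hand anchor.
        if let skeleton = anchor.handSkeleton {
            let indexTip = skeleton.joint(.indexFingerTip)
            let worldTransform = anchor.originFromAnchorTransform * indexTip.anchorFromJointTransform
            // The rotation can be read off the matrix, e.g. simd_quatf(worldTransform).
            _ = worldTransform
        }
        _ = handTransform
    }
}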
Thank you.
I want to open Control Center in the Vision Pro simulator in Xcode. Is that possible? If so, please tell me how to do it. Thank you.
I have a mesh-based animated 3D model, meaning every frame is a new mesh. I import it into RealityView, but I can't play its animation; print(entity.availableAnimations) tells me the model has no animations.
So, I was trying to animate a single bone using FromToByAnimation, but when I start the animation, the model instead does the full body animation stored in the availableAnimations.
If I don't run testAnimation nothing happens.
If I run testAnimation I see the same animation as if I had called
entity.playAnimation(entity.availableAnimations[0],..)
here's the full code I use to animate a single bone:
func testAnimation() {
    guard let jawAnim = jawAnimation(mouthOpen: 0.4) else {
        print("Failed to create jawAnim")
        return
    }
    guard let creature, let animResource = try? AnimationResource.generate(with: jawAnim) else { return }
    let controller = creature.playAnimation(animResource, transitionDuration: 0.02, startsPaused: false)
    print("controller: \(controller)")
}

func jawAnimation(mouthOpen: Float) -> FromToByAnimation<JointTransforms>? {
    guard let basePose else { return nil }
    guard let index = basePose.jointNames.firstIndex(of: jawBoneName) else {
        print("Target joint \(self.jawBoneName) not found in default pose joint names")
        return nil
    }
    let fromTransforms = basePose.jointTransforms

    let baseJawTransform = fromTransforms[index]
    let maxAngle: Float = 40
    let angle: Float = maxAngle * mouthOpen * (.pi / 180)
    let extraRot = simd_quatf(angle: angle, axis: simd_float3(x: 0, y: 0, z: 1))
    var toTransforms = basePose.jointTransforms
    toTransforms[index] = Transform(
        scale: baseJawTransform.scale * 2,
        rotation: baseJawTransform.rotation * extraRot,
        translation: baseJawTransform.translation
    )

    let fromToBy = FromToByAnimation<JointTransforms>(
        jointNames: basePose.jointNames,
        name: "jaw-anim",
        from: fromTransforms,
        to: toTransforms,
        duration: 0.1,
        bindTarget: .jointTransforms,
        repeatMode: .none
    )
    return fromToBy
}
PS: I can confirm that I can set this bone to a specific position if I use
guard let index = newPose.jointNames.firstIndex(of: boneName) ...
let baseTransform = basePose.jointTransforms[index]
newPose.jointTransforms[index] = Transform(
    scale: baseTransform.scale,
    rotation: baseTransform.rotation * extraRot,
    translation: baseTransform.translation
)
skeletalComponent.poses.default = newPose
creatureMeshEntity.components.set(skeletalComponent)
This works for manually setting the bone position, so the jawBoneName and the joint-transformation can't be that wrong.
I am developing a visionOS app. I am now very interested in Metal and Compositor Services, but I have not explored them in depth. I know that Metal offers a greater degree of control. I am wondering whether Compositor Services has fewer AR capabilities than RealityKit (such as scene reconstruction and understanding, hover effects, etc.).
In the WWDC25 session What’s new for the spatial web, the presenter showed how to create an immersive environment for a web page by adding the following to the page's HEAD section:
<link rel="spatial-backdrop" href="office.usdz" environmentmap="lighting.hdr">
My first attempt failed, and I am trying to track down why.
Before I search all the potential failure paths, I wanted to ask the community,
Is this feature available in the latest visionOS 26 beta?
I haven't seen anyone talk about their use of the feature yet.
Topic: Spatial Computing
SubTopic: General
I have a scene that has been assembled in RCP but I'm losing the correct hierarchy and transforms when running the scene in the headset or the simulator.
This is in RCP:
This is at runtime with the debugger:
As you can see, the "MAIN_WAGON" entity is gone and parts of the hierarchy are now children of "TRAIN_ROOT" instead.
Another issue is that not only does part of the hierarchy disappear, the transforms also revert to their default values instead of what is set in RCP:
This is in RCP:
This is in the simulator/headset:
I'm filing a feedback ticket too and will post the number here.
Has anyone had a similar issue and found a fix or workaround?
Topic: Spatial Computing
SubTopic: Reality Composer Pro
Tags: Reality Composer, RealityKit, Reality Composer Pro
Similar to the visionOS Spatial Gallery app, I'm developing a visionOS app that will show spatial photos and videos. Is it possible to re-create the horizontal (or a vertical) scrolling functionality that shows spatial photos and spatial video previews? Does the Spatial Gallery app use private APIs to create this functionality? I've been looking at the Quick Look documentation and have been able to use the PreviewApplication to show a single preview, but do not see anything for a collection of files as the Spatial Gallery app presents in the scrolling view. Any insights or direction on how this may be done is greatly appreciated.
I am using HelloPhotogrammetry in Xcode
I can make one model with something like HelloPhotogrammetry.main([path_to_folder_of_images, path_to_output/model.usdz, "-d", "medium", "-o", "unordered", "-f", "high"])
But how would I request several models simultaneously? I only want to vary the detail.
[ ("/Users/you/Desktop/model_medium.usdz", detail: .medium), ("/Users/you/Desktop/model_full.usdz", detail: .full), ("/Users/you/Desktop/model_raw.usdz", detail: .raw) ]
This modifier in visionOS 2.5 works perfectly with a LazyVGrid inside a stack in a ScrollView:
.hoverEffect { effect, isActive, _ in
    effect.scaleEffect(isActive ? 1.1 : 1.0)
}
But the grid does not scroll in visionOS 26 beta 1 unless the scaleEffect is commented out.
FB17941468
Hello everyone,
I've been trying for a few weeks now to convert a sequential series of meshes into a stop-motion animation in USDZ format.
In Unreal Engine, I’ve already figured out how to transform the sequential series of individual meshes into a smooth animation using the node system and arrays.
Unfortunately, that node-based animation logic cannot be exported to USDZ from either Unreal or Blender.
Because of this, I have tried several other methods to incorporate the animation logic. Here’s what I’ve tried so far:
1. I attempted to create the animation in Blender with Render-/Viewports and mapping it to keyframes. However, in my experience, Viewports are not supported in the conversion.
2. I tried aligning the vertices of individual objects and merging the frames using the Shrinkwrap modifier in Blender, then setting up a morph animation with keyframes. However, because the individual meshes are too different, this results in artifacts, and manually editing each mesh is too difficult for me to handle.
3. I placed all individual meshes at the same position and animated them sequentially by scaling them from 0 to 100 in keyframes (Frame 1 is visible for 10 frames, then scales down at frame 11, while Frame 2 becomes visible at frame 11, and so on). I also adjusted the keyframes so that the scaling happens in a "constant" manner rather than the default Bezier or linear interpolation. I then converted this animation to .abc, and the result initially looked good. However, some information is lost when converting it with OpenUSD. The animation does not maintain its intended jump-like behavior in USDZ format, and instead, the scaling of individual files is visible in the animation.
4. I tried using a Blender add-on (StepMotion), which allows the animation to be exported as .abc, but it can only be read in Blender or Unreal. Even in the preview, the animation is not displayed correctly, so converting the animation logic does not work either.
Unfortunately, I have no alternative way to create the animation, as the individual frames have been provided to me as meshes. So far, I haven’t found a way to implement this successfully.
I would be very grateful for any tips or ideas, as I am running out of options on how to make this work.
Thanks in advance!
Topic: Spatial Computing
SubTopic: General
Tags: Core Animation, Reality Converter, Visual Design, USDZ
With Xcode 26, loading resources with RealityKit is extremely slow.
Here, my project takes almost 50 seconds to load.
I also get multiple "Hang detected" messages in the console.
When I uncheck "Debug executable" in the scheme, the same project loads in 2 seconds.
I'm using RealityKit asynchronous loading:
private static func loadFromRealityComposerPro(
    named entityName: String,
    fromSceneNamed sceneName: String
) async -> Entity? {
    var entity: Entity?
    do {
        let scene = try await Entity(
            named: sceneName,
            in: visionPetsContentBundle
        )
        entity = scene.findEntity(named: entityName)
    } catch {
        print(
            "Error loading \(entityName) from scene \(sceneName): \(error.localizedDescription)"
        )
    }
    return entity
}
Anyone having the same problem?
Topic: Spatial Computing
SubTopic: General
Hi, I am trying to implement something simple: letting people share their Spatial Photos with others (just like this post). I encountered the same issue as that poster, but the answer there doesn't help me out here.
Briefly, I am using CGImageSource to extract a paired leftImage and rightImage from a fetched spatial photo:
let photos = PHAsset.fetchAssets(with: .image, options: nil)
// enumerating photos ....
if asset.mediaSubtypes.contains(PHAssetMediaSubtype.spatialMedia) {
    spatialAsset = asset
}
// other code shown below
I can fetch left and right images from a native Spatial Photo (taken by an Apple Vision Pro or iPhone 15+), but it doesn't work on a generated spatial photo (the 2D -> 3D feature in Photos).
// imageCount is 1 when it comes to generated spatial photo
let imageCount = CGImageSourceGetCount(source)
I searched the net and some say the generated version has a depth image instead of a left/right pair, but I still cannot extract any depth image from the imageSource.
The full code is below; the image-pair extraction stops at "no groups found":
func extractPairedImage(phAsset: PHAsset, completion: @escaping (StereoImagePair?) -> Void) {
    let options = PHImageRequestOptions()
    options.isNetworkAccessAllowed = true
    options.deliveryMode = .highQualityFormat
    options.resizeMode = .none
    options.version = .original
    return PHImageManager.default().requestImageDataAndOrientation(for: phAsset, options: options) {
        imageData, _, _, _ in
        guard let imageData,
              let imageSource = CGImageSourceCreateWithData(imageData as CFData, nil)
        else {
            completion(nil)
            return
        }
        let stereoImagePair = stereoImagePair(from: imageSource)
        completion(stereoImagePair)
    }
}
func stereoImagePair(from source: CGImageSource) -> StereoImagePair? {
    guard let properties = CGImageSourceCopyProperties(source, nil) as? [CFString: Any] else {
        return nil
    }
    let imageCount = CGImageSourceGetCount(source)
    print(String(format: "%d images found", imageCount))
    guard let groups = properties[kCGImagePropertyGroups] as? [[CFString: Any]] else {
        /// function returns here
        print("no groups found")
        return nil
    }
    guard
        let stereoGroup = groups.first(where: {
            let groupType = $0[kCGImagePropertyGroupType] as! CFString
            return groupType == kCGImagePropertyGroupTypeStereoPair
        })
    else {
        return nil
    }
    guard let leftIndex = stereoGroup[kCGImagePropertyGroupImageIndexLeft] as? Int,
          let rightIndex = stereoGroup[kCGImagePropertyGroupImageIndexRight] as? Int,
          let leftImage = CGImageSourceCreateImageAtIndex(source, leftIndex, nil),
          let rightImage = CGImageSourceCreateImageAtIndex(source, rightIndex, nil),
          let leftProperties = CGImageSourceCopyPropertiesAtIndex(source, leftIndex, nil),
          let rightProperties = CGImageSourceCopyPropertiesAtIndex(source, rightIndex, nil)
    else {
        return nil
    }
    return (leftImage, rightImage, self.identifier)
}
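One thing that might be worth probing, as a minimal sketch (an assumption, not verified against the generated photos): whether the converted 2D -> 3D photo carries disparity/depth as auxiliary image data rather than a stereo-pair group.

import ImageIO
import AVFoundation

// Sketch: look for auxiliary disparity or depth data on the first image in the source.
func auxiliaryDepth(from source: CGImageSource) -> AVDepthData? {
    let types = [kCGImageAuxiliaryDataTypeDisparity, kCGImageAuxiliaryDataTypeDepth]
    for type in types {
        if let info = CGImageSourceCopyAuxiliaryDataInfoAtIndex(source, 0, type) as? [AnyHashable: Any],
           let depth = try? AVDepthData(fromDictionaryRepresentation: info) {
            return depth
        }
    }
    return nil
}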
Any suggestions? Thanks
visionOS 2.4
Hi, I'm developing a virtual camera system using ReplayKit to capture scene video by directly accessing raw video buffers. The capture mechanism works flawlessly when repeatedly starting and stopping video capture within a continuous immersive environment. However, a critical issue arises when interrupting the immersive space:
Step 1: Enter the immersive environment and start and stop video capture (multiple times with no issues)
Step 2: Press the crown button to exit the immersive environment
Step 3: Return to the immersive space subsequently
Step 4: Attempt to start the video capture
At this point, the startCapture method throws an unexpected error, disrupting the video capture workflow.
This is the Xcode error that I see: "[ERROR] -[RPScreenRecorder startCaptureWithHandler:completionHandler:]_block_invoke_2:500 failed to start due to error: Error Domain=com.apple.ReplayKit.RPRecordingErrorDomain Code=-5803 "Recording failed to start" UserInfo={NSLocalizedDescription=Recording failed to start}"
I have tried all possible ways to call stopCapture, including in onDisappear and other places, and nothing seems to solve this.
I’m working with RealityView in visionOS and noticed that the content closure seems to run twice, causing content.add to be called twice automatically. This results in duplicate entities being added to the scene unless I manually check for duplicates. How can I fix that? Thanks.
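A minimal workaround sketch, assuming the duplicates come from the make closure being evaluated more than once: only populate the content when it is still empty (the scene name here is illustrative).

import SwiftUI
import RealityKit

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            // Skip if this content was already populated by an earlier evaluation.
            guard content.entities.isEmpty else { return }
            if let scene = try? await Entity(named: "Scene") {
                content.add(scene)
            }
        }
    }
}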
Hello, I'm adding a CollisionComponent to an entity in RealityView. A CollisionComponent requires that a shape be provided as the reference for collision detection. However, in order to achieve more accurate detection, I would like that shape to match the geometry of a USDZ model. Is there any way to make that happen? Thank you!
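A minimal sketch of one approach, assuming the USDZ is loaded as a ModelEntity and that the mesh-based shape generation in newer RealityKit releases is available (availability depends on OS version):

import RealityKit

// Build a collision shape from the model's own mesh. generateStaticMesh(from:) follows
// the geometry exactly (suitable for static colliders); generateConvex(from:) is a
// cheaper convex approximation if an exact shape isn't required.
func addMeshAccurateCollision(to modelEntity: ModelEntity) async throws {
    guard let mesh = modelEntity.model?.mesh else { return }
    let shape = try await ShapeResource.generateStaticMesh(from: mesh)
    modelEntity.components.set(CollisionComponent(shapes: [shape]))
}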
Can an app made with the RoomPlan API be used on iPhones without LiDAR? If so, how much accuracy would be lost compared to iPhones with LiDAR?
If not, is there an API similar to RoomPlan that works on iPhones without LiDAR?
Hi there,
I'm developing a visionOS app that is using the anchor points and mesh from SceneReconstructionProvider anchor updates. I load an ImmersiveSpace using a RealityView and apply a ShaderGraphMaterial (from a Shader Graph in Reality Composer Pro) to the mesh, and I use calls to setParameter to dynamically update the material at a very high frequency. The mesh is locked (no more updates) before the calls to setParameter. This process works for a few minutes, but then I eventually get the following error in the console:
assertion failure: Index out of range (operator[]:line 789) index = 13662, max = 1
With the following stack trace:
Thread 1 Queue : com.apple.main-thread (serial)
#0 0x00000002880f90d0 in __abort_with_payload ()
#1 0x000000028812a6dc in abort_with_payload_wrapper_internal ()
#2 0x000000028812a710 in abort_with_payload ()
#3 0x0000000288003f40 in _os_crash_msg ()
#4 0x00000001dc9ff624 in re::ecs2::ComponentBucketsBase::addComponent ()
#5 0x00000001dc9ffadc in re::ecs2::ComponentBucketsBase::moveComponent ()
#6 0x00000001dc8b0278 in re::ecs2::MaterialParameterBlockArrayComponentStateImpl::processPreparingComponents ()
#7 0x00000001dc8b05e4 in re::ecs2::MaterialParameterBlockArraySystem::update ()
#8 0x00000001dd008744 in re::Scheduler::executePhase ()
#9 0x00000001dc032ec4 in re::Engine::executePhase ()
#10 0x0000000248121898 in RCPSharedSimulationExecuteUpdate ()
#11 0x00000002264e488c in __59-[MRUISharedSimulation _doJoinWithConnectionContext:error:]_block_invoke.44 ()
#12 0x0000000268c5fe9c in _UIUpdateSequenceRunNext ()
#13 0x00000002696ea540 in schedulerStepScheduledMainSectionContinue ()
#14 0x000000026af8d284 in UC::DriverCore::continueProcessing ()
#15 0x00000001a1bd4e6c in CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION ()
#16 0x00000001a1bd4db0 in __CFRunLoopDoSource0 ()
#17 0x00000001a1bd44f0 in __CFRunLoopDoSources0 ()
#18 0x00000001a1bd3640 in __CFRunLoopRun ()
#19 0x00000001a1bce284 in _CFRunLoopRunSpecificWithOptions ()
#20 0x00000001eff12d2c in GSEventRunModal ()
#21 0x00000002697de878 in -[UIApplication _run] ()
#22 0x00000002697e33c0 in UIApplicationMain ()
#23 0x00000001b56651e4 in closure #1 (Swift.UnsafeMutablePointer<Swift.Optional<Swift.UnsafeMutablePointer<Swift.Int8>>>) -> Swift.Never in SwiftUI.KitRendererCommon(Swift.AnyObject.Type) -> Swift.Never ()
#24 0x00000001b5664f08 in SwiftUI.runApp<τ_0_0 where τ_0_0: SwiftUI.App>(τ_0_0) -> Swift.Never ()
#25 0x00000001b53ad570 in static SwiftUI.App.main() -> () ()
#26 0x0000000101bc7b9c in static MetalRendererApp.$main() ()
#27 0x0000000101bc7bdc in main ()
#28 0x0000000197fd0284 in start ()
Any advice on how to solve this or prevent the error?
Thanks!
Topic: Spatial Computing
SubTopic: Reality Composer Pro
Tags: Reality Composer Pro, Shader Graph Editor