RealityKit

Simulate and render 3D content for use in your augmented reality apps using RealityKit.

Posts under RealityKit subtopic

Scene with over 12,000 duplicate entities
I am creating a RealityKit scene that will contain over 12,000 duplicate cubes arranged in a circle (see image below). This is for some high-energy physics simulations I am doing. I build this scene by creating a single cube and cloning it many times, so there is a single MeshResource and Material even though there are many entities. I have confirmed this with Swift's === operator. Even so, the program is unworkably slow. Any suggestions or tricks that could help with this type of scene? Using a single geometry was the trick to getting SceneKit to perform well with scenes like this. I've been updating my software to RealityKit because I far prefer RealityKit's structure over SceneKit's.
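For reference, a minimal sketch of the cloning setup described above, with one shared MeshResource and Material; all names, the cube size, and the circle radius are illustrative assumptions:

    import Foundation
    import RealityKit

    // Sketch: one mesh and one material shared by every clone in the ring.
    func makeCubeRing(count: Int = 12_000, radius: Float = 5.0) -> Entity {
        let root = Entity()
        let mesh = MeshResource.generateBox(size: 0.02)                  // shared by all clones
        let material = SimpleMaterial(color: .white, isMetallic: false)  // shared by all clones
        let template = ModelEntity(mesh: mesh, materials: [material])
        for i in 0..<count {
            let angle = Float(i) / Float(count) * 2 * .pi
            let cube = template.clone(recursive: false)                  // shares mesh + material
            cube.position = [radius * cos(angle), 0, radius * sin(angle)]
            root.addChild(cube)
        }
        return root
    }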
4 replies · 0 boosts · 1.3k views · Nov ’21
Apple's Choice: USDZ over Other 3D File Formats like GLTF
Hello Dev Community, I've been thinking about Apple's preference for USDZ for AR and 3D content, especially given the widely used glTF, and I'm keen to discuss and hear your insights on this choice. USDZ, backed by Apple, has seen a surge in the AR community. It boasts advantages like compactness, animation support, and ARKit compatibility. glTF is also a popular format with its own merits, such as being an open standard and offering flexibility. Here are some of my questions about the use of USDZ:
Why did Apple choose USDZ over other 3D file formats like glTF?
What benefits does USDZ bring to Apple's AR and 3D content ecosystem?
Are there any limitations of USDZ compared to other file formats?
Could factors like compatibility, security, or ease of integration have influenced Apple's decision?
I would love to hear your thoughts. Feel free to share any experiences with USDZ or other 3D file formats within Apple's ecosystem!
4 replies · 1 boost · 6.0k views · Jun ’23
RealityKit visionOS anchor to POV
Hi, is there a way in visionOS to anchor an entity to the POV via RealityKit? I need an entity that is always fixed to the 'camera'. I'm aware that this is discouraged from a design perspective as it can be visually distracting; in my case, though, I want to use it to attach a fixed collider entity, so that the camera can collide with objects in the scene. Edit: ARView on iOS has a lot of very useful helper properties and functions like cameraTransform (https://developer.apple.com/documentation/realitykit/arview/cameratransform). How would I get this information on visionOS? RealityView's content does not seem to offer anything comparable. An example use case: I would like to add an entity to the scene at my user's eye level, depending on their height. I found https://developer.apple.com/documentation/realitykit/realityrenderer, which has an activeCamera property, but so far it's unclear to me in which context RealityRenderer is used and how I could access it. Appreciate any hints, thanks!
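One approach that may cover the collider use case (a sketch; the names and the sphere size are assumptions): a head-target AnchorEntity keeps its children fixed to the wearer, although visionOS reports its transform to the app as identity for privacy reasons, so it cannot be read back the way ARView's cameraTransform can. For querying the device pose, for example to place content at eye level, ARKit's WorldTrackingProvider and queryDeviceAnchor(atTimestamp:) are worth a look.

    // Sketch: an entity that stays fixed to the wearer's point of view on visionOS.
    // Intended for a RealityView's make closure; `content` is the RealityViewContent.
    let headAnchor = AnchorEntity(.head)
    let collider = Entity()
    collider.components.set(CollisionComponent(
        shapes: [.generateSphere(radius: 0.15)]))   // assumed collider size
    headAnchor.addChild(collider)
    content.add(headAnchor)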
9 replies · 6 boosts · 5.1k views · Jun ’23
How to convert `DragGesture().onEnded`'s velocity CGSize to the SIMD3<Float> required in `PhysicsMotionComponent(linearVelocity, angularVelocity)`?
So if I drag an entity in RealityView, I have to disable its PhysicsBodyComponent to make sure nothing fights the drag. This makes sense. When I finish a drag, this closure gets executed:

    .gesture(
        DragGesture()
            .targetedToAnyEntity()
            .onChanged { e in
                // ...
            }
            .onEnded { e in
                let velocity: CGSize = e.gestureValue.velocity
            }
    )

If I now re-add the PhysicsBodyComponent to the entity I just dragged and give it mode: .dynamic, it will lose all velocity and drop straight down through gravity. Instead, the solution is to apply mode: .kinematic and also add a PhysicsMotionComponent to the entity, which should retain the velocity after letting go of the object. However, I need to instantiate it with PhysicsMotionComponent(linearVelocity: SIMD3<Float>, angularVelocity: SIMD3<Float>). How can I calculate linearVelocity and angularVelocity when the e.gestureValue.velocity I get is just a CGSize? Is there another property of gestureValue I should be looking at?
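One hedged workaround: rather than converting the 2D velocity, estimate a 3D velocity yourself by differencing scene-space positions across onChanged calls, then hand that to PhysicsMotionComponent when the drag ends. A sketch; the convert call and the zero angular velocity are assumptions to adapt:

    // Sketch: derive a linear velocity from consecutive drag samples in scene space.
    @State private var lastPosition: SIMD3<Float>?
    @State private var lastTime: Date?
    @State private var dragVelocity: SIMD3<Float> = .zero

    // In .onChanged { value in ... }:
    let position = value.convert(value.location3D, from: .local, to: .scene)
    let now = Date()
    if let last = lastPosition, let t = lastTime {
        let dt = Float(now.timeIntervalSince(t))
        if dt > 0 { dragVelocity = (position - last) / dt }   // meters per second
    }
    lastPosition = position
    lastTime = now

    // In .onEnded { value in ... }:
    value.entity.components.set(PhysicsMotionComponent(
        linearVelocity: dragVelocity,
        angularVelocity: .zero))   // angular velocity left at zero as a simplification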
1 reply · 0 boosts · 780 views · Aug ’23
How to attach point cloud (or depth data) to HEIC?
I'm developing a 3D scanner that works on iPad, using AVCapturePhoto and PhotogrammetrySession. My photo capture delegate looks like this:

    extension PhotoCaptureDelegate: AVCapturePhotoCaptureDelegate {
        func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
            let fileUrl = CameraViewModel.instance.imageDir!.appendingPathComponent("\(PhotoCaptureDelegate.name)\(id).heic")
            let img = CIImage(cvPixelBuffer: photo.pixelBuffer!, options: [
                .auxiliaryDepth: true,
                .properties: photo.metadata
            ])
            let depthData = photo.depthData!.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
            let colorSpace = CGColorSpace(name: CGColorSpace.sRGB)
            let fileData = CIContext().heifRepresentation(of: img, format: .RGBA8, colorSpace: colorSpace!, options: [
                .avDepthData: depthData
            ])
            try? fileData!.write(to: fileUrl, options: .atomic)
        }
    }

But the photogrammetry session emits warnings like these for every sample:

    Sample 0 missing LiDAR point cloud!
    Sample 1 missing LiDAR point cloud!
    ...
    Sample 10 missing LiDAR point cloud!

The session creates a USDZ 3D model, but the scale is not correct. I think the point cloud could help the photogrammetry session find the right scale, but I don't know how to attach the point cloud.
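One possible direction (a sketch, under the assumption that a PhotogrammetrySample input sequence is an option here): instead of relying on the HEIC's auxiliary depth, hand the session samples with depthDataMap set explicitly. Whether this also satisfies the 'LiDAR point cloud' warning is uncertain; that data may be something only ObjectCaptureSession records.

    import AVFoundation
    import RealityKit

    // Sketch: build a PhotogrammetrySample that carries depth directly (names illustrative).
    func makeSample(id: Int, photo: AVCapturePhoto) -> PhotogrammetrySample? {
        guard let pixelBuffer = photo.pixelBuffer else { return nil }
        var sample = PhotogrammetrySample(id: id, image: pixelBuffer)
        sample.depthDataMap = photo.depthData?
            .converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
            .depthDataMap
        return sample
    }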
3 replies · 2 boosts · 1.2k views · Nov ’23
Animated USD file | Starting position of the USD is not the same as first frame of animation
Hello all, I am building for visionOS with another engineer and using Reality Composer Pro to validate USD files. The starting position of my animated USDZ (its position when first loaded) is not the same as the first frame of its animation. For testing, I am using the AR Quick Look asset 'toy_biplane_idle.usdz', which demonstrates the same error we're currently getting with our own USDZ files. When the USDZ is loaded, it sits on the ground plane, but when the animation is played, the plane snaps to the position of the first frame of the animation. This snapping behavior is giving us problems: we want the user to see the plane in its static load position with the option to play the animation, but we don't want it to snap when the user presses play. Is it possible to load the .usdz in the position specified by the first frame of the animation? What is the best way to fix this issue? Thanks!
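One workaround that may avoid the snap (a sketch; it assumes the first available animation is the one in question): play and immediately pause the animation right after loading, so the model already holds the frame-0 pose before the user presses play.

    // Sketch: put the entity into the animation's first frame at load time.
    if let animation = entity.availableAnimations.first {
        let controller = entity.playAnimation(animation)
        controller.pause()   // playback starts at time 0; pausing holds the first-frame pose
        // Later, on the user's play action: controller.resume()
    }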
1 reply · 0 boosts · 874 views · Jan ’24
How can I simultaneously apply the drag gesture to multiple entities?
I want to drag EntityA while also dragging EntityB independently. I've tried to separate them by entity, but only the latest drag gesture is recognized:

    RealityView { content, attachments in
        // ...
    }
    .gesture(
        DragGesture()
            .targetedToEntity(EntityA)
            .onChanged { value in /* ... */ }
    )
    .gesture(
        DragGesture()
            .targetedToEntity(EntityB)
            .onChanged { value in /* ... */ }
    )

I also tried simultaneously(with:), but that didn't work either; maybe I'm missing something:

    .gesture(
        DragGesture()
            .targetedToEntity(EntityA)
            .onChanged { value in /* ... */ }
            .simultaneously(with:
                DragGesture()
                    .targetedToEntity(EntityB)
                    .onChanged { value in /* ... */ }
            )
    )
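One pattern that may work around this (a sketch using the post's entity names): attach a single gesture via targetedToAnyEntity() and branch on which entity the gesture resolved to, instead of installing two competing gestures.

    // Sketch: one drag gesture, dispatched per entity.
    RealityView { content, attachments in
        // ...
    }
    .gesture(
        DragGesture()
            .targetedToAnyEntity()
            .onChanged { value in
                if value.entity === EntityA {
                    // drag EntityA
                } else if value.entity === EntityB {
                    // drag EntityB
                }
            }
    )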
1 reply · 1 boost · 616 views · Feb ’24
PhysX cache crash using generated static collision with many entities
I'm using RealityKit for a scene with many static and dynamic ModelEntitys simulating physics. When all the entities have simple collision generated from .generateCollisionShapes I don't see any issues, but some entities need much more complex and accurate collision. For those I've been using ShapeResource.generateStaticMesh with the mesh's data (2,769 positions and 16,272 face indices in this case), which works exactly as desired at a low entity count. However, once there are 600+ dynamic entities, introducing even one static entity with complex collision reliably triggers a crash when it collides with one of the dynamic entities (not necessarily on first contact, but inevitably after multiple collisions). If I arbitrarily limit the number of entities to a maximum of around 500, the issue seems not to occur, though the likelihood appears to increase with the number of entities, so there may be a low probability of it triggering even at 500 entities that I haven't hit while testing. If PhysX imposes some kind of entity or collision face/shape limit, I'd at least like to know exactly what it is, but ideally there's a way to work around this. Right now my "fix" is arbitrarily restricting the entity count in a way that limits what my app can do. The crash triggers inside

    0x00000001a6790dfc in physx::PxcDiscreteNarrowPhasePCM(physx::PxcNpThreadContext&, physx::PxcNpWorkUnit const&, physx::Gu::Cache&, physx::PxsContactManagerOutput&) ()

which looks like this (the crashing line is marked with an -> arrow at the bottom):

    CoreRE`physx::PxcDiscreteNarrowPhasePCM:
    ...
       0x1a6790df0 <+668>: mov x1, x24
       0x1a6790df4 <+672>: bl 0x1a67913d8 ; physx::PxcNpCacheStreamPair::reserve(unsigned int)
       0x1a6790df8 <+676>: ldrb w8, [x23]
    -> 0x1a6790dfc <+680>: str w8, [x0, #0x20]
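Not a fix for the crash itself, but one mitigation worth trying while it's investigated (a sketch; generateConvex produces a convex hull, so concave detail is lost): replace the exact static mesh with a cheaper convex shape, or several of them, to reduce narrow-phase work.

    // Sketch: approximate the complex static collider with a convex hull.
    // `complexMesh` and `staticEntity` are the post's hypothetical names.
    let convexShape = try await ShapeResource.generateConvex(from: complexMesh)
    staticEntity.components.set(CollisionComponent(shapes: [convexShape]))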
2 replies · 0 boosts · 532 views · Mar ’24
Disable Foveation for ImmersiveSpace?
Does anyone know how I can disable foveation for an ImmersiveSpace? I'm aware that I could use a CompositorLayer and my own Metal rendering to control foveation, but I'm hoping that I can configure an existing/underlying LayerRenderer (or similar) to disable it for an immersive scene. Or if there's another approach I should be taking, any pointers are appreciated. Thank you!
1 reply · 1 boost · 587 views · Mar ’24
Does anyone know if HDR video is supported in a RealityView?
I have attempted to use VideoMaterial with an HDR HLS stream, and also a TextureResource.DrawableQueue with rgba16Float in a ShaderGraphMaterial. I'm capturing to 64RGBAHalf with AVPlayerItemVideoOutput and converting that to rgba16Float. I don't believe it's displaying HDR properly or behaving like a raw AVPlayer. Since we can't configure any EDR metadata or color space for a RealityView, how do we display HDR video? Is using rgba16Float supposed to be enough? Is expecting the 64RGBAHalf capture to handle HDR properly a mistake, and should I capture YUV and do the conversion myself? Thank you.
7 replies · 0 boosts · 1.4k views · Jun ’24
Particle Systems flicker when partly behind transparent objects
I am having a difficult time creating particle systems in Reality Composer Pro (visionOS beta 3). They tend to flicker, with all particles disappearing and reappearing at semi-random intervals. I can clearly see this happening with one effect that I put inside a small box consisting of four transparent walls and a solid floor. When I change the view angle, the particle system starts to flicker when viewed from below its emission height. I tried all combinations of particle rendering settings (billboard, free, additive, etc.), and nothing changes. I am using the default particle image. Any help appreciated.
2 replies · 0 boosts · 791 views · Jul ’24
Game Center breaks RealityView world tracking
Has anyone come across the issue that setting GKLocalPlayer.local.authenticateHandler breaks a RealityView's world tracking on iOS / iPadOS 18 beta 5? I'm in the process of upgrading my app to make use of the much appreciated RealityView unification, using RealityView not only on visionOS but now also on iOS and iPadOS. In my RealityView, I enable world tracking on iOS like this:

    content.camera = .worldTracking

However, device position and orientation were ignored (the camera remained static) and there was no camera pass-through. Then I discovered that the issue disappears when I remove this line:

    GKLocalPlayer.local.authenticateHandler = { viewController, error in
        // ... some more code ...
    }

So I filed FB14731139 and hope that it will be resolved before the release of iOS / iPadOS 18.
5 replies · 1 boost · 834 views · Aug ’24
Dynamically loading various USDZ files
I have various USDZ files in my visionOS app. Loading them works quite well; I only have problems with the positioning of the 3D model. For example, one USDZ file is displayed directly above me. I can't move the model or perform any other actions on it, and if I sit down on a chair or stand up again, the 3D model automatically moves with me. This is my source code for loading the USDZ files:

    struct ImmersiveView: View {
        @State var modelName: String
        @State private var loadedModel = Entity()

        var body: some View {
            RealityView { content in
                if let usdModel = try? await Entity(named: modelName) {
                    print("====> \(modelName) : \(usdModel) <====")
                    let bounds = usdModel.visualBounds(relativeTo: nil).extents
                    usdModel.scale = SIMD3<Float>(1.0, 1.0, 1.0)
                    usdModel.position = SIMD3<Float>(0.0, 0.0, 0.0)
                    usdModel.components.set(CollisionComponent(shapes: [.generateBox(size: bounds)]))
                    usdModel.components.set(HoverEffectComponent())
                    usdModel.components.set(InputTargetComponent())
                    loadedModel = usdModel
                    content.add(usdModel)
                }
            }
        }
    }

For now I just want the 3D models from the USDZ files to be displayed correctly; moving them via gestures is step 2. What have I forgotten or done wrong?
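One thing worth trying (a sketch, assuming the model should stay fixed in the room rather than follow the viewer): instead of adding the model directly, parent it to a world-space anchor with an explicit position.

    // Sketch: pin the model to a fixed world position (placement values are assumptions).
    let anchor = AnchorEntity(world: [0, 1.5, -2])   // ~1.5 m up, 2 m in front of the origin
    anchor.addChild(usdModel)
    content.add(anchor)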
2 replies · 0 boosts · 1k views · Aug ’24
Allow RealityView to render at 120 fps?
I'm building an iOS/iPadOS app for iOS 18+ using the new RealityView in SwiftUI. (I may add visionOS, but I'm not focusing on it right now.) The 3D scene I'm rendering is fairly simple (just a few dozen vertices and a couple of textures), and I'd like to render it at 120 fps on ProMotion devices if possible. I tried setting CADisableMinimumFrameDurationOnPhone to true in the Info.plist, but it had no effect. The frame rate in the GPU Report in Xcode stays capped at 60 fps, and the gauge even tops out at 60. My question is kind of the opposite of this post, which asks how to limit the frame rate of a RealityView. I'm on Xcode 16 beta 5 on macOS Sonoma and iOS 18.0 beta 6 on my iPhone 15 Pro.
1 reply · 0 boosts · 871 views · Aug ’24
Reality Composer Pro 2.0 shader graphs can't be loaded on visionOS 1
Using Reality Composer Pro 2.0, I created a simple shader graph that displays a texture on an unlit surface. On visionOS 2 beta, I can successfully use ShaderGraphMaterial(named:from:in:) to load that shader-graph material and assign it to a model entity. However, on visionOS 1.2 and earlier, either in Simulator or on device, ShaderGraphMaterial(named:from:in:) fails and logs an error to the console [console output not shown]. If, using Reality Composer Pro 1.0, I experimentally open the same project and delete and recreate exactly the same nodes, then ShaderGraphMaterial(named:from:in:) works as expected on visionOS 1.2. Is it a known issue that Reality Composer Pro 2 can't be used with visionOS 1? Is this intentional behavior? I've submitted feedback as FB14828873, including a sample project and repro steps. If possible, I would appreciate guidance from an Apple engineer, like "This is a known issue for [list of node types]", or "Reality Composer Pro 2 is not supported for visionOS 1 development, please refer to [documentation]", or "We recommend [workaround]". Thank you.
7 replies · 0 boosts · 1.4k views · Aug ’24
Pink Screen with VideoMaterial in ARKit
Hi everyone, I'm developing an ARKit app using RealityKit and encountering an issue where a video displayed on a 3D plane shows up as a pink screen instead of the actual video content. Here's a simplified version of my setup:

    func createVideoScreen(video: AVPlayerItem, canvasWidth: Float, canvasHeight: Float, aspectRatio: Float, fitsWidth: Bool = true) -> ModelEntity {
        let width = fitsWidth ? canvasWidth : canvasHeight * aspectRatio
        let height = fitsWidth ? canvasWidth * (1 / aspectRatio) : canvasHeight
        let screenPlane = MeshResource.generatePlane(width: width, depth: height)
        let videoMaterial: Material = createVideoMaterial(videoItem: video)
        let videoScreenModel = ModelEntity(mesh: screenPlane, materials: [videoMaterial])
        return videoScreenModel
    }

    func createVideoMaterial(videoItem: AVPlayerItem) -> VideoMaterial {
        let player = AVPlayer(playerItem: videoItem)
        let videoMaterial = VideoMaterial(avPlayer: player)
        player.play()
        return videoMaterial
    }

Despite following the standard process, the video plane renders pink. Has anyone encountered this before, or does anyone know what might be causing it? Thanks in advance!
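One debugging step that may help (an assumption, not a confirmed fix): a pink plane can indicate the material never receives decodable video frames, so start playback only once the item reports .readyToPlay and surface any loading error. A sketch:

    import AVFoundation
    import Combine
    import RealityKit

    final class VideoScreenFactory {
        private var cancellables = Set<AnyCancellable>()

        // Sketch: defer play() until the AVPlayerItem is actually ready, and log failures.
        func makeVideoMaterial(videoItem: AVPlayerItem) -> VideoMaterial {
            let player = AVPlayer(playerItem: videoItem)
            let material = VideoMaterial(avPlayer: player)
            videoItem.publisher(for: \.status)
                .sink { status in
                    switch status {
                    case .readyToPlay:
                        player.play()   // frames should be available now
                    case .failed:
                        print("AVPlayerItem failed:", videoItem.error?.localizedDescription ?? "unknown")
                    default:
                        break
                    }
                }
                .store(in: &cancellables)
            return material
        }
    }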
16 replies · 3 boosts · 1.1k views · Aug ’24
VisionOS crashes Loading Entities from Disk Bundles with EXC_BREAKPOINT
Summary: I'm working on a visionOS project where I need to dynamically load a .bundle file containing RealityKit content from the app's Application Support directory. The .bundle is saved to disk after being downloaded or retrieved as an On-Demand Resource (ODR). Sample project with the issue: GitHub repo; run the test-odr target to use the local bundle and reproduce the crash.
Setup: Add a .bundle named RealityKitContent_RealityKitContent.bundle to the app's resources. This bundle contains a Reality file with two USDA scenes: "Immersive" and "Scene".
Save to disk: Save the bundle to the Application Support directory, ensuring the file is correctly copied.
Load the bundle: Load the bundle from the saved URL using Bundle(url: bundleURL) to initialize the Bundle object.
Load the entity from the bundle: Load a specific entity ("Scene") from the bundle. When trying to load it using let storedEntity = try await Entity(named: "Scene", in: bundle), the app crashes with an EXC_BREAKPOINT error.
contentsOf method issue: If I use the Entity.load(contentsOf: realityFileURL, withName: entityName) method, it always loads the first root entity found (in this case, "Immersive") rather than "Scene", even when specifying the entity name. This is why I want to use the Bundle to load entities by name more precisely.
Issue: The crash consistently occurs on the Entity(named: "Scene", in: bundle) line. I have verified that the bundle exists and is accessible at the specified path and that it contains the expected .reality file with multiple entities ("Immersive" and "Scene"). The error code I get is EXC_BREAKPOINT (code=1, subcode=0x1d135d4d0).
What I've tried:
• Ensured the bundle is properly saved and accessible.
• Checked that the bundle is initialized correctly from the URL.
• Tested loading the entity using the contentsOf method, which works fine but always loads the "Immersive" entity, ignoring the specified name. Hence, I want to use the Bundle-based approach to load multiple USDA entities selectively.
Question: Has anyone faced a similar issue, or does anyone know why loading entities using Entity(named:in:) from a disk-based bundle causes this crash? Any advice on how to debug or resolve this, especially for managing multiple root entities in a .reality file, would be greatly appreciated.
1 reply · 0 boosts · 655 views · Sep ’24
Getting vector3 or vector4 per-vertex data into Shader Graph Editor
Hi, I'm trying to understand how I can get 3- or 4-channel per-vertex data into the Shader Graph editor. From my tests, it seems that:
the "Geometric Property" node does not give access to 4-channel data;
"Geometric Property (vector3)" does not give me access to custom properties besides the ones defined in the MaterialX core;
the "Texture Coordinates" node has a vector4f mode (yay!), but according to the MaterialX spec, texcoords must have 2 or 3 channels, and I can't get 4-channel data to show up there either.
My assumption so far is that I must be missing some "magic": for example, do the primvars in a file have to be in a specific order, independent of their names? Or do their names matter? (E.g., convention would be primvars:st and primvars:st1 and so on.) Unfortunately the forum doesn't allow me to attach any USDZ or ZIP files or GDrive links; if there's a way to share a test file, I'm happy to do so!
0 replies · 0 boosts · 621 views · Sep ’24
sceneUnderstanding occlusion does not work with blendMode = alpha in iOS 18
If I import a USDZ model with blendMode set to alpha, occlusion does not work on iPhone with iOS 18. How should transparent materials and occlusion be properly combined in the new RealityKit? Additionally, new artifacts have appeared when working with transparent objects that overlap each other: the transparency results do not blend; rather, parts of the model simply do not render.
6 replies · 0 boosts · 1.1k views · Sep ’24
Exception while creating a PhotogrammetrySession object with PhotogrammetrySamples
Hi, I am trying to use a PhotogrammetrySample input sequence as shown in the WWDC video. I modified only the following code in the sample:

    var images = try FileManager.default.contentsOfDirectory(
        at: captureFolderManager.imagesFolder,
        includingPropertiesForKeys: nil,
        options: .skipsHiddenFiles)

    let inputSequence = images.lazy.compactMap { file in
        return self.loadSampleAndMask(file: file)
    }

    // The next line causes the exception
    photogrammetrySession = try PhotogrammetrySession(
        input: inputSequence,
        configuration: configuration)

    private func loadSampleAndMask(file: URL) -> PhotogrammetrySample? {
        do {
            let sample = try PhotogrammetrySample(contentsOf: file)
            return sample
        } catch {
            return nil
        }
    }

I am getting the following runtime error:

    Thread 1: EXC_BREAKPOINT (code=1, subcode=0x240d88904)
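One debugging step that may narrow this down (a sketch, not a confirmed fix): the lazy compactMap silently swallows every failed PhotogrammetrySample, so the session may be receiving an empty or partially broken sequence. Logging the per-file error and pre-filtering the directory listing to actual image files makes failures visible:

    // Sketch: surface per-file load errors instead of silently dropping samples.
    let imageURLs = images.filter { ["heic", "jpg", "png"].contains($0.pathExtension.lowercased()) }

    private func loadSampleAndMask(file: URL) -> PhotogrammetrySample? {
        do {
            return try PhotogrammetrySample(contentsOf: file)
        } catch {
            print("Skipping \(file.lastPathComponent): \(error)")  // see exactly which files fail
            return nil
        }
    }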
0 replies · 0 boosts · 484 views · Sep ’24