Hi everyone,
I’m running into a memory issue in my visionOS app where memory keeps growing across sessions, and I’d like to confirm whether this is expected behavior or whether I’m missing something in my cleanup.
App Context
The app showcases apartments in real scale using AR.
Apartments are heavy USDZ models (hundreds of thousands of triangles, high-resolution textures).
Users can walk inside the apartments, and performance is good even close to hardware limits.
Flow
The app starts in a full immersive space (RealityView) for selecting the apartment.
When an apartment is selected, a new ImmersiveSpace opens and the apartment scene loads.
The scene includes multiple USDZ models, EnvironmentResources, and dynamic textures for skyboxes.
When the user dismisses the experience, we attempt cleanup:
Nulling out all entity references.
Removing ModelComponents.
Clearing cached textures and skyboxes.
Forcing dictionaries/collections to empty.
Despite this cleanup, memory usage remains very high.
Problem
After dismissing the ImmersiveSpace, memory does not return to baseline.
See the attached screenshot of the Instruments profile:
Initial state: ~30MB (main menu).
After loading models sequentially: ~3.3GB.
Skybox textures bring it near ~4GB.
After dismissing the experience (at the ~01:00 mark): memory only drops slightly, to ~2.66GB.
When loading the second apartment, memory continues to increase until ~5GB, at which point the app crashes due to memory pressure.
The issue is consistently visible under VM: IOSurface in Instruments. No leaks are detected.
So it looks like RealityKit (or a lower-level framework) keeps meshes and textures cached and does not free them when the RealityView goes away. For my use case, though, these resources should be fully released once the ImmersiveSpace is dismissed, since each new apartment loads entirely different models and textures.
Cleanup Code Example
Here’s a simplified version of the cleanup I’m doing:
func clearAllRoomEntities() {
    for (entityName, entity) in entityFromMarker {
        entity.removeFromParent()
        if let modelEntity = entity as? ModelEntity {
            modelEntity.components.removeAll()
            modelEntity.children.forEach { $0.removeFromParent() }
            modelEntity.clearTexturesAndMaterials()
        }
        entityFromMarker[entityName] = nil
        removeSkyboxPortals(from: entityName)
    }
    entityFromMarker.removeAll()
}

extension ModelEntity {
    func clearTexturesAndMaterials() {
        guard var modelComponent = self.model else { return }
        for index in modelComponent.materials.indices {
            removeTextures(from: &modelComponent.materials[index])
        }
        modelComponent.materials.removeAll()
        self.model = modelComponent
        self.model = nil
    }

    private func removeTextures(from material: inout any Material) {
        if var pbr = material as? PhysicallyBasedMaterial {
            pbr.baseColor.texture = nil
            pbr.emissiveColor.texture = nil
            pbr.metallic.texture = nil
            pbr.roughness.texture = nil
            pbr.normal.texture = nil
            pbr.ambientOcclusion.texture = nil
            pbr.clearcoat.texture = nil
            material = pbr
        } else if var simple = material as? SimpleMaterial {
            simple.color.texture = nil
            material = simple
        }
    }
}
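For what it's worth, a hedged diagnostic that can help separate app-side retention from framework-level caching is a weak-reference probe around the cleanup; a minimal sketch (the probe placement and delay are illustrative):

// Hold a weak reference to a representative entity before cleanup.
weak var probeEntity: Entity? = entityFromMarker.values.first

clearAllRoomEntities()

// Check a moment later, after the current update cycle has finished.
DispatchQueue.main.asyncAfter(deadline: .now() + 1) {
    // nil here means the app released its last strong reference;
    // any remaining IOSurface memory would then be held below RealityKit's public API.
    print("Probe entity deallocated:", probeEntity == nil)
}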
Questions
Is this expected RealityKit behavior (textures/meshes cached internally)?
Is there a way to force RealityKit to release GPU resources tied to USDZ models when they’re no longer used?
Should dismissing the ImmersiveSpace automatically free those IOSurfaces, or do I need to handle this differently?
Any guidance, best practices, or confirmation would be hugely appreciated.
Thanks in advance!
Hi all,
I'm working in the VFX industry and I have a question: is it possible to create immersive video directly from a virtual scene built in DCC software like Maya, render it to footage, encode it as immersive video, and finally play it on Vision Pro?
Thanks.
Using Xcode 26 Beta 6 on macOS 26 Beta 25a5349a.
When pressing the home button in the visionOS simulator, I am not positioned in the middle of the room as I normally would be. This occurred after moving around the space a lot to find an element added to an ImmersiveSpace.
How to resolve: restart the simulator device.
See the attached pictures, visionOSSimulatorCorrectHomePosition and visionOSSimulatorMisallignedHomePosition.
Entity.animate() makes entity animation much easier, but in many cases I want to interrupt the animation partway through in response to a gesture. I couldn't find any way to do this; I tried entity.stopAllAnimations(), but I still have to wait until Entity.animate() completes.
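A hedged workaround, assuming a plain transform animation is enough: the older transform APIs return a controller that can be cancelled mid-flight. Roughly (the startSlide function and offset are illustrative):

import RealityKit

// Kick off a cancellable transform animation instead of Entity.animate().
func startSlide(_ entity: Entity) -> AnimationPlaybackController {
    var target = entity.transform
    target.translation.x += 0.3
    // move(to:) hands back a controller that a gesture handler can stop.
    return entity.move(to: target, relativeTo: entity.parent, duration: 1.0, timingFunction: .easeInOut)
}

// Later, e.g. inside a gesture handler:
// controller.stop()   // or .pause() / .resume()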
iOS 26 / visionOS 26
iOS currently restricts background Bluetooth advertising and scanning in order to preserve battery life and protect user privacy. While these restrictions serve important purposes, they also limit legitimate use cases where users have explicitly opted in to proximity-based experiences.
The core challenge is that modern social applications need a way to detect when users are physically present at the same location or event without requiring every participant to keep their app in the foreground. Under the current system, background BLE advertising is heavily throttled and can only transmit a limited payload, background scanning intervals are sparse and unpredictable, peer-to-peer proximity detection cannot be maintained reliably when apps are in the background, and Background App Refresh is non-deterministic, making any kind of time-based proximity validation impossible.
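For context, a minimal sketch of what advertising looks like under the current rules, assuming a basic CoreBluetooth peripheral setup (the service UUID is illustrative); once the app is backgrounded, iOS strips the local name and relegates the service UUID to the overflow area, which is the throttling described above:

import CoreBluetooth

final class ProximityAdvertiser: NSObject, CBPeripheralManagerDelegate {
    private var manager: CBPeripheralManager!
    private let serviceUUID = CBUUID(string: "FFF0")   // illustrative 16-bit UUID

    override init() {
        super.init()
        manager = CBPeripheralManager(delegate: self, queue: nil)
    }

    func peripheralManagerDidUpdateState(_ peripheral: CBPeripheralManager) {
        guard peripheral.state == .poweredOn else { return }
        // Fully honored only in the foreground; in the background the payload
        // is reduced and discovery becomes sparse and unpredictable.
        peripheral.startAdvertising([
            CBAdvertisementDataServiceUUIDsKey: [serviceUUID]
        ])
    }
}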
A proposed enhancement would be to introduce an “Enhanced Proximity Permission.” This would allow developers to enable reliable background BLE advertising and scanning for declared time windows, such as a maximum of eight hours. It would also allow devices running the same app to detect each other’s proximity using ephemeral, rotating identifiers that preserve privacy, with clear user consent and prominent indicators whenever the feature is active.
Unlocking this capability would open up new categories of applications. Live events could offer automatic attendance tracking at concerts, conferences, or sports venues. Retail environments could support opt-in foot traffic analysis and dwell-time insights. Social apps could allow users to find friends at festivals, campuses, or other large venues. Safety applications could extend to crowd density monitoring and contact tracing beyond COVID-era needs. Gaming could offer real-world multiplayer experiences based on physical proximity, and transportation providers could verify rideshare pickups or measure public transit flows automatically.
Privacy safeguards would remain central. Permissions would be time-boxed and expire after an event or session. A mandatory visual indicator would be displayed whenever proximity tracking is active. A user-facing dashboard would show all apps granted enhanced proximity access. Permissions would automatically be revoked after a period of non-use, and only ephemeral tokens, not permanent identifiers, would be broadcast.
The industry impact would be significant. With this enhancement, iOS could power the next generation of location-aware social platforms while maintaining Apple’s leadership in privacy through explicit user control and transparency. Current alternatives, such as requiring users to keep apps in the foreground or deploying dedicated hardware beacons, produce poor user experiences and constrain innovation in spatial computing and social applications.
Can anyone from Apple consider this change? Having to buy iBeacons is brutal and means slower adoption. Please reconsider this for users who opt in.
We can add ornaments to popovers shown by PresentationComponent, but I’m not sure if we should.
While working on the editor for entities in a Volume-based app, I had the idea to add ornaments to the presented views. The entire app exists inside a volume. A user can tap an item to present a popover UI to edit it. This is displayed using the new PresentationComponent in visionOS 26.
Ornaments have a new attachment anchor option this year: .parent().
.ornament(attachmentAnchor: .parent(.top), ornament: {...})
This works well in the Simulator. We can add ornaments around this popover view just like we would with a window.
Unfortunately, when I run this on device I get a different experience. Any part of the ornament that overlaps with the popover content isn’t rendered correctly. Sometimes it entirely disappears, other times it becomes partially transparent.
We could use content alignment to try to make sure the ornament doesn’t overlap the popover content.
.ornament(attachmentAnchor: .parent(.top), contentAlignment: .bottom, ornament: {...})
This works sometimes, but not all the time. It's not clear whether this is a bug, because I'm not sure we're even supposed to be able to use ornaments this way. Here is my hierarchy:
The app opens as a volume.
The volume presents a RealityView, with its own ornament using the .scene() anchor.
Multiple entities with a PresentationComponent show an edit view.
The edit view uses the .parent() anchor to add ornaments.
What makes me unsure is that other methods for drawing UI in a RealityView don't seem to work with ornaments. For example, if I add an attachment to show a view with an ornament, even when I use the .parent() anchor, the ornament is anchored to the volume, not to the attachment view.
So what do we think? Is this a rendering bug? Are ornaments intended to work with attachments and presentations?
I have a mesh-based animated 3D model, meaning every frame uses a new mesh. I import it into a RealityView but can't play its animation; print(entity.availableAnimations) shows that RealityKit sees no animations on this model.
If I long-press on an element, the sidebar disappears and a Done button appears on the screen, but nothing else changes. So what are the Environments in the Vision Pro simulator supposed to do?
I want to open Control Center in the visionOS simulator in Xcode. Is that possible, and if so, how? Thank you.
A spatial photo in RealityView has a default corner radius. I built a parallax effect with spatial photos in a ScrollView (like Spatial Gallery), but the corner radius disappears on the left and right spatial photos. I've tried the .clipShape and .mask modifiers, but they didn't work. How can I clip or mask a spatial photo with a corner-radius effect?
I want an AR character to be able to look at a position while still playing the character's animation.
So far, I've managed to manually adjust a single bone rotation using
skeletalComponent.poses.default = Transform(
    scale: baseTransform.scale,
    rotation: lookAtRotation,
    translation: baseTransform.translation
)
which I run at every render update while a full-body animation is playing.
But of course, hard-coding a single joint (in my case the head) to point in a direction doesn't look as nice as running some inverse kinematics that includes the hips + neck + head joints.
I found some good IKRig code in Composing interactive 3D content with RealityKit and Reality Composer Pro.
But when I try to adjust the rig while animations are playing, the animation usually wins over the IKRig changes to the mesh.
So I tried animating a single bone using FromToByAnimation, but when I start the animation, the model instead plays the full-body animation stored in availableAnimations.
If I don't run testAnimation, nothing happens.
If I run testAnimation, I see the same animation as if I had called
entity.playAnimation(entity.availableAnimations[0],..)
Here's the full code I use to animate a single bone:
func testAnimation() {
    guard let jawAnim = jawAnimation(mouthOpen: 0.4) else {
        print("Failed to create jawAnim")
        return
    }
    guard let creature, let animResource = try? AnimationResource.generate(with: jawAnim) else { return }
    let controller = creature.playAnimation(animResource, transitionDuration: 0.02, startsPaused: false)
    print("controller: \(controller)")
}

func jawAnimation(mouthOpen: Float) -> FromToByAnimation<JointTransforms>? {
    guard let basePose else { return nil }
    guard let index = basePose.jointNames.firstIndex(of: jawBoneName) else {
        print("Target joint \(self.jawBoneName) not found in default pose joint names")
        return nil
    }
    let fromTransforms = basePose.jointTransforms
    let baseJawTransform = fromTransforms[index]
    let maxAngle: Float = 40
    let angle: Float = maxAngle * mouthOpen * (.pi / 180)
    let extraRot = simd_quatf(angle: angle, axis: simd_float3(x: 0, y: 0, z: 1))
    var toTransforms = basePose.jointTransforms
    toTransforms[index] = Transform(
        scale: baseJawTransform.scale * 2,
        rotation: baseJawTransform.rotation * extraRot,
        translation: baseJawTransform.translation
    )
    let fromToBy = FromToByAnimation<JointTransforms>(
        jointNames: basePose.jointNames,
        name: "jaw-anim",
        from: fromTransforms,
        to: toTransforms,
        duration: 0.1,
        bindTarget: .jointTransforms,
        repeatMode: .none
    )
    return fromToBy
}
PS: I can confirm that I can set this bone to a specific position if I use
guard let index = newPose.jointNames.firstIndex(of: boneName) ...
let baseTransform = basePose.jointTransforms[index]
newPose.jointTransforms[index] = Transform(
    scale: baseTransform.scale,
    rotation: baseTransform.rotation * extraRot,
    translation: baseTransform.translation
)
skeletalComponent.poses.default = newPose
creatureMeshEntity.components.set(skeletalComponent)
This works for manually setting the bone position, so the jawBoneName and the joint-transformation can't be that wrong.
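A hedged idea, not verified against this rig: playing the jaw clip on a separate blend layer might let it compose with the running full-body animation instead of replacing it, via the blendLayerOffset parameter of playAnimation. A minimal sketch, reusing the resource from testAnimation():

// Same resource as in testAnimation(), but on its own blend layer
// so it does not replace the full-body clip on layer 0.
let controller = creature.playAnimation(
    animResource,
    transitionDuration: 0.02,
    blendLayerOffset: 1,
    separateAnimatedValue: true,
    startsPaused: false
)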
When using the new RealityKit ManipulationComponent on entities, indirect input never translates the entity, no matter what settings are applied. Direct manipulation works as expected for both translation and rotation.
Is this intended behaviour? This is different from how indirect manipulation works on Model3D. How else can we get translation from this component?
visionOS 26 Beta 2
Build from macOS 26 Beta 2 and Xcode 26 Beta 2
Attached is replicable sample code, I have tried this in other projects with the same results.
var body: some View {
    RealityView { content in
        // Add the initial RealityKit content
        if let immersiveContentEntity = try? await Entity(named: "MovieFilmReel", in: reelRCPBundle) {
            ManipulationComponent.configureEntity(
                immersiveContentEntity,
                allowedInputTypes: .all,
                collisionShapes: [ShapeResource.generateBox(width: 0.2, height: 0.2, depth: 0.2)]
            )
            immersiveContentEntity.position.y = 1
            immersiveContentEntity.position.z = -0.5

            var mc = ManipulationComponent()
            mc.releaseBehavior = .stay
            immersiveContentEntity.components.set(mc)

            content.add(immersiveContentEntity)
        }
    }
}
This post documents an issue I reported in feedback FB19610114, and I'd also like to see if anyone knows of a workaround. Here is a copy of the feedback.
Short version
Manipulation (SwiftUI OR RealityKit) fails to translate entities after changing rooms. By changing rooms, I mean a human wearing an Apple Vision Pro leaving one room and entering another room. Once this issue occurs, it impacts all apps that use these features. A device restart is the only fix I have found.
Feedback FB19610114
This is an odd one. I'm using the new ManipulationComponent in visionOS 26. Most of the time this works well. Sometimes it stops working, and when it does, the only way to get it working again is to reboot the headset.
When this happens, I can continue to rotate and scale items, but translation no longer works. It is as if the item is stuck to a fixed point in the parent scene (window, volume, etc). When this bug occurs, it affects every app across the entire operating system that is using manipulation, including the RealityKit component AND the SwiftUI version. This is not limited to one app and is not limited to apps that I am working on. Once this error occurs, it affects literally any application across the operating system that is using this API, including apps from Apple.
I won't speculate on the cause of this, but I do know of one way where I can always get it to happen.
Here is how to reproduce it:
Make an Xcode project with a single entity that uses the Manipulation Component. There is no need to customize the configuration of this component. The default implementation will work.
Build and run this app on device. You can keep running from device or quit and launch the app like normal on device.
Open the app and manipulate the entity - it should work as expected.
Physically walk into another room. It is vital that you leave the current room that you are in and enter a different room entirely.
Use the digital crown to recenter your view and bring your window or volume to you.
Test the manipulation on the entity again - it should still be working as expected at this point.
Physically, move yourself and your headset into the original room where you started.
Use the digital crown to recenter your view and bring your window or volume to you.
Test the manipulation on the entity again - you should now see the issue.
When I follow the steps above, then 100% of the time manipulation translation stops working at this point. It will impact any application using this API. The only way to fix it is to restart my headset.
A few points to keep in mind
It does not matter if an app is actively being run from Xcode.
When this occurs, it impacts every single app, not just one.
When this occurs, rotation and scaling continue to work, but the entity/view cannot be translated.
This impacts BOTH the SwiftUI version and the RealityKit version.
When this occurs, the only way to "fix" it is to reboot the device.
On Xcode 26 and visionOS 26, Apple provides observable properties on Entity, so we can easily interact with an Entity between the RealityKit scene and SwiftUI, but there is an issue:
It's fine to observe an Entity's position and scale properties from a Slider, but the orientation property can't be observed in a Slider.
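A hedged sketch of one workaround, assuming the goal is to drive a single rotation axis from the Slider: since orientation is a simd_quatf rather than a scalar, expose a scalar Binding and convert to and from the quaternion yourself (the yaw-only mapping below is illustrative):

import SwiftUI
import RealityKit

struct YawSlider: View {
    let entity: Entity
    @State private var yaw: Float = 0   // radians; source of truth for the slider

    var body: some View {
        Slider(
            value: Binding(
                get: { yaw },
                set: { newValue in
                    yaw = newValue
                    // Write the scalar back as a quaternion about the Y axis.
                    entity.orientation = simd_quatf(angle: newValue, axis: [0, 1, 0])
                }
            ),
            in: -Float.pi...Float.pi
        )
    }
}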
MacBook Air M2 / Xcode 26 beta6
I’m working with RealityView in visionOS and noticed that the content closure seems to run twice, causing content.add to be called twice automatically. This results in duplicate entities being added to the scene unless I manually check for duplicates. How can I fix that? Thanks.
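While the duplicate-invocation question stands, a hedged guard in the make closure (the root-entity name is illustrative) at least keeps the scene from being built twice:

RealityView { content in
    // Skip rebuilding if a previous invocation already added the scene root.
    guard !content.entities.contains(where: { $0.name == "sceneRoot" }) else { return }

    let root = Entity()
    root.name = "sceneRoot"
    content.add(root)
}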
I have a visionOS 2 project created with Xcode 16. After updating to Xcode 26 beta 5, I can't build it anymore; every time, the build gets stuck, as shown in the picture below:
I've already tried many ways to fix this, such as clearing the build folder, but none of them work.
MacBook Air M2 / MacOS 26 beta5 / Xcode 26 beta5
Hello,
I've been tinkering a bit with TextComponent.
Based on the docs it seems like this component should always render sharp and nice text, no matter how close the user gets:
RealityKit dynamically adjusts the backing size to a value that results in high-fidelity text at its current location.
And it does on visionOS, but on iOS and macOS the text gets pixelated when I get close to it, as if it's just rendered once as a plain image texture.
Can anyone tell me if this is expected behavior or a bug?
Here are two screenshots for comparison (iPhone and Vision Pro):
Thanks!
In WWDC25 session What’s new for the spatial web, the presenter showed creating an immersive environment for a web page by adding to the page's HEAD section
<link rel="spatial-backdrop" href="office.usdz" environmentmap="lighting.hdr">
My first attempt failed, and I am trying to track down why.
Before I search all the potential failure paths, I wanted to ask the community:
Is this feature available in the latest visionOS 26 beta?
I haven't seen anyone talk about their use of the feature yet.
I've been struggling with this for far too long so I've decided to finally come here and see if anyone can point me to the documentation that I'm missing. I'm sure it's something so simple but I just can't figure it out.
I can SharePlay our test app with my brother (device to device), but when I open a volumetric window, it says "not shared" under it. I assume fixing this will likely solve the video-sharing problem we have as well. Everything else works so smoothly, but SharePlay has just been such a struggle for me. It's the last piece of the puzzle before we can put it on the App Store.