Problem Summary
After upgrading to iOS 26.1 and 26.2, I'm experiencing a particle positioning bug in RealityKit where ParticleEmitterComponent particles render at an incorrect offset relative to their parent entity. This behavior does not occur on iOS 18.6.2 or earlier versions, suggesting a regression introduced in the newer OS builds.
Environment Details
Operating System: iOS 26.1 & iOS 26.2
Framework: RealityKit
Xcode Version: 16.2 (16C5032a)
Expected vs. Actual Behavior
Expected: Particles should render at the position of the entity to which the ParticleEmitterComponent is attached, matching the behavior on iOS 18.6.2 and earlier.
Actual: Particles render at an offset from their parent entity, creating a visual misalignment that breaks the intended AR experience.
Steps to Reproduce
Create or open an AR application with RealityKit that uses particle components
Attach a ParticleEmitterComponent to an entity via a custom system
Run the application on iOS 26.1 or iOS 26.2
Observe that particles render at an offset position away from the entity
Minimal Code Example
Here's the setup from my test case:
Custom Component & System:
import RealityKit

struct SparkleComponent4: Component {}

class SparkleSystem4: System {
    static let query = EntityQuery(where: .has(SparkleComponent4.self))

    required init(scene: Scene) {}

    func update(context: SceneUpdateContext) {
        for entity in context.scene.performQuery(Self.query) {
            // Only add the emitter once per entity
            if entity.components.has(ParticleEmitterComponent.self) { continue }

            var newEmitter = ParticleEmitterComponent()
            newEmitter.mainEmitter.color = .constant(.single(.red))
            entity.components.set(newEmitter)
        }
    }
}
AR Setup:
// `arView` is the app's ARView; a stand-in box mesh is generated here so the snippet is self-contained
let boxMesh = MeshResource.generateBox(size: 0.1)
let material = SimpleMaterial(color: .gray, roughness: 0.15, isMetallic: true)

let model = Entity()
model.components.set(ModelComponent(mesh: boxMesh, materials: [material]))
model.components.set(SparkleComponent4())
model.position = [0, 0.05, 0]
model.name = "MyCube"

let anchor = AnchorEntity(.plane(.horizontal, classification: .any, minimumBounds: [0.2, 0.2]))
anchor.addChild(model)
arView.scene.addAnchor(anchor)
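For completeness, the component and system are registered once at launch (the standard RealityKit registration calls; shown here in case the registration point matters):

// Called once at app startup (e.g. in the App initializer)
SparkleComponent4.registerComponent()
SparkleSystem4.registerSystem()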
Questions for the Community
Has anyone else encountered this particle positioning issue after updating to iOS 26.1/26.2?
Are there known workarounds or configuration changes to ParticleEmitterComponent that restore correct positioning?
Is this a confirmed bug, or could there be a change in coordinate system handling or transform inheritance that I'm missing?
Additional Information
I've already submitted this issue via Feedback Assistant (FB21346746).
Hi, I'm trying to set the displayScale environment value for a RealityView, so it renders at 2x instead of 3x on the iPhone, but it seems to have no effect.
.environment(\.displayScale, 2.0)
Is this expected behavior, or a bug?
The reason I want it to render at 2x and not at the default 3x is for game optimization and performance.
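For reference, this is roughly how I'm applying the modifier (minimal sketch; the scene setup is omitted and the view name is just for illustration):

import SwiftUI
import RealityKit

struct ScaledRealityView: View {
    var body: some View {
        RealityView { content in
            // scene setup omitted
        }
        // Intended to force 2x rendering, but it appears to have no effect on the render scale
        .environment(\.displayScale, 2.0)
    }
}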
I've tried out a ParticleEmitter in Reality Composer Pro to produce a burst of particles that don't move (i.e. speed close to zero).
When viewing from different angles, it clearly looks like the particles are rendered exactly in the wrong order, that is, front first and back last. In other words, back particles obscure front particles.
I would prefer it the correct way around.
I've only tried this interactively in Reality Composer Pro, not programmatically, but I assume I would get the same result.
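For anyone who wants to try it in code, this is roughly the equivalent setup I would expect to show the same sorting issue (untested sketch; the property names are my reading of the ParticleEmitterComponent API):

import RealityKit

// Untested sketch: a slow, long-lived emitter comparable to my Reality Composer Pro setup
var emitter = ParticleEmitterComponent()
emitter.speed = 0.001                                   // particles stay roughly where they spawn
emitter.mainEmitter.birthRate = 50
emitter.mainEmitter.lifeSpan = 10
emitter.mainEmitter.color = .constant(.single(.white))

let particleEntity = Entity()
particleEntity.components.set(emitter)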
My Reality Composer Pro "File" (zipped):
https://gert-rieger-edv.de/Posts/Post-1/RealityParticles.zip
Screenshot:
Click on the ParticleEmitter object, then on its Play button, then select the Particles tab and click on "Burst" a few times to get a few random particles.
Mac Studio 2025
Apple M4 Max
macOS 15.7.2 (24G325)
Reality Composer Pro
Version 2.0 (494.60.2)
Issue
When an Entity with a ViewAttachmentComponent is:
disabled using isEnabled = false
removed using removeFromParent()
and then enabled or added back again, the attached SwiftUI view is rendered correctly, but tap interactions stop working.
Specifically:
Button actions inside the attached view do not fire
TapGesture closures on child views do not respond
Expected Behavior
Tap interactions inside the attached view should continue to work after the Entity is re-enabled or re-added.
Actual Behavior
After being disabled or removed once, all tap interactions stop responding.
Comparison
When displaying the same SwiftUI view using RealityViewAttachments, this issue does not occur.
Removing and re-displaying the attachment still allows taps to work correctly.
Reproduction
Attached sample code reproduces the issue:
A RealityView with an Entity that has a ViewAttachmentComponent
The attached SwiftUI view contains a Toggle
The toggle updates isEnabled on the Entity
After toggling off and on, tap interactions stop responding
Environment
Xcode 26
visionOS 26
Question
Is this expected behavior of ViewAttachmentComponent, or a bug?
Is there a recommended way to temporarily hide or disable an Entity with ViewAttachmentComponent without breaking tap interactions?
import SwiftUI
import RealityKit

struct GestureTestView: View {
    @State var sampleEnabled = true
    @State var sampleEntity: Entity?

    var body: some View {
        RealityView { contents, attachments in
            // This entity stops responding to taps after being disabled/removed and shown again.
            let sample = Entity(components: ViewAttachmentComponent(rootView: SampleView()))
            // This alternative, using the RealityViewAttachments entity, keeps working correctly:
            // let sample = attachments.entity(for: "SampleView")!
            contents.add(sample)
            sample.position = [0, 1.2, -1]
            sampleEntity = sample

            let toggleButton = Entity(components: ViewAttachmentComponent(rootView: ToggleButtonView(isOn: $sampleEnabled)))
            contents.add(toggleButton)
            toggleButton.position = [0, 1, -1]
        } update: { _, _ in
            // The update closure runs and applies the toggle state to the entity.
            print(sampleEnabled)
            sampleEntity?.isEnabled = sampleEnabled
        } attachments: {
            Attachment(id: "SampleView") {
                SampleView()
            }
        }
    }
}

struct ToggleButtonView: View {
    @Binding var isOn: Bool

    var body: some View {
        VStack {
            Toggle(isOn: $isOn) {
                Text("Toggle")
            }
        }
        .padding()
        .glassBackgroundEffect()
    }
}

struct SampleView: View {
    var body: some View {
        VStack {
            Button {
                print("Hello, World!")
            } label: {
                Text("Hello, World!")
                    .padding()
            }
        }
        .padding()
        .glassBackgroundEffect()
    }
}

#Preview(immersionStyle: .mixed) {
    GestureTestView()
}
I am developing a custom app for Apple Vision Pro using Compositor Services to stream content from NVIDIA Omniverse. The app is based on:
https://github.com/NVIDIA-Omniverse/apple-configurator-sample
Environment:
Device: Apple Vision Pro
OS Version: visionOS 26.2
Xcode Version: 26.2
The Issue: The application crashes hard (__abort_with_payload) in "libsystem_kernel.dylib" on Task 6 immediately after initialization. This appears to be a deliberate abort triggered by the compositor, not a typical crash.
The issue occurs on both physical device and simulator.
Important detail:
The console output shows a specific CLIENT BUG assertion. Checking the metadata of that log entry shows it originates from the "CompositorNonUI" library.
Relevant console output before abort:
Missed 'FrameLimiter' target of 90.0 Hz
running compositor services to get IPD, FOV, etc
fence tx observer 14f27 timed out after 0.600000
fence tx observer bc1b timed out after 0.600000
BUG IN CLIENT: For mixed reality experiences please use cp_drawable_compute_projection API
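Based on the assertion text, the expected fix seems to be to stop deriving the projection matrix from the per-view FOV/tangent values and to ask the drawable for it instead. A rough sketch of what I believe that looks like in Swift (the exact method name and convention value are my reading of the Compositor Services interface for cp_drawable_compute_projection and should be checked against the SDK):

import CompositorServices
import simd

// Hypothetical helper: fetch per-view projection matrices directly from the drawable,
// which is what the "use cp_drawable_compute_projection" assertion appears to require.
// computeProjection(convention:viewIndex:) and .rightUpBack are assumptions about the
// Swift spelling of cp_drawable_compute_projection – verify against the current headers.
func projectionMatrices(for drawable: LayerRenderer.Drawable) -> [simd_float4x4] {
    (0..<drawable.views.count).map { viewIndex in
        drawable.computeProjection(convention: .rightUpBack, viewIndex: viewIndex)
    }
}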
We are developing a standalone AI avatar application for hospital reception kiosks using Mac mini (M2/M4). The app runs on SwiftUI + RealityKit, displays on a 75-inch monitor, and utilizes a USB-connected 4K camera and external sensors (LiDAR/mmWave).
We have several technical concerns regarding the transition from iPadOS to macOS. Could you please provide insights on the following?
ARKit/Vision Framework on macOS with External Camera
On iPadOS, ARKit provides robust face tracking. On macOS with an external USB 4K camera:
Can we achieve real-time face tracking (expression/gaze/depth) with the Vision framework or ARKit that is comparable to iPadOS performance?
Are there any specific limitations on accessing the Neural Engine via the Vision framework for real-time 4K video analysis on macOS?
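For context, this is the kind of per-frame Vision pipeline we have in mind (hypothetical sketch; it assumes frames arrive as CVPixelBuffers from an AVCaptureSession connected to the USB camera):

import Vision
import CoreVideo

// Hypothetical per-frame face analysis for one camera frame.
// Input: a CVPixelBuffer delivered by AVCaptureVideoDataOutput from the external 4K camera.
func detectFaces(in pixelBuffer: CVPixelBuffer) throws -> [VNFaceObservation] {
    let request = VNDetectFaceLandmarksRequest()
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try handler.perform([request])
    return request.results ?? []
}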
Accessing External Hardware (LiDAR/Sensors) in Sandbox
We plan to connect external LiDAR and mmWave sensors (e.g., Akara) via USB/Bluetooth.
Is it feasible to communicate with these custom drivers/devices within the App Sandbox environment?
Would DriverKit be required, or can we use standard serial communication APIs?
On-Device LLM (MLX) & Thermals
We intend to run a local LLM (e.g., Llama 3 using the MLX framework) for offline conversation, alongside 3D rendering.
With the M2/M4 Mac mini fan design, is there a risk of thermal throttling during 10+ hours of continuous operation (simultaneous CoreML + 3D rendering)?
Is the Mac Studio recommended over the Mac mini for this thermal profile?
Long-running Speech API
Are there any known issues (memory leaks, API limits) when using the Speech framework and AVSpeechSynthesizer continuously for over 10 hours daily?
3D Display Output
Are there any macOS constraints for rendering a SwiftUI window in a specific 3D format (e.g., Side-by-Side) and outputting it via HDMI to a 3D digital signage display (fixed refresh rate/resolution)?
Thank you for your assistance.
Hi everyone,
I'm new to visionOS development. I'm trying to create a physics-based scene (with gravity) where users can pick up and move objects on a workbench. I am struggling with physics interactions during the drag gesture:
Kinematic Mode: If I switch to .kinematic during the drag, the object moves smoothly but clips through other objects (no collisions).
Dynamic Mode: I tried keeping it .dynamic and applying linear velocity toward the hand position (roughly the sketch shown after this list), but the movement feels laggy and unresponsive.
Hybrid Approach: I tried switching to .kinematic during DragGesture.onChange and back to .dynamic on collision, but this causes the entity to jitter/shake violently when touching other objects.
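For reference, this is roughly the velocity-based approach from the Dynamic Mode attempt above (a minimal sketch; the helper name and stiffness value are mine, and it is called every frame from the drag gesture's onChanged with the current target position):

import RealityKit

// Steer a dynamic body toward the drag target by velocity instead of teleporting it,
// so the physics solver can still resolve collisions along the way.
func steer(_ entity: Entity, toward target: SIMD3<Float>, stiffness: Float = 10) {
    let current = entity.position(relativeTo: nil)
    let velocity = (target - current) * stiffness   // simple proportional controller; tune stiffness
    entity.components.set(PhysicsMotionComponent(linearVelocity: velocity))
}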
Has anyone found a clean way to drag objects while maintaining solid collisions?
Thanks for your help!
Hi. I'm trying to show 3D or AR content on multiple pages of an iOS app, but I have found that if a RealityView is used earlier in a flow, later uses will never display the camera feed, even if those earlier uses do not use any spatial tracking.
For example, in the simplified view below, the RealityView on the second page will not display the camera feed, even though the first RealityView neither uses the camera nor starts a tracking session.
import SwiftUI
import RealityKit

struct ContentView: View {
    var body: some View {
        NavigationStack {
            VStack {
                RealityView { content in
                    content.camera = .virtual
                    // showing a model skipped for brevity
                }
                NavigationLink("Second Page") {
                    RealityView { content in
                        content.camera = .spatialTracking
                    }
                }
            }
        }
    }
}
What is the way around this so that 3D content can be displayed in multiple places in the app without preventing AR content from working?
I have also found the same problem when wrapping an ARView for use in SwiftUI.
My procedural animation using IKRig occasionally jitters and throws this warning. It completely breaks immersion.
getMoCapTask: Rig must be of type kCoreIKSolverTypePoseRetarget
Any idea where to start looking or what could be causing this?