Hello,
I am trying to use the subdivision mesh rendering option.
I can see it working in RealityComposerPro:
But not when loading the asset and displaying it in the Simulator:
Using this code:
import SwiftUI
import RealityKit
import RealityKitContent

struct AirspaceView: View {
    // MARK: - VIEW BODY
    var body: some View {
        RealityView { content in
            if let a = try? await Entity(named: "Models/Test/Test.usdc", in: realityKitContentBundle) {
                content.add(a)
            }
        }
    }
}
Any ideas why?
Hello,
I'm writing an EntityAction that animates a material base tint between two different colours. However, the colour that is actually being set differs in RGB values from the one requested.
For example, trying to set an end target of R0.5, G0.5, B0.5 results in a value of R0.735357, G0.735357, B0.735357. I can also see during the animation cycle that the intermediate tint values differ from those being set.
My understanding is that the values of the material base colour are passed as a SIMD4. Therefore I have a couple of helper extensions to convert a UIColor into this format and to mix between two colours. Note, however, that I don't think the issue is with these functions: even if their outputs were wrong, the final value of the base tint still wouldn't match the value being set.
I wondered if this was a colour space issue?
import simd
import RealityKit
import UIKit

typealias Float4 = SIMD4<Float>

extension Float4 {
    func mixedWith(_ value: Float4, by mix: Float) -> Float4 {
        Float4(
            simd_mix(x, value.x, mix),
            simd_mix(y, value.y, mix),
            simd_mix(z, value.z, mix),
            simd_mix(w, value.w, mix)
        )
    }
}

extension UIColor {
    var float4: Float4 {
        var r: CGFloat = 0.0
        var g: CGFloat = 0.0
        var b: CGFloat = 0.0
        var a: CGFloat = 0.0
        getRed(&r, green: &g, blue: &b, alpha: &a)
        return Float4(Float(r), Float(g), Float(b), Float(a))
    }
}
struct ColourAction: EntityAction {
    let startColour: SIMD4<Float>
    let targetColour: SIMD4<Float>

    var animatedValueType: (any AnimatableData.Type)? { SIMD4<Float>.self }

    init(startColour: UIColor, targetColour: UIColor) {
        self.startColour = startColour.float4
        self.targetColour = targetColour.float4
    }

    static func registerEntityAction() {
        ColourAction.subscribe(to: .updated) { event in
            guard let animationState = event.animationState else { return }
            let interpolatedColour = event.action.startColour.mixedWith(event.action.targetColour, by: Float(animationState.normalizedTime))
            animationState.storeAnimatedValue(interpolatedColour)
        }
    }
}

extension Entity {
    func updateColour(from currentColour: UIColor, to targetColour: UIColor, duration: Double, endAction: @escaping (Entity) -> Void = { _ in }) {
        let colourAction = ColourAction(startColour: currentColour, targetColour: targetColour)
        if let colourAnimation = try? AnimationResource.makeActionAnimation(for: colourAction, duration: duration, bindTarget: .material(0).baseColorTint) {
            playAnimation(colourAnimation)
        }
    }
}
The EntityAction can only be applied to an entity with a ModelComponent (because of the material), so it can be called like so:
guard
    let modelComponent = entity.components[ModelComponent.self],
    let material = modelComponent.materials.first as? PhysicallyBasedMaterial
else {
    return
}

let currentColour = material.baseColor.tint
let targetColour = UIColor(red: 0.5, green: 0.5, blue: 0.5, alpha: 1.0)
entity.updateColour(from: currentColour, to: targetColour, duration: 2)
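For what it's worth, 0.735357 is (to within rounding) what you get by applying the sRGB transfer function to a linear value of 0.5, so the numbers do look consistent with a linear-versus-sRGB mismatch. If that is the cause, one experiment would be to convert the UIColor into linear sRGB components before building the SIMD4 and see whether the round-tripped tint changes. A minimal sketch of that test (linearFloat4 is a hypothetical helper, not part of the code above):

extension UIColor {
    // Hypothetical helper: convert to linear sRGB before building the SIMD4,
    // on the assumption that the tint is interpreted as linear rather than sRGB.
    var linearFloat4: Float4 {
        guard
            let linearSpace = CGColorSpace(name: CGColorSpace.linearSRGB),
            let linear = cgColor.converted(to: linearSpace, intent: .defaultIntent, options: nil),
            let c = linear.components, c.count >= 4
        else { return Float4(0, 0, 0, 1) }
        return Float4(Float(c[0]), Float(c[1]), Float(c[2]), Float(c[3]))
    }
}

Swapping float4 for linearFloat4 in the ColourAction initialiser would at least show whether the discrepancy follows the transfer curve.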
I have a scene built up in RealityComposerPro, in which I've added a ParticleEmitter with isEmitting set to False and 'Loop' set to True.
In my app, when I toggle isEmitting to True there can be a delay of a few seconds before the ParticleEmitter starts.
However, if I programmatically add the emitter in code at that point, it starts immediately.
To be clear, I'm seeing this on the visionOS simulator; I don't have access to a device at this time.
Am I misunderstanding how to control the ParticleEmitter when I need precise control over when it starts?
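To illustrate the two paths (a rough sketch; emitterEntity is a placeholder name, not my actual scene code):

// 1. Toggling the emitter that was authored in Reality Composer Pro
//    (this is the path that shows the multi-second delay for me):
if var emitter = emitterEntity.components[ParticleEmitterComponent.self] {
    emitter.isEmitting = true
    emitterEntity.components.set(emitter)
}

// 2. Adding a fresh emitter component at the same moment starts immediately:
var newEmitter = ParticleEmitterComponent()
newEmitter.isEmitting = true
emitterEntity.components.set(newEmitter)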
As I understand it there are two ways I can track a hand, or a joint, in RealityKit:
either create an AnchorEntity, for example AnchorEntity(.hand(.left, location: .palm)),
or set up an ARKitSession with a HandTrackingProvider (a lot more code, roughly sketched below).
Assuming this is correct, when would I want to use one over the other?
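For reference, the second approach is roughly this (a minimal sketch; it assumes a visionOS target, that hand-tracking authorisation has been granted, and omits error handling):

import ARKit

let session = ARKitSession()
let handTracking = HandTrackingProvider()

func trackLeftPalm() async throws {
    try await session.run([handTracking])
    for await update in handTracking.anchorUpdates {
        let anchor = update.anchor
        guard anchor.chirality == .left, anchor.isTracked else { continue }
        // World-space transform of the hand anchor (wrist origin);
        // individual joints can be read from anchor.handSkeleton.
        let transform = anchor.originFromAnchorTransform
        // ... position an entity from `transform` here
    }
}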
In ARKit for visionOS, I can track the user's head with a HeadAnchor, but it will not give the location. However, I can get the device's transform by calling queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) on a WorldTrackingProvider.
Why the difference? If I know the device's transform, I effectively know the head's transform.
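For context, this is the device-transform route I'm referring to (a minimal sketch; it assumes world-tracking authorisation has been granted and omits error handling):

import ARKit
import QuartzCore
import simd

let session = ARKitSession()
let worldTracking = WorldTrackingProvider()

func startTracking() async throws {
    try await session.run([worldTracking])
}

func currentDeviceTransform() -> simd_float4x4? {
    // Returns nil until the provider has produced a device anchor.
    worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime())?.originFromAnchorTransform
}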
I'm trying to understand a design pattern for accessing the isolated state held in an actor type from within a SwiftUI view.
Take this naive code:
import SwiftUI

actor Model: ObservableObject {
    @Published var num: Int = 0

    func updateNumber(_ newNum: Int) {
        self.num = newNum
    }
}

struct ContentView: View {
    @StateObject var model = Model()

    var body: some View {
        Text("\(model.num)") // <-- Compiler error: Actor-isolated property 'num' can not be referenced from the main actor
        Button("Update number") {
            Task.detached {
                await model.updateNumber(1)
            }
        }
    }
}
Understandably, I get the compiler error Actor-isolated property 'num' can not be referenced from the main actor when I try to access the isolated value. Yet I can't understand how to display this data in a view. I wonder if I need a ViewModel that observes the actor and updates itself on the main thread, but I then get the compile-time error Actor-isolated property '$num' can not be referenced from a non-isolated context.
import Combine

class ViewModel: ObservableObject {
    let model: Model
    @Published var num: Int
    let cancellable: AnyCancellable

    init() {
        let model = Model()
        self.model = model
        self.num = 0
        self.cancellable = model.$num // <-- compile time error `Actor-isolated property '$num' can not be referenced from a non-isolated context`
            .receive(on: DispatchQueue.main)
            .sink { self.num = $0 }
    }
}
Secondly, even if this code did compile, I would then get another error when clicking the button, because the interface would not be updated on the main thread. Again, I'm not sure how to achieve this from within the actor.
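One pattern I've been experimenting with, in case it helps frame the question: keep the published value on a @MainActor view model and hop to the actor explicitly. This is only a sketch, and it reads the value back after each write rather than observing the actor, so it doesn't answer how to react to changes made elsewhere:

import SwiftUI

@MainActor
final class ViewModel: ObservableObject {
    private let model = Model()
    @Published private(set) var num: Int = 0

    func updateNumber(_ newNum: Int) {
        Task {
            await model.updateNumber(newNum)
            num = await model.num // read back on the main actor and publish
        }
    }
}

struct ContentView: View {
    @StateObject private var viewModel = ViewModel()

    var body: some View {
        Text("\(viewModel.num)")
        Button("Update number") {
            viewModel.updateNumber(1)
        }
    }
}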
Has anyone experienced convexCast causing a crash, and what might be behind it?
Here's the call stack:
When I run the debug memory graph from Xcode, I'm getting a set of system objects with warnings.
Are there any tips on how to locate what might be causing these?
I implemented an EntityAction to change the baseColor tint, and had it working on visionOS 2.x.
import RealityKit
import UIKit

typealias Float4 = SIMD4<Float>

extension UIColor {
    var float4: Float4 {
        if
            cgColor.numberOfComponents == 4,
            let c = cgColor.components
        {
            Float4(Float(c[0]), Float(c[1]), Float(c[2]), Float(c[3]))
        } else {
            Float4()
        }
    }
}
struct ColourAction: EntityAction {
    // MARK: - PUBLIC PROPERTIES
    let startColour: Float4
    let targetColour: Float4

    // MARK: - PUBLIC COMPUTED PROPERTIES
    var animatedValueType: (any AnimatableData.Type)? { Float4.self }

    // MARK: - INITIATION
    init(startColour: UIColor, targetColour: UIColor) {
        self.startColour = startColour.float4
        self.targetColour = targetColour.float4
    }

    // MARK: - PUBLIC STATIC FUNCTIONS
    @MainActor static func registerEntityAction() {
        ColourAction.subscribe(to: .updated) { event in
            guard let animationState = event.animationState else { return }
            // mixedWith(_:by:) is the SIMD4 lerp helper shown in my earlier post.
            let interpolatedColour = event.action.startColour.mixedWith(event.action.targetColour, by: Float(animationState.normalizedTime))
            animationState.storeAnimatedValue(interpolatedColour)
        }
    }
}
extension Entity {
    // MARK: - PUBLIC FUNCTIONS
    func changeColourTo(_ targetColour: UIColor, duration: Double) {
        guard
            let modelComponent = components[ModelComponent.self],
            let material = modelComponent.materials.first as? PhysicallyBasedMaterial
        else {
            return
        }

        let colourAction = ColourAction(startColour: material.baseColor.tint, targetColour: targetColour)
        if let colourAnimation = try? AnimationResource.makeActionAnimation(for: colourAction, duration: duration, bindTarget: .material(0).baseColorTint) {
            playAnimation(colourAnimation)
        }
    }
}
This doesn't work in visionOS 26. My current fix is to set the material base colour directly, but this feels like the wrong approach:
@MainActor static func registerEntityAction() {
    ColourAction.subscribe(to: .updated) { event in
        guard
            let animationState = event.animationState,
            let entity = event.targetEntity,
            let modelComponent = entity.components[ModelComponent.self],
            var material = modelComponent.materials.first as? PhysicallyBasedMaterial
        else { return }

        let interpolatedColour = event.action.startColour.mixedWith(event.action.targetColour, by: Float(animationState.normalizedTime))
        // Assumes a UIColor(Float4) convenience initialiser (not shown here).
        material.baseColor.tint = UIColor(interpolatedColour)
        entity.components[ModelComponent.self]?.materials[0] = material
        animationState.storeAnimatedValue(interpolatedColour)
    }
}
So before I raise this as a bug: was I doing something wrong in the former version and just got lucky? Is there a better approach?
Prior to macOS 26, multiple Pickers could be laid out with a uniform width. For example:
import SwiftUI

struct LayoutExample: View {
    let fruits = ["apple", "banana", "orange", "kiwi"]
    let veg = ["carrot", "cauliflower", "peas", "Floccinaucinihilipilification Cucurbitaceae"]

    @State private var selectedFruit: String = "kiwi"
    @State private var selectedVeg: String = "carrot"

    var body: some View {
        VStack(alignment: .leading) {
            Picker(selection: $selectedFruit) {
                ForEach(fruits, id: \.self, content: Text.init)
            } label: {
                Text("Fruity choice")
                    .frame(width: 150, alignment: .trailing)
            }
            .frame(width: 300)

            Picker(selection: $selectedVeg) {
                ForEach(veg, id: \.self, content: Text.init)
            } label: {
                Text("Veg")
                    .frame(width: 150, alignment: .trailing)
            }
            .frame(width: 300)
        }
    }
}
Renders like this, prior to macOS 26:
But now looks like this under macOS 26:
Is there any way to control the size of the picker selection in macOS 26?
When using a TextField with axis set to .vertical on iOS, it sets a bound selection parameter to an erroneous value, whilst on macOS it performs as expected.
Take the following code:
import SwiftUI

@main
struct SelectionTestApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
    }
}

struct ContentView: View {
    @FocusState private var isFocused: Bool
    @State private var text = ""
    @State private var textSelection: TextSelection? = nil

    var body: some View {
        TextField("Label", text: $text, selection: $textSelection, axis: .vertical)
            .onChange(of: textSelection, initial: true) {
                if let textSelection {
                    print("textSelection = \(textSelection)")
                } else {
                    print("textSelection = nil")
                }
            }
            .focused($isFocused)
            .task {
                isFocused = true
            }
    }
}
Running this on a macOS target gives the following on the console:
textSelection = nil
textSelection = TextSelection(indices: SwiftUI.TextSelection.Indices.selection(Range(0[any]..<0[any])), affinity: SwiftUI.TextSelectionAffinity.downstream)
Running the code on iOS gives:
textSelection = TextSelection(indices: SwiftUI.TextSelection.Indices.selection(Range(1[any]..<1[any])), affinity: SwiftUI.TextSelectionAffinity.upstream)
textSelection = TextSelection(indices: SwiftUI.TextSelection.Indices.selection(Range(1[any]..<1[any])), affinity: SwiftUI.TextSelectionAffinity.upstream)
Note here that the range is 1..<1, which is incorrect.
Also of side interest, this behaviour changes if you remove the axis parameter:
textSelection = nil
Am I missing something, or is this a bug?
I have found that the following code runs without issue from Xcode, in either Debug or Release mode, yet crashes when run from the binary produced by archiving, i.e. what will be sent to the App Store.
import SwiftUI
import AVKit

@main
struct tcApp: App {
    var body: some Scene {
        WindowGroup {
            VideoPlayer(player: nil)
        }
    }
}
This is the most stripped-down code that shows the issue. One can point the VideoPlayer at a file and the same issue will occur.
I've attached the crash log:
Crash log
Please note that this was seen with Xcode 26.2 and macOS 26.2.