Hello,
I've been looking all over the place, but so far I haven't found a trivial way to grab an ARView's current timestamp, i.e. the elapsed time since the scene started rendering.
I can access that in the surface and geometry shaders, but I would like to pass a timestamp as a parameter in order to drive shader animations. That feels more efficient than injecting animation progress manually on every frame, especially if there are lots of objects using that shader.
So far what I've done is subscribe to the scene's update event and use the delta time to calculate the elapsed time myself. But this is quite error-prone and tends to break when I present the scene a second time (e.g. closing and reopening the AR experience).
The only other option I found was using a render callback and grabbing the time property from the PostProcessContext. That works well, but do I really have to go that route?
It would be great if there were an easy way to achieve this.
Pretty much an equivalent to this: https://developer.apple.com/documentation/scenekit/scnscenerenderer/1522680-scenetime
NOTE: I'm not looking for the timestamp of the ARSession's current frame.
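For reference, the render-callback workaround mentioned above can be sketched roughly like this. This is a minimal, untested sketch; the pass-through blit is needed because a post-process callback is expected to fill the target color texture itself:

```swift
import RealityKit
import UIKit

class TimeGrabbingViewController: UIViewController {
    @IBOutlet var arView: ARView!

    // Latest scene time, updated once per rendered frame.
    var sceneTime: TimeInterval = 0

    override func viewDidLoad() {
        super.viewDidLoad()
        arView.renderCallbacks.postProcess = { [weak self] context in
            // PostProcessContext.time is the elapsed scene time in seconds.
            self?.sceneTime = context.time

            // Minimal pass-through: copy the rendered frame to the output
            // texture unchanged, since we only want the timestamp.
            if let blit = context.commandBuffer.makeBlitCommandEncoder() {
                blit.copy(from: context.sourceColorTexture,
                          to: context.targetColorTexture)
                blit.endEncoding()
            }
        }
    }
}
```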
Thank you!
Hello,
After a recent Apple Maps update last week, the satellite imagery now looks a lot worse than before.
What previously looked lifelike and lush now looks very sad and wintery. Also, the contrast seems way too extreme.
Attached is a sample image.
FB: FB11716831
Any chance that this could be reverted to the old version?
Hello,
I recently converted from manual dictionary-based JSON serialisation to Codable and noticed that this resulted in a pretty significant growth in binary size (measured via the App Thinning Size Report).
The difference between the Codable and the non-Codable build is ~800 KB.
As our app also supports App Clips, I now can't meet the 10 MB universal bundle size limit anymore.
Is there any way I can make this a little leaner? Would it help to manually implement all Codable methods instead of relying on compiler synthesis?
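For what it's worth, the hand-written conformance the question asks about would look like this for a hypothetical model (names are illustrative, not from the app in question); whether it actually beats the synthesized version on binary size is exactly the open question:

```swift
import Foundation

// Hypothetical model used only to illustrate manual Codable conformance.
struct User: Codable {
    let id: Int
    let name: String

    enum CodingKeys: String, CodingKey {
        case id, name
    }

    init(id: Int, name: String) {
        self.id = id
        self.name = name
    }

    // Hand-written instead of compiler-synthesized init(from:).
    init(from decoder: Decoder) throws {
        let container = try decoder.container(keyedBy: CodingKeys.self)
        id = try container.decode(Int.self, forKey: .id)
        name = try container.decode(String.self, forKey: .name)
    }

    // Hand-written instead of compiler-synthesized encode(to:).
    func encode(to encoder: Encoder) throws {
        var container = encoder.container(keyedBy: CodingKeys.self)
        try container.encode(id, forKey: .id)
        try container.encode(name, forKey: .name)
    }
}
```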
Thanks for any hints!
Hello,
I've done a couple of tests and noticed that when I run an ARKit session with, for example, a world- or geo-tracking configuration that has environment texturing set to either .manual or .automatic, and I walk around with the device for an extended distance, there is a pretty noticeable increase in memory usage. I'm not implying that it's a leak, but it seems like the system creates lots and lots of environment probes and does not remove older ones.
An example:
This is a barebones Xcode RealityKit starter project that spawns a couple of cubes with an ARGeoTracking configuration. After ~7 minutes of walking around (roughly a distance of 100-200 meters), it uses an additional 300 MB of RAM.
I've seen cases where users walked around for a while and the app eventually crashed because of this.
When environment texturing is disabled RAM usage pretty much stays the same, no matter how far I walk.
Is there a recommended way to handle this? Should I remove probes manually after a while, or eventually disable environment texturing altogether at some point (and will that preserve the current cube map)?
I would appreciate any guidance.
With the Xcode example project you can easily recreate the issue by modifying it just a little and then walking around for a while with statistics enabled.
class ViewController: UIViewController {
    @IBOutlet var arView: ARView!

    override func viewDidLoad() {
        super.viewDidLoad()
        arView.automaticallyConfigureSession = false
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)

        let config = ARWorldTrackingConfiguration()
        config.environmentTexturing = .manual
        arView.session.run(config)

        // Load the "Box" scene from the "Experience" Reality File
        let boxAnchor = try! Experience.loadBox()

        // Add the box anchor to the scene
        arView.scene.anchors.append(boxAnchor)
    }
}
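One possible mitigation for the manual-texturing case could be to periodically remove environment probe anchors that are far from the camera. This is an untested sketch; whether ARKit actually frees the associated cube-map memory when a probe anchor is removed is an assumption, not something the documentation promises:

```swift
import ARKit
import simd

// Remove environment probe anchors farther than maxDistance meters from
// the current camera position. Intended for .manual environment texturing,
// where the app added the probes itself.
func pruneDistantProbes(in session: ARSession, maxDistance: Float = 20) {
    guard let frame = session.currentFrame else { return }
    let cam = frame.camera.transform.columns.3
    let cameraPosition = SIMD3(cam.x, cam.y, cam.z)

    for case let probe as AREnvironmentProbeAnchor in frame.anchors {
        let p = probe.transform.columns.3
        let probePosition = SIMD3(p.x, p.y, p.z)
        if simd_distance(cameraPosition, probePosition) > maxDistance {
            session.remove(anchor: probe)
        }
    }
}
```

With .automatic texturing the probes are system-managed, so it is unclear whether removing them this way is honored at all.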
Using OcclusionMaterial on macOS and iOS works fine in non-AR mode when I set the background to a simple color (https://developer.apple.com/documentation/realitykit/arview/environment-swift.struct/color). But when I set a custom skybox (https://developer.apple.com/documentation/realitykit/arview/environment-swift.struct/background-swift.struct/skybox(_:)), the OcclusionMaterial renders as fully black. I would expect it to properly occlude the content and let the skybox behind it show through.
This happens with both ARView and RealityView, on the current iOS/macOS betas as well as on older systems, e.g. iOS 17 and macOS Sonoma.
Feedback ID: FB15081053
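A minimal repro of the setup described above might look like this. This is a hypothetical sketch: "skybox" is an assumed EnvironmentResource name in the app bundle, and the resource-loading call should be checked against the RealityKit version in use:

```swift
import RealityKit
import UIKit

// Sketch of the reported setup: a skybox background plus an occluder.
// With a plain color background the occluder behaves as expected; per the
// report, with a skybox it renders fully black instead.
func setUpScene(in arView: ARView) throws {
    arView.cameraMode = .nonAR

    // "skybox" is a hypothetical EnvironmentResource in the bundle.
    let sky = try EnvironmentResource.load(named: "skybox")
    arView.environment.background = .skybox(sky)

    let occluder = ModelEntity(
        mesh: .generateBox(size: 0.5),
        materials: [OcclusionMaterial()])
    let anchor = AnchorEntity(world: .zero)
    anchor.addChild(occluder)
    arView.scene.anchors.append(anchor)
}
```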
I have a working Xcode Cloud setup for my iOS and macOS targets, and I'm trying to add visionOS support. The issue is that Firebase requires using their source distribution for visionOS (instead of their default binary distribution).
Locally, this works by launching Xcode with:
open -a Xcode --env FIREBASE_SOURCE_FIRESTORE project.xcodeproj
For Xcode Cloud, I've added a ci_post_clone.sh script that sets this environment variable for visionOS builds:
#!/bin/bash
if [[ $CI_PRODUCT_PLATFORM == "xrOS" ]]; then
    echo "Running setup for visionOS..."
    export FIREBASE_SOURCE_FIRESTORE=1
fi
However, I'm getting the following error during build:
an out-of-date resolved file was detected at /.../project.xcworkspace/xcshareddata/swiftpm/Package.resolved, which is not allowed when automatic dependency resolution is disabled
So since setting FIREBASE_SOURCE_FIRESTORE=1 changes which SPM dependencies are required:
Normal setup uses: abseil-cpp-binary, grpc-binary
Source distribution needs: abseil-cpp-swiftpm, grpc-ios, boringssl-swiftpm
What's the recommended way to handle this in Xcode Cloud when maintaining builds for all platforms? Should I be using separate workflows with different branches for different platforms? Or is there a better approach?
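One thing worth checking: an `export` inside ci_post_clone.sh does not necessarily reach the later xcodebuild step; workflow-level environment variables set in App Store Connect are the documented way to pass variables into the build itself. Independently of that, a variant of the script that also removes the stale Package.resolved (the path is an assumption based on the error message above) could look like this:

```shell
#!/bin/bash
set -euo pipefail

visionos_setup() {
    if [[ "${CI_PRODUCT_PLATFORM:-}" == "xrOS" ]]; then
        echo "Running setup for visionOS..."
        # Remove the stale resolved file so dependency resolution can run
        # fresh against the source-distribution dependency set.
        rm -f "$CI_PRIMARY_REPOSITORY_PATH/project.xcworkspace/xcshareddata/swiftpm/Package.resolved"
    fi
}

visionos_setup
```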
System:
Xcode 16.2
Using SPM for dependency management
Firebase iOS SDK 10.29.0
Building for iOS, macOS, and visionOS
Thanks in advance for any guidance!
Topic:
Developer Tools & Services
SubTopic:
Xcode Cloud
Tags:
Swift Packages
Xcode
Xcode Cloud
visionOS
When assigning a ManipulationComponent to an Entity, SceneEvents.WillRemoveEntity will be called for that Entity.
Expected behavior: the Entity is not removed from the Scene (not even temporarily), and no SceneEvents are triggered as a result of assigning a ManipulationComponent.
FB20872220
Hi, I've just migrated to Swift Tools 6.2 and package traits, and I'm encountering an issue when using traits with multiple targets in the same Xcode workspace.
Setup:
Main iOS app target
App Clip target
Both consume the same local packages (e.g., UIComponents)
What I'm trying to achieve:
Main app imports packages without the COMPACT_BUILD trait
App Clip imports packages with the COMPACT_BUILD trait enabled
Package configuration (simplified):
// UIComponents/Package.swift
let package = Package(
    name: "UIComponents",
    platforms: [.iOS(.v18)],
    traits: [
        .trait(name: "COMPACT_BUILD", description: "Minimal build for App Clips"),
    ],
    // ...
    targets: [
        .target(
            name: "UIComponents",
            dependencies: [...],
            swiftSettings: [
                .define("COMPACT_BUILD", .when(traits: ["COMPACT_BUILD"])),
            ]
        ),
    ]
)
In the code:
#if COMPACT_BUILD
// Excluded from App Clip
#endif
The consumer packages:
Main app's package imports without trait:
.package(path: "../UIComponents")
App Clip's package imports with trait:
.package(path: "../UIComponents", traits: ["COMPACT_BUILD"])
The problem:
When building the main app target, the COMPACT_BUILD compiler condition is unexpectedly active — even though the main app's dependency chain never enables that trait. It seems like the trait enabled by the App Clip target is "leaking" into the main app build.
I confirmed this by adding #error("COMPACT_BUILD is active") — it triggers when building the main app, which shouldn't happen.
If I disable the App Clip target from the build scheme, the main app builds correctly with COMPACT_BUILD not defined.
I am also able to build the App Clip separately.
Environment:
Xcode 26.2
swift-tools-version: 6.2
iOS 26.2
Questions:
Is this expected behavior with Xcode's SPM integration? Are traits resolved workspace-wide rather than per-target?
Is there a workaround to have different trait configurations for different targets consuming the same package?
Or do I need to fall back to separate package targets (e.g., UIComponents and UIComponentsCompact) to achieve this?
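One thing that may be worth trying before falling back to duplicate targets is declaring an explicit (empty) default trait set in the package manifest, so that nothing but an explicit `traits:` argument in a consumer can enable COMPACT_BUILD. This is a sketch under the assumption that `.default(enabledTraits:)` is available in the tools version in use; it does not by itself change workspace-wide trait unification, but it makes the intended default unambiguous:

```swift
// swift-tools-version: 6.2
// Sketch: same package as above, with an explicit empty default trait set.
import PackageDescription

let package = Package(
    name: "UIComponents",
    platforms: [.iOS(.v18)],
    traits: [
        .trait(name: "COMPACT_BUILD", description: "Minimal build for App Clips"),
        // Nothing enabled unless a consumer opts in explicitly.
        .default(enabledTraits: []),
    ],
    targets: [
        .target(
            name: "UIComponents",
            swiftSettings: [
                .define("COMPACT_BUILD", .when(traits: ["COMPACT_BUILD"])),
            ]
        ),
    ]
)
```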
Any guidance would be appreciated. Thanks!