Hi,
Since iOS 15 I've repeatedly noticed the console warning »ARSessionDelegate is retaining X ARFrames. This can lead to future camera frames being dropped«, even for fairly simple projects using RealityKit and ARKit. Could someone from the ARKit team please elaborate on what causes this warning and what can be done to avoid it?
If I remember correctly, I didn't even assign an ARSessionDelegate.
Thank you!
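For what it's worth, the usual trigger for this warning is holding on to ARFrame objects (directly, or via their capturedImage) beyond the delegate callback; the capture pipeline recycles a small, fixed pool of pixel buffers, so retained frames starve it. Why it would appear without an explicitly assigned delegate is unclear, but here is a minimal sketch of a delegate that copies out only the data it needs instead of keeping the frame:

import ARKit

final class SessionHandler: NSObject, ARSessionDelegate {
    // Keep lightweight copies, never the ARFrame itself; retained frames
    // block the camera's small, fixed pool of pixel buffers.
    private var latestCameraTransform: simd_float4x4?

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        latestCameraTransform = frame.camera.transform
        // Do not store `frame` (or its capturedImage) in a property,
        // dispatch it to another queue, or capture it in a long-lived closure.
    }
}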
Hi everyone! I am working on an AR app and wanted to implement object occlusion, because it removes most of the drift from the object. This works great with the RealityKit sample, but I am unable to replicate that behaviour with SceneKit, because SceneKit does not offer built-in object occlusion. Can we say SceneKit is being deprecated, and that we should rewrite the app in RealityKit (which is obviously a big task)?
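For reference, a minimal sketch of how occlusion is typically enabled on the RealityKit side (scene reconstruction plus the scene-understanding occlusion option, assuming an ARView-based setup and a LiDAR-equipped device), which is presumably what the sample relies on:

import ARKit
import RealityKit

func enableOcclusion(on arView: ARView) {
    let configuration = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
        configuration.sceneReconstruction = .mesh
    }
    // Ask RealityKit to hide virtual content behind reconstructed real-world geometry.
    arView.environment.sceneUnderstanding.options.insert(.occlusion)
    arView.session.run(configuration)
}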
Error:
RoomCaptureSession.CaptureError.exceedSceneSizeLimit
Apple Documentation Explanation:
An error that indicates when the scene size grows past the framework’s limitations.
Issue:
This error pops up on my iPhone 14 Pro (128 GB) after a few RoomPlan scans are done, even when the room is small. It occurs immediately after I start the RoomCaptureSession, following relocalization of the previous AR session (world-tracking configuration). I am having trouble understanding exactly why this error appears and how to debug or resolve it.
Does anyone have any idea how to approach this issue?
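In case it helps anyone debugging the same thing, here is a hedged sketch of where the error can at least be caught (the delegate's didEndWith callback). The recovery shown, discarding the relocalized session and starting a fresh scan, is an assumption, not a documented fix:

import RoomPlan

final class ScanCoordinator: NSObject, RoomCaptureSessionDelegate {
    func captureSession(_ session: RoomCaptureSession,
                        didEndWith data: CapturedRoomData,
                        error: Error?) {
        if let captureError = error as? RoomCaptureSession.CaptureError,
           case .exceedSceneSizeLimit = captureError {
            // Hypothetical recovery: don't relocalize into the accumulated world
            // data; tear the session down and start a fresh scan instead.
            // startFreshScan()   // placeholder for app-specific restart logic
        }
    }
}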
We are working on a world scale AR app that leverages the device location and heading to place objects in the streets, so that they are correctly and stably anchored to certain locations.
Since geo-tracking imagery is only available in certain cities and areas, we are trying to figure out how to fall back when geo-tracking becomes unavailable as the device moves away, while still retaining good AR camera accuracy. We might need to come up with an algorithm that uses the device GPS to line up the ARCamera with our objects.
Question: Does geo-tracking always provide greater than or equal to the accuracy of world tracking, for a GPS outdoor AR experience?
If so, we can simply use ARGeoTrackingConfiguration the entire time and rely on the ARView keeping itself aligned. Otherwise, we need to switch between it and ARWorldTrackingConfiguration when geo-tracking is not available and/or its accuracy is low, then roll our own algorithm to keep the camera aligned.
Thanks.
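Not an answer to the accuracy question, but a small sketch of the availability check the fallback would presumably be gated on (ARGeoTrackingConfiguration.checkAvailability; ongoing quality can additionally be observed through the session's ARGeoTrackingStatus updates):

import ARKit
import RealityKit

func startTracking(in arView: ARView) {
    ARGeoTrackingConfiguration.checkAvailability { isAvailable, _ in
        DispatchQueue.main.async {
            if isAvailable {
                arView.session.run(ARGeoTrackingConfiguration())
            } else {
                // Fall back to plain world tracking outside supported areas.
                arView.session.run(ARWorldTrackingConfiguration())
            }
        }
    }
}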
I am working with MeshAnchors and am having trouble getting to the classification of the triangles/faces.
This post references the MeshAnchor.Geometry, and that struct does have a property named "classifications", but it is of type GeometrySource. I cannot find any classification information in GeometrySource. Am I missing something there?
I think I am looking for something of type MeshAnchor.MeshClassification, but I cannot find any structs with this as a property.
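A hedged sketch of how the classification values might be read back, assuming the classifications GeometrySource stores one 8-bit value per face whose raw value maps onto MeshAnchor.MeshClassification (mirroring ARMeshGeometry.classification on iOS); this is an assumption, not verified against the documentation:

import ARKit

func classification(ofFace faceIndex: Int,
                    from source: GeometrySource) -> MeshAnchor.MeshClassification? {
    // Step past `offset`, then `stride` bytes per face, and read one byte.
    let pointer = source.buffer.contents()
        .advanced(by: source.offset + source.stride * faceIndex)
    let rawValue = pointer.assumingMemoryBound(to: UInt8.self).pointee
    return MeshAnchor.MeshClassification(rawValue: Int(rawValue))
}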
Hello Community,
I'm encountering an issue with the latest iOS 17 update, specifically related to RoomPlan version 2. In iOS 16, using RoomPlan version 1, we were able to display stairs in our app. However, after upgrading to iOS 17 and implementing RoomPlan version 2, the stairs are no longer visible.
Despite thorough investigation, I couldn't find any option in the code to show or hide stairs, or any other objects for that matter. It seems like an issue specific to the update rather than a coding error on our part.
Has anyone else encountered a similar problem? If so, I would greatly appreciate any insights or solutions you might have. It's crucial for our app functionality to have stairs displayed accurately, and we're currently at a loss on how to address this issue.
Thank you in advance for any assistance you can provide.
Best regards
What is the reason the hand-tracking joints have these axis orientations? I'm trying to create a virtual hand model, and they make that a mess.
Hello All,
I'm desperate to find a solution and I need your help, please.
I've created a simple cube in visionOS. I can grab it with my hand (by closing my hand on it) and move it pretty much wherever I want. But I would like to throw it (like a basketball): not push it, but hold it in my hand and throw it away from me with a velocity and direction matching my hand movement (opening my fingers to release it).
Please point me in the right direction.
Cheers and thanks
Mathis
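A hedged sketch of one common approach: estimate the hand's velocity from the last ~100 ms of grab positions, then hand the cube over to the physics simulation on release. ThrowTracker and its methods are made up for illustration, and the entity is assumed to already have collision shapes:

import Foundation
import RealityKit

struct ThrowTracker {
    private var samples: [(time: TimeInterval, position: SIMD3<Float>)] = []

    mutating func record(position: SIMD3<Float>, at time: TimeInterval) {
        samples.append((time, position))
        samples.removeAll { time - $0.time > 0.1 }   // keep ~100 ms of history
    }

    func release(_ cube: ModelEntity) {
        guard let first = samples.first, let last = samples.last,
              last.time > first.time else { return }
        let velocity = (last.position - first.position) / Float(last.time - first.time)
        // Switch to a dynamic body so gravity applies, then launch it.
        cube.components.set(PhysicsBodyComponent(massProperties: .default,
                                                 material: .default,
                                                 mode: .dynamic))
        // Impulse = mass × velocity; with the default 1 kg mass this equals the velocity.
        cube.applyLinearImpulse(velocity, relativeTo: nil)
    }
}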
Devices running iOS 18 using RealityKit do not seem to receive lighting supplied via ARKit Environment Texturing (https://developer.apple.com/documentation/arkit/arworldtrackingconfiguration/2977509-environmenttexturing).
Instead, RealityKit just uses a default IBL.
This happens with RealityView as well as ARView.
It also happens when I explicitly opt in to environment texturing:
let worldTrackingConfig = ARWorldTrackingConfiguration()
worldTrackingConfig.environmentTexturing = .automatic
arView.session.run(worldTrackingConfig)
Even the Xcode AR Template has this issue.
I'm attaching a screenshot of the sample app running on iOS 18 where it's broken and from iOS 17 where it works as expected.
I hope this can get resolved quickly since I see it as a major regression.
Feedback ID: FB15091335
UPDATE:
It works on my older iPhone XS (iOS 18 22A5282m)
Broken on iPad Pro (11-inch) (3rd generation) (iPadOS 18.0 (22A5350a))
Maybe it's related to LiDAR?
Thank you!
iOS 17 (works):
iOS 18 (broken):
After upgrading to Xcode 16, my app, which uses project files imported from my iPad's Reality Composer app, now has two issues that I have found so far. I am using an ARView as a UIViewRepresentable with SwiftUI. (Prior to upgrading to Xcode 16, everything worked well.)
First, there are now several duplicate rcp_export.usdz resources in the "Copy Bundle Resources" build phase section. Even though each file is in a separate folder with a unique UUID, it was causing a compile error saying there are duplicate files. I was able to open the RC project folder and delete the older rcp_project versions which now allows the app to compile. I mention it as it may or may not be related to the second issue.
Second, Xcode isn't generating the project code for the rcproject, so when I call the RCProject.loadSceneAsync function I get an error that says "Cannot find 'RCProject' in scope".
I am working on a project that requires access to the main camera on the Vision Pro. My main account holder applied for the necessary enterprise entitlement, and we were approved and received the Enterprise.license file by email. I have added the Enterprise.license file to my project and manually added the com.apple.developer.arkit.main-camera-access.allow entitlement to the entitlements file, setting it to true, since it was not available in the list when I tried to use the + Capability button in the Signing & Capabilities tab.
I am getting an error: Provisioning profile "iOS Team Provisioning Profile: " doesn't include the com.apple.developer.arkit.main-camera-access.allow entitlement. I have checked the provisioning profile settings online, and there is no manual option for adding the main camera access entitlement, and it does not seem to be getting the approval from the license.
We are using the ARKit image tracking feature on visionOS 2.0 with three pre-registered images. The image tracking works, but only one image is actively tracked at a time. When more than one target image is visible to the camera, it has difficulty detecting and tracking the other images.
Is this the expected behavior in visionOS, or is there something we need to do to resolve this issue?
I have a VideoMaterial inside a RealityView and want to attach this to a DockingRegion inside an immersive environment.
It appears that adding the VideoMaterial entity as a child of the docking region somewhat works, but there are no lighting effects (specular, diffuse) from the playing video.
So essentially: how can you add a VideoMaterial to a DockingRegion and achieve the same reflections/behavior as when using AVPlayerViewController?
The latter is not an option as I need custom controls.
The following documentation tells me that CameraFrame.Sample.Parameters.extrinsics is of type simd_float4x4, great!
https://developer.apple.com/documentation/arkit/cameraframe/sample/parameters/4443449-extrinsics
I have read in the answer of another post that this extrinsics represents the pose of the physical camera relative to the device anchor.
Did I understand correctly that the device anchor is where the scene is rendered from onto the user's display?
What is the coordinate system in which this offset is defined, which axis is left, which one is up, which one is forward?
The last column of the extrinsics seems to define a translation of approximately 2 cm along the x axis, -2 cm along the y axis, and -5 cm along the z axis. I tried to measure the physical distance between the main left and right cameras to find out whether it's closer to 2 cm or 5 cm from the "middle"; it looks more like 5, so I assume the z axis points to the right (from the user's perspective). Is that so? For x and y, I assume the physical camera is approximately 2 cm in front of the user and 2 cm below; which of x and y is horizontal, and which is vertical?
How is the camera image indexed, is it row-major and is the origin on the top left?
I am looking forward to learning about all the details on these extrinsics in order to make use of it.
We're developing a VisionOS application, where we would like to do product recognition (like food items).
We have enterprise entitlements and therefore also main camera access for visionOS. We send these live camera frames to a trained Core ML model and receive 2D coordinates from the model's detection prediction.
Now we would like to create a 3D anchor on the detected items so they are visible to the user. The 3D anchor is going to display the class name of the detected item.
How do we transform this 2D coordinate from the model prediction to a 3D anchor?
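One common approach, sketched below under stated assumptions: back-project the model's 2D pixel into a ray using the pinhole intrinsics, then move the ray into world space with the camera's pose. `intrinsics` and `cameraToWorld` are assumed to come from the enterprise camera frame, and the sign conventions may need flipping depending on image orientation. The resulting ray can then be intersected with the scene mesh or a known plane to get the 3D position for the anchor.

import simd

func worldRay(forPixel pixel: SIMD2<Float>,
              intrinsics K: simd_float3x3,
              cameraToWorld: simd_float4x4) -> (origin: SIMD3<Float>, direction: SIMD3<Float>) {
    let fx = K[0][0], fy = K[1][1]
    let cx = K[2][0], cy = K[2][1]
    // Ray direction in camera space (camera assumed to look down -Z, image origin top-left).
    let dirCamera = simd_normalize(SIMD3<Float>((pixel.x - cx) / fx,
                                                -(pixel.y - cy) / fy,
                                                -1))
    let d4 = cameraToWorld * SIMD4<Float>(dirCamera.x, dirCamera.y, dirCamera.z, 0)
    let origin = SIMD3<Float>(cameraToWorld.columns.3.x,
                              cameraToWorld.columns.3.y,
                              cameraToWorld.columns.3.z)
    return (origin, simd_normalize(SIMD3<Float>(d4.x, d4.y, d4.z)))
}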
We have a plane model (baseCircle) without physics and rigid body components, and no gestures are implemented. However, when tapped, the model unexpectedly falls into infinity.
func fetchEnvResource() {
    var simpleMaterial = SimpleMaterial()
    env = try! Entity.loadModel(named: "bgMain5")
    env.position += [0, 0, -10]
    envTexture = PhysicallyBasedMaterial()
    envTextureUnlit = UnlitMaterial()
    envTexture.baseColor = .init(texture: .init(try! .load(named: "bgMain5")))
    envTextureUnlit.color = .init(texture: .init(try! .load(named: "bgMain5")))
    env.isEnabled = false

    let anchor = AnchorEntity(world: [0, 0, -3])
    baseCircle = ModelEntity(
        mesh: .generatePlane(width: 1.5, depth: 1.5, cornerRadius: 0.75),
        materials: [SimpleMaterial(color: .green, isMetallic: false)]
    )
    env.components.set(InputTargetComponent())

    baseMaterial = PhysicallyBasedMaterial()
    baseMaterialUnlit = UnlitMaterial()
    baseMaterial.baseColor = .init(texture: .init(try! .load(named: "groundTexture")))
    baseMaterial.baseColor.tint = UIColor(white: 1.0, alpha: CGFloat(textureOpacity))
    baseCircle.model?.materials = [baseMaterial]
    baseCircle.generateCollisionShapes(recursive: false)
    baseCircle.components.set(InputTargetComponent())
    // A static physics body is set here...
    baseCircle.components[PhysicsBodyComponent.self] = .init(PhysicsBodyComponent(massProperties: .default, mode: .static))
    // ...and immediately replaced with a kinematic one.
    baseCircle.physicsBody = PhysicsBodyComponent(mode: .kinematic)
    anchorEntity.addChild(baseCircle)
    baseCircle.position = [0, 0, -3]
    baseCircle.isEnabled = false

    let cylinder = ModelEntity(
        mesh: .generateCylinder(height: 0.2, radius: 0.5),
        materials: [SimpleMaterial(color: .blue, isMetallic: false)]
    )
    cylinder.position = [0, -0.1, 0]
    cylinder.generateCollisionShapes(recursive: false)
    // A static physics body is set here and then removed again.
    cylinder.components[PhysicsBodyComponent.self] = .init(PhysicsBodyComponent(massProperties: .default, mode: .static))
    cylinder.physicsBody = nil
    cylinder.scale = [500, 100, 100]
    anchor.addChild(cylinder)
}
The plane model in question is baseCircle. Any suggestions on how to solve this, or potential fixes, would be greatly appreciated.
Hey, I have Enterprise Access on the account and have added the passthrough capability and the entitlement on the main project and the "Broadcast Upload" extension, too.
The broadcast works except it returns a black screen.
I am attaching some screenshots below of the entitlement file. I have tried searching online to no avail, so any help would be greatly appreciated. I am also attaching the code.
import Foundation
import AVFoundation
import ReplayKit
import CoreImage
import UIKit

class VideoAssetWriter {
    private var isRecording = false
    private var outputStream: OutputStream?

    private func setupConnection() {
        guard outputStream == nil else { return }
        print("setting up connection.")
        let serverIP = macIP
        let port = 12345
        var readStream: Unmanaged<CFReadStream>?
        var writeStream: Unmanaged<CFWriteStream>?
        CFStreamCreatePairWithSocketToHost(kCFAllocatorDefault,
                                           serverIP as CFString,
                                           UInt32(port),
                                           &readStream,
                                           &writeStream)
        guard let writeStream = writeStream?.takeRetainedValue() else {
            print("Failed to create write stream")
            return
        }
        self.outputStream = writeStream as OutputStream
        self.outputStream?.open()
    }

    func startRecording() {
        isRecording = true
    }

    func processVideoSampleBuffer(_ sampleBuffer: CMSampleBuffer) {
        print("Processing Sample 1")
        guard isRecording else { return }
        print("Processing Sample 2")
        sendVideoChunkToServer(sampleBuffer)
    }

    private func sendVideoChunkToServer(_ sampleBuffer: CMSampleBuffer) {
        guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        print("Processing Sample 3")
        let ciImage = CIImage(cvPixelBuffer: imageBuffer)
        let context = CIContext()
        guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else { return }
        print("Processing Sample 4")
        let image = UIImage(cgImage: cgImage)
        if let imageData = image.jpegData(compressionQuality: 0.5) {
            guard imageData.count <= 10_000_000 else {
                print("Frame too large: \(imageData.count) bytes")
                return
            }
            if outputStream == nil {
                setupConnection()
            }
            print("sending frame size up connection.")
            // Convert to network byte order (big-endian)
            var frameSize = UInt32(imageData.count).bigEndian
            let sizeData = Data(bytes: &frameSize, count: MemoryLayout<UInt32>.size)
            _ = sizeData.withUnsafeBytes { outputStream?.write($0.baseAddress!.assumingMemoryBound(to: UInt8.self), maxLength: sizeData.count) }
            print("sending image data up connection.")
            // Send frame data
            _ = imageData.withUnsafeBytes { outputStream?.write($0.baseAddress!.assumingMemoryBound(to: UInt8.self), maxLength: imageData.count) }
        }
    }

    func stopRecording() {
        isRecording = false
        outputStream?.close()
        outputStream = nil
    }
}
This is the broadcast picker view wrapper:
// Broadcast Picker View wrapper
struct BroadcastButtonView: UIViewRepresentable {
    func makeUIView(context: Context) -> RPSystemBroadcastPickerView {
        let broadcastPickerView = RPSystemBroadcastPickerView(
            frame: CGRect(x: 0, y: 0, width: 200, height: 200)
        )
        // Make sure this matches your broadcast extension bundle identifier
        broadcastPickerView.preferredExtension = "my-extension-bundle-identifier"
        broadcastPickerView.showsMicrophoneButton = false
        return broadcastPickerView
    }

    func updateUIView(_ uiView: RPSystemBroadcastPickerView, context: Context) {
    }
}
The extension SampleHandler:
override func broadcastPaused() {
    print("paused broadcast")
    // User has requested to pause the broadcast. Samples will stop being delivered.
}

override func broadcastResumed() {
    print("resumed broadcast")
    // User has requested to resume the broadcast. Samples delivery will resume.
}

override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer, with sampleBufferType: RPSampleBufferType) {
    print("broadcast received")
    assetWriter?.processVideoSampleBuffer(sampleBuffer)
}
Looking forward to any and all help.
Information Property list:
Information property list for the extension:
The capabilities:
After I play the audio for the entity, the sound is very quiet and I want to adjust the volume, but I can't find an API for it. What should I do?
if let audio = audioResources {
    entity.playAudio(audio)
}
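A hedged sketch of one way this is usually handled: playAudio(_:) returns an AudioPlaybackController, and its gain (in decibels, 0 = unity) can be adjusted. SpatialAudioComponent also exposes a gain if you prefer configuring it on the entity itself.

if let audio = audioResources {
    let playback = entity.playAudio(audio)
    playback.gain = 10   // decibels; positive values boost, negative values attenuate
}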
Hello All,
We're building a scene now that works kind of like a time-travel door: when the user selects the scene, they pass through the door into the new scene. The transition in the middle needs to feel natural, and it would be even better if the user could walk through it in an immersive space.
There is very little information about this. How can I start, and is there any material I can refer to?
thanks
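A hedged sketch of one possible starting point on visionOS: a "door" built from RealityKit's portal support, where the destination scene lives under an entity with a WorldComponent and is only visible through a plane carrying a PortalComponent. Actually walking through would likely mean transitioning into a full immersive space once the user crosses the threshold; that part is app-specific and not shown.

import RealityKit

func makeTimeTravelDoor() -> Entity {
    let root = Entity()

    // Content of the "other time"; only visible through the portal.
    let otherWorld = Entity()
    otherWorld.components.set(WorldComponent())
    // ... add the destination scene as children of `otherWorld` ...

    // The doorway itself.
    let door = ModelEntity(mesh: .generatePlane(width: 1, height: 2),
                           materials: [PortalMaterial()])
    door.components.set(PortalComponent(target: otherWorld))

    root.addChild(otherWorld)
    root.addChild(door)
    return root
}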
Dear Apple Engineers,
I am working on a project in visionOS and need to implement a curved surface effect for video playback, where the width of the surface can be dynamically adjusted. Specifically, I want the video to be displayed on a curved surface (similar to a scroll unfolding), and the user should be able to adjust the width of this surface.
I have the following specific questions:
How can I implement a curved surface for video playback and ensure the video content is not stretched or distorted on the surface?
How can I create a dynamic curved surface (such as a bending plane) in RealityKit or visionOS, where the width can be adjusted by the user?
Is it possible to achieve more complex curved surface effects (such as scroll unfolding or bending) using Shaders or other techniques?
Thank you very much for your help!
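While waiting for an official answer, here is a minimal sketch covering only the "adjustable curved surface" part: a cylindrical strip whose arc angle (the perceived width) is a parameter, with UVs spread evenly across the arc so the video is not stretched unevenly. The mesh would be regenerated when the user changes the width; more elaborate scroll-unfolding effects would need shader or geometry-modifier work beyond this sketch. All names and defaults below are illustrative assumptions.

import RealityKit
import AVFoundation

func makeCurvedVideoScreen(player: AVPlayer,
                           radius: Float = 2,
                           arcAngle: Float = .pi / 2,   // the adjustable "width"
                           height: Float = 1,
                           segments: Int = 64) throws -> ModelEntity {
    var positions: [SIMD3<Float>] = []
    var uvs: [SIMD2<Float>] = []
    var indices: [UInt32] = []

    for i in 0...segments {
        let t = Float(i) / Float(segments)
        let angle = (t - 0.5) * arcAngle
        let x = radius * sin(angle)
        let z = -radius * cos(angle)
        positions.append(SIMD3<Float>(x, -height / 2, z))   // bottom edge
        positions.append(SIMD3<Float>(x,  height / 2, z))   // top edge
        uvs.append(SIMD2<Float>(t, 0))                      // flip V if the video is upside down
        uvs.append(SIMD2<Float>(t, 1))
        if i < segments {
            let base = UInt32(i * 2)
            // Counter-clockwise winding so the strip faces the viewer at the origin.
            indices.append(contentsOf: [base, base + 2, base + 1,
                                        base + 1, base + 2, base + 3])
        }
    }

    var descriptor = MeshDescriptor(name: "curvedScreen")
    descriptor.positions = MeshBuffers.Positions(positions)
    descriptor.textureCoordinates = MeshBuffers.TextureCoordinates(uvs)
    descriptor.primitives = .triangles(indices)

    let mesh = try MeshResource.generate(from: [descriptor])
    return ModelEntity(mesh: mesh, materials: [VideoMaterial(avPlayer: player)])
}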