A few users have recently reported that they can no longer capture point clouds with our app, specifically on iPhone 15 Pro devices. We recently obtained an in-house device that exhibits this behavior and confirmed that the confidenceMap contains only low-confidence values, regardless of the environment being captured. Our app filters points using a higher confidence threshold; lowering the threshold produces noisy results, as expected, so that is not a viable option.
Other LiDAR-based apps show the same results on this device: no points, or noisy point clouds in apps that allow a lower confidence threshold. On affected devices, Apple's "Displaying a point cloud using scene depth" sample app can be used to visualize the issue.
The first reports of this behavior came in as early as iOS 18.4.
We are looking for recommendations on which team(s) at Apple to contact with these findings, since the behavior manifests on only a small subset of devices.
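For reference, here is a minimal diagnostic sketch of the kind of confidenceMap inspection described above. The function name is hypothetical and a session configured with the .sceneDepth frame semantic is assumed; on the affected devices it would return 0 for .high, since every pixel reports low confidence.

import ARKit
import CoreVideo

// Hypothetical diagnostic: fraction of confidenceMap pixels at or above a
// given ARConfidenceLevel.
func highConfidenceRatio(in frame: ARFrame,
                         threshold: ARConfidenceLevel = .high) -> Float {
    guard let confidenceMap = frame.sceneDepth?.confidenceMap else { return 0 }

    CVPixelBufferLockBaseAddress(confidenceMap, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(confidenceMap, .readOnly) }

    let width = CVPixelBufferGetWidth(confidenceMap)
    let height = CVPixelBufferGetHeight(confidenceMap)
    let bytesPerRow = CVPixelBufferGetBytesPerRow(confidenceMap)
    guard let base = CVPixelBufferGetBaseAddress(confidenceMap) else { return 0 }

    var passing = 0
    for row in 0..<height {
        // Each pixel of the confidence map is a single UInt8 holding an ARConfidenceLevel raw value.
        let rowPointer = base.advanced(by: row * bytesPerRow)
            .assumingMemoryBound(to: UInt8.self)
        for column in 0..<width where rowPointer[column] >= UInt8(threshold.rawValue) {
            passing += 1
        }
    }
    return Float(passing) / Float(width * height)
}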
Submitted feedback FB17075016
Xcode 16.3 release
macOS 15.4
iOS 18.4 / iPadOS 18.4 release builds
The memory graph fails to build when targeting physical devices running the release builds referenced above. It builds successfully against a simulator device.
Seeing a flurry of these in Xcode 15 Beta 8 while debugging an ARKit app
<<<< AVPointCloudData >>>> Fig assert: "_dataBuffer" at bail (AVPointCloudData.m:217) - (err=0)
M1 iPad Pro with iPadOS 16 Beta 3
Xcode 14.0 beta 3
In a freshly created Xcode 14 beta 2 app using the Augmented Reality App template with Content Technology set to Metal, ARWorldTrackingConfiguration.recommendedVideoFormatForHighResolutionFrameCapturing returns a 60 fps, 1920 x 1440 video format. As a result, session.captureHighResolutionFrame fails to deliver a high-resolution frame.
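For context, here is a minimal sketch of the capture path in question (iOS 16 SDK assumed; the function name and the print-based handling are illustrative, not the app's actual code).

import ARKit

// Illustrative only: run the recommended high-res video format (when available)
// and request a single high-resolution frame.
func captureOneHighResolutionFrame(with session: ARSession) {
    let configuration = ARWorldTrackingConfiguration()

    // Expected to be a format larger than the default; in the case described
    // above it comes back as the 60 fps, 1920 x 1440 format.
    if let format = ARWorldTrackingConfiguration.recommendedVideoFormatForHighResolutionFrameCapturing {
        configuration.videoFormat = format
    }
    session.run(configuration)

    session.captureHighResolutionFrame { frame, error in
        if let frame = frame {
            print("Captured high-res frame at \(frame.camera.imageResolution)")
        } else if let error = error {
            print("captureHighResolutionFrame failed: \(error)")
        }
    }
}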
In the function processLastArData(), a command buffer is committed and the output of the last MPS kernel is assigned immediately, without calling waitUntilCompleted() on the buffer. What am I missing?
https://developer.apple.com/documentation/arkit/environmental_analysis/displaying_a_point_cloud_using_scene_depth?language=objc
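For comparison, here is a hypothetical Metal/MPS snippet (not the sample's actual code) showing the synchronization step the question is about: a CPU-side read of an MPS result generally needs waitUntilCompleted() or a completion handler after commit().

import Metal
import MetalPerformanceShaders

// Hypothetical example: encode one MPS kernel, then block until the GPU has
// finished before the destination texture is read on the CPU.
func blurAndWait(device: MTLDevice,
                 queue: MTLCommandQueue,
                 source: MTLTexture,
                 destination: MTLTexture) {
    guard let commandBuffer = queue.makeCommandBuffer() else { return }

    let blur = MPSImageGaussianBlur(device: device, sigma: 2.0)
    blur.encode(commandBuffer: commandBuffer,
                sourceTexture: source,
                destinationTexture: destination)

    commandBuffer.commit()
    // Without this wait (or an addCompletedHandler callback), a CPU read of
    // `destination` may observe stale or partially written data.
    commandBuffer.waitUntilCompleted()
}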
Something about the Picker label seems to have changed in iPadOS 15. The following View uses the version of Picker's init() that is marked for deprecation, but changing the deployment target from 14.5 to 15.0 and using the new version doesn't change the behavior. The first .gif shows the expected behavior, which we had been seeing up until 15.0. Notice in the second .gif that the label is hidden and that the frame modifier is ignored. An extra Text view has been added to show that the binding is working as expected.
iPadOS 14.5 target / Xcode 12.5.1
iPadOS 15.0 target / Xcode 13.0 Beta 5
import SwiftUI

struct OuterView: View {
    // @StateObject (rather than @State) so the view owns the ObservableObject.
    @StateObject var colorViewModel = ColorViewModel()

    var body: some View {
        ColorView(colorViewModel: colorViewModel)
    }
}

class ColorViewModel: ObservableObject {
    @Published var colors = ["yellow", "green", "blue"]
    @Published var selectedColor = "Select a color"
}

struct ColorView: View {
    @ObservedObject var colorViewModel: ColorViewModel

    var body: some View {
        VStack {
            Picker(selection: $colorViewModel.selectedColor,
                   label: Text(colorViewModel.selectedColor)
                       .frame(maxWidth: .infinity, alignment: .leading)
                       .padding()
            ) {
                ForEach(colorViewModel.colors, id: \.self) { color in
                    Text(color)
                }
            }
            .pickerStyle(MenuPickerStyle())
            .border(Color.black, width: 2)

            // Extra Text view to confirm the selection binding is updating.
            Text(colorViewModel.selectedColor)
        }
        .padding()
    }
}

struct ColorView_Previews: PreviewProvider {
    static var previews: some View {
        OuterView()
    }
}
What approach should be used instead in iOS 14?
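One possible workaround, offered as an assumption rather than a documented fix, is to drop down to Menu (available since iOS 14) so the label view and its frame modifier remain under the app's control on both iPadOS 14 and 15.

import SwiftUI

// Sketch of a Menu-based replacement for the MenuPickerStyle Picker above.
struct ColorMenuView: View {
    @ObservedObject var colorViewModel: ColorViewModel

    var body: some View {
        Menu {
            ForEach(colorViewModel.colors, id: \.self) { color in
                Button(color) { colorViewModel.selectedColor = color }
            }
        } label: {
            // The label is an ordinary view here, so the frame and padding apply.
            Text(colorViewModel.selectedColor)
                .frame(maxWidth: .infinity, alignment: .leading)
                .padding()
        }
        .border(Color.black, width: 2)
    }
}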