Reply to Object Capture With only manual capturing
I tried to initialize the session as below.

```swift
// MARK: Initialize Camera
private func initializeCamera() {
    print("Initialize Camera")

    currentCamera = AVCaptureDevice.default(.builtInLiDARDepthCamera,
                                            for: .depthData,
                                            position: .back)
    currentSession = AVCaptureSession()
    currentSession.sessionPreset = .photo

    do {
        let cameraInput = try AVCaptureDeviceInput(device: currentCamera)
        currentSession.addInput(cameraInput)
    } catch {
        fatalError()
    }

    let videoOutput = AVCaptureVideoDataOutput()
    videoOutput.setSampleBufferDelegate(self, queue: currentDataOutputQueue)
    videoOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
    currentSession.addOutput(videoOutput)

    let depthOutput = AVCaptureDepthDataOutput()
    depthOutput.setDelegate(self, callbackQueue: currentDataOutputQueue)
    depthOutput.isFilteringEnabled = true
    currentSession.addOutput(depthOutput)

    currentPhotoOutput = AVCapturePhotoOutput()
    currentSession.addOutput(currentPhotoOutput)
    currentPhotoOutput.isDepthDataDeliveryEnabled = true
}
```
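For completeness, a minimal sketch of how the session above might then be started and a depth-enabled photo requested. This assumes the `currentSession` / `currentPhotoOutput` properties and delegate conformances from the code above; `startAndCapture` is a hypothetical helper name, and depth delivery is guarded by `isDepthDataDeliverySupported` since enabling it on an unsupported output throws at runtime:

```swift
// Sketch: start the session off the main thread and request one photo
// with embedded depth data. Assumes currentSession / currentPhotoOutput
// are configured as in initializeCamera() above.
private func startAndCapture() {
    // Only enable depth delivery if the photo output actually supports it.
    guard currentPhotoOutput.isDepthDataDeliverySupported else { return }
    currentPhotoOutput.isDepthDataDeliveryEnabled = true

    // startRunning() blocks, so call it off the main thread.
    DispatchQueue.global(qos: .userInitiated).async {
        self.currentSession.startRunning()
    }

    let settings = AVCapturePhotoSettings()
    settings.isDepthDataDeliveryEnabled = true
    // The AVCapturePhotoCaptureDelegate's
    // photoOutput(_:didFinishProcessingPhoto:error:) callback then
    // receives an AVCapturePhoto whose depthData carries the depth map.
    currentPhotoOutput.capturePhoto(with: settings, delegate: self)
}
```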
Topic: Graphics & Games SubTopic: RealityKit Tags:
Sep ’23
Reply to Applying Point Cloud to updated PhotogrammetrySession
Without using RealityKit's Object Capture API, how can I manually add that data when capturing images through AVFoundation or ARKit?
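For context, the kind of approach I have in mind. This is a hedged sketch, assuming macOS 12+ (where `PhotogrammetrySession` accepts a `Sequence` of `PhotogrammetrySample` values, not just a folder URL); `capturedFrames`, `makeSamples`, and `reconstruct` are hypothetical names standing in for manually captured AVFoundation/ARKit output:

```swift
import RealityKit
import AVFoundation

// Sketch (macOS): feed manually captured frames to PhotogrammetrySession
// as PhotogrammetrySample values instead of a directory of image files.
// `capturedFrames` is a hypothetical array of color + depth pairs.
func makeSamples(from capturedFrames: [(image: CVPixelBuffer, depth: AVDepthData)]) -> [PhotogrammetrySample] {
    capturedFrames.enumerated().map { index, frame in
        var sample = PhotogrammetrySample(id: index, image: frame.image)
        // Attach the per-frame depth map captured through AVFoundation/ARKit.
        sample.depthDataMap = frame.depth.depthDataMap
        return sample
    }
}

func reconstruct(samples: [PhotogrammetrySample]) throws -> PhotogrammetrySession {
    // The sample-sequence initializer (macOS 12+) lets you supply depth,
    // gravity, and masks per frame rather than relying on file metadata.
    try PhotogrammetrySession(input: samples,
                              configuration: PhotogrammetrySession.Configuration())
}
```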
Topic: Graphics & Games SubTopic: General Tags:
Jul ’23
Reply to PhotogrammetrySamples metaData
How did you pass a PhotogrammetrySample to PhotogrammetrySession? As far as I know, PhotogrammetrySession only accepts a directory of saved images.
Topic: Graphics & Games SubTopic: RealityKit Tags:
Jul ’23
Reply to How is the photo setting of RealityKit's object capture session?
I'm asking because the new object capture session's output is quite accurate in scale relative to the real world. But with AVFoundation, even though I've saved the photos as HEIC and the depth as TIFF, the size of the reconstructed model differs from the real size.
Topic: Graphics & Games SubTopic: General Tags:
Jun ’23