Hi, I have a problem attaching my grayscale depth map to the original RGB image. The produced depth map doesn't have the cameraCalibration value, which should be responsible for aligning the depth data to the image. How do I align the depth map? I saw an article about it, but it isn't very detailed, so I might be missing a step.
I tried to:
convert my depth map into a pixel buffer,
create an image destination ref and add the image to it,
add the auxData (depth map dictionary).
This is the output:
There is some black space, and my RGB image's colours change.
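For reference, the attach step I'm doing looks roughly like this; a minimal sketch, assuming the grayscale map has already been wrapped in an AVDepthData and the base image is a CGImage (the helper name and variables are just for illustration):
import AVFoundation
import ImageIO
import UniformTypeIdentifiers

// Hypothetical helper: embeds depthData into a HEIC alongside baseImage.
func writeImage(_ baseImage: CGImage, depthData: AVDepthData, to url: URL) -> Bool {
    guard let dest = CGImageDestinationCreateWithURL(url as CFURL,
                                                     UTType.heic.identifier as CFString,
                                                     1, nil) else { return false }
    // Add the RGB image first.
    CGImageDestinationAddImage(dest, baseImage, nil)

    // Ask AVDepthData for the auxiliary-data dictionary and its type
    // (kCGImageAuxiliaryDataTypeDepth or ...Disparity, depending on the source).
    var auxType: NSString?
    guard let auxDict = depthData.dictionaryRepresentation(forAuxiliaryDataType: &auxType),
          let type = auxType else { return false }

    // Attach the depth map as auxiliary data on image 0.
    CGImageDestinationAddAuxiliaryDataInfo(dest, type as CFString, auxDict as CFDictionary)
    return CGImageDestinationFinalize(dest)
}
Without embedded camera calibration, I assume the depth buffer has to already match the RGB image's aspect ratio and orientation, which may be why I see the black space.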
According to the docs:
The first time your app performs an operation that requires [photo library] authorization, the system automatically and asynchronously prompts the user for it.
(https://developer.apple.com/documentation/photokit/delivering-an-enhanced-privacy-experience-in-your-photos-app)
I.e. it's not necessary for the app to call PHPhotoLibrary.requestAuthorization.
This does seem to be what happens when my app runs on an iPhone or iPad; the prompt is shown. But when it runs on a Mac in "designed for iPad" mode, the permission dialog is not presented. Instead the code continues to see status == .notDetermined.
That's today, on macOS 15.3. It may have worked in the past.
Is anyone else seeing issues with this? Should I call requestAuthorization explicitly? (Would that actually work?)
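For reference, a minimal sketch of what calling it explicitly would look like when the status is still .notDetermined (this is what I'd try; whether the prompt actually appears in "designed for iPad" mode is exactly my question):
import Photos

func ensurePhotoLibraryAccess() {
    let status = PHPhotoLibrary.authorizationStatus(for: .readWrite)
    guard status == .notDetermined else { return }
    // Explicitly trigger the system prompt instead of relying on the implicit one.
    PHPhotoLibrary.requestAuthorization(for: .readWrite) { newStatus in
        print("Photo library authorization is now \(newStatus.rawValue)")
    }
}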
Several unknown pictures have been added to my iPhone Photos. I'm developing an iOS app in SwiftUI, and I'm sure that my in-development app never asked for permission to save photos to the device, and that these pictures weren't taken by me, downloaded from the web, or synced from my iCloud. Why did this happen? Has my account or my iPhone been attacked or hacked? By the way, my device has debug mode turned on; could this setting introduce a vulnerability, or could some debug tool add photos to my albums?
The attached pictures are the unknown photos in my Photos app. I searched for this photo with Google and found that many developers on Stack Overflow use this picture when they run into trouble with image display or other manipulation. Where does this photo come from? Why does it exist in my photo albums?
I am new here and would appreciate help with code, or an explanation of what to use in Swift, for an app that can capture LiDAR scan and RGB data from captured pictures, generate a 3D mesh, and create an .OBJ, .MTL, and .JPEG file set for further manipulation of the 3D model. I am able to create a 3D mesh and an .OBJ file from the LiDAR scan, but I cannot generate the .MTL and .JPEG for the 3D model's texture.
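In case it helps frame the question: as I understand it, the .MTL is just a small text file that names the .JPEG texture, and the .OBJ references it via mtllib/usemtl. A minimal sketch of writing that pair (file names and the already-exported OBJ are assumptions on my side):
import Foundation

// Hypothetical helper: writes scan.mtl next to an existing scan.obj and texture.jpeg,
// and prepends the mtllib/usemtl references to the OBJ.
func attachTexture(objURL: URL, textureFileName: String) throws {
    let mtlURL = objURL.deletingPathExtension().appendingPathExtension("mtl")
    let material = """
    newmtl scannedMaterial
    Ka 1.0 1.0 1.0
    Kd 1.0 1.0 1.0
    map_Kd \(textureFileName)
    """
    try material.write(to: mtlURL, atomically: true, encoding: .utf8)

    var obj = try String(contentsOf: objURL, encoding: .utf8)
    obj = "mtllib \(mtlURL.lastPathComponent)\nusemtl scannedMaterial\n" + obj
    try obj.write(to: objURL, atomically: true, encoding: .utf8)
}
The hard part, as I understand it, is producing the UV coordinates and baking the .JPEG from the captured RGB frames; the .MTL itself only wires an existing texture to the mesh.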
I am trying to recreate the iOS Messages app photo selection UI, where a PHPickerViewController is displayed half screen, with the message text field and a scrolling photo viewer on top. I have that UI mostly working, but cannot figure out how to allow a user to remove an image from my scrolling photo viewer (just like in the iOS Messages app).
When the picker is initially displayed, I can show selected images using the preselectedAssetIdentifiers. However, if the user taps the "x" to remove an image from the scrolling photo viewer, there is no way that I have found to update that selection in the picker.
I can dismiss/show a new picker with the animated property set to false, but that creates a very apparent bounce in the screen. Are there any ways I am missing to accomplish this?
Here is what I have so far:
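Roughly, the picker setup is along these lines (a simplified sketch, not my exact code; identifiers and layout details are placeholders):
import PhotosUI

// Simplified sketch of the embedded, half-height picker with preselected assets.
var config = PHPickerConfiguration(photoLibrary: .shared())
config.filter = .images
config.selectionLimit = 0                               // unlimited, like Messages
config.selection = .ordered
config.preselectedAssetIdentifiers = selectedIdentifiers // my own [String] of asset IDs
let picker = PHPickerViewController(configuration: config)
picker.delegate = self
// Embedded half screen via a child view controller / sheet detent.
// The only way I've found to reflect an "x" tap from my photo strip is to rebuild the
// picker with an updated preselectedAssetIdentifiers array, which causes the visible bounce.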
I want to modify the photo's EXIF information, which means passing the original image through CGImageDestinationAddImageFromSource and adding the modified EXIF metadata via the properties parameter.
Here is the complete process:
static func setImageMetadata(asset: PHAsset, exif: [String: Any]) {
    let options = PHContentEditingInputRequestOptions()
    options.canHandleAdjustmentData = { (adjustmentData) -> Bool in
        return true
    }
    asset.requestContentEditingInput(with: options, completionHandler: { input, map in
        guard let inputNN = input else {
            return
        }
        guard let url = inputNN.fullSizeImageURL else {
            return
        }
        let output = PHContentEditingOutput(contentEditingInput: inputNN)
        let adjustmentData = PHAdjustmentData(formatIdentifier: AppInfo.appBundleId(),
                                              formatVersion: AppInfo.appVersion(),
                                              data: Data())
        output.adjustmentData = adjustmentData
        let outputURL = output.renderedContentURL
        guard let source = CGImageSourceCreateWithURL(url as CFURL, nil) else {
            return
        }
        guard let dest = CGImageDestinationCreateWithURL(outputURL as CFURL, UTType.jpeg.identifier as CFString, 1, nil) else {
            return
        }
        CGImageDestinationAddImageFromSource(dest, source, 0, exif as CFDictionary)
        let d = CGImageDestinationFinalize(dest)
        // d is true, and I checked the content of outputURL: the image was written correctly,
        // it can be converted to a UIImage, and the image looks fine.
        PHPhotoLibrary.shared().performChanges {
            let changeReq = PHAssetChangeRequest(for: asset)
            changeReq.contentEditingOutput = output
        } completionHandler: { succ, err in
            if !succ {
                print(err) // 3303 here, always!
            }
        }
    })
}
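One thing I'm not sure about (so treat it as an assumption on my part): whether the EXIF pairs need to be nested under kCGImagePropertyExifDictionary instead of being passed as a flat dictionary, e.g.:
// Group the EXIF key/value pairs under their top-level ImageIO key.
let properties = [kCGImagePropertyExifDictionary as String: exif]
CGImageDestinationAddImageFromSource(dest, source, 0, properties as CFDictionary)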
I set the device format and color space to Apple Log and turned off HDR, so why is the movie output still in HDR format rather than ProRes Log?
Full runnable demo here:
https://github.com/SpaceGrey/ColorSpaceDemo
session.sessionPreset = .inputPriority

// Get the back camera.
let deviceDiscoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera],
                                                              mediaType: .video,
                                                              position: .back)
backCamera = deviceDiscoverySession.devices.first!

try! backCamera.lockForConfiguration()
backCamera.automaticallyAdjustsVideoHDREnabled = false
backCamera.isVideoHDREnabled = false

let formats = backCamera.formats
let appleLogFormat = formats.first { format in
    format.supportedColorSpaces.contains(.appleLog)
}
print(appleLogFormat!.supportedColorSpaces.contains(.appleLog))

backCamera.activeFormat = appleLogFormat!
backCamera.activeColorSpace = .appleLog
print("colorspace is Apple Log \(backCamera.activeColorSpace == .appleLog)")
backCamera.unlockForConfiguration()

do {
    let input = try AVCaptureDeviceInput(device: backCamera)
    session.addInput(input)
} catch {
    print(error.localizedDescription)
}

// Add output.
output = AVCaptureMovieFileOutput()
session.addOutput(output)
let connection = output.connection(with: .video)!
print(
    output.outputSettings(for: connection)
)
/*
 ["AVVideoWidthKey": 1920, "AVVideoHeightKey": 1080, "AVVideoCodecKey": apch, <-- ProRes is enabled.
  "AVVideoCompressionPropertiesKey": {
      AverageBitRate = 220029696;
      ExpectedFrameRate = 30;
      PrepareEncodedSampleBuffersForPaddedWrites = 1;
      PrioritizeEncodingSpeedOverQuality = 0;
      RealTime = 1;
  }]
 */

previewSource = DefaultPreviewSource(session: session)

queue.async {
    self.session.startRunning()
}
}
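One thing I'm wondering about (an assumption, not something I've confirmed): the session is allowed to reconfigure the device for wide color by default, which might override activeColorSpace. A sketch of disabling that (ideally before the device configuration) and re-checking after the outputs are added:
session.automaticallyConfiguresCaptureDeviceForWideColor = false

// Re-check and re-apply the color space after inputs/outputs are attached,
// since session configuration changes can reset it.
try? backCamera.lockForConfiguration()
if backCamera.activeColorSpace != .appleLog {
    backCamera.activeColorSpace = .appleLog
}
backCamera.unlockForConfiguration()
print("colorspace after adding output: \(backCamera.activeColorSpace == .appleLog)")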
Hi,
I'm using Core Graphics to load a .DNG photo shot by a Leica Q3 camera.
The photo is shot in portrait; however, the embedded preview is rotated 90 degrees to landscape.
I load the photo like this:
let options = [kCGImageSourceDecodeRequest: kCGImageSourceDecodeToHDR] as CFDictionary
let source = CGImageSourceCreateWithData(data as CFData, nil)
let cgimage = CGImageSourceCreateImageAtIndex(source, 0, options)
let properties = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [CFString : Any]
When doing this I can see that the orientation property is 1, indicating that the orientation is 'Up', which it isn't.
If I don't specify the kCGImageSourceDecodeToHDR option (essentially setting options to nil), the orientation property is 8 (rotated 90 degrees).
What puzzles me is that a change to the CGImageSourceCreateImageAtIndex call can influence the later call to CGImageSourceCopyPropertiesAtIndex.
I would expect these to work independently?
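As a workaround I'm considering (not sure it's the intended approach), reading the orientation from a properties call without the HDR option and applying it manually to the decoded image:
// Properties copied without the HDR decode option report orientation = 8 here.
let plainProps = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [CFString: Any]
let exifOrientation = plainProps?[kCGImagePropertyOrientation] as? Int32 ?? 1

// Apply the orientation to the HDR-decoded image before display.
let oriented = CIImage(cgImage: cgimage!).oriented(forExifOrientation: exifOrientation)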
Cheers
Thomas
I’m developing a hybrid app (WebView / Turbo Native) that uses getUserMedia to access the back camera for a PPG/heart rate measurement feature (the user places their finger on the camera).
Problem: Even when I specify constraints like:
{
  video: {
    deviceId: '...',
    facingMode: { exact: 'environment' },
    advanced: [{ zoom: 1.0 }]
  },
  audio: false
}
On iPhone 15 (iOS 18), iOS unexpectedly switches between the wide, ultra-wide, and telephoto lenses during the measurement.
This breaks the heart rate detection, and it forces the user to move their finger in the middle of the measurement.
Question: Is there any way, via getUserMedia/WebRTC, to force iOS to use only the wide-angle lens and prevent automatic lens switching?
I know that with AVFoundation (Swift) you can pick .builtInWideAngleCamera, but I’m hoping to avoid building a custom native layer and would prefer to stick with WebView/JavaScript if possible to save time and complexity.
Any suggestions, workarounds, or updates from Apple would be greatly appreciated!
Thanks a lot!
Hey,
I have a camera app that captures a ProRAW photo and then runs a few Core Image filters before saving it to the device as a HEIC. However, I'm finding that capturing at 48MP is rather slow. Testing a minimal pipeline on an iPhone 16 Pro:
Shutter press => file received in output: 1.2 ~ 1.6s
CIRawFilter created using photo file representation then rendered to context, without any filters: 0.8s ~ 1s
Saving to device ~0.15s
Is this the expected time for capture and processing? The native camera app seems to save the images within half a second. I'm using QualityPrioritization.balanced and the highest resolution available, which is 48MP.
Would using the CIRawFilter with the pixelBuffer from the photo output be faster? I tried it but couldn't get it to output an image. Are there any other things I could try to speed this up? Is it possible to capture at 24MP instead?
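On the 24MP question, this is the kind of thing I had in mind, assuming the active format offers a roughly 24MP entry (device and photoOutput are my own objects, and I haven't verified this end to end):
// Pick a ~24MP entry from the format's supported dimensions, if one exists.
let dims = device.activeFormat.supportedMaxPhotoDimensions
if let target = dims.first(where: { Int($0.width) * Int($0.height) <= 26_000_000 &&
                                    Int($0.width) * Int($0.height) >= 20_000_000 }) {
    photoOutput.maxPhotoDimensions = target
    let settings = AVCapturePhotoSettings()   // in my app this would be the ProRAW settings
    settings.maxPhotoDimensions = target
    photoOutput.capturePhoto(with: settings, delegate: self)
}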
Thanks,
Alex
I want to create a Live Photo. The project includes a .jpg image and a .mov video (2 seconds). I am sure they are correct.
Two permissions have been added in Xcode:
Privacy - Photo Library Usage Description
Privacy - Photo Library Additions Usage Description
Simulator: iPhone 16, iOS 18.3
The code in ContentView.swift:
private func saveLivePhoto(imageURL: URL, videoURL: URL, completion: @escaping (Bool, Error?) -> Void) {
    PHPhotoLibrary.shared().performChanges {
        let creationRequest = PHAssetCreationRequest.forAsset()
        let options = PHAssetResourceCreationOptions()
        options.shouldMoveFile = false
        creationRequest.addResource(with: .photo, fileURL: imageURL, options: options)
        creationRequest.addResource(with: .pairedVideo, fileURL: videoURL, options: options)
    } completionHandler: { success, error in
        DispatchQueue.main.async {
            print(error)
            completion(success, error)
        }
    }
}
guard let imageURL = Bundle.main.url(forResource: "livephoto", withExtension: "jpeg"),
      let videoURL = Bundle.main.url(forResource: "livephoto", withExtension: "mov") else {
    showAlertMessage(title: "error", message: "cant find Live Photo ")
    return
}
print("imageURL: \(imageURL)")
print("videoURL: \(videoURL)")
saveLivePhoto(imageURL: imageURL, videoURL: videoURL) { success, error in
    if success {
        xxxxx
    } else {
        xxxxx
    }
}
Really need help, thanks
When I want to edit an image in the Preview app, there are only options to rotate left and right in 90-degree steps. There is no option to rotate by an arbitrary angle. Please look into this and provide that option in the next update.
Hi all,
I'm using the Apple sample code below to create an application using DockKit:
"Controlling a DockKit accessory using your camera app"
https://developer.apple.com/documentation/dockkit/controlling-a-dockkit-accessory-using-your-camera-app?changes=_8
I used Vision hand recognition and passed the observation data to dockAccessory.track, but the Belkin and Insta360 devices never move on an iPhone 16 Pro Max with iOS 18.3.
If I use other functions like face search (system tracking) in the app, those work fine.
I used the Belkin and the Insta360 Flow 2 Pro to reproduce the problem.
My friend also says that the custom tracking feature was working fine on the iOS 18 beta, but on the current iOS 18.3 that feature does not work.
If I could get the iOS 18.0 beta we could test that feature, but I cannot revert my iPhone from iOS 18.3 to the 18.0 beta.
Regards,
TO
On some devices, loadFileRepresentation(forTypeIdentifier:completionHandler:) takes a long time (about two minutes) to call back with the result for some large videos (about 200 MB, taken with the device camera).
environment:
Model: iPhone 12
Model Number: MGGM3CH/A
iOS Version: 18.3.2
PHPickerResult.NSItemProvider.loadFileRepresentation()
// import PhotosUI
func picker(_ picker: PHPickerViewController, didFinishPicking results: [PHPickerResult]) {
    picker.dismiss(animated: true, completion: nil)
    guard let provider = results.last?.itemProvider else { return }
    guard provider.hasItemConformingToTypeIdentifier(UTType.movie.identifier) else {
        return
    }
    Task {
        provider.loadFileRepresentation(forTypeIdentifier: UTType.movie.identifier) { url, error in
            guard let url = url else {
                return
            }
            // Do some stuff...
        }
    }
}
P.S. I also tried some other functions, e.g. provider.loadItem(forTypeIdentifier:), but they did not work either.
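The alternative I'm considering is going through PHAsset instead of the item provider, which as far as I understand avoids the file copy/transcode step; a sketch (it assumes a library-backed picker so assetIdentifier is populated, plus library read access):
// Configure the picker with the shared library so assetIdentifier is non-nil.
var config = PHPickerConfiguration(photoLibrary: .shared())
config.filter = .videos

// Later, in didFinishPicking:
if let id = results.last?.assetIdentifier,
   let asset = PHAsset.fetchAssets(withLocalIdentifiers: [id], options: nil).firstObject {
    let options = PHVideoRequestOptions()
    options.isNetworkAccessAllowed = true
    PHImageManager.default().requestAVAsset(forVideo: asset, options: options) { avAsset, _, _ in
        // Use avAsset (e.g. AVURLAsset's url) directly instead of copying the file.
    }
}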
On some devices, when I select the same media multiple times, the data returned by `loadFileRepresentation(forTypeIdentifier:completionHandler:)` is different each time (data.count is not equal).
environment:
* Model: iPhone 12
* Model Number: MGGM3CH/A
* iOS Version: 18.3.2
```Swift
// import PhotosUI
func picker(_ picker: PHPickerViewController, didFinishPicking results: [PHPickerResult]) {
    picker.dismiss(animated: true, completion: nil)
    guard let provider = results.last?.itemProvider else { return }
    guard provider.hasItemConformingToTypeIdentifier(UTType.movie.identifier) else {
        return
    }
    Task {
        provider.loadFileRepresentation(forTypeIdentifier: UTType.movie.identifier) { url, error in
            guard let url = url else {
                return
            }
            if let data = try? Data(contentsOf: url) {
                print("data count is: \(data.count)")
            }
        }
    }
}
```
P.S. I also tried some other functions, e.g. `provider.loadItem(forTypeIdentifier:)`, but they did not work either.
I'm developing a tennis ball tracking feature using Vision Framework in Swift, specifically utilizing VNDetectedObjectObservation and VNTrackObjectRequest.
Occasionally (but not always), I receive the following runtime error:
Failed to perform SequenceRequest: Error Domain=com.apple.Vision Code=9 "Internal error: unexpected tracked object bounding box size" UserInfo={NSLocalizedDescription=Internal error: unexpected tracked object bounding box size}
From my investigation, I suspect the issue arises when the bounding box from the initial observation (VNDetectedObjectObservation) is too small. However, Apple's documentation doesn't clearly define the minimum bounding box size that's considered valid by VNTrackObjectRequest.
Could someone clarify:
What is the minimum acceptable bounding box width and height (normalized) that Vision Framework's VNTrackObjectRequest expects?
Is there any recommended practice or official guidance for bounding box size validation before creating a tracking request?
This information would be extremely helpful to reliably avoid this internal error.
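For reference, this is the kind of validation I have in mind; the 1% threshold is purely my own guess, not a documented limit:
import Vision

// Guess: reject boxes smaller than 1% of the frame in either dimension before tracking.
let minNormalizedSide: CGFloat = 0.01

func makeTrackingRequest(for boundingBox: CGRect) -> VNTrackObjectRequest? {
    guard boundingBox.width >= minNormalizedSide,
          boundingBox.height >= minNormalizedSide else {
        return nil // too small; seems to trigger the Code=9 internal error
    }
    let observation = VNDetectedObjectObservation(boundingBox: boundingBox)
    let request = VNTrackObjectRequest(detectedObjectObservation: observation)
    request.trackingLevel = .accurate
    return request
}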
Thank you!
I am developing a video streaming app for iPhone.
The minimum version is iOS 13.
I want to connect an external USB camera to the iPhone app and stream from it.
I have looked through a lot of information and have not found how to do this.
Is it possible to do this? Is there any documentation on this?
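For reference, this is the AVFoundation-side sketch I found while searching; as far as I can tell the .external device type is far newer than iOS 13 (iPadOS 17+), so I'm not sure it applies to iPhone at all:
import AVFoundation

// Assumption: running on an OS where .external is available (iPadOS 17+).
let discovery = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.external],
    mediaType: .video,
    position: .unspecified
)
if let usbCamera = discovery.devices.first {
    let session = AVCaptureSession()
    if let input = try? AVCaptureDeviceInput(device: usbCamera), session.canAddInput(input) {
        session.addInput(input)
    }
    session.startRunning()
}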
Hi,
Could anybody share the full list of supported predicate keys for PHFetchOptions?
I'm aware of the list that is posted in the documentation:
https://developer.apple.com/documentation/photos/phfetchoptions
However, I have reason to believe that this is not an exhaustive list, and there also seem to be mistakes in this doc; e.g. isFavorite does not work, but favorite does.
Through some experimentation I also found that this works:
NSPredicate(format: "adjustmentFormatIdentifier == 'com.pixelmatorteam.touch.x.photo.PhotosAdjustmentData.EmbeddedSlimSidecarFileInfo.compressed'")
even though adjustmentFormatIdentifier is not listed as a supported key.
Are there other secret keys that you are aware of?
Specifically I want to filter a fetch result for edited items. Something like this:
NSPredicate(format: "hasAdjustments == true")
(I tried this, doesn't work)
The native Photos app has such a filter which leads me to believe that there probably is a key for this:
If one of the Framework Developers reads this: Could you please update the documentation page with this information?
Finally if there really aren't any more secret keys, is there a way to achieve this with adjustmentFormatIdentifier? I have tried a bunch of stuff already like adjustmentFormatIdentifier != nil but for some reason that gives me the exact opposite of what I want: all the photos without edits. 🫠
Any tips on the correct syntax here would be much appreciated.
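In case there really is no predicate key for this, the fallback I'm considering is post-filtering on asset resources; a sketch (probably slow on large libraries):
import Photos

// Fallback: fetch everything, then keep assets that carry adjustment data.
func fetchEditedAssets() -> [PHAsset] {
    let all = PHAsset.fetchAssets(with: .image, options: PHFetchOptions())
    var edited: [PHAsset] = []
    all.enumerateObjects { asset, _, _ in
        let resources = PHAssetResource.assetResources(for: asset)
        if resources.contains(where: { $0.type == .adjustmentData }) {
            edited.append(asset)
        }
    }
    return edited
}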
Hi all,
In macOS, how can I disable or enable the built-in camera with a program or script?
A functioning multiplatform app, which includes use of Continuity Camera on an M1 Mac mini running Sequoia 15.5, works correctly, capturing photos with AVCapturePhoto. However, that app (and a test app just for Continuity Camera) crashes at the delegate callback when run on a 2017 MacBook Pro under macOS 13.7.5. The app was created with Xcode 16 (various releases) and uses Swift 6 (but I also tried 5). Compiling and running the test app with Xcode 15.2 on the 13.7.5 machine also crashes at the delegate callback.
The iPhone 15 Continuity Camera gets detected and set up correctly, and preview video works correctly. It's when the capturePhoto code runs that the crash occurs.
The relevant capture code is:
func capturePhoto() {
    let captureSettings = AVCapturePhotoSettings()
    captureSettings.flashMode = .auto
    photoOutput.maxPhotoQualityPrioritization = .quality
    photoOutput.capturePhoto(with: captureSettings, delegate: PhotoDelegate.shared)
    print("**** CameraManager: capturePhoto")
}
and the delegate callbacks are:
class PhotoDelegate: NSObject, AVCapturePhotoCaptureDelegate {
    nonisolated(unsafe) static let shared = PhotoDelegate()

    // MARK: - Delegate callbacks
    func photoOutput(
        _ output: AVCapturePhotoOutput,
        didFinishProcessingPhoto photo: AVCapturePhoto,
        error: (any Error)?
    ) {
        print("**** CameraManager: didFinishProcessingPhoto")
        guard let pData = photo.fileDataRepresentation() else {
            print("**** photoOutput is empty")
            return
        }
        print("**** photoOutput data is \(pData.count) bytes")
    }

    func photoOutput(
        _ output: AVCapturePhotoOutput,
        willBeginCaptureFor resolvedSettings: AVCaptureResolvedPhotoSettings
    ) {
        print("**** CameraManager: willBeginCaptureFor")
    }

    func photoOutput(_ output: AVCapturePhotoOutput, willCapturePhotoFor resolvedSettings: AVCaptureResolvedPhotoSettings) {
        print("**** CameraManager: willCaptureCapturePhotoFor")
    }
}
The significant parts of the crash report are:
Crashed Thread: 3 Dispatch queue: com.apple.cmio.CMIOExtensionProviderHostContext
Exception Type: EXC_BAD_ACCESS (SIGSEGV)
Exception Codes: KERN_INVALID_ADDRESS at 0x0000000000000000
Exception Codes: 0x0000000000000001, 0x0000000000000000
Termination Reason: Namespace SIGNAL, Code 11 Segmentation fault: 11
Terminating Process: exc handler [30850]
VM Region Info: 0 is not in any region. Bytes before following region: 4296495104
REGION TYPE START - END [ VSIZE] PRT/MAX SHRMOD REGION DETAIL
UNUSED SPACE AT START
--->
__TEXT 100175000-10017f000 [ 40K] r-x/r-x SM=COW ...tinuityCamera
Thread 0:: Dispatch queue: com.apple.main-thread
0 libsystem_kernel.dylib 0x7ff803aed552 mach_msg2_trap + 10
1 libsystem_kernel.dylib 0x7ff803afb6cd mach_msg2_internal + 78
2 libsystem_kernel.dylib 0x7ff803af4584 mach_msg_overwrite + 692
3 libsystem_kernel.dylib 0x7ff803aed83a mach_msg + 19
4 CoreFoundation 0x7ff803c07f8f __CFRunLoopServiceMachPort + 145
5 CoreFoundation 0x7ff803c06a10 __CFRunLoopRun + 1365
6 CoreFoundation 0x7ff803c05e51 CFRunLoopRunSpecific + 560
7 HIToolbox 0x7ff80d694f3d RunCurrentEventLoopInMode + 292
8 HIToolbox 0x7ff80d694d4e ReceiveNextEventCommon + 657
9 HIToolbox 0x7ff80d694aa8 _BlockUntilNextEventMatchingListInModeWithFilter + 64
10 AppKit 0x7ff806ca59d8 _DPSNextEvent + 858
11 AppKit 0x7ff806ca4882 -[NSApplication(NSEvent) _nextEventMatchingEventMask:untilDate:inMode:dequeue:] + 1214
12 AppKit 0x7ff806c96ef7 -[NSApplication run] + 586
13 AppKit 0x7ff806c6b111 NSApplicationMain + 817
14 SwiftUI 0x7ff90e03a9fb 0x7ff90dfb4000 + 551419
15 SwiftUI 0x7ff90f0778b4 0x7ff90dfb4000 + 17578164
16 SwiftUI 0x7ff90e9906cf 0x7ff90dfb4000 + 10340047
17 ContinuityCamera 0x10017b49e 0x100175000 + 25758
18 dyld 0x7ff8037d1418 start + 1896
Thread 1:
0 libsystem_pthread.dylib 0x7ff803b27bb0 start_wqthread + 0
Thread 2:
0 libsystem_pthread.dylib 0x7ff803b27bb0 start_wqthread + 0
Thread 3 Crashed:: Dispatch queue: com.apple.cmio.CMIOExtensionProviderHostContext
0 ??? 0x0 ???
1 AVFCapture 0x7ff82045996c StreamAsyncStillCaptureCallback + 61
2 CoreMediaIO 0x7ff813a4358f __94-[CMIOExtensionProviderHostContext captureAsyncStillImageWithStreamID:uniqueID:options:reply:]_block_invoke + 498
3 libxpc.dylib 0x7ff803875b33 _xpc_connection_reply_callout + 36
4 libxpc.dylib 0x7ff803875ab2 _xpc_connection_call_reply_async + 69
5 libdispatch.dylib 0x7ff80398b099 _dispatch_client_callout3 + 8
6 libdispatch.dylib 0x7ff8039a6795 _dispatch_mach_msg_async_reply_invoke + 387
7 libdispatch.dylib 0x7ff803991088 _dispatch_lane_serial_drain + 393
8 libdispatch.dylib 0x7ff803991d6c _dispatch_lane_invoke + 417
9 libdispatch.dylib 0x7ff80399c3fc _dispatch_workloop_worker_thread + 765
10 libsystem_pthread.dylib 0x7ff803b28c55 _pthread_wqthread + 327
11 libsystem_pthread.dylib 0x7ff803b27bbf start_wqthread + 15
Of course, the MacBook Pro is an old device, but Continuity Camera works with the installed Photo Booth app, so it's possible.
Any thoughts on solving this situation would be appreciated.
Regards, Michaela