Reply to SwiftUI FormView not updating after value creation/updating in a SubView
UPDATE: The problem with this sample (test) code was the way I passed the Core Data entity to the Form. My in-development app updates the NSManaged vars correctly and recomputes the derived data, but there's a problem (yet to be resolved) in getting those values back into the Form for display, something to do with the way I've generalised the Form's processing. Apologies for any inconvenience. Regards, Michaela
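For anyone hitting something similar, here's a hedged sketch of the basic pattern involved (the entity and attribute names are hypothetical, not from my actual project): passing the managed object into the form view as an @ObservedObject lets SwiftUI observe changes to its NSManaged vars directly.

```swift
import SwiftUI
import CoreData

// Hypothetical Core Data entity "Item" with an optional "name" attribute.
struct ItemFormView: View {
    // @ObservedObject makes SwiftUI re-render this view when the
    // NSManagedObject's properties change.
    @ObservedObject var item: Item

    var body: some View {
        Form {
            // Bridge the optional NSManaged attribute to a non-optional Binding.
            TextField("Name", text: Binding(
                get: { item.name ?? "" },
                set: { item.name = $0 }
            ))
        }
    }
}
```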
Topic: UI Frameworks SubTopic: SwiftUI
Nov ’24
Reply to Heic format image incompatibility issue
I don't know if it's related, but in a test project I'm using to try out text recognition from photos, Xcode does not recognise .HEIC as a valid extension for images in the Asset catalogue. Changing the extension to .heic (i.e. lowercase) in Finder's Get Info solves the issue. A device/app interoperability issue, methinks. Images from my iPhone and iPad all have the .HEIC extension, which (of course) doesn't get changed to lowercase on AirDrop to my Mac. I'm using Xcode 16.2 beta 2 with iOS devices on 18.2 and my Mac on 15.2. Regards, Michaela
Nov ’24
Reply to New Vision API
I've just done a prototype app for recognising text from photos of product labels, using the new Vision API with Xcode 16.2 beta 2 and images from an iPad Pro (2020, M1 chip) and iPhone 15 Pro. The recognition is very accurate, even correctly recognising neatly handwritten label text. These are the settings I'm using:

    ocrRequest.recognitionLevel = .accurate
    ocrRequest.usesLanguageCorrection = true
    ocrRequest.automaticallyDetectsLanguage = true

Some of the labels are very small (1cm x 2cm), but even then, with the iPhone camera on 2x or 3x and macro mode, the recognition is near perfect. Regards, Michaela
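For context, a hedged sketch of how those settings fit into a complete request (this assumes the Swift-native Vision API introduced alongside iOS 18; imageData is a placeholder for your photo's Data):

```swift
import Vision

// Runs text recognition on image data and returns the best candidate
// string for each piece of text found.
func recognizeLabelText(from imageData: Data) async throws -> [String] {
    var ocrRequest = RecognizeTextRequest()
    ocrRequest.recognitionLevel = .accurate
    ocrRequest.usesLanguageCorrection = true
    ocrRequest.automaticallyDetectsLanguage = true

    let observations = try await ocrRequest.perform(on: imageData)
    // Keep only the single highest-confidence candidate per observation.
    return observations.compactMap { $0.topCandidates(1).first?.string }
}
```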
Topic: Machine Learning & AI SubTopic: General
Nov ’24
Reply to Macro-mode in AVCaptureDevice(custom camera)
I too need this capability, for photographing product labels and OCR text recognition. The small size (and font) of the labels requires a macro function. The Camera app on my iPhone 15 Pro (iOS 18.3) automatically enters macro mode and creates very sharp label images, which OCR accurately. However, within my iOS Swift app with AVCapture there appears to be no way of enabling this behaviour: close focus is not possible, and therefore labels do not OCR properly.
Jan ’25
Reply to Macro-mode in AVCaptureDevice(custom camera)
I solved the problem by explicitly setting the device, as per the below function:

    func getMacroDevice() -> AVCaptureDevice? {
        if let device = AVCaptureDevice.default(.builtInTripleCamera, for: .video, position: .back) {
            return device
        }
        if let device = AVCaptureDevice.default(.builtInDualCamera, for: .video, position: .back) {
            return device
        }
        return AVCaptureDevice.default(for: .video)
    }

I use this function in the session.beginConfiguration() part of the camera functionality to set the device. The order is important, because on my iPhone 15 Pro, if the Dual Camera is the first choice then macro mode doesn't occur. Also, on my M1 iPad Pro, builtInDualCamera and builtInTripleCamera don't apply (the if let fails): the iPad will close-focus using the default camera device. It seems to me that if the default behaviour is to automatically switch, then in my development environment and on my devices the supposed behaviour is not working. Regards, Michaela
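For completeness, a hedged sketch of how I plug the chosen device into the capture session (simplified; real code needs error handling and to run off the main thread):

```swift
import AVFoundation

// Configures the session with whichever device getMacroDevice() selects.
func configureSession(_ session: AVCaptureSession) throws {
    session.beginConfiguration()
    defer { session.commitConfiguration() }

    guard let device = getMacroDevice() else { return }
    let input = try AVCaptureDeviceInput(device: device)
    if session.canAddInput(input) {
        session.addInput(input)
    }
}
```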
Jan ’25
Reply to Macro-mode in AVCaptureDevice(custom camera)
Further to Greg's reply to my solution, I confirm that builtInDualCamera does not provide close focus, whereas builtInDualWideCamera does. I've therefore revised my solution (to support different iPhone / iPad models) as per below:

    func getMacroDevice() -> AVCaptureDevice? {
        if let device = AVCaptureDevice.default(.builtInTripleCamera, for: .video, position: .back) {
            return device
        }
        if let device = AVCaptureDevice.default(.builtInDualWideCamera, for: .video, position: .back) {
            return device
        }
        return AVCaptureDevice.default(for: .video)
    }

I'm still puzzled by the "default behaviour" not working and will explore further at https://developer.apple.com/documentation/avfoundation/avcapturedevice/activeprimaryconstituentdeviceswitchingbehavior

UPDATE: using device = AVCaptureDevice.default(for: .video) and then reading that device's primaryConstituentDeviceSwitchingBehavior returns .unsupported on my iPhone 15 Pro under iOS 18.3. So it seems that our code must specify a constituent-based virtual device, as per my code above; otherwise the default device does not qualify for switching, even if switching is available. Perhaps this is what Greg was getting at with his original answer. Also, the M1 iPad Pro has a builtInDualWideCamera, which is probably its default. Regards, Michaela
Jan ’25
Reply to Macro-mode in AVCaptureDevice(custom camera)
Close focussing requires the use of a virtual device that has some form of wide-angle lens, especially ultra-wide. However, for my application's needs (small labels with small fonts) this can mean that the camera is very close to the subject, blocking illumination and resulting in poorer OCR results. My solution is to use the device's video zoom factor capability (if available), which is a digital zoom feature rather than optical (i.e. it's based on cropping the sensor output). The below code sample is used in the photo preview UI, with reference to the class that manages the camera set-up and delegates ("camera") and to the available device determined by that class instance. Of course, zooming (cropping) too much can also cause OCR results to deteriorate, so it's a case of trial and error. There is a videoMaxZoomFactor read-only property for applicable device formats.

    Slider(
        value: $zoomFactor,
        in: 1.0...4.0,
        step: 0.5
    ) {
        Text("Zoom")
    } minimumValueLabel: {
        Text("1x")
    } maximumValueLabel: {
        Text("4x")
    } onEditingChanged: { _ in
        do {
            try camera.device?.lockForConfiguration()
            camera.device?.ramp(toVideoZoomFactor: self.zoomFactor, withRate: 1.0)
            camera.device?.unlockForConfiguration()
        } catch {
            print("**** Unable to set camera zoom")
        }
    }

Regards, Michaela
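Since videoMaxZoomFactor came up, here's a hedged sketch of clamping the requested zoom to what the device's active format actually supports (the function name is mine; "device" corresponds to the camera.device used above):

```swift
import AVFoundation

// Ramps to the requested zoom factor, clamped to the active format's
// supported range so lockForConfiguration never receives an invalid value.
func setZoom(_ factor: CGFloat, on device: AVCaptureDevice) {
    let maxZoom = device.activeFormat.videoMaxZoomFactor
    let clamped = min(max(factor, 1.0), maxZoom)
    do {
        try device.lockForConfiguration()
        device.ramp(toVideoZoomFactor: clamped, withRate: 1.0)
        device.unlockForConfiguration()
    } catch {
        print("**** Unable to set camera zoom")
    }
}
```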
Feb ’25
Reply to App Store submission validation failed: Missing info.plist value WKApplication
Thanks Ed, but I've just solved the problem. I'd inadvertently saved an archive from this app's build into the app's source directory. On deleting that extraneous archive, validation then complained about a Background Processing plist key (the app uses CoreData - CloudKit sync). Adding a plist entry fixed that error, and now the app has passed validation and been uploaded - but it doesn't yet appear in my iOS apps (for TestFlight testing).
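For reference, a hedged sketch of the sort of Info.plist entry involved (the exact key depends on the validator's message, which I haven't quoted in full; for CoreData - CloudKit sync the usual requirement is the remote notifications background mode):

```xml
<key>UIBackgroundModes</key>
<array>
    <string>remote-notification</string>
</array>
```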
Jan ’26
Reply to Xcode 16.3 beta Predictive Code Completion not working
There is no code completion, predictive or otherwise, when typing code in Xcode: it's as though the feature has been completely turned off.
Feb ’25
Reply to Xcode 16.3 beta Predictive Code Completion not working
I'd tried turning code completion off and back on in Xcode's settings, then reloading the models - that still didn't work. But restarting the Mac thereafter did, so thanks Ed. I should have thought of that. Regards, Michaela
Feb ’25