Hi folks,
I tried to use Apple's text recognition feature via UIAction.captureTextFromCamera(responder:identifier:). It works fine on iPhone and iPad 3rd Gen, but on iPad 4th Gen the action never gets triggered. Kindly provide assistance.
if #available(iOS 15.0, *), self.canPerformAction(#selector(captureTextFromCamera(_:)), withSender: self) {
    // System-provided action that launches the camera-based text scanner.
    let cameraAction = UIAction.captureTextFromCamera(responder: self, identifier: nil)
    let cameraButton = UIButton(type: .custom)
    cameraButton.addAction(cameraAction, for: .touchUpInside)
    cameraButton.setTitle("Scan-Text", for: .normal)
    view.addSubview(cameraButton)
    // Then set constraints
}
Note: the button that triggers the action is visible; it's only the action that never fires.
Device specs:
Name: iPad Air 4th Gen
OS: iOS 16
I tried to implement interaction in a Live Activity. Since we don't have a force-reload mechanism like WidgetCenter.shared.reloadTimelines for widgets, how do I update the UI (for example, add or remove a view) after performing the intent action?
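For reference, the direction I've been exploring (a minimal sketch assuming iOS 17's LiveActivityIntent and a hypothetical TimerAttributes type): pushing a new content state from perform() seems to be what re-renders the Live Activity, since there is no separate reload call.

import ActivityKit
import AppIntents

// Hypothetical attributes type, for illustration only.
struct TimerAttributes: ActivityAttributes {
    struct ContentState: Codable, Hashable {
        var isPaused: Bool
    }
}

@available(iOS 17.0, *)
struct TogglePauseIntent: LiveActivityIntent {
    static var title: LocalizedStringResource = "Toggle Pause"

    func perform() async throws -> some IntentResult {
        // Pushing a new content state is what redraws the Live Activity;
        // views are added/removed by branching on this state in the widget UI.
        for activity in Activity<TimerAttributes>.activities {
            let newState = TimerAttributes.ContentState(isPaused: !activity.content.state.isPaused)
            await activity.update(ActivityContent(state: newState, staleDate: nil))
        }
        return .result()
    }
}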
Topic: App & System Services
SubTopic: Notifications
Tags: ActivityKit, wwdc2023-10185, wwdc2023-10194
Is it possible to get the enabled status of the app's Live Activities through areActivitiesEnabled inside targets other than the main target? It seems to always return true.
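Roughly what I'm doing (a minimal sketch; the prints are just for illustration):

import ActivityKit

let info = ActivityAuthorizationInfo()
print("areActivitiesEnabled:", info.areActivitiesEnabled)

// Observing changes instead of reading once, in case the one-shot value
// is stale inside the extension process.
Task {
    for await enabled in info.activityEnablementUpdates {
        print("enablement changed:", enabled)
    }
}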
Topic: App & System Services
SubTopic: Notifications
Tags: ActivityKit, wwdc2023-10185, wwdc2023-10194
I attempted to utilize the Background Assets feature for an iOS app. While debugging, I employed the following command to trigger the installation event:
xcrun backgroundassets-debug -b <bundleID> -s --app-install -d <Device ID>
This command worked flawlessly on an iPhone.
However, when I attempted to trigger the installation event on a Mac, I encountered the following error message:
The requested device to send simulation events to is not available.
Verify that the device is connected to this Mac.
Please note that the xcrun backgroundassets-debug -l command only displays a list of connected devices; the Mac itself is not in that list.
To enable editing in the elevated Tab Bar or sidebar on iPadOS, we need to use UITabBar. However, using UITabBar restricts reordering in compact mode with the bottom tab bar. Instead of showing the tabs, the editing view only displays the message 'Drag the icons to organize tabs.' How can we resolve this issue?
Find demo project here
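For context, a minimal sketch of the setup, with signatures written from memory and LibraryViewController as a placeholder; as far as I know the reordering opt-in lives on UITabGroup.allowsReordering, and per-tab hiding on UITab.allowsHiding:

import UIKit

final class LibraryViewController: UIViewController {}  // placeholder

let tabBarController = UITabBarController()
if #available(iOS 18.0, *) {
    tabBarController.mode = .tabSidebar   // elevated tab bar + sidebar on iPad
    let library = UITab(title: "Library",
                        image: UIImage(systemName: "books.vertical"),
                        identifier: "library") { _ in LibraryViewController() }
    library.allowsHiding = true   // opt the tab into show/hide during editing
    tabBarController.tabs = [library]
}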
In VoiceOver, when using Group Navigation style, the cursor first focuses on the semantic group. To navigate inside the group, a two-finger swipe (left or right) can be used. This behavior works for default containers like the Navigation Bar, Tab Bar, and Tool Bar.
How can I achieve the same behavior for a custom view?
I tried setting accessibilityContainerType = .semanticGroup, but it only works for Mac Catalyst. Is there an equivalent approach for iOS?
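Roughly what I tried (CardView is a placeholder):

import UIKit

// The goal: VoiceOver treats this view as one group that is entered
// with a two-finger swipe when Group Navigation is active.
final class CardView: UIView {
    func configureAccessibility() {
        isAccessibilityElement = false
        accessibilityElements = subviews
        // Documented on iOS, but the group-navigation effect appears
        // to apply only on Mac Catalyst in practice.
        accessibilityContainerType = .semanticGroup
    }
}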
I have some doubts about how VoiceOver handles focus when the screen updates.
When a new UIViewController is pushed onto a UINavigationController or presented modally, how does VoiceOver decide which element to focus on? Is there a way to control or customize this behavior?
In a UISplitViewController, when an item is selected in the primary view controller, the focus should shift to the relevant content in the secondary view controller. How can we ensure that VoiceOver correctly moves focus to the right element in the secondary panel?
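The one mechanism I'm aware of, sketched below with placeholder names: posting a .screenChanged notification with the target element as its argument moves the VoiceOver cursor explicitly.

import UIKit

final class DetailViewController: UIViewController {
    private let titleLabel = UILabel()

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        // Without an argument VoiceOver picks a default element; passing one
        // directs focus explicitly, e.g. into the secondary pane of a split view.
        UIAccessibility.post(notification: .screenChanged, argument: titleLabel)
    }
}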
I tried to add an iOS 17 availability check inside my WidgetBundle, but the widget extension keeps crashing with "Thread 1: Swift runtime failure: Unexpectedly found nil while unwrapping an Optional value." After removing the iOS 17 check it works fine.
Is there any way to provide a version check inside a WidgetBundle?
I'm trying to cast the error thrown by TranslationSession.translations(from:) to Translation.TranslationError. However, the app crashes at runtime whenever Translation.TranslationError is referenced in the project.
Environment:
iOS Version: 18.1 beta
Xcode Version: 16 beta
dyld[14615]: Symbol not found: _$s11Translation0A5ErrorVMa
Referenced from: <3426152D-A738-30C1-8F06-47D2C6A1B75B> /private/var/containers/Bundle/Application/043A25BC-E53E-4B28-B71A-C21F77C0D76D/TranslationAPI.app/TranslationAPI.debug.dylib
Expected in: /System/Library/Frameworks/Translation.framework/Translation
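The failing pattern, reduced to a sketch (the plumbing around TranslationSession and the names here are placeholders):

import Translation

@available(iOS 18.0, *)
func translate(_ requests: [TranslationSession.Request],
               using session: TranslationSession) async {
    do {
        _ = try await session.translations(from: requests)
    } catch let error as Translation.TranslationError {
        // Merely referencing Translation.TranslationError triggers the
        // dyld "Symbol not found" crash above on the iOS 18.1 beta.
        print("translation failed:", error)
    } catch {
        print("unexpected error:", error)
    }
}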
Topic: Machine Learning & AI
SubTopic: Core ML
Tags: ML Compute, Natural Language, Live Text, Apple Intelligence
All errors in TranslationError return the same error code, making it difficult to differentiate between them. How can this issue be resolved?
Topic: Machine Learning & AI
SubTopic: Core ML
Tags: Swift Student Challenge, iOS, Machine Learning, Core ML
VoiceOver reads out all visible content on the screen, which is essential for visually challenged users. However, this raises a privacy concern—what if a user accidentally focuses on sensitive information, like a bank account password, and it gets read aloud?
How can developers prevent VoiceOver from exposing confidential data while still maintaining accessibility? Are there best practices or recommended approaches to handle such scenarios effectively?
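One approach I've been considering, assuming the value sits in a plain UILabel (secure text fields are already masked by the system when isSecureTextEntry is set): override the spoken label with a redacted announcement.

import UIKit

let accountLabel = UILabel()
accountLabel.text = "Account: 1234-5678-9012"              // visible on screen
accountLabel.accessibilityLabel = "Account number hidden"  // spoken instead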
I’m trying to add the .header accessibility trait to a UISegmentedControl so that VoiceOver recognizes it accordingly. However, setting the trait using the following code doesn’t seem to have any effect:
segmentControl.accessibilityTraits = segmentControl.accessibilityTraits.union(.header)
Even after applying this, VoiceOver doesn’t announce it as a header. Is there any workaround or recommended approach to achieve this?
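One workaround I've seen suggested, though it walks the control's subview hierarchy and is therefore fragile rather than a supported API:

// Each segment is exposed as its own accessibility element, so traits set
// on the control itself are not announced; applying them per segment may help.
for segment in segmentControl.subviews {
    segment.accessibilityTraits.insert(.header)
}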
SwiftUI provides the accessibilityCustomContent(_:_:) modifier to add additional accessibility information for an element. However, I couldn’t find a similar approach in UIKit.
Is there a way to achieve this in UIKit?
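The closest UIKit counterpart I'm aware of is the AXCustomContentProvider protocol from the Accessibility framework (iOS 14+); a minimal sketch with a hypothetical RecipeCell:

import UIKit
import Accessibility

final class RecipeCell: UITableViewCell, AXCustomContentProvider {
    var accessibilityCustomContent: [AXCustomContent]! = []
}

// Usage: attach extra details that VoiceOver exposes on demand,
// or speaks automatically when importance is .high.
func configure(_ cell: RecipeCell) {
    let rating = AXCustomContent(label: "Rating", value: "4 stars")
    rating.importance = .high
    cell.accessibilityCustomContent = [rating]
}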
I have a view dynamically overlaid on a UITableView with proper padding (added when certain conditions are met). When VoiceOver focuses on a cell beneath this overlay, the focused element does not scroll into view. I’ve noticed similar behavior in Apple’s first-party Podcasts app.
Please find the attached image for reference. How can I resolve this issue and ensure VoiceOver scrolls the focused cell into view?
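The mitigation I'm experimenting with, assuming the overlay is pinned to the bottom (overlayView is the hypothetical overlay): reserve its height in the table's insets so VoiceOver's scroll-into-view has room.

import UIKit

func accommodateOverlay(_ overlayView: UIView, above tableView: UITableView) {
    // VoiceOver scrolls focused elements into the visible region as defined
    // by the scroll view's insets, so reserving the overlay's height should
    // keep focused cells from landing underneath it.
    let height = overlayView.bounds.height
    tableView.contentInset.bottom = height
    tableView.verticalScrollIndicatorInsets.bottom = height
}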
I’m trying to understand the best practice for assigning accessibilityTraits to a UITableViewCell that users can select from a list of options.
In Apple’s first-party apps like Settings, I’ve noticed an inconsistent approach—some cells use the Button trait, while others simply announce the label along with the Selected trait when applicable, without any additional role like Button or Adjustable.
So my question is:
What is the most appropriate accessibility trait to use for a selectable table view cell that updates a selection (like a settings option)?
Is using .button the right approach, or should we rely solely on .selected?
Is there any user experience guideline from Apple that recommends one over the other?
Would love to hear how others handle this for clarity and consistency in VoiceOver behavior.
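For concreteness, the pattern I currently use, sketched with a hypothetical selectedIndex: a checkmark accessory plus the .selected trait, and no .button trait.

import UIKit

final class OptionsViewController: UITableViewController {
    private let options = ["Small", "Medium", "Large"]
    private var selectedIndex = 0   // hypothetical current selection

    override func tableView(_ tableView: UITableView,
                            cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        let cell = tableView.dequeueReusableCell(withIdentifier: "Option", for: indexPath)
        cell.textLabel?.text = options[indexPath.row]
        let isChosen = indexPath.row == selectedIndex
        cell.accessoryType = isChosen ? .checkmark : .none
        // VoiceOver announces "selected" for the chosen row; no .button trait.
        cell.accessibilityTraits = isChosen ? [.selected] : []
        return cell
    }
}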