Explore best practices for creating inclusive apps for users of Apple accessibility features and users from diverse backgrounds.

Posts under Accessibility & Inclusion topic

Clarification on Color Path Recommendation (Green, Yellow, Orange) in Wallet In-App Provisioning
Hi, I’ve been reviewing the Apple Wallet provisioning documentation (Getting Started with Apple Pay In-App Provisioning: Verification, Security, Wallet Extensions) and had a few questions regarding the color path recommendation (Green, Yellow, Orange, Red) returned during the in-app provisioning flow:
• Who determines the color path: Apple directly, the Payment Network Operator (PNO), or both?
• What criteria are used to determine the color path (e.g., device info, Apple ID reputation, past provisioning attempts)?
• At what point in the provisioning flow is the color path recommendation received? Is it included in the response after the PKAddPaymentPassRequest is submitted? Is it accessible through any specific property or callback in the delegate method?
Additionally, for the Orange Path with Reason Code 0G, I understand that in-app verification is not allowed and must be handled via tenured channels (e.g., SMS/email). Can you confirm whether this logic still applies for requests initiated from within the issuer’s iOS app? I would appreciate any clarification or pointers to related documentation (a sketch of where the delegate sits in this flow follows below).
0 replies · 0 boosts · 109 views · May ’25
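
For context on the provisioning question above, here is a minimal Swift sketch of the in-app provisioning delegate. It assumes (this is not confirmed by the documentation quoted in the post) that the color-path recommendation is decided server-side during the issuer/PNO eligibility exchange rather than being exposed through any PassKit property or delegate callback; the IssuerServer type and its fields are hypothetical placeholders.

```swift
import PassKit

// Hypothetical coordinator for in-app provisioning. Any color-path decision
// is assumed to surface in the issuer/PNO server response, not in PassKit.
struct IssuerPayload {
    let encryptedPassData: Data
    let activationData: Data
    let ephemeralPublicKey: Data
}

struct IssuerServer {
    // Hypothetical call to the issuer host, which talks to the PNO.
    func encryptPassData(certificates: [Data], nonce: Data, nonceSignature: Data,
                         completion: @escaping (IssuerPayload) -> Void) { /* network call */ }
}

final class ProvisioningCoordinator: NSObject, PKAddPaymentPassViewControllerDelegate {
    let issuerServer = IssuerServer()

    func addPaymentPassViewController(_ controller: PKAddPaymentPassViewController,
                                      generateRequestWithCertificateChain certificates: [Data],
                                      nonce: Data, nonceSignature: Data,
                                      completionHandler handler: @escaping (PKAddPaymentPassRequest) -> Void) {
        // The certificates and nonce are forwarded to the issuer host; any
        // color-path recommendation would come back in that server response.
        issuerServer.encryptPassData(certificates: certificates, nonce: nonce,
                                     nonceSignature: nonceSignature) { payload in
            let request = PKAddPaymentPassRequest()
            request.encryptedPassData = payload.encryptedPassData
            request.activationData = payload.activationData
            request.ephemeralPublicKey = payload.ephemeralPublicKey
            handler(request)
        }
    }

    func addPaymentPassViewController(_ controller: PKAddPaymentPassViewController,
                                      didFinishAdding pass: PKPaymentPass?, error: Error?) {
        // Provisioning result; no color-path value is exposed here either.
        controller.dismiss(animated: true)
    }
}
```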
False 3.1.1 Rejection: Real-World Dues Payments App
Hello everyone, our community dues payment app only facilitates real-world maintenance-dues payments directly to property managers’ bank accounts. However, during testing it was likely flagged by the AI-driven review system for a metadata criterion and rejected under Guideline 3.1.1 (“Paid digital content must use IAP”). Meanwhile, hundreds of similar apps remain live on the App Store using the exact same model:
• The app is completely free
• No digital content or subscriptions are sold
• Dues payments are made via bank transfer or credit card directly to the manager
Has anyone else encountered this? How did you overcome the metadata check in the AI-driven review process? Thanks!
0 replies · 0 boosts · 102 views · May ’25
Defining boundaries of inline dialogs for VO users
Hello, I previously submitted a question to clarify which components have accessibility APIs that trigger haptics for VoiceOver users: https://developer.apple.com/forums/thread/773182. That question stems from a more direct one about specific components: do tablists and disclosures natively include haptics, a screen reader hint, or other state or properties that tell screen reader users where the component begins or ends? In some web experiences there is screen reader hint text stating "end of..." or "entering" as a way to define the boundaries of these inline dialogs. I asked about haptics in the prior thread because I do not recall a natively implemented version of this, apart from some haptic cues I have not experienced consistently, so I am not sure whether those are an intended native Swift implementation or something custom (a small example of adding such boundary cues manually appears after this entry).
0 replies · 0 boosts · 110 views · May ’25
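
As a point of comparison, here is a minimal SwiftUI sketch, assuming no built-in "end of group" announcement exists and the boundary cues are added manually; the strings and the DisclosureGroup content are illustrative only, not a documented pattern.

```swift
import SwiftUI

// Manually announced boundaries for an expandable (disclosure) region.
struct ShippingOptions: View {
    @State private var isExpanded = false

    var body: some View {
        DisclosureGroup("Shipping options", isExpanded: $isExpanded) {
            VStack(alignment: .leading) {
                Text("Standard: 5-7 days")
                Text("Express: 1-2 days")
                    // Hypothetical boundary cue on the last element so VoiceOver
                    // users hear where the expanded content ends.
                    .accessibilityHint("End of shipping options")
            }
        }
        .onChange(of: isExpanded) { _, expanded in
            if expanded {
                // Announce entry into the expanded region when it opens.
                AccessibilityNotification.Announcement("Entering shipping options").post()
            }
        }
    }
}
```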
Seeking API Support for Marking Substrings as Headings in NSTextView for VoiceOver
I'm developing a document editor for macOS using AppKit, which supports structured content such as titles and multiple heading levels—similar to what you see in the Pages app. I'm looking for a way to programmatically mark a specific substring within an NSTextView as a heading, so that VoiceOver can recognize it and announce it appropriately (e.g., by saying “heading” before reading the text). This would be similar in spirit to how NSAccessibilityLinkTextAttribute works for links. Is there an existing accessibility text attribute or recommended approach to achieve this behavior for headings? If not, I’d appreciate any guidance or suggestions on how best to implement this in a VoiceOver-friendly way. Thank you in advance for your help! Best regards,
0 replies · 0 boosts · 100 views · May ’25
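
For the question above, one mechanism that does exist in AppKit is exposing headings through an accessibility custom rotor, so VoiceOver users can navigate between them; whether a text attribute can make VoiceOver announce "heading" inline remains the open question. The following is a rough sketch under that assumption, with headingRanges as a hypothetical store of heading character ranges maintained by the editor, and the exact rotor plumbing may need adjusting for a real document model.

```swift
import AppKit

// Rough sketch: expose document headings to VoiceOver via a custom rotor.
final class DocumentTextView: NSTextView, NSAccessibilityCustomRotorItemSearchDelegate {

    // Hypothetical app-level state: character ranges of headings in the document.
    var headingRanges: [NSRange] = []

    override func viewDidMoveToWindow() {
        super.viewDidMoveToWindow()
        setAccessibilityCustomRotors([
            NSAccessibilityCustomRotor(rotorType: .heading, itemSearchDelegate: self)
        ])
    }

    func rotor(_ rotor: NSAccessibilityCustomRotor,
               resultFor searchParameters: NSAccessibilityCustomRotor.SearchParameters)
        -> NSAccessibilityCustomRotor.ItemResult? {

        let forward = searchParameters.searchDirection == .next
        let current = searchParameters.currentItem?.targetRange.location
            ?? (forward ? -1 : Int.max)

        // Pick the next or previous heading relative to the current position.
        let match = forward
            ? headingRanges.first(where: { $0.location > current })
            : headingRanges.last(where: { $0.location < current })
        guard let range = match else { return nil }

        let result = NSAccessibilityCustomRotor.ItemResult(targetElement: self)
        result.targetRange = range
        result.customLabel = (string as NSString).substring(with: range)
        return result
    }
}
```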
Focus issues with ScrollView in iOS 18
When using an app via an external keyboard, FocusState and .focused worked fine through iOS 17, and vertical-axis text fields were also accessible without any issues. After the iOS 18 update, adding the focused modifier removes elements from the focus order of the external keyboard. One such example: when a button using the focused modifier and @FocusState is inside a ScrollView and that view is opened via a NavigationLink, the button is not reachable via a Bluetooth (external) keyboard. TextEditor / vertical-axis TextFields also seem to be affected in the external-keyboard focus order when placed inside a ScrollView. Is this a known iOS 18 issue with ScrollView, and is there any tip to get this fixed? Sample code that reproduces the issue:

```swift
import SwiftUI

struct ContentView: View {
    @State private var showBottomSheet: Bool = false
    @State private var goToNextView: Bool = false
    @FocusState private var focused: Bool
    @AccessibilityFocusState private var voFocused: Bool

    var body: some View {
        NavigationView {
            VStack {
                Text("Hello, world!")

                // This button works fine with a Bluetooth keyboard in all versions
                Button("Trigger a bottomsheet") {
                    showBottomSheet = true
                }
                .focused($focused)
                .accessibilityFocused($voFocused)

                Button("Goto another view") {
                    goToNextView = true
                }

                NavigationLink(
                    destination: View2(),
                    isActive: $goToNextView
                ) {
                    EmptyView()
                }
                .accessibility(hidden: true)
            }
            .sheet(isPresented: $showBottomSheet, onDismiss: {
                focused = true
                voFocused = true
            }, content: {
                VStack {
                    Text("Hello World! I'm in a bottomsheet")
                    Button("Close me") {
                        showBottomSheet = false
                    }
                }
            })
            .padding()
        }
    }
}

#Preview {
    ContentView()
}

struct View2: View {
    @FocusState private var focused: Bool
    @AccessibilityFocusState private var voFocused: Bool
    @State private var showBottomSheet: Bool = false

    var body: some View {
        ScrollView {
            VStack {
                Text("check")

                // In iOS 18, this button doesn't get focused with a Bluetooth / external keyboard.
                // The issue occurs when these three combine in iOS 18: a button using FocusState,
                // inside a view that has a ScrollView, opened via NavigationLink.
                Button("Trigger a bottomsheet") {
                    showBottomSheet = true
                }
                .focused($focused)
                .accessibilityFocused($voFocused)

                Button("Test button") { }
            }
            .sheet(isPresented: $showBottomSheet, onDismiss: {
                focused = true
                voFocused = true
            }, content: {
                VStack {
                    Text("Hello World! I'm in a bottomsheet")
                    Button("Close me") {
                        showBottomSheet = false
                    }
                }
            })
            .padding()
        }
    }
}
```
0 replies · 1 boost · 583 views · Feb ’25
Using WebSocket for BCI Click Input in VisionOS - FocusState vs. System-Level Limitations
Hi everyone, my team and I are developing an accessibility-focused visionOS app (MindTap) as part of a university project, aiming to support individuals with Locked-In Syndrome using Brain-Computer Interface (BCI) signals to trigger interactions (e.g., tapping) within the Apple Vision Pro environment.
Problem 1: Simulating eye tracking in the Simulator. We are testing onHover with "Send pointer to the device" under I/O > Input in the simulator, and while it mostly works (a bit laggy), we found that onHover won't function on actual Vision Pro hardware. From what I understand, we should be using FocusState for proper gaze interaction, but testing this requires the physical device. Is there any workaround or official Apple-recommended way to simulate focus-based gaze detection without a real Vision Pro?
Problem 2: A WebSocket-triggered "click" doesn't work outside the app. We successfully use a WebSocket to send a custom signal (a "1" from the brain-signal device) to trigger an action inside our app. However, when the user opens a third-party app like Apple News, the WebSocket-triggered "click" no longer works. We suspect this is due to sandbox restrictions or a lack of system-level permissions. Is it possible in any way to:
• Trigger interaction events outside the app using custom input (like BCI via WebSocket)?
• Access system-wide click/tap simulation APIs from within visionOS apps?
• Integrate this with accessibility services (like Voice Control or AssistiveTouch)?
We'd appreciate any official guidance or tips from others building similar accessibility apps with alternative input methods in visionOS (the in-app half of this flow is sketched below). Thanks in advance for any insight you can provide!
0 replies · 0 boosts · 109 views · Apr ’25
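
A minimal sketch of the in-app half of this setup, assuming (as the post does) that gaze focus is surfaced via @FocusState and that the BCI "click" arrives over a WebSocket. BCIWebSocketClient, its URL, and the "1" message format are hypothetical placeholders; system-wide injection into other apps is not covered here.

```swift
import SwiftUI
import Combine
import Foundation

// Hypothetical in-app wiring: act on a BCI "click" only while gaze focus rests here.
struct MindTapTargetView: View {
    @FocusState private var isGazed: Bool
    @StateObject private var bci = BCIWebSocketClient()

    var body: some View {
        Button("Select") { print("Standard tap") }
            .focused($isGazed)
            .onChange(of: bci.clickCount) { _, _ in
                // Only treat the BCI signal as a tap when this element has gaze focus.
                if isGazed { print("BCI-triggered activation inside the app") }
            }
    }
}

// Hypothetical WebSocket client; the URL and message format are placeholders.
final class BCIWebSocketClient: ObservableObject {
    @Published var clickCount = 0
    private var task: URLSessionWebSocketTask?

    init() {
        task = URLSession.shared.webSocketTask(with: URL(string: "ws://localhost:8080")!)
        task?.resume()
        receive()
    }

    private func receive() {
        task?.receive { [weak self] result in
            if case .success(.string(let text)) = result, text == "1" {
                DispatchQueue.main.async { self?.clickCount += 1 }
            }
            self?.receive()  // keep listening for the next signal
        }
    }
}
```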
MAS restrictions on file read-write for desktop Electron apps
We have an Electron app developed for Mac. We would like to restore user data previously saved in Downloads once the user installs the app from the store and launches it for the first time. However, MAS has restrictions around "com.apple.security.files.downloads.read-write". We have enabled user access in the entitlements file and request user permission before access. What options can be used to auto-restore the data from Downloads? (A sketch of the native security-scoped bookmark mechanism follows below.)
0 replies · 0 boosts · 93 views · Apr ’25
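
For context, this is a minimal sketch of the native macOS mechanism sandboxed apps typically use for this: ask the user once via an open panel (which relies on the user-selected read-write entitlement) and store a security-scoped bookmark for later launches. Electron exposes the same idea through its own MAS bookmark APIs; the helper names here are illustrative.

```swift
import Foundation
import AppKit

enum DownloadsAccess {
    static let bookmarkKey = "downloadsBookmark"   // hypothetical defaults key

    // Ask the user to grant access to Downloads and persist a bookmark.
    static func requestAccessAndSaveBookmark() {
        let panel = NSOpenPanel()
        panel.canChooseDirectories = true
        panel.canChooseFiles = false
        panel.directoryURL = FileManager.default.urls(for: .downloadsDirectory,
                                                      in: .userDomainMask).first
        panel.message = "Select your Downloads folder to restore previous data"
        guard panel.runModal() == .OK, let url = panel.url else { return }

        // Security-scoped bookmark so access survives relaunches.
        if let bookmark = try? url.bookmarkData(options: .withSecurityScope,
                                                includingResourceValuesForKeys: nil,
                                                relativeTo: nil) {
            UserDefaults.standard.set(bookmark, forKey: bookmarkKey)
        }
    }

    // On later launches, resolve the bookmark and start security-scoped access.
    static func restoreAccess() -> URL? {
        guard let data = UserDefaults.standard.data(forKey: bookmarkKey) else { return nil }
        var stale = false
        guard let url = try? URL(resolvingBookmarkData: data,
                                 options: .withSecurityScope,
                                 relativeTo: nil,
                                 bookmarkDataIsStale: &stale),
              url.startAccessingSecurityScopedResource() else { return nil }
        // Caller must balance with url.stopAccessingSecurityScopedResource().
        return url
    }
}
```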
Why does the macOS window sharing indicator appear for some windows but not others?
On recent versions of macOS, when a window is being shared (via the system screen-capture APIs), the OS sometimes shows a small “shared window” badge in the title bar. I’ve noticed that this indicator is not consistent:
• For some windows, the badge reliably appears when they are being shared.
• For other windows, the badge never appears, even though the window is actively shared.
In particular, windows that use a standard system title bar seem to show the indicator more often, while windows with custom-drawn or non-standard chrome do not. My questions are:
• What are the exact conditions under which macOS decides to draw the “shared window” indicator in a window’s title bar?
• Is this strictly tied to certain NSWindow styles or masks (e.g. titled vs borderless)?
• Is there any API or flag I can use to detect programmatically whether a given window will display this system indicator when shared?
0 replies · 0 boosts · 1.1k views · Sep ’25
Custom prediction panel not working in Google Docs
I’m working on a macOS Accessibility setup for a French-speaking user and I’ve hit a wall. (I’m not a developer and I’m trying to help my kid with dyslexia.) I successfully built a custom word prediction panel using the Panel Editor (Keyboard) in macOS Accessibility > Keyboard > Accessibility Keyboard. Here’s what I have so far:
• The prediction panel works system-wide: I can use it to type in Finder, Safari, Notes, TextEdit, and even browser search bars.
• The panel appears above all applications and suggestions show up correctly.
• However, it does not work inside Google Docs (tested in Chrome, Safari, and Firefox). Selecting a word from the panel does nothing in the Docs editor.
I suspect this is because:
• Google Docs does not use a standard macOS text input field.
• Docs is a web app that relies on custom JavaScript editors, contentEditable elements, and canvas rendering, so macOS Accessibility APIs (AXTextField, AXInsertText, etc.) don’t register or inject text events.
• Accessibility tools like the Accessibility Keyboard rely on native macOS text input methods, which don’t hook into Google Docs’ custom editor.
Important: I’m not a programmer. I’d like to know if there is an easy fix or option in macOS, Google Chrome, or Google Docs that would make my custom prediction panel work, before going into custom development.
Technical setup:
• MacBook Air (M2, 2022)
• RAM: 8 GB
• macOS: Sequoia 15.3.1
• Language: French (system and keyboard)
• Accessibility Keyboard: Enabled via Settings > Accessibility > Keyboard
• Custom panel: Built using Panel Editor (Keyboard), named “Philemon Prédiction”
• Browsers tested: Chrome, Safari, Firefox (same issue)
• Behavior: Panel is visible, suggestions appear, but inserting text does nothing in Google Docs
Has anyone worked around this limitation? Is there a simple setting, workaround, or accessibility option to bridge macOS Accessibility input with Google Docs’ editor? Thanks a lot!
0 replies · 1 boost · 933 views · Aug ’25
iOS 26 regression: `DeviceActivityEvent`: `eventDidReachThreshold` called immediately (instead of waiting till threshold is reached)
Hello Albert! I am experiencing some strange bugs around DeviceActivityEvents (part of the DeviceActivity framework) on iOS 26 / iOS 26.1 / iOS 26.2 beta. When creating a DeviceActivityEvent, we can assign a threshold and applicationTokens. The idea is that after the user has spent the threshold amount of time in those apps, eventDidReachThreshold() is called. The property includesPastActivity is set to false. On iOS 26, however, it happens quite often (and quite reliably after updating to a new beta seed) that eventDidReachThreshold() is called immediately (after a couple of seconds) instead of waiting for the threshold to be met. Is anyone else seeing similar issues on iOS 26 / iOS 26.1 / iOS 26.2 beta? The only workaround I have found is to ask users to revoke and re-grant Screen Time permissions. This only holds for about two weeks, though, or at most until the next iOS 26 beta update is installed, so unfortunately it is not a permanent solution. Feedback (incl. sysdiagnoses and a sample project) is filed under FB18061981 and FB18927456. One of our users has filed their own feedback request as well: FB20817853. Thanks a lot for any help on this! (The monitoring setup in question is sketched below.)
0 replies · 0 boosts · 288 views · 2w
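
A minimal sketch of the setup being described, assuming selectedApps comes from a FamilyActivityPicker selection; the activity/event names, schedule, and 30-minute threshold are illustrative only.

```swift
import DeviceActivity
import ManagedSettings

// Register a usage-threshold event for the selected apps.
func startThresholdMonitoring(selectedApps: Set<ApplicationToken>) throws {
    let schedule = DeviceActivitySchedule(
        intervalStart: DateComponents(hour: 0, minute: 0),
        intervalEnd: DateComponents(hour: 23, minute: 59),
        repeats: true
    )

    // eventDidReachThreshold should only fire after 30 minutes of usage, and
    // includesPastActivity: false means usage before startMonitoring should not count.
    let event = DeviceActivityEvent(
        applications: selectedApps,
        categories: [],
        webDomains: [],
        threshold: DateComponents(minute: 30),
        includesPastActivity: false
    )

    try DeviceActivityCenter().startMonitoring(
        DeviceActivityName("daily"),
        during: schedule,
        events: [DeviceActivityEvent.Name("thirtyMinutes"): event]
    )
}

// In the DeviceActivityMonitor app extension:
final class Monitor: DeviceActivityMonitor {
    override func eventDidReachThreshold(_ event: DeviceActivityEvent.Name,
                                         activity: DeviceActivityName) {
        super.eventDidReachThreshold(event, activity: activity)
        // On the iOS 26 betas described above, this reportedly fires within
        // seconds of startMonitoring instead of after the 30-minute threshold.
    }
}
```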
App Store Connect – “Unable to Handle This Request” Error
Hello, I'm currently unable to access App Store Connect. When I try to open https://appstoreconnect.apple.com, I receive the following error message: “appstoreconnect.apple.com is currently unable to handle this request.” I’ve tried the following steps, but the issue persists:
• Cleared browser cache and cookies
• Tried different browsers (Safari, Chrome)
• Attempted from multiple devices and networks
Is this a known issue or is there any workaround available? Would appreciate any help or update on the current status. Thank you,
0 replies · 0 boosts · 103 views · Jun ’25
The accessibility app keeps opening by itself on my iPhone 12 Pro. Can anyone please help me?
It’s very annoying: on my iPhone 12 Pro, the accessibility app keeps opening by itself with the microphone on, showing a blank screen, and every time I close it, it just reopens. I don’t know why it keeps doing this, but it drives me crazy. Does anyone know what else to do? I’m also on the iOS 26 beta, but it was doing this even on the previous update.
0 replies · 0 boosts · 105 views · Jun ’25
Default Voices for AVSpeechUtterance
It appears iOS only comes with low-quality voices installed and requires the user to go into Settings to download higher-quality voices for use with AVSpeechUtterance. There doesn't seem to be any API that can make this process easier for the app user. Is there a way / API that would allow an app to download and use a higher-quality voice? Will Apple ever install higher-quality voices by default? We really want to use the text-to-speech API in iOS, but the very high amount of user friction required to get high-quality voices is stopping us. I would appreciate a response. Thanks. (A snippet for selecting the best already-installed voice follows below.)
0 replies · 0 boosts · 708 views · Sep ’25
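
A small sketch of what is possible today, assuming the app can only select from voices the user has already downloaded; as the post notes, there is no known API to trigger the download itself.

```swift
import AVFoundation

// Keep the synthesizer alive for the duration of speech.
let synthesizer = AVSpeechSynthesizer()

// Prefer the highest-quality voice already installed for the given language.
func bestAvailableVoice(for language: String = "en-US") -> AVSpeechSynthesisVoice? {
    let candidates = AVSpeechSynthesisVoice.speechVoices()
        .filter { $0.language == language }
    return candidates.first { $0.quality == .premium }
        ?? candidates.first { $0.quality == .enhanced }
        ?? candidates.first
}

func speak(_ text: String) {
    let utterance = AVSpeechUtterance(string: text)
    utterance.voice = bestAvailableVoice()
    synthesizer.speak(utterance)
}
```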
Handling VoiceOver Focus When Screen Changes (Push, Present, and SplitViewController)
I have some doubts about how VoiceOver handles focus when the screen updates. When a new UIViewController is pushed onto a UINavigationController or presented modally, how does VoiceOver decide which element to focus on? Is there a way to control or customize this behavior? In a UISplitViewController, when an item is selected in the primary view controller, the focus should shift to the relevant content in the secondary view controller. How can we ensure that VoiceOver correctly moves focus to the right element in the secondary panel?
0 replies · 0 boosts · 137 views · Apr ’25
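
For the questions above, one commonly used hook is posting a screenChanged notification with a target element after the transition completes; this is a minimal sketch, with DetailViewController and headingLabel as hypothetical names.

```swift
import UIKit

// Direct VoiceOver focus to the screen's heading after a push, modal
// presentation, or split-view selection, instead of the default element.
final class DetailViewController: UIViewController {
    private let headingLabel = UILabel()

    override func viewDidLoad() {
        super.viewDidLoad()
        headingLabel.text = "Order details"
        headingLabel.accessibilityTraits = .header
        headingLabel.frame = CGRect(x: 20, y: 100, width: 300, height: 40)
        view.addSubview(headingLabel)
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        // .screenChanged tells VoiceOver a new screen appeared and moves focus to
        // the element passed as the argument; .layoutChanged is the lighter variant
        // often used when only part of the screen (e.g. a split view's secondary
        // column) has been replaced.
        UIAccessibility.post(notification: .screenChanged, argument: headingLabel)
    }
}
```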
Attaching procedural audio to an ARKit SCNNode
I’m developing an ARKit application where I aim to attach procedurally generated audio to detected planes in the environment. While using a static audio file with SCNAudioSource and SCNAudioPlayer works as expected, integrating procedural audio via AVAudioSourceNode does not produce any sound, nor does it generate any error messages (see also my Stack Overflow post).

Working implementation with a static audio file:

```swift
let audioPlayer = SCNAudioPlayer(source: audioSource)
node.addAudioPlayer(audioPlayer)
```

Attempted implementation with procedural audio:

```swift
// Audio generation code
let audioPlayer = SCNAudioPlayer(avAudioNode: audioNode)
node.addAudioPlayer(audioPlayer)
```

In this setup, the AVAudioSourceNode successfully generates audio when connected directly to an AVAudioEngine. However, when used with SCNAudioPlayer and attached to an SCNNode, it fails to produce sound. What doesn’t work is creating procedural audio with an AVAudioNode, as documented in the Apple docs. Additionally, I explored the WWDC18 AR game project, SwiftShot, which utilizes SCNAudioPlayer(avAudioNode:). After updating it for the latest Xcode, the graphics function correctly, but the audio does not play. I also noted that the Apple documentation mentions an audioPlayerWithAVAudioNode: method, stating: “Using this initializer is typically not necessary. Instead, call the audioPlayerWithAVAudioNode: method, which returns a cached audio player object if one for the specified AVAudioNode object has already been created and is available for use.” However, this method does not appear to be available in Swift. Any insights or guidance on this matter would be greatly appreciated. (A fuller sketch of a procedural source node appears after this entry.)
0 replies · 0 boosts · 183 views · Apr ’25
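
For completeness, here is a minimal sketch of the procedural path being described: a sine-wave AVAudioSourceNode wrapped in an SCNAudioPlayer. The frequency and sample rate are illustrative, and per the post this wrapper may stay silent even though the same node produces audio when wired directly into an AVAudioEngine.

```swift
import SceneKit
import AVFoundation

// Attach a procedurally generated sine tone to a SceneKit node.
func attachProceduralTone(to node: SCNNode,
                          sampleRate: Double = 44_100,
                          frequency: Double = 440) {
    var phase = 0.0
    let increment = 2.0 * Double.pi * frequency / sampleRate

    let sourceNode = AVAudioSourceNode { _, _, frameCount, audioBufferList -> OSStatus in
        let buffers = UnsafeMutableAudioBufferListPointer(audioBufferList)
        for frame in 0..<Int(frameCount) {
            let sample = Float(sin(phase))
            phase += increment
            if phase > 2.0 * Double.pi { phase -= 2.0 * Double.pi }
            for buffer in buffers {
                let samples = UnsafeMutableBufferPointer<Float>(buffer)
                samples[frame] = sample
            }
        }
        return noErr
    }

    // The path that, per the post above, currently produces no sound.
    let player = SCNAudioPlayer(avAudioNode: sourceNode)
    node.addAudioPlayer(player)
}
```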
Japanese “Hattori” TTS voice missing from Settings > General > Read & Speak > Voices > Japanese on iOS 26
Steps: Open the path above → “Hattori” is not listed and cannot be downloaded.
Expected: Hattori is available to download and select.
Actual: Hattori is absent from the catalog.
Regression: Was available on iOS 18.x on the same device.
0 replies · 0 boosts · 379 views · Sep ’25
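
As a quick diagnostic for reports like the one above, this small snippet lists the Japanese voices (name, identifier, quality) that AVSpeechSynthesisVoice currently reports as installed; it cannot restore a voice the settings catalog no longer offers, it only confirms what the device sees.

```swift
import AVFoundation

// Print every installed Japanese voice with its identifier and quality.
func logInstalledJapaneseVoices() {
    let japaneseVoices = AVSpeechSynthesisVoice.speechVoices()
        .filter { $0.language.hasPrefix("ja") }
    for voice in japaneseVoices {
        print(voice.name, voice.identifier, voice.quality.rawValue)
    }
}
```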