It’s very annoying: on my iPhone 12 Pro, the accessibility app keeps opening by itself with the microphone on, showing only a blank screen, and every time I close it, it reopens. I don’t know why it keeps doing this, but it drives me crazy. Does anyone know what else to do? I’m on the iOS 26 beta, but it was doing this on the previous update as well.
Explore best practices for creating inclusive apps that cater to users with diverse abilities
I’m developing an ARKit application where I aim to attach procedurally generated audio to detected planes in the environment. While using a static audio file with SCNAudioSource and SCNAudioPlayer works as expected, integrating procedural audio via AVAudioSourceNode does not produce any sound, nor does it generate any error messages (see my Stack Overflow post).
Working Implementation with Static Audio File:
// "sound.wav" stands in for the actual bundled audio file used in the project
let audioSource = SCNAudioSource(fileNamed: "sound.wav")!
let audioPlayer = SCNAudioPlayer(source: audioSource)
node.addAudioPlayer(audioPlayer)
Attempted Implementation with Procedural Audio:
let audioNode = AVAudioSourceNode { _, _, frameCount, audioBufferList -> OSStatus in
    // Audio generation code (fill audioBufferList with generated samples)
    return noErr
}
let audioPlayer = SCNAudioPlayer(avAudioNode: audioNode)
node.addAudioPlayer(audioPlayer)
In this setup, the AVAudioSourceNode successfully generates audio when connected directly to an AVAudioEngine. However, when used with SCNAudioPlayer and attached to an SCNNode, it fails to produce sound. What doesn’t work is creating procedural audio with an AVAudioNode, as documented here:
Apple docs
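For comparison, here is a minimal sketch of the kind of direct AVAudioEngine hookup that does produce sound (the 440 Hz sine generator and variable names are illustrative, not the project's exact code):

import AVFoundation
import Foundation

let engine = AVAudioEngine()
let sampleRate = engine.outputNode.outputFormat(forBus: 0).sampleRate
var phase = 0.0

// Fill each requested buffer with a 440 Hz sine wave.
let sourceNode = AVAudioSourceNode { _, _, frameCount, audioBufferList -> OSStatus in
    let buffers = UnsafeMutableAudioBufferListPointer(audioBufferList)
    for frame in 0..<Int(frameCount) {
        let sample = Float(sin(phase))
        phase += 2.0 * .pi * 440.0 / sampleRate
        for buffer in buffers {
            buffer.mData!.assumingMemoryBound(to: Float.self)[frame] = sample
        }
    }
    return noErr
}

engine.attach(sourceNode)
engine.connect(sourceNode, to: engine.mainMixerNode, format: nil)
try? engine.start()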
Additionally, I explored the WWDC18 AR game project, SwiftShot, which utilizes SCNAudioPlayer(avAudioNode:). After updating it for the latest Xcode, the graphics function correctly, but the audio does not play. I also noted that the Apple documentation mentions an audioPlayerWithAVAudioNode: method, stating:
Using this initializer is typically not necessary. Instead, call the audioPlayerWithAVAudioNode: method, which returns a cached audio player object if one for the specified AVAudioNode object has already been created and is available for use.
However, this method does not appear to be available in Swift. Any insights or guidance on this matter would be greatly appreciated.
We are developing Apple AI for foreign markets and adapting it for iPhone 17 and later models.
When the system language and the Siri language are not the same (for example, the system is in English and Siri is in Chinese), Apple AI can become unusable. May I ask whether there are any other conditions that could cause Apple AI to be unavailable within the app, even when it has been enabled?
When the wearer's head moves naturally, the footage captured by the device shakes noticeably, which makes viewers of the live stream feel dizzy and seriously degrades both the immersive streaming experience and purchasing-decision efficiency.
Request:
Please optimize the device's built-in stabilization algorithm to reduce the effect of ordinary head movement on image stability and improve the smoothness of the live-stream footage.
Hi everyone,
My team and I are developing an accessibility-focused visionOS app (MindTap) as part of a university project, aiming to support individuals with Locked-In Syndrome using Brain-Computer Interface (BCI) signals to trigger interactions (e.g., tapping) within the Apple Vision Pro environment.
Problem 1: Simulating Eye Tracking in Simulator
We are testing onHover with Send pointer to the device under I/O > Input in the simulator, and while it mostly works (a bit laggy), we found that onHover won't function on the actual Vision Pro hardware. From what I understand, we should be using FocusState for proper gaze interaction, but testing this requires the physical device. Is there any workaround or official Apple-recommended way to simulate Focus-based gaze detection without a real Vision Pro?
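For reference, here is a rough sketch of the focus-based approach we are considering, assuming a plain SwiftUI view (the view and state names are illustrative, not our actual code):

import SwiftUI

struct GazeTargetView: View {
    @FocusState private var isFocused: Bool

    var body: some View {
        Text("Tap target")
            .padding()
            .background(isFocused ? Color.blue.opacity(0.3) : Color.clear)
            .focusable()              // let the view participate in the focus system
            .focused($isFocused)      // track whether this view currently has focus
    }
}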
Problem 2: WebSocket-triggered "Click" doesn't work outside the app
We successfully use WebSocket to send a custom signal (a "1" from the brain signal device) to trigger an action inside our app. However, when the user opens a third-party app like Apple News, the WebSocket-triggered "click" no longer works.
We suspect this is due to sandbox restrictions or lack of system-level permissions.
Is it possible in any way to:
Trigger interaction events outside our app using custom input (such as BCI signals sent over WebSocket)?
Access system-wide click/tap simulation APIs from within visionOS apps?
Integrate this with accessibility services (such as Voice Control or AssistiveTouch)?
We'd appreciate any official guidance or tips from others building similar accessibility apps with alternative input methods in visionOS.
Thanks in advance for any insight you can provide!
Topic:
Accessibility & Inclusion
SubTopic:
General
Tags:
Xcode
Accessibility
iPad and iOS apps on visionOS
I’m using a 9th-generation iPad and updated to iPadOS 26 a few weeks ago, but I can’t use the 3D background and can’t find any way to enable it. Is this a system issue or an issue with my iPad?
I have subscribed to the developer program, but it’s already been a day and it still shows “is not enrolled in the Apple Developer Program.”
Topic:
Accessibility & Inclusion
SubTopic:
General
Japanese “Hattori” TTS voice missing from Settings > General > Read & Speak > Voices > Japanese on iOS 26
Steps: Open the path above → “Hattori” is not listed and cannot be downloaded
Expected: Hattori is available to download and select
Actual: Hattori is absent from the catalog
Regression: Was available on iOS 18.x on the same device
After updating to the iOS 26 Beta version, the screenshot option within the AssistiveTouch menu has stopped working. Tapping on the "Screenshot" icon does not perform any action.
Topic:
Accessibility & Inclusion
SubTopic:
General
Hello,
I'm currently unable to access App Store Connect. When I try to open https://appstoreconnect.apple.com, I receive the following error message:
“appstoreconnect.apple.com is currently unable to handle this request.”
I’ve tried the following steps, but the issue persists:
Cleared browser cache and cookies
Tried different browsers (Safari, Chrome)
Attempted from multiple devices and networks
Is this a known issue or is there any workaround available?
Would appreciate any help or update on the current status.
Thank you,
Topic:
Accessibility & Inclusion
SubTopic:
General
Please update Accessibility OS Settings for VoiceOver in iPhone iOS and iPadOS to include frames on the Rotor, and to make web navigation and component gestures easier to find and assign. Please add content to the iPhone and iPad Apple User Guide to use VoiceOver in web navigation with touch gestures.
Specifically... iframes.
There is no clear guidance in Apple documentation for VoiceOver users in iPhone or iPadOS to access iframes with touch gestures. A common belief as written on AppleVis, other blogs, and internet searches is that iframes in Safari or a webView in an app are only available with explore by touch.
If explore by touch is the only option for some interactions, that needs to be included in Apple User Guides. If not, details on equivalent touch gestures for VO that have keyboard interactions in Mac need to be clear for users.
VoiceOver for Mac includes a default keyboard interaction of VO-Command-F in its extensive User Guide (https://support.apple.com/guide/voiceover/by-images-or-frames-mchlp2740/mac). A user can include a rotor option for web navigation for iframes.
VoiceOver for iPhone and iPad does not include a default swipe gesture assigned to frames. An option is not available for the Rotor.
While there is iPhone User Guide guidance that gestures can be customized (https://support.apple.com/guide/iphone/customize-gestures-and-keyboard-shortcuts-iph59a8e6fd2/18.0/ios/18.0), it is not clear that, to add this gesture, "Move to the next frame" is tucked into the advanced navigation commands in the VoiceOver Accessibility Settings in the OS. At least on my phone, the word "frame" was not searchable even though the All Commands screen has a search bar.
I am facing an issue with the back camera on my iPhone 14 Plus: it shows a black screen. My iPhone was manufactured between April 2023 and April 2024, but it is still not eligible for the Apple service program, even though it has the same issue. Why is it not eligible?
Topic:
Accessibility & Inclusion
SubTopic:
General
Hello
So if you use the Bulgarian keyboard, you get these characters:
явертъуиопюасдфгхйклшщзьцжбнмч
This isn’t really right for Bulgaria, because т should look like m, and д should look like g, and other characters should look like rotated or mirrored Latin characters. E.g., г should look like a backwards s.
Compare the Bulgaria Wikipedia page in Bulgarian: https://bg.m.wikipedia.org/wiki/%D0%91%D1%8A%D0%BB%D0%B3%D0%B0%D1%80%D0%B8%D1%8F
with the Bulgaria Wikipedia page in Russian: https://ru.m.wikipedia.org/wiki/%D0%91%D0%BE%D0%BB%D0%B3%D0%B0%D1%80%D0%B8%D1%8F
Notice that the letters are different.
Anyhow, the iOS Bulgarian font is just Russian-style Cyrillic, and that seems like an unintended bug rather than an intentional stylistic choice.
Topic:
Accessibility & Inclusion
SubTopic:
General
Hello!
I was faced with unexpected behavior of hardware keyboard focus in UITests.
A clear description of the problem
When running UITests on the iOS Simulator with both "Full Keyboard Access" and "Connect Hardware Keyboard" options enabled, there is a noticeable delay between keyboard actions for focus managing (like pressing Tab or arrow keys). The delay seems to increase with repeated input and suggests that events are being queued instead of processed immediately.
I will describe why I have such an assumption later.
A step-by-step set of instructions to reproduce the problem
Launch the iOS Simulator.
Enable both "Full Keyboard Access" and "Connect Hardware Keyboard" in the Simulator settings.
Run a UITest on a target application (ideally an endless or long-running test; a minimal sketch follows these steps).
Once the app is launched, press the Tab key several times.
Observe the delay in focus movement.
Optionally, press the Tab or arrow keys rapidly, then stop the UITest.
After stopping, you’ll see a burst of rapid focus changes.
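For completeness, a minimal sketch of the kind of long-running UITest used for this reproduction (class and method names are illustrative):

import XCTest

final class KeyboardFocusReproTests: XCTestCase {
    func testKeepAppRunningForManualTabPresses() {
        let app = XCUIApplication()
        app.launch()
        // Keep the session alive so Tab presses can be made manually in the Simulator.
        Thread.sleep(forTimeInterval: 600)
    }
}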
What results you expected
We expected keyboard actions (like Tab) to be handled immediately and the UI focus to update smoothly during UITests.
What results you saw
There was a 4–10 second (or longer) delay between pressing keys and seeing a response. All stacked keyboard events (used for managing focus) are performed at once after stopping the UITest.
The version of Xcode you are using
Xcode: Version 16.3 (16E140)
Simulator: iPhone 16 Pro (iOS 18.4 and 18.1)
Simulator: iPad Pro 11-inch (M4) (iPadOS 17.5)
I’d love to see Apple implement a Bionic Reading feature as a system-wide accessibility option. This type of reading aid highlights the first part of each word in bold to help guide the eyes and improve comprehension.
It’s been shown to be especially helpful for people with ADHD, dyslexia, and other neurodivergent needs. Having a toggle in Settings > Accessibility would be life-changing.
Ideally, it could be:
• Enabled system-wide or per app
• Customizable in how much of each word is bolded
• Available in Safari, Messages, Books, News, etc.
I have the following method to insert @mentions to a text field:
func insertMention(user: Token, at range: NSRange) -> Void {
    // Render the mention as an image-based text attachment.
    let tokenImage: UIImage = renderMentionToken(text: "@\(user.username)")
    let attachment: NSTextAttachment = NSTextAttachment()
    attachment.image = tokenImage
    attachment.bounds = CGRect(x: 0, y: -3, width: tokenImage.size.width, height: tokenImage.size.height)
    attachment.accessibilityLabel = user.username
    attachment.accessibilityHint = "Mention of \(user.username)"

    // .TokenID and .Tokenname are custom NSAttributedString.Key values defined elsewhere.
    let attachmentString: NSMutableAttributedString = NSMutableAttributedString(attributedString: NSAttributedString(attachment: attachment))
    attachmentString.addAttribute(.TokenID, value: user.id, range: NSRange(location: 0, length: 1))
    attachmentString.addAttribute(.Tokenname, value: user.username, range: NSRange(location: 0, length: 1))

    // Replace the typed mention range with the token plus a trailing space,
    // then move the caret past the inserted token.
    let mutableText: NSMutableAttributedString = NSMutableAttributedString(attributedString: textView.attributedText)
    mutableText.replaceCharacters(in: range, with: attachmentString)
    mutableText.append(NSAttributedString(string: " "))
    textView.attributedText = mutableText
    textView.selectedRange = NSRange(location: range.location + 2, length: 0)

    mentionRange = nil
    tableView.isHidden = true
}
When I use Xcode's Accessibility Inspector to inspect the text input, the inserted token is not read by the inspector; instead, a whitespace is shown for the token. I want the accessibility label to be the string content of the NSTextAttachment. How can I do that?
I have a TextField and entered, for example, "sg?!". On the TextField I set the speechAlwaysIncludesPunctuation() modifier. But when I activate VoiceOver, the content of the TextField is read, yet the special characters are not read out.
How can I fix this?
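For reference, a minimal sketch of the setup described above (the view and state names are illustrative):

import SwiftUI

struct PunctuationFieldView: View {
    @State private var text = "sg?!"

    var body: some View {
        TextField("Input", text: $text)
            // Ask VoiceOver to always speak punctuation for this view's content.
            .speechAlwaysIncludesPunctuation()
    }
}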
We have a requirement to manage the shortcuts and hotkeys in our application, and to make them intuitive with full multilingual support. Our current understanding is that most universal shortcuts and hotkeys on macOS/iOS are expressed using English/Latin characters, and when a purely foreign-language physical or virtual keyboard is the input device, it is unclear to us how the user would invoke such a hotkey.
Now, considering keyboards for other languages that have no Latin characters, managing shortcuts and hotkeys in these environments becomes rather difficult. Taking a very simple example, the shortcut for printing a page is Command/Control + 'P'. This can be an issue on non-English keyboards such as Arabic, where not only is there no letter P, there is also no equivalent phonetic character, since the language itself does not have that sound.
Also, when we want to let the user customize a hotkey, how would the user express which key combination they want to assign to a given action?
So, based on these conditions, and in order to provide the most comprehensive and optimal experience for users in their own language, what does Apple recommend we do for hotkey/shortcut support in purely non-Latin-script languages?
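To make the scenario concrete, here is a minimal sketch of the kind of shortcut registration in question, using the Command-P print example (the class, selector, and title are illustrative, not our actual code):

import UIKit

class DocumentViewController: UIViewController {
    override var keyCommands: [UIKeyCommand]? {
        [
            // On a keyboard with no Latin layer there may be no key that produces "p",
            // which is exactly the situation described above.
            UIKeyCommand(title: "Print",
                         action: #selector(printPage),
                         input: "p",
                         modifierFlags: .command)
        ]
    }

    @objc func printPage() {
        // Trigger printing here.
    }
}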
Topic:
Accessibility & Inclusion
SubTopic:
General
Tags:
InputMethodKit
Internationalization
Shortcuts
Localization
I have a question about Developer Mode on iPhone.
Currently, the home button on my iPhone SE (2nd generation) is broken, so I use AssistiveTouch to display a virtual home button. However, in Developer Mode, the virtual home button does not appear, making it impossible to enable Developer Mode.
Is there any way to enable Developer Mode in this situation?
I downloaded the official camera sample code (https://developer.apple.com/tutorials/sample-apps/capturingphotos-camerapreview). It is a .swiftpm package, so I created a SwiftUI project, copied the sample code into it, built it, and ran it on an iPhone 13 for testing. There are black empty areas at the top and bottom of the app's interface, meaning the camera preview does not fill the screen. I have tried many approaches but cannot get a full-screen preview. How should I modify the code?
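One direction that may be worth trying, assuming the sample renders the preview as a SwiftUI Image (the view and property names below are guesses, not the sample's actual structure):

import SwiftUI

// Hypothetical sketch: stretch an image-based viewfinder to fill the whole screen.
// `previewImage` stands in for whatever image the sample publishes for each frame.
struct FullScreenViewfinder: View {
    let previewImage: Image

    var body: some View {
        previewImage
            .resizable()
            .scaledToFill()       // fill the screen, cropping instead of letterboxing
            .ignoresSafeArea()    // extend under the top and bottom safe areas
    }
}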