Reply to Assistance Needed with Enabling Speech Recognition Entitlement for iOS App
Subject: Clarification on Speech Recognition Capability Requirement for iOS

Hi Quinn (The Eskimo!),

Thank you for your reply; I really appreciate your time. To clarify, I was referring to Apple's official documentation, including:

- Asking Permission to Use Speech Recognition
  https://developer.apple.com/documentation/speech/asking-permission_to_use_speech_recognition
- Recognizing Speech in Live Audio
  https://developer.apple.com/documentation/speech/recognizing_speech_in_live_audio

While these documents don't explicitly mention the need to enable the Speech Recognition capability in the Developer Portal, I've come across several trusted sources that do suggest it's required for full and stable functionality. For example:

- Apple Developer Forums: thread discussing the Speech framework entitlement
  https://developer.apple.com/forums/thread/116446
- Stack Overflow: Speech Recognition capability and entitlement setup
  https://stackoverflow.com/a/43084875

Both of these sources say that enabling the Speech Recognition capability, which adds the com.apple.developer.speech entitlement, is necessary to trigger the proper permission prompts and ensure voice input works in iOS apps.

In my case, I've already added the correct keys to Info.plist (NSSpeechRecognitionUsageDescription and NSMicrophoneUsageDescription), and I'm using the official Capacitor SpeechRecognition plugin. The app confirms that permissions are granted and start() is being called, but no voice input is received.

If this entitlement is no longer required, I would be deeply grateful for your guidance, because right now the plugin behaves silently even though all permissions appear correct. Could you please confirm what the current requirement is?

Warm regards,
Daniel
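P.S. In case it helps with diagnosis, the call pattern I'm using in the Capacitor layer is roughly the simplified sketch below. Method names follow the @capacitor-community/speech-recognition README I'm working from, so exact signatures may differ between plugin versions.

```ts
import { SpeechRecognition } from "@capacitor-community/speech-recognition";

// Simplified sketch of my listening flow (current plugin README; older
// releases may expose slightly different permission method names).
export async function startListening(): Promise<void> {
  // Request permission; on a working setup this should surface the system prompt(s).
  const perm = await SpeechRecognition.requestPermissions();
  console.log("speech permission state:", perm);

  // Verify that speech recognition services are reachable on this device.
  const { available } = await SpeechRecognition.available();
  if (!available) {
    console.warn("speech recognition reported as unavailable");
    return;
  }

  // Stream partial transcriptions while the user is speaking.
  await SpeechRecognition.addListener("partialResults", (data: { matches: string[] }) => {
    console.log("partial result:", data.matches);
  });

  // Begin live recognition; with partialResults: true the text arrives via the listener.
  await SpeechRecognition.start({
    language: "en-US",
    partialResults: true,
    popup: false, // no platform popup UI
  });
}
```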
Jun ’25
Reply to Assistance Needed with Enabling Speech Recognition Entitlement for iOS App
I was referring to the official Apple documentation for the Speech framework, specifically:

- Asking Permission to Use Speech Recognition
  https://developer.apple.com/documentation/speech/asking-permission_to_use_speech_recognition
- Recognizing Speech in Live Audio
  https://developer.apple.com/documentation/speech/recognizing_speech_in_live_audio

While these pages do not explicitly mention enabling the Speech Recognition capability in the Developer Portal, several other trusted sources, including community forums and prior Apple Developer Technical Support responses, have indicated that adding this capability in Xcode (which creates the com.apple.developer.speech entitlement) is required for full and stable functionality.

If that is no longer the case, I would be deeply grateful for your clarification, especially since we are still experiencing permission issues and silence from the speech plugin in an app that appears to be configured correctly. Could you please point me to the right way to set up speech recognition in my app?
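For reference, the minimal end-to-end test I keep coming back to looks roughly like this. It is a simplified sketch; I'm assuming, per my reading of the plugin README, that start() resolves with the recognized matches when partialResults is false.

```ts
import { SpeechRecognition } from "@capacitor-community/speech-recognition";

// Minimal one-shot test: request permission, then wait for a single result.
// Assumption: with partialResults set to false, start() resolves with
// { matches: string[] } once recognition finishes.
export async function oneShotTest(): Promise<string | undefined> {
  await SpeechRecognition.requestPermissions();

  const { available } = await SpeechRecognition.available();
  if (!available) {
    console.warn("speech recognition not available on this device");
    return undefined;
  }

  const result = await SpeechRecognition.start({
    language: "en-US",
    maxResults: 1,
    partialResults: false,
  });

  console.log("matches:", result?.matches);
  return result?.matches?.[0];
}
```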
May ’25
Reply to Assistance Needed with Enabling Speech Recognition Entitlement for iOS App
Thank you for your response. I'm currently using the @capacitor-community/speech-recognition plugin for iOS in a Capacitor-based app. Under the hood, this plugin uses Apple's Speech framework (SFSpeechRecognizer, AVAudioEngine, etc.) to perform live speech recognition.

Based on Apple's documentation, I understood that to use this framework the app needs the following:

- NSSpeechRecognitionUsageDescription (Info.plist)
- NSMicrophoneUsageDescription (Info.plist)
- the Speech Recognition capability enabled in the Apple Developer Portal

However, even after enabling these, my app does not receive permission prompts, and no speech input is captured.

Could you kindly confirm whether Speech Recognition requires an explicit entitlement or capability setup in App IDs or elsewhere, or whether there is something I might have missed? Any advice on how I should proceed would also be welcome. I'd be very grateful for your guidance.
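For completeness, the permission flow I'm attempting is roughly the sketch below. It is simplified; checkPermissions() and requestPermissions() are the names in the plugin README I'm using, and they may differ in other plugin versions.

```ts
import { SpeechRecognition } from "@capacitor-community/speech-recognition";

// Diagnostic sketch: log the current permission state, then request it.
// On a correctly configured app, the request should surface the prompts
// described by the Info.plist usage strings.
export async function checkAndRequestPermissions(): Promise<void> {
  const before = await SpeechRecognition.checkPermissions();
  console.log("permission state before request:", before);

  const after = await SpeechRecognition.requestPermissions();
  console.log("permission state after request:", after);
}
```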
May ’25
Reply to Speech Recognition Entitlement Not Appearing in App ID Capabilities
Hi, thank you for your help. Yes, I have followed all of the instructions linked in both the "Asking Permission to Use Speech Recognition" and "Recognizing Speech in Live Audio" guides, and I have added both of the required keys to my Info.plist file:

- NSSpeechRecognitionUsageDescription
- NSMicrophoneUsageDescription

I've rebuilt the entire project using a new App ID (com.echo.eyes.app) after enrolling in the Apple Developer Program, cleaned the iOS platform, reinstalled CocoaPods, and verified that the plugin loads correctly when testing on a real device (not in a browser). However, I'm still not seeing any prompt for microphone or speech recognition access, and the SpeechRecognition.available() plugin method returns false, indicating that speech services are still not available to the app.

Could you please confirm whether the Speech Recognition entitlement needs to be manually enabled by Apple, and if so, how I can request it?

I previously reached out to Developer Technical Support (DTS), but I was not given a tracking number; it may have been submitted through Developer Program support instead. Please let me know if there is any way to track that request or escalate this.

Thanks again for your support. I really appreciate your time.

Best regards,
Daniel
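P.S. The availability check that is returning false is essentially the sketch below (simplified; getSupportedLanguages() is taken from the plugin README I'm using, so please treat the exact API surface as my assumption).

```ts
import { SpeechRecognition } from "@capacitor-community/speech-recognition";

// Diagnostic: confirm whether speech services are reported as usable at all,
// and which recognition languages the device claims to support.
export async function logSpeechAvailability(): Promise<void> {
  const { available } = await SpeechRecognition.available();
  console.log("SpeechRecognition.available():", available);

  if (!available) {
    return; // this is the branch I'm currently hitting
  }

  const { languages } = await SpeechRecognition.getSupportedLanguages();
  console.log("supported languages:", languages);
}
```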
May ’25