I've noticed that contextual strings don't work with on-device speech recognition. I've filed a Feedback report: FB7496068
To reproduce:
1. Create a basic app that transcribes speech.
2. Add “Flubbery Dubbery” (or any other made-up phrase) to a strings array and assign it to the contextualStrings property of SFSpeechAudioBufferRecognitionRequest.
3. On the recognition request, set the requiresOnDeviceRecognition Boolean property to true.
4. Transcribe audio and say the made-up phrase.
5. Note that the device never transcribes the phrase correctly.
6. Now set requiresOnDeviceRecognition to false.
7. Transcribe audio and say the made-up phrase again.
8. Note that the device now transcribes it correctly.
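For reference, here's a minimal sketch of the failing setup. It assumes speech-recognition permission has already been granted and that an AVAudioEngine elsewhere is appending buffers to the request; the locale and phrase are just the ones from the steps above.

```swift
import Speech

// Minimal sketch of the repro. Assumes permission is already granted
// and an AVAudioEngine is appending audio buffers to `request`.
let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))!
let request = SFSpeechAudioBufferRecognitionRequest()

// Bias the recognizer toward the made-up phrase.
request.contextualStrings = ["Flubbery Dubbery"]

// Force on-device recognition (iOS 13+). With this set to true the
// phrase is never transcribed correctly; with false it is.
request.requiresOnDeviceRecognition = true

let task = recognizer.recognitionTask(with: request) { result, error in
    if let result = result {
        print(result.bestTranscription.formattedString)
    }
}
```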
Has anyone else run into this problem? I would love a fix.
P.S. I noticed that if you add a custom word as a contact in the Contacts app, on-device recognition picks it up. So it seems this is possible, just not implemented quite right.
When creating a speech recognition app, such as the SpokenWord sample app, asking for permission shows the user a message that says:

“SpokenWord” Would Like to Access Speech Recognition
“Speech data from this app will be sent to Apple to process your requests. …”

However, it is my understanding that setting requiresOnDeviceRecognition to true means all audio is processed on device and never sent to or used by Apple. Is this correct? Or is data sent to Apple even when requiresOnDeviceRecognition is set to true?
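My reading of the requiresOnDeviceRecognition documentation is that a request with the flag set to true keeps its audio on the device rather than sending it over the network, and that the permission prompt's wording appears to be generic text shown regardless of the flag. Here's a minimal sketch of how one might guard the on-device path; the locale choice and the print message are illustrative, not anything the prompt or docs prescribe:

```swift
import Speech

// Sketch of guarding the on-device path. Per the documentation for
// requiresOnDeviceRecognition, a request with this flag set to true
// does not send its audio over the network.
let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))!
let request = SFSpeechAudioBufferRecognitionRequest()

if recognizer.supportsOnDeviceRecognition {
    // Audio for this request stays on the device.
    request.requiresOnDeviceRecognition = true
} else {
    // Without on-device support, audio goes to Apple's servers,
    // which is the case the permission prompt describes.
    print("On-device recognition unavailable for this locale/device.")
}
```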