This is a rewrite of my original question, after I realized I was reading the flow incorrectly.
I'm working on a speech-to-text demo, and it works, but I'm still trying to learn the flow of Swift. The terminology may be off, but I think of the closure in node.installTap as a C callback function: when the buffer is full, the code within the closure is called.
As I read it, every time the buffer fills, the closure passed to node.installTap runs.
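To check that understanding, here is the tap boiled down to a minimal, self-contained sketch (the names engine/input/format are mine, and it assumes microphone permission has already been granted):

    import AVFoundation

    let engine = AVAudioEngine()
    let input = engine.inputNode
    let format = input.outputFormat(forBus: 0)

    // The audio engine invokes this closure on its own thread each time
    // roughly bufferSize frames of input audio have accumulated.
    input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, time in
        print("tap fired: \(buffer.frameLength) frames at sample time \(time.sampleTime)")
    }

    engine.prepare()
    do {
        try engine.start()
    } catch {
        print("could not start engine: \(error)")
    }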
What I can't figure out is what triggers the closure within:
task = speechRecognizer?.recognitionTask(with: request, resultHandler: {})
The entire demo below works; I'm just trying to figure out how the AVAudioEngine knows when to call that second closure. Is there some connection between the two?
func startSpeechRecognition() {
    let node = audioEngine.inputNode
    let recordingFormat = node.outputFormat(forBus: 0)

    // Closure 1: called by the audio engine whenever a buffer fills.
    node.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer, _) in
        self.request.append(buffer)
    }

    audioEngine.prepare()
    do {
        try audioEngine.start()
    } catch let error {
        ...
    }

    guard let myRecognition = SFSpeechRecognizer() else {
        ...
        return
    }

    if !myRecognition.isAvailable {
        ...
    }

    // Closure 2: what triggers this one?
    task = speechRecognizer?.recognitionTask(with: request, resultHandler: { (response, error) in
        guard let response = response else {
            if error != nil {
                print("\(String(describing: error.debugDescription))")
            } else {
                print("problem in response")
            }
            return
        }
        let message = response.bestTranscription.formattedString
        print("\(message)")
    })
}
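For reference, here is my current mental model of the connection, reduced to a self-contained sketch (the local names are mine; authorization requests and error handling are omitted). Am I right that the shared request object is the only link between the two closures?

    import AVFoundation
    import Speech

    let engine = AVAudioEngine()
    let request = SFSpeechAudioBufferRecognitionRequest()
    let recognizer = SFSpeechRecognizer()

    // Closure 1: driven by AVAudioEngine each time an input buffer fills.
    // All it does is hand the audio to the recognition request.
    engine.inputNode.installTap(onBus: 0, bufferSize: 1024,
                                format: engine.inputNode.outputFormat(forBus: 0)) { buffer, _ in
        request.append(buffer)
    }

    // Closure 2: driven by the recognizer as it consumes audio from
    // request and produces results (or an error), not by AVAudioEngine?
    let task = recognizer?.recognitionTask(with: request) { result, error in
        if let result = result {
            print(result.bestTranscription.formattedString)
        } else if let error = error {
            print(error)
        }
    }
    // task is kept around so the recognition isn't deallocated mid-run.

    engine.prepare()
    do {
        try engine.start()
    } catch {
        print("could not start engine: \(error)")
    }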