Unfortunately, I do not have a machine that supports Monterey.
I've been trying to use the audio buffers directly, as well as the pronunciation capabilities, in AVSpeechSynthesizer.
I've spent a couple of hard days narrowing down the problem to determine whether it's my code or a framework bug.
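For context, this is the direct-buffer path I mean (a minimal sketch; the text and voice are just placeholders):

```swift
import AVFoundation

// Ask AVSpeechSynthesizer for audio buffers via write(_:toBufferCallback:)
// instead of speaking through the output device.
let synthesizer = AVSpeechSynthesizer()
let utterance = AVSpeechUtterance(string: "Hello, world.") // placeholder text
utterance.voice = AVSpeechSynthesisVoice(language: "en-US")

synthesizer.write(utterance) { buffer in
    // The callback hands back AVAudioBuffer; in practice it's a PCM buffer.
    guard let pcm = buffer as? AVAudioPCMBuffer, pcm.frameLength > 0 else {
        return // a zero-length buffer marks the end of the utterance
    }
    // Use the buffer directly here (e.g. schedule it on an AVAudioPlayerNode).
    print("Received \(pcm.frameLength) frames at \(pcm.format.sampleRate) Hz")
}
```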
My old code using NSSpeechSynthesizer ("success = synth.startSpeaking(stringToSpeak, to: toURL)") wrote the buffer to disk, and then I read it back.
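Roughly, that old approach looked like this (a sketch from memory; the path is a placeholder and the delegate handling is simplified):

```swift
import AppKit
import AVFoundation

// Render straight to a file with NSSpeechSynthesizer, then read it back.
let synth = NSSpeechSynthesizer()
let toURL = URL(fileURLWithPath: NSTemporaryDirectory())
    .appendingPathComponent("speech.aiff")

let success = synth.startSpeaking("Hello, world.", to: toURL)

// startSpeaking(_:to:) returns immediately; in real code, wait for the
// delegate's speechSynthesizer(_:didFinishSpeaking:) before reading.
if success {
    do {
        let file = try AVAudioFile(forReading: toURL)
        print("Rendered \(file.length) frames")
    } catch {
        print("Read-back failed: \(error)")
    }
}
```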
If I know for sure that it's a framework bug, I'll work around it by writing to disk and then reading it back, which I think will work in this instance.
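If it comes to that, here's the shape of the workaround I have in mind, assuming I stay on AVSpeechSynthesizer and funnel the callback buffers through an AVAudioFile (file name and text are placeholders):

```swift
import AVFoundation

let synthesizer = AVSpeechSynthesizer()
let utterance = AVSpeechUtterance(string: "Hello, world.")
let fileURL = URL(fileURLWithPath: NSTemporaryDirectory())
    .appendingPathComponent("speech.caf")

var outputFile: AVAudioFile?
synthesizer.write(utterance) { buffer in
    guard let pcm = buffer as? AVAudioPCMBuffer, pcm.frameLength > 0 else {
        return // zero-length buffer marks the end of the utterance
    }
    do {
        // Create the file lazily so it matches the synthesizer's format.
        if outputFile == nil {
            outputFile = try AVAudioFile(forWriting: fileURL,
                                         settings: pcm.format.settings)
        }
        try outputFile?.write(from: pcm)
    } catch {
        print("Write failed: \(error)")
    }
}

// Later, once synthesis has finished, read the audio back.
func readBack() throws -> AVAudioPCMBuffer? {
    let file = try AVAudioFile(forReading: fileURL)
    guard let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat,
                                        frameCapacity: AVAudioFrameCount(file.length)) else {
        return nil
    }
    try file.read(into: buffer)
    return buffer
}
```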