I've been working with SpeechAnalyzer.start(inputSequence:) on macOS 26 and got streaming transcription working. A few things that might help:
Make sure the AVAudioFormat you use to create AnalyzerInput buffers exactly matches what bestAvailableAudioFormat() returns. Even subtle mismatches (e.g., interleaved vs non-interleaved, different channel layouts) can cause the nilError without a descriptive message.
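When I was debugging this, it helped to reduce "exactly matches" to an explicit field-by-field comparison instead of eyeballing two format descriptions. Here's a sketch of that idea; note that `AudioFormatKey` and `formatMismatch` are my own illustrative helpers, not framework types. In real code you'd populate one key from `bestAvailableAudioFormat()` and the other from your buffer's `AVAudioFormat`:

```swift
// Hypothetical helper: captures the AVAudioFormat fields that need to agree
// before you build AnalyzerInput buffers. Not a framework type.
struct AudioFormatKey: Equatable {
    var sampleRate: Double
    var channelCount: Int
    var isInterleaved: Bool
    var isFloat32: Bool   // stands in for the common sample format (e.g. Float32 PCM)
}

// Returns nil if the formats agree, otherwise a human-readable diff,
// so the mismatch isn't hidden behind an undescriptive nilError.
func formatMismatch(expected: AudioFormatKey, actual: AudioFormatKey) -> String? {
    guard expected != actual else { return nil }
    var diffs: [String] = []
    if expected.sampleRate != actual.sampleRate {
        diffs.append("sampleRate \(actual.sampleRate) != \(expected.sampleRate)")
    }
    if expected.channelCount != actual.channelCount {
        diffs.append("channels \(actual.channelCount) != \(expected.channelCount)")
    }
    if expected.isInterleaved != actual.isInterleaved {
        diffs.append("interleaved \(actual.isInterleaved) != \(expected.isInterleaved)")
    }
    if expected.isFloat32 != actual.isFloat32 {
        diffs.append("sample format differs")
    }
    return diffs.joined(separator: ", ")
}
```

Logging this diff once at startup caught my own interleaved/non-interleaved mismatch immediately.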
I found that feeding buffers that are too small (< 4096 frames) occasionally triggers this error. Try using larger chunks; I settled on 8192 frames per buffer.
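If you're slicing a file or ring buffer into fixed-size chunks, the splitting itself is easy to get right with a small helper. This is just a sketch (the function name is mine, nothing here is Speech framework API); each resulting range would then be copied into one AVAudioPCMBuffer and wrapped in an AnalyzerInput:

```swift
// Split totalFrames into consecutive chunks of chunkSize frames.
// The final chunk may be shorter. If you're hitting nilError with tiny
// buffers, keep chunkSize well above ~4096 frames.
func chunkRanges(totalFrames: Int, chunkSize: Int = 8192) -> [Range<Int>] {
    guard totalFrames > 0, chunkSize > 0 else { return [] }
    return stride(from: 0, to: totalFrames, by: chunkSize).map { start in
        start..<min(start + chunkSize, totalFrames)
    }
}
```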
The bufferStartTime parameter needs to be monotonically increasing and consistent with the actual audio duration. If there are gaps or overlaps in the timestamps, the stream mode can fail silently or throw nilError.
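One way to guarantee monotonic, gap-free timestamps is to derive each bufferStartTime from the running frame count rather than from wall-clock time. A minimal sketch of the bookkeeping (the `StreamClock` struct is hypothetical, not framework API; in real code you'd convert the returned seconds into whatever time type `bufferStartTime` expects):

```swift
// Hypothetical timestamp generator: start times come from the cumulative
// number of frames already submitted, so consecutive buffers can never
// overlap or leave gaps, regardless of how fast you feed them.
struct StreamClock {
    let sampleRate: Double
    private(set) var framesSubmitted: Int = 0

    init(sampleRate: Double) { self.sampleRate = sampleRate }

    // Returns the start time (in seconds) for a buffer of `frames` frames,
    // then advances the clock by that buffer's duration.
    mutating func nextStartTime(frames: Int) -> Double {
        let start = Double(framesSubmitted) / sampleRate
        framesSubmitted += frames
        return start
    }
}
```

With 8192-frame buffers at 16 kHz this yields start times of 0 s, 0.512 s, 1.024 s, and so on, which is exactly the "consistent with the actual audio duration" property the stream mode seems to require.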
Instead of replaying a WAV file as chunked buffers, I'd suggest testing with live audio from AVCaptureSession first. In my experience, live capture → AnalyzerInput works more reliably than simulated streaming from a file, possibly because the timing is naturally correct.
Worth noting that DictationTranscriber handles streaming input differently from SpeechTranscriber. If your use case allows it, try switching to DictationTranscriber; it also supports AnalysisContext for contextual vocabulary biasing (which SpeechTranscriber currently does not, per an Apple engineer's response in a related thread).
The macOS 26 Speech framework is still quite new and under-documented. Filing the Feedback Assistant report was the right call.