Same issue here. The root cause is in the current SDK types.
SpeechAnalyzer expects an array of any SpeechModule:
// SpeechAnalyzer.swift (SDK)
public convenience init(modules: [any SpeechModule], options: SpeechAnalyzer.Options? = nil)
SpeechTranscriber does conform (indirectly) to SpeechModule:
// SpeechTranscriber.swift (SDK)
@available(macOS 26.0, iOS 26.0, *)
final public class SpeechTranscriber : LocaleDependentSpeechModule {
    public convenience init(locale: Locale, preset: SpeechTranscriber.Preset)
}
…but SpeechDetector is a final class with no SpeechModule conformance:
// SpeechDetector.swift (SDK)
@available(macOS 26.0, iOS 26.0, *)
final public class SpeechDetector {
    public init(detectionOptions: SpeechDetector.DetectionOptions, reportResults: Bool)
    public convenience init()
}
Apple’s docs say we should be able to do:
let transcriber = SpeechTranscriber(...)
let speechDetector = SpeechDetector()
let analyzer = SpeechAnalyzer(modules: [speechDetector, transcriber])
but currently that doesn’t compile, because SpeechDetector doesn’t conform to SpeechModule (and the class is final, so we can’t even subclass it to adapt it ourselves).
Workaround: initialize SpeechAnalyzer with the transcriber only and implement voice activation externally (e.g., auto-stop after a period of inactivity; rough sketch below). Still waiting on clarification (or a fix) from Apple as to whether SpeechDetector is actually intended to conform to SpeechModule in iOS 26.
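For what it’s worth, here’s a minimal sketch of that workaround, relying only on the two initializers quoted above. The InactivityMonitor helper, the preset value, the locale, and the 2-second timeout are placeholders of my own, and the actual “stop” action depends on how you feed audio into the analyzer:
import Foundation
import Speech

// Placeholder helper: fires a callback once no transcription activity has been
// reported for `timeout`.
actor InactivityMonitor {
    private let clock = ContinuousClock()
    private let timeout: Duration
    private let onTimeout: @Sendable () -> Void
    private var deadline: ContinuousClock.Instant

    init(timeout: Duration, onTimeout: @escaping @Sendable () -> Void) {
        self.timeout = timeout
        self.onTimeout = onTimeout
        self.deadline = ContinuousClock().now + timeout
    }

    // Call this each time the transcriber produces a result (speech is still active).
    func noteActivity() {
        deadline = clock.now + timeout
    }

    // Waits until the deadline passes without being extended, then fires onTimeout.
    func run() async {
        while clock.now < deadline {
            try? await Task.sleep(until: deadline, clock: clock)
        }
        onTimeout()
    }
}

// Transcriber-only analyzer; the documented [speechDetector, transcriber] setup
// doesn't compile today.
let transcriber = SpeechTranscriber(locale: Locale(identifier: "en-US"),
                                    preset: .transcription) // placeholder; use whichever SpeechTranscriber.Preset fits
let analyzer = SpeechAnalyzer(modules: [transcriber])

let monitor = InactivityMonitor(timeout: .seconds(2)) {
    // Stop feeding audio / finalize the analyzer here.
}
Task { await monitor.run() }
// Call `await monitor.noteActivity()` every time you receive a transcriber result,
// e.g. while iterating the transcriber's results stream.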