I am building a CarPlay navigation app and I would like it to be as hands-free as possible.
I need a better understanding of how Siri can work with CarPlay, and whether the direction I need to go is App Intents or App Shortcuts.
My goal is for the user to be able to speak to Siri and say things like "open settings" or "zoom in map", and then have a func in my app called to do what the user is asking.
Does CarPlay support this? Do I need App Intents, App Shortcuts, or something else?
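For reference, this is roughly what I imagine the App Intents route would look like (a sketch using the iOS 16+ AppIntents framework; ZoomInMapIntent, NavAppShortcuts, and MapController are placeholder names, not code from my app):

import AppIntents

// Placeholder sketch: ZoomInMapIntent and MapController are hypothetical names.
struct ZoomInMapIntent: AppIntent {
    static var title: LocalizedStringResource = "Zoom In Map"

    @MainActor
    func perform() async throws -> some IntentResult {
        // Here I would call into my app, e.g. MapController.shared.zoomIn()
        return .result()
    }
}

// Exposing the intent as an App Shortcut gives Siri a phrase to match.
struct NavAppShortcuts: AppShortcutsProvider {
    static var appShortcuts: [AppShortcut] {
        AppShortcut(
            intent: ZoomInMapIntent(),
            phrases: ["Zoom in map in \(.applicationName)"],
            shortTitle: "Zoom In",
            systemImageName: "plus.magnifyingglass"
        )
    }
}

From what I understand, App Shortcuts are built on top of App Intents, so I assume I would need both: the intent to expose the action and the shortcut to give Siri a phrase for it. What I can't confirm is whether this works while the CarPlay scene is active.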
With CarPlay, is it possible to programmatically know which side of the screen the status bar is placed on?
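The only heuristic I can think of for that one is comparing the window's safe-area insets, on the assumption that the side bar contributes the larger horizontal inset. This is an untested sketch:

import CarPlay

// Untested assumption: the CarPlay status/side bar produces the larger
// horizontal safe-area inset. CPWindow is a UIWindow subclass, so
// safeAreaInsets is available on it.
func statusBarSide(of window: CPWindow) -> String {
    let insets = window.safeAreaInsets
    return insets.left > insets.right ? "left" : "right"
}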
I am having an issue with the code that I posted below. I capture voice in my CarPlay app, then let the user have it read back to them using AVSpeechUtterance.
This works fine in some cars, but many of my beta testers report no audio being played. I have also experienced this in a rental car, where the audio was either too quiet or didn't play at all.
Does anyone see any issue with the code I posted? This is for CarPlay specifically.
import AVFoundation
import Combine

class CarPlayTextToSpeechService: NSObject, ObservableObject, AVSpeechSynthesizerDelegate {
    static let shared = CarPlayTextToSpeechService()

    private var speechSynthesizer = AVSpeechSynthesizer()

    /// Completion callback
    private var completionCallback: (() -> Void)?

    override init() {
        super.init()
        speechSynthesizer.delegate = self
    }

    func configureAudioSession() {
        do {
            try AVAudioSession.sharedInstance().setCategory(
                .playback,
                mode: .voicePrompt,
                options: [.duckOthers, .interruptSpokenAudioAndMixWithOthers, .allowBluetoothHFP]
            )
        } catch {
            print("Failed to set audio session category: \(error.localizedDescription)")
        }
    }

    public func speak(_ text: String, completion: (() -> Void)? = nil) {
        configureAudioSession()
        // Store the completion callback
        completionCallback = completion

        Task(priority: .high) {
            let speechUtterance = AVSpeechUtterance(string: text)
            // preferredLocalLanguageCountryCode is a custom Locale extension
            // in my app that returns a code such as "en-US".
            let langCode = Locale.preferredLocalLanguageCountryCode
            if langCode == "en-US" {
                speechUtterance.voice = AVSpeechSynthesisVoice(identifier: AVSpeechSynthesisVoiceIdentifierAlex)
            } else {
                speechUtterance.voice = AVSpeechSynthesisVoice(language: langCode)
            }
            try AVAudioSession.sharedInstance().setActive(true, options: .notifyOthersOnDeactivation)
            self.speechSynthesizer.speak(speechUtterance)
        }
    }

    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didFinish utterance: AVSpeechUtterance) {
        Task {
            self.stopSpeech()
            try AVAudioSession.sharedInstance().setActive(false)
        }
        // Call the completion callback if one was provided
        completionCallback?()
        completionCallback = nil
    }

    func stopSpeech() {
        speechSynthesizer.stopSpeaking(at: .immediate)
    }
}
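For reference, a typical call site in my app looks like this (simplified):

CarPlayTextToSpeechService.shared.speak("Turn left in 500 feet") {
    print("Finished speaking")
}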
With the announcement of the new Apple Translate app and its offline support, are there any URL schemes we can use in our apps to send text to the Translate app and have it open and translate that text?
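To illustrate what I'm after, something like the sketch below. This is purely hypothetical; I have not found any documented scheme, and "translate://" is only a placeholder:

import UIKit

// Hypothetical: "translate://" is a placeholder, not a documented scheme.
// canOpenURL also requires the scheme to be listed under
// LSApplicationQueriesSchemes in Info.plist.
func openInTranslate(_ text: String) {
    guard let encoded = text.addingPercentEncoding(withAllowedCharacters: .urlQueryAllowed),
          let url = URL(string: "translate://?text=\(encoded)"),
          UIApplication.shared.canOpenURL(url) else { return }
    UIApplication.shared.open(url)
}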
In my CarPlay navigation app I need to capture voice input and process it into text. Is there a built-in way to do this in CarPlay?
I did not find one, so I used the following, but I am running into issues where the AVAudioSession throws an error when I try to set active to false after I have captured the audio.
// Relies on these stored properties on the enclosing class:
//   audioEngine: AVAudioEngine, speechRecognizer: SFSpeechRecognizer,
//   recognitionRequest: SFSpeechAudioBufferRecognitionRequest?,
//   recognitionTask: SFSpeechRecognitionTask?
public func startRecording(completionHandler: @escaping (_ completion: String?) -> ()) throws {
    // Cancel the previous task if it's running.
    if let recognitionTask = self.recognitionTask {
        recognitionTask.cancel()
        self.recognitionTask = nil
    }

    // Configure the audio session for the app.
    let audioSession = AVAudioSession.sharedInstance()
    try audioSession.setCategory(.record, mode: .default, options: [.duckOthers, .interruptSpokenAudioAndMixWithOthers])
    try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
    let inputNode = self.audioEngine.inputNode

    // Create and configure the speech recognition request.
    self.recognitionRequest = SFSpeechAudioBufferRecognitionRequest()
    guard let recognitionRequest = self.recognitionRequest else {
        fatalError("Unable to create an SFSpeechAudioBufferRecognitionRequest object")
    }
    recognitionRequest.shouldReportPartialResults = true

    // Keep speech recognition data on device.
    recognitionRequest.requiresOnDeviceRecognition = true

    // Create a recognition task for the speech recognition session.
    // Keep a reference to the task so that it can be canceled.
    self.recognitionTask = self.speechRecognizer.recognitionTask(with: recognitionRequest) { result, error in
        var isFinal = false
        if let result = result {
            // Update the text view with the results.
            let textResult = result.bestTranscription.formattedString
            isFinal = result.isFinal
            let confidence = result.bestTranscription.segments.first?.confidence ?? 0
            if confidence > 0.0 {
                isFinal = true
                completionHandler(textResult)
            }
        }

        if error != nil || isFinal {
            // Stop recognizing speech if there is a problem.
            self.audioEngine.stop()
            do {
                try audioSession.setActive(false, options: .notifyOthersOnDeactivation)
            } catch {
                print(error)
            }
            inputNode.removeTap(onBus: 0)
            self.recognitionRequest = nil
            self.recognitionTask = nil
            if error != nil {
                completionHandler(nil)
            }
        }
    }

    // Configure the microphone input.
    let recordingFormat = inputNode.outputFormat(forBus: 0)
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer: AVAudioPCMBuffer, when: AVAudioTime) in
        self.recognitionRequest?.append(buffer)
    }

    self.audioEngine.prepare()
    try self.audioEngine.start()
}
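For completeness, this is roughly how I call it (simplified):

do {
    try startRecording { transcribedText in
        guard let transcribedText else { return }
        print("Heard: \(transcribedText)")
    }
} catch {
    print("Could not start recording: \(error)")
}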
Again, is there a built-in method to capture audio in CarPlay? If not, why would the AVAudioSession throw this error?
Error Domain=NSOSStatusErrorDomain Code=560030580 "Session deactivation failed" UserInfo={NSLocalizedDescription=Session deactivation failed}
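Decoding that OSStatus as a four-character code, 560030580 comes out as "!act", which I believe corresponds to AVAudioSessionErrorCodeIsBusy, i.e. the session is still running I/O when deactivation is attempted:

// 560030580 == 0x21616374 == "!act"
let code: UInt32 = 560030580
let fourCC = String((0..<4).reversed().map {
    Character(UnicodeScalar((code >> ($0 * 8)) & 0xFF)!)
})
print(fourCC) // "!act"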