Hello All,
I want to implement Text-to-Speech (TTS) and vibration functionality when a push notification arrives. In my app, I am already using Critical Alerts, and the critical alert sound plays correctly in all app states.
However, I need to confirm whether it is possible to trigger Text-to-Speech and custom vibration in all app states:
Foreground
Background
Terminated (killed) state
My Questions:
Is it technically possible for iOS to run Text-to-Speech (using AVSpeechSynthesizer) when a critical alert notification arrives in the background or terminated state?
Is it possible to trigger custom vibration patterns from a critical alert when the app is not running?
If yes, can someone please provide guidance or sample code on how to implement this?
If no, can Apple explain the limitation or provide documentation confirming that TTS and vibration cannot be triggered in the background or terminated state?
What works currently:
TTS and vibration work only in the foreground, while the app is active.
Critical alert sound works correctly in all states.
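For reference, a minimal sketch of what the working foreground path can look like, assuming a UNUserNotificationCenterDelegate whose willPresent callback speaks the notification body and plays a simple vibration (the class name and presentation options are placeholders):
import AVFoundation
import AudioToolbox
import UserNotifications

final class NotificationSpeaker: NSObject, UNUserNotificationCenterDelegate {
    private let synthesizer = AVSpeechSynthesizer()

    // Called only when the app is in the foreground.
    func userNotificationCenter(_ center: UNUserNotificationCenter,
                                willPresent notification: UNNotification,
                                withCompletionHandler completionHandler: @escaping (UNNotificationPresentationOptions) -> Void) {
        // Speak the notification body.
        let utterance = AVSpeechUtterance(string: notification.request.content.body)
        synthesizer.speak(utterance)

        // Basic vibration; a truly custom pattern would need Core Haptics (CHHapticEngine),
        // which also runs only while the app is active.
        AudioServicesPlaySystemSound(kSystemSoundID_Vibrate)

        completionHandler([.banner, .sound])
    }
}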
I want confirmation on whether iOS supports background/terminated TTS and vibration, or if this is a platform restriction even when using Critical Alerts.
Thank you!
Siri and Voice
Help users quickly accomplish tasks related to your app using just their voice.
Posts under Siri and Voice tag (60 posts)
Hi everyone,
I’m currently experimenting with App Intents and I’m trying to customize the section titles that appear at the top of groups of intents inside the Shortcuts app UI.
For example, in the Phone shortcut, there are built-in sections such as “Call Favorite Contacts” and “Call Recent Contacts” (see screenshot attached).
Apple’s own system apps such as Phone, Notes, and FaceTime seem to have fully custom section headers, complete with icons, inside Shortcuts.
My question is:
👉 Is there an API available that allows third-party apps to define these titles (or sections) programmatically?
I went through the AppIntents and Shortcuts documentation but couldn’t find anything.
From what I can tell, this might be private / Apple-only behavior, but I’d be happy to know if anybody has found a supported solution or a recommended alternative.
Has anyone dealt with this before?
Thanks!
Mickaël 🇫🇷
Topic: App & System Services
SubTopic: Automation & Scripting
Tags: Siri and Voice, Shortcuts, Intents, App Intents
Hi,
I am developing a music app. We have been using the Siri media search functionality for a while. We recently had a case where Siri would not provide a keyword for a search.
When the user says "Play kid songs" (in Turkish, "çocuk şarkıları çal") and I debug, I see that mediaSearch.mediaName is nil.
When the user says "Play kids" (in Turkish, "çocuklar çal"), a keyword is given and we can search for and play a related song.
Normally I would think that Siri is somehow censoring the word "kid", but when I try the same voice search in Spotify, I get a children's song search result.
I've read the documentation and searched the web but couldn't find any similar experience.
What could be the cause? Is there an extra setting for this kind of behaviour, or a different capability, that lets Spotify get a keyword out of this voice search but not us?
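For reference, a minimal sketch of the kind of INPlayMediaIntentHandling resolution involved (the class name and the fallback behaviour are placeholders); the point is simply where mediaSearch.mediaName comes back nil:
import Intents

final class PlayMediaIntentHandler: NSObject, INPlayMediaIntentHandling {
    func resolveMediaItems(for intent: INPlayMediaIntent,
                           with completion: @escaping ([INPlayMediaMediaItemResolutionResult]) -> Void) {
        // For "çocuk şarkıları çal" this comes back nil; for "çocuklar çal" it is populated.
        guard let keyword = intent.mediaSearch?.mediaName, !keyword.isEmpty else {
            print("mediaSearch.mediaName is nil")
            completion([INPlayMediaMediaItemResolutionResult.unsupported()])
            return
        }
        let item = INMediaItem(identifier: nil, title: keyword, type: .song, artwork: nil)
        completion(INPlayMediaMediaItemResolutionResult.successes(with: [item]))
    }

    func handle(intent: INPlayMediaIntent,
                completion: @escaping (INPlayMediaIntentResponse) -> Void) {
        completion(INPlayMediaIntentResponse(code: .handleInApp, userActivity: nil))
    }
}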
Topic: App & System Services
SubTopic: Automation & Scripting
Tags: Siri and Voice, SiriKit, App Intents
My app uses App Intents. When the user says "Prüfung der Bluetooth Funktion" (German for "check the Bluetooth function"), the screen shows the whole phrase, but my app only receives "Bluetooth Funktion". This behaviour only happens in the German version; in the English version everything works well.
Can anyone help me? Why does the German version of Siri cut off my words?
System settings => Accessibility => System Voice => the little (i) beside the pulldown => Voices => THIS SCREEN will allow you to download Premium Voices
Is there a way to open this screen programmatically, or at least a link to get my users there without having to dig through that swamp of screens?
The question:
Is there any chance that Apple will integrate Intune SDK into Apple apps such as Mail or Calendar, or create Siri-compatible Intune SDK-integrated versions of Mail and Calendar?
The reason for the question:
My team has been asked by VIPs in our company (e.g. execs and board members) whether Siri can be used with Outlook, and the only options are through Shortcuts or by adding the Outlook account to Mail.
Both of these options would violate our security policies for these reasons:
Since our company policy and federal regulations don't permit us to allow access to company resources in non-MAM-protected apps, we can't allow our users to log in to the Mail app and make full use of Siri, due to the lack of MAM controls for Mail and Calendar.
We only allow users to transfer data between policy-managed apps which have integrated the Intune SDK allowing us to enforce DLP and other security measures. The only way to enable Shortcuts would be to disable these security measures.
Topic: Business & Education
SubTopic: General
Tags: Mobile Core Services, Enterprise, Siri and Voice, Shortcuts
When my AppShortcut phrase is:
"Go (.$direction) with (.applicationName)"
Then everything works correctly, the AppIntent correctly receives the parameter. But when my phrase is:
"What is my game (.$direction) with (.applicationName)"
The an alert dialog pops up saying:
"Hey siri what is my game tomorrow with {app name}
Do you want me to use ChatGPT to answer that?"
The phrase is obviously heard correctly, and it's exactly what I've specified in the AppShortcut. Why isn't it being sent to my AppIntent?
import Foundation
import AppIntents

@available(iOS 17.0, *)
enum Direction: String, CaseIterable, AppEnum {
    case today, yesterday, tomorrow, next

    static var typeDisplayRepresentation: TypeDisplayRepresentation {
        TypeDisplayRepresentation(name: "Direction")
    }

    static var caseDisplayRepresentations: [Direction: DisplayRepresentation] = [
        .today: DisplayRepresentation(title: "today", synonyms: []),
        .yesterday: DisplayRepresentation(title: "yesterday", synonyms: []),
        .tomorrow: DisplayRepresentation(title: "tomorrow", synonyms: []),
        .next: DisplayRepresentation(title: "next", synonyms: [])
    ]
}

@available(iOS 17.0, *)
struct MoveItemIntent: AppIntent {
    static var title: LocalizedStringResource = "Move Item"

    @Parameter(title: "Direction")
    var direction: Direction

    func perform() async throws -> some IntentResult {
        // Logic to move item in the specified direction
        print("Moving item \(direction)")
        return .result()
    }
}

@available(iOS 17.0, *)
final class MyShortcuts: AppShortcutsProvider {
    static let shortcutTileColor = ShortcutTileColor.navy

    static var appShortcuts: [AppShortcut] {
        AppShortcut(
            intent: MoveItemIntent(),
            phrases: [
                "Go \(\.$direction) with \(.applicationName)"
                // "What is my game \(\.$direction) with \(.applicationName)"
            ],
            shortTitle: "Test of direction parameter",
            systemImageName: "soccerball"
        )
    }
}
Topic: App & System Services
SubTopic: Automation & Scripting
Tags: Siri and Voice, SiriKit, App Intents
We've identified a regression in iOS 26.0 and 26.1 Beta 4 where AVSpeechSynthesisVoice(language:) no longer respects user-selected voices from Accessibility settings.
Issue: When users select a specific voice in Settings → Accessibility → Spoken Content → Voices, calling AVSpeechSynthesisVoice(language:) returns the system default voice instead of the user's selection. This worked correctly in iOS 18.6.2.
Particularly affects:
Third-party speech synthesis voices (CereProc, Grammatek, etc.)
Apps relying on automatic voice selection based on user preferences
Example:
// User selected CereProc Heather for en-GB in Accessibility settings
let voice = AVSpeechSynthesisVoice(language: "en-GB")
print(voice?.name) // iOS 18.6.2: "HEATHER", iOS 26: "Daniel" (system default)
Interesting observation: The new Accessibility Reader feature in iOS 26 correctly uses the user-selected voice, but Tap to Speak and the API both ignore the setting.
Tested methods:
AVSpeechSynthesisVoice(language:)
AVSpeechUtterance auto-selection
Reflection for new APIs
All return the system default voice, not the user's preference.
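For anyone trying to reproduce this, a quick sketch that lists the installed voices for a language next to what the initializer actually returns (the "en-GB" code is just an example):
import AVFoundation

let language = "en-GB"

// Everything installed for that language, including third-party voices.
for voice in AVSpeechSynthesisVoice.speechVoices() where voice.language == language {
    print(voice.name, voice.identifier, voice.quality.rawValue)
}

// What the initializer picks; per the report, on iOS 26 this ignores the Accessibility selection.
let voice = AVSpeechSynthesisVoice(language: language)
print("Initializer returned:", voice?.name ?? "nil")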
Filed: FB[20271264]
Has anyone else encountered this? Any known workarounds to programmatically access the user's preferred voice selection?
Hey dear developers!
This post is meant for future Siri updates and improvements, but also for wishes, so that everyone in this forum can share their opinions and ideas. Please stay friendly and have fun! I had already thought about developing a demo app to demonstrate my idea for a better Siri.
One wish of many:
Wish Update: Siri's language recognition capabilities have been significantly enhanced. Instead of manually setting the language, Siri can now automatically recognize the language you intend to use, making language switching much more efficient. Simply speak the language you want to communicate in, and Siri will automatically recognize it and respond accordingly. Whether you speak English, German, or Japanese, Siri will respond in the language you choose.
Topic: Machine Learning & AI
SubTopic: Apple Intelligence
Tags: iPhone, Siri Event Suggestions Markup, Siri and Voice, Apple Intelligence
Hi everyone 👋
I’m building an iOS app in Swift where I want to do the following:
Record the user’s voice
Transcribe the spoken sentence (speech-to-text)
Auto-detect the spoken language
Translate it to another language selected by the user (e.g., English → Spanish or Hindi → English)
Speak back (text-to-speech) the translated text on the same device
Is it possible to record via the phone's mic and play the transcribed (and translated) speech through the headphones' audio?
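A minimal sketch of the transcribe-then-speak part of that pipeline, assuming the Speech and AVFoundation frameworks; the translation step is only a placeholder, and note that SFSpeechRecognizer needs an explicit locale rather than auto-detecting the language:
import Speech
import AVFoundation

final class SpeakBackTranslator {
    private let synthesizer = AVSpeechSynthesizer()   // keep a strong reference while speaking

    func transcribeAndSpeak(fileURL: URL, sourceLocale: Locale, targetLanguage: String) {
        SFSpeechRecognizer.requestAuthorization { status in
            guard status == .authorized,
                  let recognizer = SFSpeechRecognizer(locale: sourceLocale) else { return }

            let request = SFSpeechURLRecognitionRequest(url: fileURL)
            recognizer.recognitionTask(with: request) { result, error in
                guard let result, result.isFinal else { return }
                let transcript = result.bestTranscription.formattedString

                // Placeholder: hand `transcript` to whatever translation service you choose.
                let translated = transcript

                let utterance = AVSpeechUtterance(string: translated)
                utterance.voice = AVSpeechSynthesisVoice(language: targetLanguage)
                self.synthesizer.speak(utterance)
            }
        }
    }
}
Routing (built-in mic in, headphone audio out) is a separate AVAudioSession configuration question, e.g. the .playAndRecord category with appropriate options.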
I'm currently testing this on a physical device (iPhone 12 Pro Max, iOS 26). Through Shortcuts, I know for a fact that I can successfully trigger the perform code to do what's needed. In addition, if I just tell Siri the phrase without my unit parameter and it asks me which unit, I can once again successfully call my perform. The problem is that with any of my phrases that include the unit, Siri either just opens my application or says "I can't understand".
Here is my sample code:
My Entity:
import Foundation
import AppIntents

struct Unit: Codable, Identifiable {
    let nickname: String
    let ipAddress: String
    let id: String
}

struct UnitEntity: AppEntity {
    static var typeDisplayRepresentation: TypeDisplayRepresentation {
        TypeDisplayRepresentation(
            name: LocalizedStringResource("Unit", table: "AppIntents")
        )
    }

    static let defaultQuery = UnitEntityQuery()

    // Unique identifier
    var id: Unit.ID

    // @Property allows this data to be available to Shortcuts, Siri, etc.
    @Property var name: String

    // By not including @Property, this data is NOT used for queries.
    var ipAddress: String

    var displayRepresentation: DisplayRepresentation {
        DisplayRepresentation(
            title: "\(name)"
        )
    }

    init(Unit: Unit) {
        self.id = Unit.id
        self.ipAddress = Unit.ipAddress
        self.name = Unit.nickname
    }
}
My Query:
struct UnitEntityQuery: EntityQuery {
    func entities(for identifiers: [UnitEntity.ID]) async throws -> [UnitEntity] {
        print("[UnitEntityQuery] Query for IDs \(identifiers)")
        return UnitManager.shared.getUnitUnits()
            .map { UnitEntity(Unit: $0) }
            .filter { identifiers.contains($0.id) } // return only the requested entities
    }

    func suggestedEntities() async throws -> [UnitEntity] {
        print("[UnitEntityQuery] Request for suggested entities.")
        return UnitManager.shared.getUnitUnits()
            .map { UnitEntity(Unit: $0) }
    }
}
UnitsManager:
class UnitManager {
    static let shared = UnitManager()
    private init() {}

    var allUnits: [UnitEntity] {
        getUnitUnits().map { UnitEntity(Unit: $0) }
    }

    func getUnitUnits() -> [Unit] {
        guard let jsonString = UserDefaults.standard.string(forKey: "UnitUnits"),
              let data = jsonString.data(using: .utf8) else {
            return []
        }
        do {
            return try JSONDecoder().decode([Unit].self, from: data)
        } catch {
            print("Error decoding units: \(error)")
            return []
        }
    }

    func contactUnit(unit: UnitEntity) async -> Bool {
        // Do things here...
        return true // placeholder so the snippet compiles
    }
}
My AppIntent:
import AppIntents

struct TurnOnUnit: AppIntent {
    static let title: LocalizedStringResource = "Turn on Unit"
    static let description = IntentDescription("Turn on an Unit")

    static var parameterSummary: some ParameterSummary {
        Summary("Turn on \(\.$UnitUnit)")
    }

    @Parameter(title: "Unit Unit", description: "The Unit Unit to turn on")
    var UnitUnit: UnitEntity

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // ... My code here
    }
}
And my ShortcutProvider:
import Foundation
import AppIntents

struct UnitShortcuts: AppShortcutsProvider {
    static var appShortcuts: [AppShortcut] {
        AppShortcut(
            intent: TurnOnUnit(),
            phrases: [
                "Start an Unit with \(.applicationName)",
                "Using \(.applicationName), turn on my \(\.$UnitUnit)",
                "Turn on my \(\.$UnitUnit) with \(.applicationName)",
                "Start my \(\.$UnitUnit) with \(.applicationName)",
                "Start \(\.$UnitUnit) with \(.applicationName)",
                "Start \(\.$UnitUnit) in \(.applicationName)",
                "Start \(\.$UnitUnit) using \(.applicationName)",
                "Trigger \(\.$UnitUnit) using \(.applicationName)",
                "Activate \(\.$UnitUnit) using \(.applicationName)",
                "Volcano \(\.$UnitUnit) using \(.applicationName)",
            ],
            shortTitle: "Turn on unit",
            systemImageName: "bolt.fill"
        )
    }
}
Topic: App & System Services
SubTopic: Widgets & Live Activities
Tags: Siri and Voice, Intents, App Intents
I found what might be a bug with enabling Apple Intelligence when switching languages. When my iPhone's language is set to Catalan, Apple Intelligence is disabled because it is not available for that language. Switching to Spanish doesn't activate it; it still shows the same message of being unavailable, this time saying it is not available in Spanish (which is not true). However, it is enabled when the phone is rebooted.
Once at this point, the bug becomes even weirder. With the iPhone's language set to Spanish and Apple Intelligence on, I switch the language to Catalan, and the feature remains enabled. After I ask a query in Catalan, it surprisingly understands it and works, but then it gets disabled.
Apart from that, as user feedback, I would love to be able to activate Apple Intelligence in an available language other than my device's language. That's how I have always used Siri (iPhone in Catalan, Siri in Spanish).
Thanks!
Topic: Machine Learning & AI
SubTopic: Apple Intelligence
Tags: Siri and Voice, Internationalization, Localization, Apple Intelligence
I have a food logging app. I want my users to be able to say something like:
"Hey siri, log chicken and rice for lunch"
But AppShortcutsProvider forces me to add the name of the app to the phrase, so it becomes:
"Hey siri, log chicken and rice for lunch in FoodLogApp".
After running a quick survey, I've found that many users dislike having to say the name of the app; it makes the phrase too cumbersome.
My question is:
Is there a plan from Apple for 2026 so that users can converse with Siri and apps more naturally, without having to say the name of the app? If so, is it already in beta, and can you point me towards it?
@available(iOS 17.0, *)
struct LogMealIntent: AppIntent {
    static var title: LocalizedStringResource = "Log Meal"
    static var description: LocalizedStringResource = "Log a meal"

    func perform() async throws -> some IntentResult {
        return .result()
    }
}

@available(iOS 17.0, *)
struct LogMealShortcutsProvider: AppShortcutsProvider {
    static var appShortcuts: [AppShortcut] {
        AppShortcut(
            intent: LogMealIntent(),
            phrases: [
                "Log chicken and rice for lunch in \(.applicationName)",
            ],
            shortTitle: "Log meal",
            systemImageName: "mic.fill"
        )
    }
}
When my Intents extension resolves an INStartCallIntent and returns .continueInApp while the device is locked, the call does not proceed unless the user unlocks the device. After unlocking, the app receives the NSUserActivity and CallKit proceeds normally.
My expectation is that the native CallKit outgoing UI should appear and the call should start without requiring unlock — especially when using AirPods, where attention is not available.
Steps to Reproduce
Pair and connect AirPods.
Lock the iPhone.
Start music playback (e.g. Apple Music).
Place the phone face down (or cover Face ID sensors so attention isn’t available).
Say: “Hey Siri, call Tommy with DiscoMonday” (DiscoMonday is my app’s name).
Observed Behavior
Music mutes briefly.
Siri says “Calling Tommy with DiscoMonday.”
Lock screen shows “Require Face ID / passcode.”
After several seconds, music resumes.
The app is not launched, no NSUserActivity is delivered, and no CXStartCallAction occurs.
With the phone face up, the same phrase launches the app, triggers CXStartCallAction, and the call proceeds via CallKit after Face ID.
Expected Behavior
From the lock screen, Siri should hand off INStartCallIntent to the app, which immediately requests CXStartCallAction and drives the CallKit UI (reportOutgoingCall(...startedConnectingAt:) → ...connectedAt:), without requiring device unlock, regardless of orientation or attention availability when AirPods are connected.
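For context, a minimal sketch of the kind of app-side CallKit code involved once the NSUserActivity arrives (the class name and the generic handle are placeholders, and the CXProvider reporting calls are shown as comments):
import CallKit

final class OutgoingCallStarter {
    private let callController = CXCallController()

    // Called after the app receives the INStartCallIntent user activity.
    func startCall(to contactName: String) {
        let handle = CXHandle(type: .generic, value: contactName)
        let action = CXStartCallAction(call: UUID(), handle: handle)
        callController.request(CXTransaction(action: action)) { error in
            if let error {
                print("CXStartCallAction failed: \(error)")
            }
        }
        // The CXProviderDelegate then reports progress:
        //   provider.reportOutgoingCall(with: action.callUUID, startedConnectingAt: nil)
        //   provider.reportOutgoingCall(with: action.callUUID, connectedAt: nil)
    }
}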
Topic: App & System Services
SubTopic: Automation & Scripting
Tags: iOS, Siri and Voice, Intents, CallKit
I'm a bit confused as to what we're supposed to be doing to support starting a workout using Siri in iOS/watchOS 26. On one hand, I see a big push to move towards App Intents and shortcuts rather than SiriKit. On the other hand, I see that some of the things I would expect to work with App Intents well... don't work. BUT - I'm also not sure it isn't just developer error on my part.
Here are some assertions that I'm hoping someone more skilled and knowledgable can correct me on:
Currently the StartWorkoutIntent only serves the Action button on the Watch Ultra. It cannot be used to register Shortcuts, nor does Siri respond to it.
I can use objects inherited from AppIntent to create shortcuts, but this requires an additional permission to run a shortcut if a user starts a workout with Siri.
AppIntent shortcuts require the user to say "Start a workout in [app name]" - if the user leaves out the "in [app name]" part, Siri will not prompt the user to select my app.
If I want to allow the user to simply say "Start a Workout" and have Siri prompt the user for input as to which app it should use, I must currently use the older SiriKit to do so.
Are these assertions correct - or am I just implementing something incorrectly?
Using the latest Xcode 26 beta for what it is worth.
Topic: App & System Services
SubTopic: Health & Fitness
Tags: Siri and Voice, SiriKit, Intents, App Intents
Hi, experts,
I want to find the Siri response window by bundle ID and use it for checking or printing. Here is my example code:
XCUIDevice.shared.siriService.activate(voiceRecognitionText: "call mom")
let siriApp = XCUIApplication(bundleIdentifier: "***")
// Print out text from siriApp,
// expected print: "Sorry, I can't make a phone call with your iPhone."
What should I put in place of ***?
I tried "com.apple.SiriViewService", "com.apple.siri.velocity", and "com.apple.springboard", but nothing works.
Any suggestion appreciated, thanks!
I'm implementing an App Intent for my iOS app that helps users plan trip activities. It only works when run as a shortcut, not by voice through Siri. There are two issues:
The ShortcutsTripEntity only accepts voice input for one specific trip, not for the others.
requestDisambiguation() throws an error when used on the activity day @Parameter property.
How do I rectify these issues?
This is blocking me from completing a critical feature that lets users quickly plan activities through Siri and Shortcuts.
Expected behavior for trip input: The intent should make Siri accept the spoken trip input from any of the options.
Actual behavior for trip input: Siri only accepts the same trip when spoken but accepts any when selected by click/touch.
Expected behavior for day input: Siri should accept the spoken selected option.
Actual behavior for day input: Siri only accepts input by click/touch, yet it still throws an error at runtime. I'm happy to provide more code, but here's the relevant part:
struct PlanActivityTestIntent: AppIntent {
    @Parameter(
        title: "Trip",
        description: "The trip to plan an activity for",
        default: ShortcutsTripEntity(id: UUID().uuidString, title: "Untitled trip"),
        requestValueDialog: "Which trip would you like to add an activity to?"
    )
    var tripEntity: ShortcutsTripEntity

    @Parameter(title: "Activity Title", description: "The title of the activity", requestValueDialog: "What do you want to do or see?")
    var title: String

    @Parameter(title: "Activity Day", description: "Activity Day", default: ShortcutsItineraryDayEntity(itineraryDay: .init(itineraryId: UUID(), date: .now), timeZoneIdentifier: "UTC"))
    var activityDay: ShortcutsItineraryDayEntity

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // ...other code...
        let tripsStore = TripsStore()

        // Load trips and map them to entities.
        try? await tripsStore.getTrips()
        let tripsAsEntities = tripsStore.trips.map { trip in
            let id = trip.id ?? UUID()
            let title = trip.title
            return ShortcutsTripEntity(id: id.uuidString, title: title, trip: trip)
        }

        // Ask the user to select a trip. This line doesn't accept a voice answer. Why?
        let selectedTrip = try await $tripEntity.requestDisambiguation(
            among: tripsAsEntities,
            dialog: .init(
                full: "Which of the \(tripsAsEntities.count) trips would you like to add an activity to?",
                supporting: "Select a trip",
                systemImageName: "safari.fill"
            )
        )

        // This line throws an error.
        let selectedDay = try await $activityDay.requestDisambiguation(
            among: daysAsEntities,
            dialog: "Which day would you like to plan an activity for?"
        )
        // ...rest of the code...
    }
}
Here are some related images that might help:
Hi,
I’m developing an app which, just like the Clock app, uses multiple counters.
I want to speak Siri commands, such as “Siri, count for one hour”. ‘count’ is the alternative app name.
My AppIntent has a parameter, and Siri understands if I say “Siri, count” and asks for duration in a separate step. It runs fine, but I can’t figure out how to run the command with the duration specified upfront, without any subsequent questions from Siri.
Clock App has this functionality, so it can be done.
//title
//perform()
@Parameter(title: "Duration")
var minutes: Measurement<UnitDuration>
}
I have a struct ShortcutsProvider: AppShortcutsProvider, but its phrases accept only parameters of type AppEnum or AppEntity.
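One possible workaround sketch for that limitation, assuming you are willing to expose preset durations as an AppEnum so the value can appear directly in the phrase (all type names here are hypothetical). With "count" as the alternative app name, the phrase below would match "count for one hour":
import AppIntents

enum CountDuration: String, CaseIterable, AppEnum {
    case fifteenMinutes, thirtyMinutes, oneHour

    static var typeDisplayRepresentation: TypeDisplayRepresentation {
        TypeDisplayRepresentation(name: "Duration")
    }
    static var caseDisplayRepresentations: [CountDuration: DisplayRepresentation] = [
        .fifteenMinutes: DisplayRepresentation(title: "15 minutes"),
        .thirtyMinutes: DisplayRepresentation(title: "30 minutes"),
        .oneHour: DisplayRepresentation(title: "one hour")
    ]

    var minutes: Int {
        switch self {
        case .fifteenMinutes: return 15
        case .thirtyMinutes: return 30
        case .oneHour: return 60
        }
    }
}

struct CountIntent: AppIntent {
    static var title: LocalizedStringResource = "Count"

    @Parameter(title: "Duration")
    var duration: CountDuration

    func perform() async throws -> some IntentResult {
        // Start the counter for `duration.minutes` here.
        return .result()
    }
}

struct CountShortcuts: AppShortcutsProvider {
    static var appShortcuts: [AppShortcut] {
        AppShortcut(
            intent: CountIntent(),
            phrases: ["\(.applicationName) for \(\.$duration)"],
            shortTitle: "Count",
            systemImageName: "timer"
        )
    }
}
The trade-off is that only the preset cases can be spoken in one utterance; arbitrary durations would still need Siri's follow-up question.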
Topic: App & System Services
SubTopic: Automation & Scripting
Tags: Siri and Voice, SiriKit, App Intents
Hi,
we're having trouble implementing search through Siri voice commands.
We already did it successfully for audio playback using INPlayMediaIntentHandling.
For search, none of the available ways works.
Both INSearchForMediaIntentHandling and ShowInAppSearchResultsIntent never open the app in the first place. We tried various commands, but e.g. "Search for [query]" sometimes opens the Apple Music app and sometimes shows a Google search widget. Our app is never taken into consideration for providing any results.
We implemented all steps mentioned in WWDC videos and documentation (e.g. https://developer.apple.com/documentation/appintents/making-in-app-search-actions-available-to-siri-and-apple-intelligence), but nothing seems to work.
We're mainly testing on iOS 18 currently. Any idea why this is not working?
Topic: App & System Services
SubTopic: Automation & Scripting
Tags: Siri and Voice, SiriKit, Intents, App Intents
I have an App Intent that conforms to ShowsSnippetView and returns a view that is shown in the Siri interface after the shortcut runs. The view simply consists of a VStack with a Text element, with no special styling. When my device is set to dark mode, the view doesn't adapt: the text is black, but the background of the Siri interface is a transparent dark gray, which makes the text almost unreadable. The text should be white in dark mode. The colorScheme environment value inside the view corresponds to light mode, even though the device is set to dark mode. This is most likely a bug in iOS.
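For anyone reproducing this, here is a minimal sketch of the kind of intent and snippet view described (the type names are placeholders, and the Text deliberately has no explicit color, which is where the dark-mode problem shows up):
import AppIntents
import SwiftUI

struct SnippetDemoIntent: AppIntent {
    static var title: LocalizedStringResource = "Show Snippet"

    func perform() async throws -> some IntentResult & ShowsSnippetView {
        return .result(view: SnippetDemoView())
    }
}

struct SnippetDemoView: View {
    @Environment(\.colorScheme) private var colorScheme   // per the report, this says light mode even when the device is dark

    var body: some View {
        VStack {
            Text("Hello from the snippet")   // renders black on the dark Siri background
        }
    }
}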