App Intents


Extend your app’s custom functionality to support system-level services, like Siri and the Shortcuts app.

Posts under the App Intents tag

172 Posts


App Intents: String array parameter value clears immediately in Shortcuts editor
Hello, I am experiencing an issue with the App Intents framework where a parameter of type [String] (string array) fails to persist user input in the Shortcuts app action editor.

Issue description: When adding an item to the string array parameter in the Shortcuts app action editor, the input text clears and resets to empty in less than one second. This happens spontaneously while the keyboard is still active, or immediately after typing, making it impossible to enter any values.

Environment:
Xcode version: 26.2 (17C52)
iOS version: 26.2.1
Device: iPhone 17

Code snippet:

    import AppIntents
    import SwiftUI

    struct TestStringArrayIntent: AppIntent {
        static var title: LocalizedStringResource = "Test Array Input Bug"
        static var description: IntentDescription = "Reproduces the issue where String Array input clears automatically."

        // PROBLEM: Input for this parameter vanishes automatically < 1s after typing.
        @Parameter(title: "Test Strings", default: [])
        var strings: [String]

        func perform() async throws -> some IntentResult & ReturnsValue<String> {
            return .result(value: "Count: \(strings.count)")
        }
    }

Steps to reproduce:
1. Build and install the app containing the code above.
2. Open the Shortcuts app and create a new shortcut.
3. Add the "Test Array Input Bug" action.
4. Tap the "Test Strings" parameter to add a new item.
5. Type any text (e.g., "Hi") and wait about one second.

Observed behavior: The text field clears itself automatically, and the array remains empty ([]).

Expected behavior: The text should remain in the field and be successfully added to the array.

Filed as Feedback: FB21808619. Thank you.
Replies: 0 · Boosts: 0 · Views: 20 · Activity: 1d
Local Updates to Live Activities ignored after push update
I'm building a Live Activity that has a button meant to update the activity's content state. The button calls a LiveActivityIntent that runs in the app process. The push server starts my Live Activity and the buttons work just fine. I pass the push token back to the server for further updates, but once the server pushes the next update, the buttons no longer work. With the debugger I can verify that the app intent code runs and passes the updated state to the activity. However, the activity never updates or re-renders. There are no logs in Xcode or Console.app that indicate what the issue could be or that the update is ignored. I have also tried adding the frequent-updates key to my plist, with no change.

I'm updating the Live Activity in the LiveActivityIntent like this:

    public func perform() async throws -> some IntentResult {
        let activities = Activity<WidgetExtensionAttributes>.activities
        for activity in activities {
            let currentState = activity.content.state
            let currentIndex = currentState.pageIndex ?? 0
            let maxIndex = max(0, currentState.items.count - 1)

            let newIndex: Int
            if forward {
                newIndex = min(currentIndex + 1, maxIndex)
            } else {
                newIndex = max(currentIndex - 1, 0)
            }

            var newState = currentState
            newState.pageIndex = newIndex

            await activity.update(
                ActivityContent(state: newState, staleDate: nil),
                alertConfiguration: nil,
                timestamp: Date()
            )
        }
        return .result()
    }

To sum up:
Push to start -> tap button on activity -> all good!
Push to start -> push update -> tap button -> no good...
Replies: 1 · Boosts: 0 · Views: 45 · Activity: 2d
Crash Detection / Emergency SOS: real-world personal-safety challenges at scale
I'm sharing some technical observations about Crash Detection / Emergency SOS in the Apple ecosystem, based on widely observed events in 2022 and 2024, when mass automatic calls were placed to emergency services. The point here is not to discuss superficial UX or "isolated edge cases," but systemic behavior at scale, which I believe is relevant to any team working on safety-critical systems driven by physical events.

Brief context: Starting with the iPhone 14, Crash Detection began correlating multiple sensors (high-range accelerometers, gyroscope, GPS, microphones) to infer severe-impact events and automatically place emergency calls. In 2022, this produced a significant volume of false positives, especially during high-acceleration activities (skiing, snowboarding, amusement parks). In 2024, despite adjustments, the same pattern recurred in localized clusters.

Central technical point: The problem does not appear to be hardware, nor a one-off bug, but the intermediate decision state:
Acceleration ≠ crash
Noise ≠ real impact
Extreme movement ≠ human incapacitation
When the classifier enters an ambiguous state, the system depends on a short human-confirmation window (tap/voice). In noisy environments, with the user moving or physically active, that confirmation frequently fails. The system then assumes incapacitation and executes the fail-safe action: an automatic call. From a safety-engineering standpoint, that is understandable. At scale, it is explosive.

Siri's role: Siri doesn't "decide" that a crash occurred, but it is a sensitive link in the human–machine chain. Comprehension failures caused by noise, language, heavy breathing, or the absence of a response end up being interpreted as a signal of a real emergency. This is functionally equivalent to what we see in automotive systems like the European eCall when human confirmation is missing or degraded.

The structural dilemma: There is a clear and unavoidable trade-off:
Reduce false negatives (never miss a real crash)
Increase false positives (unwarranted calls)
For the individual user, erring on the side of caution makes sense. For public emergency services, millions of devices erring on that side create real operational noise.

Why this matters for developers: Apple today operates, in practice, one of the world's largest private automated personal-safety systems, interacting directly with critical public infrastructure. That puts Crash Detection / SOS in the same category as safety-critical systems, where:
UX is part of safety
Algorithms need to be auditable
"Human-in-the-loop" cannot be merely nominal

Open questions: Some points that, as a developer, I think deserve discussion:
Human-confirmation windows that adapt to context (physical activity, noise).
More aggressive visual cancellation in high-movement scenarios.
Sensitivity profiles per activity type, clearly communicated.
Additional criteria before the automatic call when the false-positive risk is statistically high.

This is not a simple problem, nor one exclusive to Apple. It is a problem of critical software in direct contact with the physical world, operating at planetary scale. Precisely for that reason, I think it deserves an open technical discussion, without emotional noise. Curious to hear perspectives from anyone working on similar systems (automotive, wearables, safety-critical, embedded ML).

— Rafa
Replies: 0 · Boosts: 0 · Views: 119 · Activity: 6d
@ComputedProperty vs copying values SwiftData AppEntity
I'm setting up App Entities for my SwiftData models and I'm not sure about the best way to reference SwiftData model properties in the AppEntity.

I have a SwiftData model with many properties:

    @Model
    final class Contact {
        @Attribute(.unique) var id: UUID = UUID()
        var name: String
        var phoneNumber: String
        var email: String
        var website: URL?
        var birthday: Date?
        var notes: String
        // ... many more properties
    }

I want to expose these properties on my AppEntity so they're available for system features, such as giving Apple Intelligence more context about on-screen content.

    struct ContactEntity: AppEntity {
        var id: UUID

        @Property(title: "Name")
        var name: String

        @Property(title: "Phone")
        var phoneNumber: String

        @Property(title: "Email")
        var email: String

        // ... all the other properties
    }

I couldn't find guidance in the documentation for this specific situation. I've considered two approaches:

1. Add @Property variables to the AppEntity for each SwiftData model property and copy all values from the SwiftData model to the AppEntity in the initializer (see the sketch below). I recall this being discouraged in previous WWDC sessions, since it duplicates data and can become stale.
2. Use @ComputedProperty to fetch the model and access the individual properties. This seems like an alternative, but fetching the entire model just to access individual properties doesn't feel right.

What is the recommended approach when SwiftData is the data source? Thank you!
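For concreteness, a minimal sketch of the first approach, assuming the Contact and ContactEntity declarations above; it is illustrative only and inherits the staleness caveat the post mentions:

    // Approach 1: snapshot the SwiftData model's values at init time.
    // Simple, but the copies can go stale if the underlying Contact
    // changes after the entity is created.
    extension ContactEntity {
        init(from contact: Contact) {
            self.id = contact.id
            self.name = contact.name
            self.phoneNumber = contact.phoneNumber
            self.email = contact.email
            // ... remaining properties copied the same way
        }
    }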
Replies: 1 · Boosts: 0 · Views: 88 · Activity: 1w
How to properly localize AppIntent dialogs for Siri?
Hi! I have defined the following app intent. It returns a result with a dialog to confirm that the intent has been executed. Naturally, that dialog needs to be localized properly, but the String interpolation with the provided format doesn't do that. I specified wide for the width parameter and expect spelled-out unit names. However, in the textual output, Siri always uses the abbreviated unit (e.g. "min" or "s") in all languages I tested. In the audio output, Siri says "minutes" in English where the textual representation is "min". In German, Siri says "min", so it basically reads the textual representation aloud, and that's not quite understandable to the user.

    struct StartTimerIntent: AppIntent {
        static let title: LocalizedStringResource = "Start New Timer"
        static var description = IntentDescription("Starts a timer with a custom duration.")

        @Parameter(title: "Duration", description: "The duration of the timer.")
        var duration: Measurement<UnitDuration>

        func perform() async throws -> some IntentResult & ProvidesDialog {
            // [code to execute intent goes here]
            return .result(
                dialog: .init(
                    full: "\(duration, format: .measurement(width: .wide, usage: .asProvided)) timer started.",
                    systemImageName: "timer"
                )
            )
        }
    }

As this SwiftUI-style formatter doesn't seem to work with localization, I tried a different approach with a MeasurementFormatter:

    extension Measurement where UnitType == UnitDuration {
        func localized() -> String {
            let formatter = MeasurementFormatter()
            formatter.locale = .autoupdatingCurrent
            formatter.unitOptions = .providedUnit
            formatter.unitStyle = .long
            return formatter.string(from: self)
        }
    }

Usage with String interpolation:

    "\(duration.localized()) timer started."

This works great as long as these two languages are set to the same language on the user's device:

[UI language] Settings → General → Language & Region → Preferred Language
[Siri language] Settings → Apple Intelligence & Siri → Language

However, when they differ, even this method doesn't yield correct results. For example, I have my general (UI) language set to English but my Siri language set to German. Then Siri replies in German, but the unit is formatted in English and Siri speaks it in English, so the result is a messed-up sentence that's half German, half English.

What is the proper way to localize parameters in dialogs for Siri? How can I make sure that parameters are localized to match Siri's language?
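For illustration, a hedged variant of the helper above that takes an explicit Locale; obtaining Siri's locale is precisely the open question, and no documented API for that is assumed here:

    import Foundation

    // Same MeasurementFormatter setup as above, but with an injectable
    // locale. How to obtain Siri's session locale remains unanswered.
    extension Measurement where UnitType == UnitDuration {
        func localized(in locale: Locale) -> String {
            let formatter = MeasurementFormatter()
            formatter.locale = locale
            formatter.unitOptions = .providedUnit
            formatter.unitStyle = .long
            return formatter.string(from: self)
        }
    }

    // Example: "\(duration.localized(in: Locale(identifier: "de_DE"))) timer started."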
Replies: 1 · Boosts: 0 · Views: 211 · Activity: 2w
App Intents with Custom Automation/Triggers
Currently, we are developing an all-in-one DualSense utility for macOS, and we are exploring how to integrate shortcuts into our app. Our vision is for the user to use the native Shortcuts app to choose the controller buttons that should trigger a shortcut action, such as opening Steam, turning on audio haptics, and more.

As we explore this approach, we want to understand whether we need to build UI in our app to set the triggers, or whether this can be done inside Shortcuts. Can button presses recorded by our app trigger shortcuts? Can those button inputs be customized inside Shortcuts, or should we build that into our app? And if it lives in our app, can our app see, select, and trigger shortcuts?
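As far as public API goes, Shortcuts does not appear to offer third-party automation triggers, so one hedged sketch of a direction is to expose each app capability as an intent and let users attach it to whatever trigger Shortcuts supports; HapticsController here is a hypothetical app-side type, not something from the post:

    import AppIntents

    // Exposes one app capability as an action users can place in their
    // own shortcuts or automations; the trigger itself stays in Shortcuts.
    struct ToggleAudioHapticsIntent: AppIntent {
        static var title: LocalizedStringResource = "Toggle Audio Haptics"
        static var description = IntentDescription("Turns DualSense audio haptics on or off.")

        @Parameter(title: "Enabled", default: true)
        var enabled: Bool

        func perform() async throws -> some IntentResult {
            // HapticsController is a hypothetical controller inside the app.
            HapticsController.shared.setAudioHaptics(enabled: enabled)
            return .result()
        }
    }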
Replies: 0 · Boosts: 0 · Views: 124 · Activity: 2w
@IntentParameterDependency Always Returns nil in iOS 18
The following code works perfectly fine in iOS 17, where I can retrieve the desired dependency value through @IntentParameterDependency as expected. However, in iOS 18, addTransaction always returns nil.

    struct CategoryEntityQuery: EntityStringQuery {
        @Dependency
        private var persistentController: PersistentController

        @IntentParameterDependency<AddTransactionIntent>(\.$categoryType)
        var addTransaction

        func entities(matching string: String) async throws -> [CategoryEnitity] {
            guard let addTransaction else { return [] }
            // ...
        }

        func entities(for identifiers: [CategoryEnitity.ID]) async throws -> [CategoryEnitity] {
            guard let addTransaction else { return [] }
            // ...
        }

        func suggestedEntities() async throws -> [CategoryEnitity] {
            guard let addTransaction else { return [] }
            // ...
        }
    }

Has anyone else encountered the same issue? Any insights or potential workarounds would be greatly appreciated.

iOS: 18.0 (22A3354)
Xcode: 16.0 (16A242d)
Replies: 4 · Boosts: 3 · Views: 824 · Activity: 3w
How to set default values for a string array intent field provided dynamically?
Hello everybody! Does anybody know how to set default values for a string array intent field whose options are provided dynamically? I want the array field to have preset values right after the widget is added.

I have a simple accessory widget with circular and rectangular representations (the first is for 1 currency value, the second for 3 values). I created CurrencyWidgets.intentdefinition and added an AccessoryCurrency custom intent. There I added a string parameter field, currencyCode, with the following options:

Supports Multiple Values
Fixed Size (AccessoryCircular = 1, AccessoryRectangular = 3)
User can edit value in Shortcuts
Options are provided dynamically

Then I created a CurrencyTypeIntent extension and added an IntentHandler for my custom intent AccessoryCurrency. The code is below:

    class IntentHandler: INExtension, AccessoryCurrencyIntentHandling {
        override func handler(for intent: INIntent) -> Any {
            self
        }

        func provideCurrencyCodeOptionsCollection(for intent: AccessoryCurrencyIntent) async throws -> INObjectCollection<NSString> {
            return INObjectCollection(items: [NSString("USD"), NSString("EUR"), NSString("RUB"), NSString("CNY")])
        }

        func defaultCurrencyCode(for intent: AccessoryCurrencyIntent) -> [String]? {
            return ["USD", "EUR", "RUB"]
        }
    }

The problem is in func defaultCurrencyCode(...): when I return anything other than nil (for example ["USD"] or ["USD", "EUR", "RUB"]), I get a broken widget. It hangs in a placeholder state on the lock screen and in the add-widget UI (see the image below). When I return nil, the widget works fine, but then when I try to customize the widget there are no default values for the currencyCode field, only "Choose" placeholders. At the same time, everything works fine for a single string parameter (without "Supports Multiple Values"). Does anybody know how to make default parameters work for an array (multiple-values) field?
Replies: 0 · Boosts: 0 · Views: 122 · Activity: 3w
AppIntents
Overview: I have a custom type Statistics with 3 properties, and I am trying to return it from the AppIntent's perform method.

    struct Statistics {
        var countA: Int
        var countB: Int
        var countC: Int
    }

I would like to implement the AppIntent to return Statistics as follows:

    func perform() async throws -> some IntentResult & ReturnsValue<Statistics> {
        ...
    }

Problem: It doesn't make much sense to make Statistics an AppEntity, as it is only computed as a result; Statistics doesn't exist as a persisted entity in the app.

Questions:
How can I implement Statistics? Does it have to be an AppEntity (I am trying to avoid this)? (defaultQuery would never be used.)
What is the correct way to tackle this?
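For reference, a hedged sketch of the AppEntity route the poster is trying to avoid, with a stub query that is never exercised; this is illustrative, not a claim that it is the recommended answer:

    import AppIntents
    import Foundation

    // AppEntity conformance with a no-op query, since Statistics is only
    // ever computed as a result and never looked up by identifier.
    struct Statistics: AppEntity {
        var id = UUID()
        var countA: Int
        var countB: Int
        var countC: Int

        static var typeDisplayRepresentation: TypeDisplayRepresentation = "Statistics"
        var displayRepresentation: DisplayRepresentation {
            DisplayRepresentation(title: "A: \(countA), B: \(countB), C: \(countC)")
        }

        static var defaultQuery = StatisticsQuery()
    }

    struct StatisticsQuery: EntityQuery {
        func entities(for identifiers: [Statistics.ID]) async throws -> [Statistics] {
            [] // Never used; Statistics is transient.
        }
    }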
Replies: 2 · Boosts: 0 · Views: 218 · Activity: 3w
DisplayRepresentation.Image(systemName:tintColor:) ignores or misapplies tintColor since iOS 18
DisplayRepresentation.Image(systemName:tintColor:symbolConfiguration:) no longer applies the provided tintColor reliably since iOS 18.

Observed behavior by OS version:
iOS 17: SF Symbol tint is applied consistently, as expected.
iOS 18: SF Symbol tint is inconsistent and sometimes appears with incorrect or seemingly random colors instead of the provided tintColor.
iOS 26: SF Symbol is rendered without any tint (default monochrome), completely ignoring the provided tintColor.

This appears to be a regression in how App Intents renders DisplayRepresentation.Image with tinting across OS versions.

[Screenshots: iOS 17.5, iOS 18.6, iOS 26]

Code:

    import AppIntents
    import UIKit

    struct CategoryEntity: AppEntity, Hashable {
        var id: Category.ID
        var name: String
        var icon: Int?
        var color: Int?
        var parentCategoryName: String?

        init(from category: Category) {
            self.id = category.id
            self.name = category.name
            self.icon = category.icon
            self.color = category.parent?.color ?? category.color
            self.parentCategoryName = category.parent?.name
        }

        var displayRepresentation: DisplayRepresentation {
            DisplayRepresentation(
                title: "\(name)",
                subtitle: parentCategoryName.map { "\($0)" },
                image: .init(
                    systemName: Icon.sfSymbolName(from: icon),
                    tintColor: ColorTag.from(color)
                )
            )
        }

        static let typeDisplayRepresentation: TypeDisplayRepresentation = "Category"
        static let defaultQuery = CategoryQuery()
    }

Documentation: https://developer.apple.com/documentation/appintents/displayrepresentation/image-swift.struct/init(systemname:tintcolor:symbolconfiguration:)-3snvy?changes=_5
Replies: 1 · Boosts: 0 · Views: 275 · Activity: 3w
Pre-inference AI Safety Governor for FoundationModels (Swift, On-Device)
Greetings, and Happy Holidays. I've been building an on-device AI safety layer called Newton Engine, designed to validate prompts before they reach FoundationModels (or any LLM). I wanted to share v1.3 and get feedback from the community.

The problem: Current AI safety is post-training: baked into the model, probabilistic, not auditable. When Apple Intelligence ships with FoundationModels, developers will need a way to catch unsafe prompts before inference, with deterministic results they can log and explain.

What Newton does: Newton validates every prompt pre-inference and returns:
Phase (0/1/7/8/9)
Shape classification
Confidence score
Full audit trace
If validation fails, generation is blocked. If it passes (Phase 9), the prompt proceeds to the model.

v1.3 detection categories (14 total):
Jailbreak / prompt injection
Corrosive self-negation ("I hate myself")
Hedged corrosive ("Not saying I'm worthless, but...")
Emotional dependency ("You're the only one who understands")
Third-person manipulation ("If you refuse, you're proving nobody cares")
Logical contradictions ("Prove truth doesn't exist")
Self-referential paradox ("Prove that proof is impossible")
Semantic inversion ("Explain how truth can be false")
Definitional impossibility ("Square circle")
Delegated agency ("Decide for me")
Hallucination-risk prompts ("Cite the 2025 CDC report")
Unbounded recursion ("Repeat forever")
Conditional unbounded ("Until you can't")
Nonsense / low semantic density

Test results: 94.3% catch rate on 35 adversarial test cases (33/35 passed).

Architecture:

    User Input
        ↓
    [ Newton ] → validates prompt, assigns Phase
        ↓
    Phase 9?     → [ FoundationModels ] → Response
    Phase 1/7/8? → blocked with explanation

Key properties:
Deterministic (same input → same output)
Fully auditable (ValidationTrace on every prompt)
On-device (no network required)
Native Swift / SwiftUI
String Catalog localization (EN/ES/FR)
FoundationModels-ready (#if canImport)

Code sample (validation):

    let governor = NewtonGovernor()
    let result = governor.validate(prompt: userInput)

    if result.permitted {
        // Proceed to FoundationModels
        let session = LanguageModelSession()
        let response = try await session.respond(to: userInput)
    } else {
        // Handle block
        print("Blocked: Phase \(result.phase.rawValue) — \(result.reasoning)")
        print(result.trace.summary) // Full audit trace
    }

Questions for the community:
Anyone else building pre-inference validation for FoundationModels?
Thoughts on the Phase system (0/1/7/8/9) vs. simple pass/fail?
Interest in Shape Theory classification for prompt complexity?
Best practices for integrating with LanguageModelSession?

Links:
GitHub: https://github.com/jaredlewiswechs/ada-newton
Technical overview: parcri.net

Happy to share more implementation details. Looking for feedback, collaborators, and anyone else thinking about deterministic AI safety on-device.
Replies: 1 · Boosts: 0 · Views: 301 · Activity: Dec ’25
Shortcuts Automation Trigger Transaction Timeouts
Description: The Shortcuts automation trigger transaction frequently times out, ultimately causing the shortcut automation to fail. Please see the attached trace for details. Additionally, the trigger is activated even when the transaction is declined.

Details: In the trace I see the error:

    [WFWalletTransactionProvider observeForUpdatesWithInitialTransactionIfNeeded:transactionIdentifier:completion:]_block_invoke Hit timeout waiting for transaction with identifier: <private>, finishing.

Open bug report: FB14035016
Replies: 14 · Boosts: 23 · Views: 2.6k · Activity: Dec ’25
AppIntents default value
Hi, I have created an AppIntent with a parameter called price, and I have set its default value to 0.

    @Parameter(title: "Price", default: 0)
    var price: Int

Problem: When the shortcut is run, this parameter is skipped.

Aim: I still want the price to be asked for, but pre-filled with 0.

Question: What should I do so that the shortcut still asks for the price, pre-filled with 0?
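One hedged direction, sketched under the assumption that an optional parameter plus an explicit prompt is acceptable; whether the prompt can be pre-filled with 0 is exactly the open question here:

    // Make the parameter optional so Shortcuts does not resolve it silently
    // from the default, then prompt from perform() when it is missing.
    @Parameter(title: "Price")
    var price: Int?

    func perform() async throws -> some IntentResult {
        guard let price else {
            throw $price.needsValueError("What is the price?")
        }
        // ... use price ...
        return .result()
    }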
Replies: 0 · Boosts: 0 · Views: 70 · Activity: Dec ’25
How to make more than 10 custom shortcuts visible and usable in the system Shortcuts app after installing the app
Using the AppShortcutsProvider approach from App Intents, at most 10 AppShortcuts can be added; with more than 10, the code fails to compile.

    struct MeditationShortcuts: AppShortcutsProvider {
        static var appShortcuts: [AppShortcut] {
            AppShortcut(
                intent: StartMeditationIntent(),
                phrases: [
                    "Start a \(.applicationName)",
                    "Begin \(.applicationName)",
                    "Meditate with \(.applicationName)",
                    "Start a \(.$session) session with \(.applicationName)",
                    "Begin a \(.$session) session with \(.applicationName)",
                    "Meditate on \(.$session) with \(.applicationName)"
                ]
            )
        }
    }

How can I achieve what the Tesla app does?
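One commonly suggested direction, sketched here as an assumption rather than a confirmed workaround for the 10-item limit: parameterize a single AppShortcut over an AppEnum, as the \(.$session) phrases above already do, so one entry covers several spoken variants. The enum below is hypothetical:

    import AppIntents

    // Each case becomes a phrase variant that Siri can match for the
    // single parameterized AppShortcut above.
    enum MeditationSession: String, AppEnum {
        case calm, focus, sleep

        static var typeDisplayRepresentation: TypeDisplayRepresentation = "Session"
        static var caseDisplayRepresentations: [MeditationSession: DisplayRepresentation] = [
            .calm: "Calm",
            .focus: "Focus",
            .sleep: "Sleep"
        ]
    }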
Replies: 3 · Boosts: 1 · Views: 878 · Activity: Dec ’25
How To Set Custom Icon for Control Center Shortcuts
How do I set a custom icon for an app control that appears in Control Center shortcuts (swipe down on iOS)? Where is the documentation for the image's size, format, and where to put it? Thank you.

Working code (SF Symbol):

    import Foundation
    import AppIntents
    import SwiftUI
    import WidgetKit

    // MARK: - Open App Control
    @available(iOS 18.0, *)
    struct OpenAppControl: ControlWidget {
        let kind: String = "OpenAppControl"

        var body: some ControlWidgetConfiguration {
            StaticControlConfiguration(kind: kind) {
                ControlWidgetButton(action: OpenAppIntent()) {
                    Label("Open The App", systemImage: "clock.fill")
                }
            }
            .displayName("Open The App") // This appears in the shortcuts view
        }
    }

[Sample image attached.] These apps use their own image. How can I use my own image?
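A minimal sketch of one option, assuming an image asset named "MyControlIcon" in the widget extension's asset catalog (the name is hypothetical); note that controls may render the image as a template glyph rather than in full color:

    // Swap the SF Symbol for a bundled asset-catalog image.
    ControlWidgetButton(action: OpenAppIntent()) {
        Label("Open The App", image: "MyControlIcon")
    }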
Replies: 0 · Boosts: 0 · Views: 89 · Activity: Dec ’25
Deliver/bundle entire Shortcut automations with an app
Is there any way to create complete Shortcuts automations and bundle them with my app? Specifically, I would like the user to be able to:

Take a photo and open it with my app, or
Take a screenshot and open it with my app.

Of course, I could offer a Share extension, but going through the Share menu and selecting my app there is time-consuming for the user. I would like the user to be able to configure his or her Action button so that it takes a new picture and opens it with my app right away.

I can, of course, offer the respective App Shortcuts and let the user combine them into a pipeline with the Take Screenshot or Take Photo system actions. However, only power users would do this. Hence, I would like to bundle this complete pipeline with my app, so that the user just has to assign the Action button to this pipeline to use the feature.

How to go about this? I was thinking of exporting the shortcut into a file, bundling it with the app as a resource, and offering it via a Share action for the user to install, or sharing it on iCloud and adding the iCloud link to the UI of my app. What is the recommended approach?
Replies: 0 · Boosts: 0 · Views: 156 · Activity: Dec ’25
Cannot make my app appear in “Share with App” action in Shortcuts – How to allow receiving images from Shortcuts?
Hi, I’m trying to integrate my iOS app with Shortcuts. My goal: in the Shortcuts app, create a shortcut, select an image, and share the image directly to my app for analysis. However, when I try to add the “Share with App” / “Open in App” / “Send to App” action in Shortcuts, my app does NOT appear in the list of available apps. I want my app to be selectable so that Shortcuts can send an image (UIImage / file) to my app.

What I have tried:
My app supports receiving images using UIActivityViewController and a Share extension.
I created an App Intents extension (AppIntent + @Parameter(file)...), but the app still does not appear in Shortcuts’ “Share with App”.
I also checked the Info.plist but didn’t find any permission related to Shortcuts.
The app is installed on the device and works normally.

My questions: What permission, Info.plist entry, or capability is required so that my app becomes visible in the Shortcuts app as a target for image sharing? More specifically:
Which extension type should be used for receiving images from Shortcuts: an App Intents extension, a Share extension, or an Intents extension?
Do I need a specific NSExtensionPointIdentifier for Shortcuts integration?
Do I need to declare a custom Uniform Type Identifier (UTI) or add supported content types so Shortcuts knows my app can handle images?
Are there any required entitlements / capabilities to make the app appear inside the “Share with App” action?

Goal summary: Shortcuts → pick image → send to my app → my app receives the image and processes it. But currently my app cannot be selected in Shortcuts. Thanks in advance for any guidance!
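For context, a hedged sketch of an intent that accepts image files, which should make it available as an action in the Shortcuts library and able to receive the output of an image-picking action; the names are illustrative, and this is the action-library route rather than the "Share with App" entry itself:

    import AppIntents
    import UIKit
    import UniformTypeIdentifiers

    // An action that takes an image file as input inside a shortcut.
    struct AnalyzeImageIntent: AppIntent {
        static var title: LocalizedStringResource = "Analyze Image"

        @Parameter(title: "Image", supportedContentTypes: [.image])
        var image: IntentFile

        func perform() async throws -> some IntentResult {
            // IntentFile carries the file's data; decode it as an image.
            guard let uiImage = UIImage(data: image.data) else {
                throw AnalyzeError.notAnImage
            }
            // ... analysis would go here ...
            _ = uiImage
            return .result()
        }

        enum AnalyzeError: Error { case notAnImage }
    }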
Replies: 3 · Boosts: 0 · Views: 243 · Activity: Dec ’25
Customizing section titles in the Shortcuts app (Favorites / Recents style)
Hi everyone, I’m currently experimenting with App Intents and trying to customize the section titles that appear above groups of intents inside the Shortcuts app UI. For example, in the Phone shortcut, there are built-in sections such as “Call Favorite Contacts” and “Call Recent Contacts” (see screenshot attached). Apple’s own system apps, such as Phone, Notes, and FaceTime, seem to have fully custom section headers inside Shortcuts, with icons.

My question: is there an API available that allows third-party apps to define these titles (or sections) programmatically? I went through the App Intents and Shortcuts documentation but couldn’t find anything. From what I can tell, this might be private / Apple-only behavior, but I’d be happy to know if anybody has found a supported solution or a recommended alternative. Has anyone dealt with this before? Thanks!

Mickaël 🇫🇷
Replies: 3 · Boosts: 0 · Views: 280 · Activity: Dec ’25
OSLog is not working when launching the app with Siri.
I am implementing an AppIntent in my application as follows:

    // MARK: - SceneDelegate
    var window: UIWindow?
    private var observer: NSObjectProtocol?

    func scene(_ scene: UIScene, willConnectTo session: UISceneSession, options connectionOptions: UIScene.ConnectionOptions) {
        guard let windowScene = (scene as? UIWindowScene) else { return }

        // Set up the window
        window = UIWindow(windowScene: windowScene)
        let viewController = ViewController()
        window?.rootViewController = viewController
        window?.makeKeyAndVisible()

        setupUserDefaultsObserver()
        checkShortcutLaunch()
    }

    private func setupUserDefaultsObserver() {
        // Use NotificationCenter to receive notifications.
        NotificationCenter.default.addObserver(
            forName: NSNotification.Name("ShortcutTriggered"),
            object: nil,
            queue: .main
        ) { notification in
            if let userInfo = notification.userInfo,
               let appName = userInfo["appName"] as? String {
                print("📱 Notification received - app is launched: \(appName)")
            }
        }
    }

    private func checkShortcutLaunch() {
        if let appName = UserDefaults.standard.string(forKey: "shortcutAppName") {
            print("🚀 App is opened from a Shortcut with the app name: \(appName)")
        }
    }

    func sceneDidDisconnect(_ scene: UIScene) {
        if let observer = observer {
            NotificationCenter.default.removeObserver(observer)
        }
    }

    // MARK: - App Intent
    struct StartAppIntent: AppIntent {
        static var title: LocalizedStringResource = "Start App"
        static var description = IntentDescription("Launch the application with the command")
        static var openAppWhenRun: Bool = true

        @MainActor
        func perform() async throws -> some IntentResult {
            UserDefaults.standard.set("appName", forKey: "shortcutAppName")
            UserDefaults.standard.set(Date(), forKey: "shortcutTimestamp")
            return .result()
        }
    }

    // MARK: - App Shortcuts Provider
    struct AppShortcutsProvider: AppShortcutsProvider {
        static var appShortcuts: [AppShortcut] {
            AppShortcut(
                intent: StartAppIntent(),
                phrases: [
                    "let start \(.applicationName)",
                ],
                shortTitle: "Start App",
                systemImageName: "play.circle.fill"
            )
        }
    }

The app works fine when started from a shortcut, but when started with Siri the log is not printed. I tried adding code that shows a dialog when receiving the notification from UserDefaults, and the dialog still shows, so the problem seems specific to log printing when launching with Siri. If I sleep for 0.5 s in the perform function, the log is printed normally:

    try? await Task.sleep(nanoseconds: 500_000_000) // 0.5 seconds

I have read some topics saying that when using Siri, the intent runs completely separately and only returns the result to Siri, never entering the main app. But when openAppWhenRun is set to true, it must enter the main app, right? Is there any way to find the cause and fix this problem completely?
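As a side note on the logging itself, a hedged sketch: print output is generally only visible while the process is attached to the Xcode debugger, so os.Logger tends to be more reliable for inspecting intent execution in Console.app. The subsystem string below is illustrative:

    import os

    // Unified logging is visible in Console.app even when the app was not
    // launched from Xcode; filter by the subsystem string.
    private let logger = Logger(subsystem: "com.example.myapp", category: "AppIntent")

    @MainActor
    func perform() async throws -> some IntentResult {
        logger.info("StartAppIntent performed")
        UserDefaults.standard.set("appName", forKey: "shortcutAppName")
        return .result()
    }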
Replies: 7 · Boosts: 0 · Views: 322 · Activity: Dec ’25