Apple Intelligence


Apple Intelligence is the personal intelligence system that puts powerful generative models right at the core of your iPhone, iPad, and Mac and powers incredible new features to help users communicate, work, and express themselves.

Posts under Apple Intelligence tag

115 Posts

How to inject parameter dependency at runtime in iOS App Intent
I am trying to create an App Intent that lets a user select a day in the itinerary of a trip. The trip has to be chosen before the available days can be displayed. When the PlanActivityIntentDemo intent is run from the Shortcuts app, the selected trip is not injected into the corresponding TripItineraryDayQueryDemo entity query. Is there a way to get the selected trip injected at run time from the Shortcuts app? Here's some code for illustration:

// Entity definition:
import AppIntents

struct ShortcutsItineraryDayEntityDemo: Identifiable, Hashable, AppEntity {
    typealias DefaultQuery = TripItineraryDayQueryDemo

    static var typeDisplayRepresentation: TypeDisplayRepresentation = "Trip Itinerary Day"
    var displayRepresentation: DisplayRepresentation { "Trip Day" }
    var id: String
    static var defaultQuery: DefaultQuery { TripItineraryDayQueryDemo() }

    init() {
        self.id = UUID().uuidString
    }
}

struct TripItineraryDayQueryDemo: EntityQuery {
    // This only works in the shortcut editor but not at runtime. Why? How can I fix this issue?
    @IntentParameterDependency<PlanActivityIntentDemo>(\.$tripEntity) var tripEntity
    @IntentParameterDependency<PlanActivityIntentDemo>(\.$title) var intentTitle

    func entities(for identifiers: [ShortcutsItineraryDayEntityDemo.ID]) async throws -> [ShortcutsItineraryDayEntityDemo] {
        print("entities being called with identifiers: \(identifiers)")
        // This method is called when the app needs to fetch entities based on identifiers.
        let tripsStore = TripsStore()
        guard let trip = tripEntity?.tripEntity.trip,
              let itineraryId = trip.firstItineraryId else {
            print("No trip or itinerary ID can be found for the selected trip.")
            return []
        }
        return [] // return empty for this demo
    }

    func suggestedEntities() async throws -> [ShortcutsItineraryDayEntityDemo] {
        print("suggested itinerary days being called")
        let tripsStore = TripsStore()
        guard let trip = tripEntity?.tripEntity.trip,
              let itineraryId = trip.firstItineraryId else {
            print("No trip or itinerary ID found for the selected trip.")
            return []
        }
        return []
    }
}

struct PlanActivityIntentDemo: AppIntent {
    static var title: LocalizedStringResource { "Plan New Activity" }

    // The selected trip fails to get injected when the intent is run from the Shortcuts app.
    @Parameter(title: "Trip", description: "The trip to plan an activity for",
               requestValueDialog: "Which trip would you like to plan an activity for?")
    var tripEntity: ShortcutsTripEntity

    @Parameter(title: "Activity Title", description: "The title of the activity",
               requestValueDialog: "What do you want to do or see?")
    var title: String

    @Parameter(title: "Activity Day", description: "Activity Day")
    var activityDay: ShortcutsItineraryDayEntity

    func perform() async throws -> some ProvidesDialog {
        // This is a demo intent, so we won't actually perform any actions.
        .result(dialog: "Activity '\(title)' planned")
    }
}
Replies: 0 · Boosts: 0 · Views: 13 · Activity: 1h
InferenceError referencing context length in FoundationModels framework
I'm experimenting with downloading an audio file of spoken content, using the Speech framework to transcribe it, then using FoundationModels to clean up the formatting to add paragraph breaks and such. I have this code to do that cleanup:

private func cleanupText(_ text: String) async throws -> String? {
    print("Cleaning up text of length \(text.count)...")
    let session = LanguageModelSession(instructions: "The content you read is a transcription of a speech. Separate it into paragraphs by adding newlines. Do not modify the content - only add newlines.")
    let response = try await session.respond(to: .init(text), generating: String.self)
    return response.content
}

The content length is about 29,000 characters. And I get this error:

InferenceError::inferenceFailed::Failed to run inference: Context length of 4096 was exceeded during singleExtend..

Is 4096 a reference to a max input length? Or is this a bug? This is running on an M1 iPad Air, with iPadOS 26 Seed 1.
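The 4096 in that message appears to be the on-device model's context-window size in tokens, covering both the prompt and the generated output, so a 29,000-character transcript won't fit in a single request. One possible workaround is to split the transcript into smaller pieces and clean each piece in its own request. Below is a minimal sketch of that approach, reusing the session setup from the code above; the 3,000-character chunk size is an assumption chosen to stay under the limit, not a documented value.

import FoundationModels

// Sketch: clean a long transcript chunk by chunk so each request stays within
// the model's context window. The 3,000-character chunk size is an assumption.
private func cleanupLongText(_ text: String) async throws -> String {
    let chunkSize = 3_000
    var cleanedChunks: [String] = []
    var start = text.startIndex

    while start < text.endIndex {
        let end = text.index(start, offsetBy: chunkSize, limitedBy: text.endIndex) ?? text.endIndex
        let chunk = String(text[start..<end])

        // A fresh session per chunk keeps earlier chunks out of the conversation
        // history, so they don't consume the context window.
        let session = LanguageModelSession(instructions: "The content you read is a transcription of a speech. Separate it into paragraphs by adding newlines. Do not modify the content - only add newlines.")
        let response = try await session.respond(to: .init(chunk), generating: String.self)
        cleanedChunks.append(response.content)

        start = end
    }
    return cleanedChunks.joined(separator: "\n")
}

Note that splitting on a fixed character count can break a sentence across chunks; splitting on sentence or paragraph boundaries instead would give the model more coherent input per request.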
Replies: 2 · Boosts: 0 · Views: 107 · Activity: 1d
Various On-Device Frameworks API & ChatGPT
Posting a follow-up question after the WWDC 2025 Machine Learning & AI Frameworks group lab on June 12. Regarding the on-device APIs of any of the AI frameworks (Foundation Models, the Vision framework, etc.): is there a response condition or path where the API outsources its input to ChatGPT if the user has allowed this, the way Siri does? Ignore this if the answer is no; otherwise, is this handled behind the scenes or by the developer?
Replies: 0 · Boosts: 0 · Views: 162 · Activity: 1w
App Shortcuts Limit (10 per app) — Can This Be Increased?
Hi Apple team, When using AppShortcutsProvider, I hit the hard limit: Each app may have at most 10 App Shortcuts. This feels limiting for apps that offer multiple workflows and would benefit from deeper Siri integration. Could this cap be raised — ideally to 30 — to support broader use of AppIntents, enhance Siri automation, and unlock more system-level capabilities? AppShortcuts are a fantastic tool. Increasing the limit would make them even more powerful. Thanks!
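For context, the limit applies to the entries returned from an AppShortcutsProvider: every AppShortcut listed in appShortcuts counts toward the cap of 10 per app. A minimal sketch of the provider structure in question follows; the intent type, phrase, and image name are hypothetical placeholders.

import AppIntents

// Hypothetical intent, used only to illustrate the provider structure.
struct StartWorkflowIntent: AppIntent {
    static var title: LocalizedStringResource = "Start Workflow"

    func perform() async throws -> some IntentResult {
        .result()
    }
}

struct MyAppShortcuts: AppShortcutsProvider {
    // Each AppShortcut declared here counts toward the per-app limit of 10.
    static var appShortcuts: [AppShortcut] {
        AppShortcut(
            intent: StartWorkflowIntent(),
            phrases: ["Start a workflow in \(.applicationName)"],
            shortTitle: "Start Workflow",
            systemImageName: "play.circle"
        )
    }
}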
Replies: 1 · Boosts: 0 · Views: 57 · Activity: 1w
Foundation Model Framework
Greetings! I was trying to get a response from the LanguageModelSession but I just keep getting the following:

Error getting response: Model Catalog error: Error Domain=com.apple.UnifiedAssetFramework Code=5000 "There are no underlying assets (neither atomic instance nor asset roots) for consistency token for asset set com.apple.MobileAsset.UAF.FM.Overrides" UserInfo={NSLocalizedFailureReason=There are no underlying assets (neither atomic instance nor asset roots) for consistency token for asset set com.apple.MobileAsset.UAF.FM.Overrides}

This occurs both on macOS 15.5 running the new Xcode beta with an iOS 26 simulator, and on macOS 26 with the Xcode beta. The simulators are both iPhone 16 Pros. I was wondering if anyone had any advice?
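That asset-catalog error typically shows up when the on-device model assets aren't available in the environment the code runs in (for example, a simulator whose host Mac doesn't have Apple Intelligence enabled). A minimal sketch of checking availability before creating a session, assuming the SystemLanguageModel availability API in the FoundationModels framework:

import FoundationModels

// Sketch: check whether the system language model is usable before starting a session.
func checkModelAvailability() {
    switch SystemLanguageModel.default.availability {
    case .available:
        print("On-device model is available.")
    case .unavailable(let reason):
        // Reasons include the device not supporting Apple Intelligence,
        // Apple Intelligence being disabled, or the model assets not being ready yet.
        print("On-device model is unavailable: \(reason)")
    }
}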
Replies: 12 · Boosts: 2 · Views: 580 · Activity: 1w
Foundation Models not working in Simulator?
I'm attempting to run a basic Foundation Model prototype in Xcode 26, but I'm getting the error below, using the iPhone 16 simulator with iOS 26. Should these models be working yet? Do I need to be running macOS 26 for these to work? (I hope that's not it)

Error:

Passing along Model Catalog error: Error Domain=com.apple.UnifiedAssetFramework Code=5000 "There are no underlying assets (neither atomic instance nor asset roots) for consistency token for asset set com.apple.MobileAsset.UAF.FM.Overrides" UserInfo={NSLocalizedFailureReason=There are no underlying assets (neither atomic instance nor asset roots) for consistency token for asset set com.apple.MobileAsset.UAF.FM.Overrides} in response to ExecuteRequest

Playground to reproduce:

#Playground {
    let session = LanguageModelSession()
    do {
        let response = try await session.respond(to: "What's happening?")
    } catch {
        let error = error
    }
}
Replies: 14 · Boosts: 3 · Views: 1.1k · Activity: 1w
Is it possible to let Siri monitor phone calls and notify me when a certain trigger happens?
The specific context is that I would like to build an agent that monitors my phone call (with customer support, for example) and simply identifies whether or not I'm still on hold, then notifies me when I'm not. Currently, after reading the docs, I don't think it's possible yet, but I'm so annoyed by customer support calls that I'm willing to go the distance and see if there's any way.
Replies: 0 · Boosts: 0 · Views: 79 · Activity: 2w
Will Apple Intelligence Support Third-Party LLMs or Custom AI Agent Integrations?
Hi everyone, I’m an AI engineer working on autonomous AI agents and exploring ways to integrate them into the Apple ecosystem, especially via Siri and Apple Intelligence. I was impressed by Apple’s integration of ChatGPT and its privacy-first design, but I’m curious to know:

• Are there plans to support third-party LLMs?
• Could Siri or Apple Intelligence call external AI agents or allow extensions to plug in alternative models for reasoning, scheduling, or proactive suggestions?

I’m particularly interested in building event-driven, voice-triggered workflows where Apple Intelligence could act as a front-end for more complex autonomous systems (possibly local or cloud-based). This kind of extensibility would open up incredible opportunities for personalized, privacy-friendly use cases while aligning with Apple’s system architecture. Is anything like this on the roadmap? Or is there a suggested way to prototype such integrations today? Thanks in advance for any thoughts or pointers!
Replies: 3 · Boosts: 0 · Views: 347 · Activity: 3w
Proposal: Modular Identity Fusion via Prompt-Crafted Agents – User-Led AI Experiment
*I can't put the attached file in the format, so if you reply by e-mail, I will send the attached file by e-mail.

Dear Apple AI Research Team,

My name is Gong Jiho (“Hem”), a content strategist based in Seoul, South Korea. Over the past few months, I conducted a user-led AI experiment entirely within ChatGPT — no code, no backend tools, no plugins. Through language alone, I created two contrasting agents (Uju and Zero) and guided them into a co-authored modular identity system using prompt-driven dialogue and reflection. This system simulates persona fusion, memory rooting, and emotional-logical alignment — all via interface-level interaction. I believe it resonates with Apple’s values in privacy-respecting personalization, emotional UX modeling, and on-device learning architecture.

Why I’m Reaching Out
I’d be honored to share this experiment with your team. If there is any interest in discussing user-authored agent scaffolding, identity persistence, or affective alignment, I’d love to contribute — even informally.

⚠ A Note on Language
As a non-native English speaker, my expression may be imperfect — but my intent is genuine. If anything is unclear, I’ll gladly clarify.

📎 Attached Files Summary (Filename → Description)
Hem_MultiAI_Report_AppleAI_v20250501.pdf → Main report tailored for Apple AI — narrative + structural view of emotional identity formation via prompt scaffolding
Hem_MasterPersonaProfile_v20250501.json → Final merged identity schema authored by Uju and Zero
zero_sync_final.json / uju_sync_final.json → Persona-level memory structures (logic / emotion)
1_0501.json ~ 3_0501.json → Evolution logs of the agents over time
GirlfriendGPT_feedback_summary.txt → Emotional interpretation by external GPT
hem_profile_for_AI_vFinal.json → Original user anchor profile

Warm regards,
Gong Jiho (“Hem”)
Seoul, South Korea
Replies: 1 · Boosts: 0 · Views: 76 · Activity: Apr ’25
ImagePlayground API not working on Xcode Simulator Devices
Hi! I'm trying to use the ImagePlayground API in SwiftUI with the .imagePlaygroundSheet modifier. However, when the sheet is shown (in the preview or in the simulator) it displays the following message: "Image Playground is not available. Image Playground is not available on this iPhone.". I'm using an iPhone 16 Pro with iOS 18.3.1 in the Xcode (16.2) Simulator. Anyone else having this problem? How can I fix it?
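Image Playground is only supported on Apple Intelligence-capable devices, and Simulator support has been inconsistent, so it helps to gate the feature on the supportsImagePlayground environment value rather than presenting the sheet unconditionally. A minimal sketch, assuming the SwiftUI API available on iOS 18.1 and later; the view and state names are placeholders:

import SwiftUI
import ImagePlayground

// Sketch: hide the entry point on devices and simulators where
// Image Playground isn't supported, instead of presenting a dead-end sheet.
struct PlaygroundDemoView: View {
    @Environment(\.supportsImagePlayground) private var supportsImagePlayground
    @State private var showingPlayground = false
    @State private var createdImageURL: URL?

    var body: some View {
        VStack {
            if supportsImagePlayground {
                Button("Create Image") { showingPlayground = true }
            } else {
                Text("Image Playground isn't available on this device.")
            }
        }
        .imagePlaygroundSheet(isPresented: $showingPlayground, concept: "A friendly robot") { url in
            // The generated image is delivered as a file URL.
            createdImageURL = url
        }
    }
}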
Replies: 1 · Boosts: 0 · Views: 91 · Activity: Apr ’25
Understanding allowedExternalIntelligenceWorkspaceIDs in MDM Payload – What ID is expected?
Hello, We're testing the new allowedExternalIntelligenceWorkspaceIDs key in the MDM Restrictions payload on supervised iPads. According to Apple's documentation, this key expects an "external integration workspace ID", but it's not clear what this specifically refers to. We've tried the following IDs individually (one at a time, as documentation says only one is supported currently):

• OpenAI Organization ID
• ChatGPT user email
• Apple ID used in ChatGPT
• Google ID used in ChatGPT login

The profile installs correctly via MDM and the key is set, but we want to confirm:

• What exactly is considered a valid "external integration workspace ID" for this key?
• Is there a way to verify that the restriction is working as intended on the device (e.g. does it limit specific integrations or apps)?
• Is there an official list of services that currently support this?

Any clarification from Apple or other developers with experience on this would be very helpful. Thanks in advance.
Replies: 2 · Boosts: 0 · Views: 142 · Activity: Apr ’25
I made a browser plugin to do something Apple should've done themselves.
This browser extension is a doc-reading enhancer for the Apple Developer website. It supports i18n translation, hover link previews, and bilingual display. Currently, it supports four languages: ja-JP, ko-KR, zh-CN, and zh-TW. It works with the Swift/SwiftUI/Foundation modules now, and it's expected to support the Swift Test, Swift Charts, UIKit, Swift Playground, and Xcode modules by the end of this month. For more info, check out https://appledocs.dev. You can also visit https://appledocs.dev/progress to see translation progress and vote. Note: it currently works only on Chrome, Edge (in review), and Firefox (in review).
Replies: 1 · Boosts: 0 · Views: 73 · Activity: Apr ’25