No matter what, LanguageModelSession always returns very lengthy, verbose responses. I set the maximumResponseTokens option to various small numbers, but it doesn't appear to have any effect. I've even used instructions asking it to keep responses between 3 and 8 words, but it still returns multiple paragraphs. Is there a way to manage LLM response length? Thanks.
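For reference, here is roughly what I'm doing, as a simplified sketch of my setup (not the full app):

import FoundationModels

// Roughly what I'm trying: short-answer instructions plus a token cap via GenerationOptions.
func shortAnswer(to prompt: String) async throws -> String {
    let session = LanguageModelSession(
        instructions: "Answer in 3 to 8 words. Do not elaborate.")
    let options = GenerationOptions(maximumResponseTokens: 30)
    let response = try await session.respond(to: prompt, options: options)
    return response.content
}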
Foundation Models
Discuss the Foundation Models framework, which provides access to Apple’s on-device large language model that powers Apple Intelligence, to help you perform intelligent tasks specific to your app.
I am calling into an app extension from a Safari Web Extension (sendNativeMessage, which in turn results in a call to NSExtensionRequestHandling’s beginRequest). My Safari extension aims to make use of the new foundation models for some of the features it provides.
In my testing, I hit the rate limit by sending 4 requests, waiting 30 seconds between each. This makes the FoundationModels framework (which would otherwise serve my use case perfectly well) unusable in this context, because the model is called in response to user input, and this rate of user input is perfectly plausible in a real-world scenario.
The error thrown as a result of the rate limit is "Safety guardrail was triggered after consecutive failures during streaming.", but the system logs in Console.app show that the rate limit is the real culprit.
My suggestions:
Please introduce sensible rate limits for app extensions, through an entitlement if need be. Even a limit of one request every couple of seconds would already fix the issue for me.
Please document the rate limit.
Please make the thrown error reflect that it is the result of a rate limit and not a generic guardrail violation. IMPORTANT: please indicate in the thrown error when it is safe to try again.
Filed a feedback here: FB18332004
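In the meantime, the workaround I'm experimenting with is serializing requests myself and spacing them out. This is only a sketch; the five-second interval is a guess, since the actual limit is undocumented:

import Foundation
import FoundationModels

// Run requests one at a time and wait between attempts, since the thrown error
// doesn't say when it is safe to retry.
actor ThrottledModelClient {
    private let session = LanguageModelSession()
    private var lastRequest = Date.distantPast
    private let minimumInterval: TimeInterval = 5   // guess; the real limit is undocumented

    func respond(to prompt: String) async throws -> String {
        let elapsed = Date().timeIntervalSince(lastRequest)
        if elapsed < minimumInterval {
            try await Task.sleep(for: .seconds(minimumInterval - elapsed))
        }
        lastRequest = Date()
        return try await session.respond(to: prompt).content
    }
}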
Topic: Machine Learning & AI
SubTopic: Foundation Models
It seems there was an undocumented change that made the Transcript.init(entries: [Transcript.Entry]) initializer private, which broke my application; it relies on (manual) reconstruction of Transcript entries.
It worked fine on beta 1; on beta 2 there's this error:
dyld[72381]: Symbol not found: _$s16FoundationModels10TranscriptV7entriesACSayAC5EntryOG_tcfC
Referenced from: <44342398-591C-3850-9889-87C9458E1440> /Users/mika/experiments/apple-on-device-ai/fm
Expected in: <66A793F6-CB22-3D1D-A560-D1BD5B109B0D> /System/Library/Frameworks/FoundationModels.framework/Versions/A/FoundationModels
Is this part of an API transition? If so, Apple, please update your documentation.
Topic: Machine Learning & AI
SubTopic: Foundation Models
I was able to open a new project and play around with the Foundation Model, but when I dropped this code into a production app (with a lot of files), I'm running into safety guardrail errors for this very small prompt. Specifically, it's "Safety guardrail was triggered after consecutive failures during streaming." Does it have something to do with the size of the app? I don't know what else to try to get it to work.
import FoundationModels
import Playgrounds

@available(iOS 26.0, *)
#Playground {
    Task {
        do {
            let session = LanguageModelSession()
            let prompt = "Write a short story about a talking cat."
            let response = try await session.respond(to: prompt)
            print(response)
        } catch {
            print("Error: \(error)")
        }
    }
}
Topic: Machine Learning & AI
SubTopic: Foundation Models
Hello Team,
I'm currently working on a proof of concept using Apple's Foundation Model for a RAG-based chat system on my MacBook Pro with the M1 Max chip.
Environment details:
macOS: 26.0 Beta
Xcode: 26.0 beta 2 (17A5241o)
Target platform: iPad (as the iPhone simulator does not support Foundation models)
While testing, even with very small input prompts to the LLM, I intermittently encounter the following error:
InferenceError::inference-Failed::Failed to run inference: Context length of 4096 was exceeded during singleExtend.
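The recovery I'm considering is catching this and retrying once with a fresh session; this is just a sketch, and the error-case name reflects my reading of LanguageModelSession.GenerationError rather than confirmed behavior on this beta:

import FoundationModels

// If the 4096-token context window fills up, drop the session and retry once
// with a fresh, empty transcript.
func respondWithRecovery(to prompt: String) async throws -> String {
    let session = LanguageModelSession()
    do {
        return try await session.respond(to: prompt).content
    } catch LanguageModelSession.GenerationError.exceededContextWindowSize {
        // Prior turns are lost here; carry over only what the retry really needs in the prompt.
        let freshSession = LanguageModelSession()
        return try await freshSession.respond(to: prompt).content
    }
}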
Has anyone else experienced this issue? Are there known limitations or workarounds for context length handling in this setup?
Any insights would be appreciated.
Thank you!
Topic: Machine Learning & AI
SubTopic: Foundation Models
I have been using "apple" to test Foundation Models.
I thought this was local, but today the answer changed: halfway through an explanation, a guardrailViolation error was suddenly triggered! And since yesterday, all references to "Apple II" and "Apple III" now refer me to consult apple.com!
Does the Foundation Models framework connect to the Internet for answers? Using beta 3.
Topic: Machine Learning & AI
SubTopic: Foundation Models
When using Foundation Models, is it possible to ask the model to produce output in a specific language, apart from giving an instruction like "Provide answers in <language>."? (I tried that and it kind of worked, but it seems fragile.)
I haven't noticed an API to do so and have a use-case where the output should be in a user-selectable language that is not the current system language.
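For now, the workaround I'm using is to bake the target language into the instructions; this is only a sketch, and the locale-name lookup is my own approach rather than anything from the framework:

import Foundation
import FoundationModels

// Works, but feels fragile: put the target language into the instructions.
// The language is derived from a user-selectable Locale.Language, not the system locale.
func makeSession(outputLanguage: Locale.Language) -> LanguageModelSession {
    let languageName = Locale(identifier: "en").localizedString(
        forLanguageCode: outputLanguage.minimalIdentifier) ?? "English"
    return LanguageModelSession(
        instructions: "Always respond in \(languageName), regardless of the prompt's language.")
}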
I'm working on a to-do list app that uses SpeechTranscriber and the Foundation Models framework to transcribe a user's voice into text and create to-do items based on it.
After about 30 minutes of looking at my code, I couldn't figure out why I was failing to generate a to-do for "I need to go to Six Flags Great America tomorrow at 3pm." It turns out I was consistently triggering the Foundation Models safety filter for unsafe content ("May contain unsafe content").
Lesson learned: consider comprehensively logging Foundation Models error states to quickly identify when safety filters are unexpectedly triggered.
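For anyone hitting the same thing, this is roughly the logging I added afterwards (a sketch; the subsystem name is just an example):

import FoundationModels
import os

// Log which GenerationError case fired so unexpected guardrail hits stand out.
let logger = Logger(subsystem: "com.example.todos", category: "FoundationModels")

func generateToDoText(from transcript: String, session: LanguageModelSession) async -> String? {
    do {
        return try await session.respond(to: "Create a to-do item from: \(transcript)").content
    } catch let error as LanguageModelSession.GenerationError {
        switch error {
        case .guardrailViolation:
            logger.error("Safety guardrail triggered for input: \(transcript, privacy: .private)")
        case .exceededContextWindowSize:
            logger.error("Context window exceeded")
        default:
            logger.error("Generation failed: \(error.localizedDescription)")
        }
        return nil
    } catch {
        logger.error("Unexpected error: \(error.localizedDescription)")
        return nil
    }
}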
Topic: Machine Learning & AI
SubTopic: Foundation Models
I have a mac (M4, MacBook Pro) running Tahoe 26.0 beta. I am running Xcode beta.
I can run code that uses the LLM in a #Preview { }.
But when I try to run the same code in the simulator, I get the 'device not ready' error and I see the following in the Settings app.
Is there anything I can do to get the simulator past this point and allow me to test Apple's LLM on it?
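In case it helps narrow things down, this is the availability check I run first, sketched from my reading of SystemLanguageModel (the unavailability reasons are my assumption and may differ between seeds):

import FoundationModels

// Check availability before creating a session, so "device not ready" surfaces
// as a reason you can branch on instead of a runtime error.
switch SystemLanguageModel.default.availability {
case .available:
    print("Model ready")
case .unavailable(.appleIntelligenceNotEnabled):
    print("Apple Intelligence is not enabled in Settings")
case .unavailable(.modelNotReady):
    print("Model assets are still downloading or being prepared")
case .unavailable(.deviceNotEligible):
    print("This device (or simulator configuration) cannot run the on-device model")
case .unavailable(let reason):
    print("Unavailable for another reason: \(reason)")
}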
Hey,
It would be great to have an equivalent of toolCallId for both toolCall and toolResult in the transcript. Otherwise it is hard to connect tool calls with their respective responses when there are multiple parallel calls to the same tool.
Thanks!
Topic: Machine Learning & AI
SubTopic: Foundation Models
My sample app has been working with the following code:
func call(arguments: Arguments) async throws -> ToolOutput {
    var temp: Int
    switch arguments.city {
    case .singapore: temp = Int.random(in: 30..<40)
    case .china: temp = Int.random(in: 10..<30)
    }
    let content = GeneratedContent(temp)
    let output = ToolOutput(content)
    return output
}
However, in 26 beta 5 ToolOutput is no longer available. Please advise what has changed.
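In case it helps, this is the migration I'm guessing at, purely as a hedged sketch: it assumes the newer Tool protocol lets call return a PromptRepresentable value such as String directly, which I have not been able to confirm:

// Hedged sketch, not confirmed API: assumes `call` can now return a
// PromptRepresentable value (such as String) instead of a ToolOutput wrapper.
func call(arguments: Arguments) async throws -> String {
    let temp: Int
    switch arguments.city {
    case .singapore: temp = Int.random(in: 30..<40)
    case .china: temp = Int.random(in: 10..<30)
    }
    return "The temperature is \(temp) degrees Celsius."
}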
Lately I am getting this error.
GenerativeModelsAvailability.Parameters: Initialized with invalid language code: en-GB. Expected to receive two-letter ISO 639 code. e.g. 'zh' or 'en'. Falling back to: en
Does anyone know what this is and how it can be resolved? The error does not crash the app.
Hello, I was trying to test out the Foundation Model; however, it says model assets are unavailable. I got my MacBook M1 back in China when I was living there. Is this due to a region lock?
Hello
I’m experimenting with Apple’s on‑device language model via the FoundationModels framework in Xcode (using LanguageModelSession in my code). I’d like to confirm a few points:
• Is the language model provided by FoundationModels designed and trained by Apple? Or is it based on an open‑source model?
• Is this on‑device model available on iOS (and iPadOS), or is it limited to macOS?
• When I write code in Xcode, is code completion powered by this same local model? If so, why isn’t the same model available in the left‑hand chat sidebar in Xcode (so that I can use it there instead of relying on ChatGPT)?
• Can I grant this local model access to my personal data (photos, contacts, SMS, emails) so it can answer questions based on that information? If yes, what APIs, permission prompts, and privacy constraints apply?
Thanks
Hello,
I have created this basic swift program:
let session = LanguageModelSession(
    model: .default,
    instructions: "bla bla bla.")
I want to understand what I can put in the model parameter (instead of .default).
How can I choose between the on-device local model (.default, I suppose) and Apple's private cloud model (or any other)?
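For context, this is my current understanding of what the model parameter accepts, written as a sketch with my assumptions called out (I have not found any cloud option exposed here):

import FoundationModels

// Assumption: the framework only exposes the on-device system model, optionally
// specialized for a use case; no Private Cloud Compute model appears to be available here.
let defaultSession = LanguageModelSession(
    model: SystemLanguageModel.default,                 // what .default resolves to
    instructions: "bla bla bla.")

let taggingSession = LanguageModelSession(
    model: SystemLanguageModel(useCase: .contentTagging),
    instructions: "Tag the provided text.")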
Thanks
I have an app that stores lots of data that is of interest to the user. Analogies would be the Photos apps or the Health app.
I'm trying to use the Foundation Models framework to allow users to surface information they find interesting using natural language, for example, "Tell me about the widgets from yesterday" or "Tell me about the widgets for the last 3 days". Specifically, I'm trying to get a date range passed down to the Tool so that I can pull the relevant widgets from the database in the call function.
What is the right way to set up the Arguments to get at a date range?
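Here is the direction I'm leaning, as a sketch built on my assumptions: since I haven't found direct Date support in @Generable, the range comes in as ISO 8601 strings that the tool parses itself, and the database lookup is a hypothetical placeholder:

import Foundation
import FoundationModels

// Sketch: the model fills in ISO 8601 strings, and the tool converts them to Dates.
// The String return type is also an assumption; earlier seeds used ToolOutput.
struct WidgetQueryTool: Tool {
    let name = "queryWidgets"
    let description = "Fetches the user's widgets stored between two dates."

    @Generable
    struct Arguments {
        @Guide(description: "Start of the date range as ISO 8601, e.g. 2025-07-01T00:00:00Z")
        let startDate: String
        @Guide(description: "End of the date range as ISO 8601, e.g. 2025-07-03T23:59:59Z")
        let endDate: String
    }

    func call(arguments: Arguments) async throws -> String {
        let formatter = ISO8601DateFormatter()
        guard let start = formatter.date(from: arguments.startDate),
              let end = formatter.date(from: arguments.endDate) else {
            return "Could not parse the requested date range."
        }
        // Hypothetical lookup; replace with the real database query.
        return "Found widgets between \(start) and \(end)."
    }
}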
Topic: Machine Learning & AI
SubTopic: Foundation Models
Is it possible to train an Adaptor for the Foundation Models to produce Generable output? If so what would the response part of the training data need to look like? Presumably, under the hood, the model is outputting JSON (or some other similar structure) that can be decoded to a Generable type. Would the response part of the training data for an Adaptor need to be in that structured format?
Topic: Machine Learning & AI
SubTopic: Foundation Models
Is anything configurable for LanguageModelSession.Guardrails besides the default? I'm prototyping a camping app, and it's constantly slamming into guardrail errors when I use the new foundation model interface. Any subjects relating to fishing, survival, etc. won't generate.
For example the prompt "How can I kill deer ticks using a clothing treatment?" returns a generation error.
The results that I get are great when it works, but so far the local model sessions are extremely unreliable.
Topic: Machine Learning & AI
SubTopic: Foundation Models
I'd love to add a feature based on FoundationModels to the Mac Catalyst version of my iOS app. Unfortunately I get an error when importing FoundationModels: No such module 'FoundationModels'.
Documentation says Mac Catalyst is supported: https://developer.apple.com/documentation/foundationmodels
I can create iOS builds using the FoundationModels framework without issues.
Hope this will be fixed soon!
Config:
Xcode 26.0 beta (17A5241e)
macOS 26.0 Beta (25A5279m)
15-inch, M4, 2025 MacBook Air
Topic: Machine Learning & AI
SubTopic: Foundation Models
I'm seeing this error a lot in my console log of my iPhone 15 Pro (Apple Intelligence enabled):
com.apple.modelcatalog.catalog sync: connection error during call: Error Domain=NSCocoaErrorDomain Code=4099 "The connection to service named com.apple.modelcatalog.catalog was invalidated: failed at lookup with error 159 - Sandbox restriction." UserInfo={NSDebugDescription=The connection to service named com.apple.modelcatalog.catalog was invalidated: failed at lookup with error 159 - Sandbox restriction.} reached max num connection attempts: 1
Are there entitlements or permissions I need to enable in Xcode that I might have forgotten?
Code example
Here's how I'm initializing the language model session:
private func setupLanguageModelSession() {
    if #available(iOS 26.0, *) {
        let instructions = """
        my instructions
        """
        do {
            languageModelSession = try LanguageModelSession(instructions: instructions)
            print("Foundation Models language model session initialized")
        } catch {
            print("Error creating language model session: \(error)")
            languageModelSession = nil
        }
    } else {
        print("Device does not support Foundation Models (requires iOS 26.0+)")
        languageModelSession = nil
    }
}