Has Apple made any commitment to versioning the Foundation Models on device? What if you build a feature that works great on 26.0, but they change the model or guardrails in 26.1 and it breaks your feature? Is your only recourse filing Feedback or pulling the feature from the app? Will there be a way to specify a model version, as in all of the server-based LLM provider APIs? If not, it sounds risky to build on.
Foundation Models
Discuss the Foundation Models framework, which provides access to Apple’s on-device large language model that powers Apple Intelligence to help you perform intelligent tasks specific to your app.
According to the Tool documentation, a tool's arguments are specified as a static struct type T, which is passed to tool.call(arguments: T). However, if the arguments aren't known until runtime, is it still possible to create a Tool object with the proper parameters? Say a JSON-style dictionary is passed into the Tool's init function to specify T: is this achievable?
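The direction I've been sketching, assuming the Tool protocol's parameters requirement, DynamicGenerationSchema, and GeneratedContent's value(_:forProperty:) accessor work as currently documented (the tool name and fields below are made up, and note earlier betas had call return ToolOutput rather than a PromptRepresentable like String): declare Arguments as raw GeneratedContent and build the schema at runtime.
import FoundationModels
// A sketch, not a confirmed pattern: a tool whose parameter schema is
// assembled at runtime from field names supplied in, say, a JSON payload.
struct DynamicTool: Tool {
    let name = "lookupRecord"   // hypothetical
    let description = "Looks up a record using runtime-defined fields."
    let fields: [String]        // e.g. parsed from JSON at runtime

    // Using raw GeneratedContent as Arguments avoids a compile-time struct.
    typealias Arguments = GeneratedContent

    var parameters: GenerationSchema {
        let properties = fields.map { field in
            DynamicGenerationSchema.Property(
                name: field,
                schema: DynamicGenerationSchema(type: String.self)
            )
        }
        let root = DynamicGenerationSchema(name: name, properties: properties)
        // Force-try for brevity only; handle the error in real code.
        return try! GenerationSchema(root: root, dependencies: [])
    }

    func call(arguments: GeneratedContent) async throws -> String {
        // Pull each runtime-defined field back out of the generated content.
        let values = try fields.map { try arguments.value(String.self, forProperty: $0) }
        return "Looked up: \(values.joined(separator: ", "))"
    }
}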
Just tried to write a very simple test using Foundation Models, but it gave me an error like this:
"ModelManager received unentitled request. Expected entitlement com.apple.modelmanager.inference
establishment of session failed with Missing entitlement: com.apple.modelmanager.inference"
The simple code is listed below:
let session: LanguageModelSession = LanguageModelSession()
let response = try? await session.respond(to: "What is the capital of France?")
print("Response: (response)")
So what's the problem here?
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Does the Foundation Models framework provide Objective-C-compatible APIs?
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Hello everybody!
I’m encountering an unexpected guardrailViolation error when using Foundation Models on macOS Beta 3 (Tahoe) with an Apple M2 Pro chip. This issue didn’t occur on Beta 1 or Beta 2 using the same codebase.
Reproduction Context
I’m developing an app that leverages Foundation Models for structured generation, paired with a local database tool. After upgrading to macOS Beta 3, I started receiving this error consistently, despite no changes in the generation logic.
To isolate the issue, I opened the official WWDC sample project from “Adding intelligent app features with generative models”, and the same guardrailViolation error appeared without any modifications.
Simplified Working Example
I attempted to narrow down the issue by starting with a minimal prompt structure. This basic case works fine:
import Foundation
import Playgrounds
import FoundationModels
@Generable
struct GeneableLandmark {
    @Guide(description: "Name of the landmark to visit")
    var name: String
}

final class LandmarkSuggestionGenerator {
    var landmarkSuggestion: GeneableLandmark.PartiallyGenerated?
    private var session: LanguageModelSession

    init() {
        self.session = LanguageModelSession(
            instructions: Instructions {
                """
                generate a list of landmarks to visit
                """
            }
        )
    }

    func createLandmarkSuggestion(location: String) async throws {
        let stream = session.streamResponse(
            generating: GeneableLandmark.self,
            options: GenerationOptions(sampling: .greedy),
            includeSchemaInPrompt: false
        ) {
            """
            Generate a list of landmarks to visit in \(location)
            """
        }
        for try await partialResponse in stream {
            landmarkSuggestion = partialResponse
        }
    }
}

#Playground {
    let generator = LandmarkSuggestionGenerator()
    Task {
        do {
            try await generator.createLandmarkSuggestion(location: "New York")
            if let suggestion = generator.landmarkSuggestion {
                print("Suggested landmark: \(suggestion)")
            } else {
                print("No suggestion generated.")
            }
        } catch {
            print("Error generating landmark suggestion: \(error)")
        }
    }
}
But as soon as I use the sample ItineraryPlanner:
#Playground {
    // Example landmark for demonstration
    let exampleLandmark = Landmark(
        id: 1,
        name: "San Francisco",
        continent: "North America",
        description: "A vibrant city by the bay known for the Golden Gate Bridge.",
        shortDescription: "Iconic Californian city.",
        latitude: 37.7749,
        longitude: -122.4194,
        span: 0.2,
        placeID: nil
    )
    let planner = ItineraryPlanner(landmark: exampleLandmark)
    Task {
        do {
            try await planner.suggestItinerary(dayCount: 3)
            if let itinerary = planner.itinerary {
                print("Suggested itinerary: \(itinerary)")
            } else {
                print("No itinerary generated.")
            }
        } catch {
            print("Error generating itinerary: \(error)")
        }
    }
}
The error pops up:
Error generating itinerary: guardrailViolation(FoundationModels.LanguageModelSession.GenerationError.Context(debugDescription: "May contain sensitive or unsafe content", underlyingErrors: [FoundationModels.LanguageModelSession.GenerationError.guardrailViolation(FoundationModels.LanguageModelSession.GenerationError.Context(debugDescription: "May contain unsafe content", underlyingErrors: []))]))
Based on my tests:
The error may not be tied to structure complexity (since more nested structures work)
The issue may stem from the tools or prompt content used inside the ItineraryPlanner
The guardrail sensitivity may have increased or changed in Beta 3, affecting models that worked in earlier betas
Thank you in advance for your help. Let me know if more details or reproducible code samples are needed - I’m happy to provide them.
Best,
Sasha Morozov
I have been using "apple" as a prompt to test Foundation Models.
I thought this was local, but today the answer changed: halfway through an explanation, a guardrailViolation error was suddenly triggered! And since yesterday, every reference to "Apple II" or "Apple III" now refers me to consult apple.com!
Does Foundation Models connect to the Internet for answers? Using beta 3.
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
I'm interested in using Foundation Models to act as an AI support agent for our extensive in-app documentation. We have many pages of in-app documents, which the user can currently search, but it would be great to use Foundation Models to let the user get answers to arbitrary questions.
Is this possible with the current version of Foundation Models? It seems like the way to add new context to the model is with the instructions parameter on LanguageModelSession. As I understand it, the combined instructions and prompt need to consume less than 4096 tokens.
That definitely wouldn't be enough for the amount of documentation I want the agent to be able to refer to. Is there another way of doing this, maybe as a series of recursive queries? If there is a solution based on multiple queries, should I expect this to be fast enough for interactive use?
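In case it's useful context, this is the multi-query shape I had in mind: retrieve with our existing search, then answer only from the retrieved excerpts. A rough sketch, where searchDocumentation(_:) is a hypothetical stand-in for our current in-app search and only the top few chunks are passed so instructions plus prompt stay inside the context window:
import FoundationModels
// Sketch: answer from retrieved excerpts only; searchDocumentation(_:) is hypothetical.
func answer(question: String,
            searchDocumentation: (String) -> [String]) async throws -> String {
    // Keep only the best few chunks so the combined context stays within the window.
    let topChunks = searchDocumentation(question).prefix(3)
    let session = LanguageModelSession(instructions: """
        Answer the user's question using only the documentation excerpts provided. \
        If the excerpts don't contain the answer, say so.
        """)
    let prompt = """
        Documentation excerpts:
        \(topChunks.joined(separator: "\n---\n"))

        Question: \(question)
        """
    return try await session.respond(to: prompt).content
}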
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Apologies if this is obvious to everyone but me... I'm using the Tahoe AI foundation models. When I get an error, I'm trying to handle it properly.
I see the errors described here: https://developer.apple.com/documentation/foundationmodels/languagemodelsession/generationerror/context, as well as in the headers. But all I can figure out how to see is error.localizedDescription which doesn't give me much to go on.
For example, an error's description is:
The operation couldn’t be completed. (FoundationModels.LanguageModelSession.GenerationError error 2.)
That doesn't give me much to go on. How do I get the actual error number/enum value out of this, short of parsing that text to look for the int at the end?
This one is:
case guardrailViolation(LanguageModelSession.GenerationError.Context)
So I'd like to know how to get from the catch for session.respond to something I can act on. I feel like it's there, but I'm missing it.
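For reference, this is the shape I'm imagining, assuming the GenerationError cases in the current headers (the case list below is partial): catch the typed error and switch on it instead of parsing localizedDescription.
import FoundationModels
// Sketch: catch the typed error and switch on its cases directly.
func ask(_ session: LanguageModelSession, _ prompt: String) async {
    do {
        let response = try await session.respond(to: prompt)
        print(response.content)
    } catch let error as LanguageModelSession.GenerationError {
        switch error {
        case .guardrailViolation(let context):
            print("Guardrail violation: \(context.debugDescription)")
        case .exceededContextWindowSize(let context):
            print("Context window exceeded: \(context.debugDescription)")
        case .decodingFailure(let context):
            print("Decoding failure: \(context.debugDescription)")
        default:
            print("Other generation error: \(error)")
        }
    } catch {
        print("Unrelated error: \(error)")
    }
}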
Thanks!
In the Playground I'm trying to bias my language model to use a tool I registered, but I don't see it actually calling the tool. I'm following the developer video on the landmarks itinerary generation tutorial almost verbatim. Is this a prompt engineering thing I'm missing, or is it possible that I'm injecting my tool incorrectly?
Hey
I tried using a few regular expressions, and they all fail with an error:
Unhandled error streaming response: A generation guide with an unsupported pattern was used.
Is there a list of supported pattern features? I don't see one in the docs, even though the guide accepts a Regex.
Anything with e.g. [A-Z] fails.
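Here's a minimal reproduction of what I'm seeing, plus the workaround I'm considering, assuming the .pattern and .anyOf generation guides behave as currently documented (the type and values are purely illustrative):
import FoundationModels
@Generable
struct Order {
    // Reportedly fails with "unsupported pattern" once character classes
    // like [A-Z] are involved:
    @Guide(.pattern(/[A-Z]{3}-[0-9]{4}/))
    var code: String

    // Where the value set is small, a closed vocabulary via .anyOf may be
    // a workaround:
    @Guide(.anyOf(["pending", "shipped", "delivered"]))
    var status: String
}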
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
@Generable
enum Breakfast {
    case waffles
    case pancakes
    case bagels
    case eggs
}

do {
    let session = LanguageModelSession()
    let userInput = "I want something sweet."
    let prompt = "Pick the ideal breakfast for request: \(userInput)"
    let response = try await session.respond(to: prompt, generating: Breakfast.self)
    print(response.content)
} catch {
    print(error)
}
I want to test the @Generable demo but get the error below:
decodingFailure(FoundationModels.LanguageModelSession.GenerationError.Context(debugDescription: "Failed to convert text into into GeneratedContent\nText: waffles", underlyingErrors: [Swift.DecodingError.dataCorrupted(Swift.DecodingError.Context(codingPath: [], debugDescription: "The given data was not valid JSON.", underlyingError: Optional(Error Domain=NSCocoaErrorDomain Code=3840 "Unexpected character 'w' around line 1, column 1." UserInfo={NSJSONSerializationErrorIndex=0, NSDebugDescription=Unexpected character 'w' around line 1, column 1.})))]))
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Almost everywhere else you see Apple Intelligence, you get to select whether it runs on device, on Private Cloud Compute, or via ChatGPT. Is there a way to do that in code with the Foundation Models framework? I searched through the docs and couldn't find anything, but maybe I missed it.
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Hey,
When generating responses with structured output and the non-streaming API, it sometimes takes 3 s and sometimes 10-20 s. I am firing the request repeatedly while testing the app.
Is this by design, or is there any place I can learn more about what contributes to such variation?
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Hey,
I receive GeneratedContent as follows:
let response = try await session.respond(to: "", schema: generationSchema)
And it wraps GeneratedJSON, which seems to be private.
What is the best way to get a string or raw value out of it? I noticed it could theoretically be accessed via transcriptEntries, but that's not ideal.
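Here's what I'm planning to try, assuming the GeneratedContent accessors in the current docs, namely jsonString and value(_:forProperty:); the "name" property below is purely illustrative:
let content = response.content                                   // GeneratedContent
let raw = content.jsonString                                     // serialized JSON text
let name = try content.value(String.self, forProperty: "name")   // hypothetical property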
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
While running the Apple Foundation Model in the iPhone simulator, I got this error:
IPC error: Underlying connection interrupted
What does this mean? Is it related to the Foundation Models framework?
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Hi everyone,
I’m currently exploring the use of Foundation Models on Apple platforms to build a chatbot-style assistant within an app. While the integration part is straightforward using the new FoundationModels APIs, I’m trying to figure out how to control the assistant’s responses more tightly, particularly:
Ensuring the assistant adheres to a specific tone, context, or domain (e.g. hospitality, healthcare, etc.)
Preventing hallucinations or unrelated outputs
Constraining responses based on app-specific rules, structured data, or recent interactions
I’ve experimented with prompt, systemMessage, and few-shot examples to steer outputs, but even with carefully crafted prompts, the model occasionally produces incorrect or out-of-scope responses.
Additionally, when using multiple tools, I'm unsure how best to structure the setup so the model can select the correct pathway/tool and respond appropriately. Is there a recommended approach to guiding the model's decision-making when several tools or structured contexts are involved?
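For context, this is the minimal shape I've been experimenting with for domain scoping via instructions plus a tool (the hotel domain, tool name, and fields are purely illustrative; note earlier betas had Tool.call return ToolOutput instead of a PromptRepresentable like String):
import FoundationModels
// Hypothetical domain-scoped session with one tool.
struct RoomAvailabilityTool: Tool {
    let name = "checkRoomAvailability"
    let description = "Checks whether a room type is available on a given date."

    @Generable
    struct Arguments {
        @Guide(description: "Room type, e.g. single, double, suite")
        var roomType: String
        @Guide(description: "Check-in date in ISO 8601 format")
        var date: String
    }

    func call(arguments: Arguments) async throws -> String {
        // Replace with a real lookup against app data.
        "\(arguments.roomType) rooms are available on \(arguments.date)."
    }
}

let session = LanguageModelSession(
    tools: [RoomAvailabilityTool()],
    instructions: """
        You are a hotel concierge. Answer only questions about this hotel's \
        rooms, amenities, and dining. For anything else, reply that the \
        request is out of scope.
        """
)
let reply = try await session.respond(to: "Is a double free on 2025-07-20?")
The tool's name and description appear to drive the model's routing, so keeping them narrow and unambiguous should help when several tools are registered.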
Looking forward to hearing your thoughts or being pointed toward related WWDC sessions, Apple docs, or sample projects.
Is there anywhere we can reference error codes? I'm getting this error: "The operation couldn’t be completed. (FoundationModels.LanguageModelSession.GenerationError error 4.)" and I have no idea of what it means or what to attempt to fix.
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Tags:
Machine Learning
Create ML
Apple Intelligence
I am writing a custom package wrapping Foundation Models which provides a chain-of-thought with intermittent self-evaluation among other things. At first I was designing this package with the command line in mind, but after seeing how well it augments the models and makes them more intelligent I wanted to try and build a SwiftUI wrapper around the package.
When I started I was using synchronous generation rather than streaming, but to give the best user experience (as I've seen in the WWDC sessions) it is necessary to provide constant feedback to the user that something is happening.
I have created a super simplified example of my setup so it's easier to understand.
First, there is the Reasoning conversation item, which can be converted to an XML representation that is then fed back into the model (I've found XML works best for structured input):
public typealias ConversationContext = XMLDocument

extension ConversationContext {
    public func toPlainText() -> String {
        return xmlString(options: [.nodePrettyPrint])
    }
}

/// Represents a reasoning item in a conversation, which includes a title and reasoning content.
/// Reasoning items are used to provide detailed explanations or justifications for certain decisions or responses within a conversation.
@Generable(description: "A reasoning item in a conversation, containing content and a title.")
struct ConversationReasoningItem: ConversationItem {
    @Guide(description: "The content of the reasoning item, which is your thinking process or explanation")
    public var reasoningContent: String

    @Guide(description: "A short summary of the reasoning content, digestible in an interface.")
    public var title: String

    @Guide(description: "Indicates whether reasoning is complete")
    public var done: Bool
}

extension ConversationReasoningItem: ConversationContextProvider {
    public func toContext() -> ConversationContext {
        // <ReasoningItem title="${title}">
        //   ${reasoningContent}
        // </ReasoningItem>
        let root = XMLElement(name: "ReasoningItem")
        root.addAttribute(XMLNode.attribute(withName: "title", stringValue: title) as! XMLNode)
        root.stringValue = reasoningContent
        return ConversationContext(rootElement: root)
    }
}

Then there is the generator, which creates a reasoning item from a user query and previously generated items:

struct ReasoningItemGenerator {
    var instructions: String {
        """
        <omitted for brevity>
        """
    }

    func generate(from input: (String, [ConversationReasoningItem])) async throws
        -> sending LanguageModelSession.ResponseStream<ConversationReasoningItem> {
        let session = LanguageModelSession(instructions: instructions)

        // Build the context for the reasoning item out of the user's query
        // and the previous reasoning items.
        let userQuery = "User's query: \(input.0)"
        let reasoningItemsText = input.1.map { $0.toContext().toPlainText() }.joined(separator: "\n")
        let context = userQuery + "\n" + reasoningItemsText

        let reasoningItemResponse = try await session.streamResponse(
            to: context, generating: ConversationReasoningItem.self)
        return reasoningItemResponse
    }
}
I'm not sure if returning LanguageModelSession.ResponseStream<ConversationReasoningItem> is the right move, I am just trying to imitate what session.streamResponse returns.
Then there is the orchestrator, which I can't figure out. It receives the streamed ConversationReasoningItems from the generator and is responsible for streaming those to SwiftUI later, and also for evaluating each reasoning item after it is complete to see if it needs to be regenerated (to keep the model on track). I want the users of the orchestrator to receive partially generated reasoning items as they are being generated. Later, when an item finishes, if the evaluation passes, the item is kept; if it fails, the reasoning item should be removed from the stream before a new one is generated. So in-flight reasoning items should be output aggressively.
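Here is the rough shape I've sketched so far, in case it clarifies what I'm after. The event names and retry policy are illustrative, evaluate(_:) is a hypothetical hook, and it assumes the generator above plus the beta behavior where iterating a ResponseStream yields PartiallyGenerated values:
enum ReasoningEvent {
    case partial(ConversationReasoningItem.PartiallyGenerated)
    case discarded                          // tells the UI to drop the in-flight item
    case kept(ConversationReasoningItem)
}

struct ReasoningOrchestrator {
    let generator = ReasoningItemGenerator()

    /// Hypothetical self-evaluation hook; replace with a model-backed check.
    private func evaluate(_ item: ConversationReasoningItem) -> Bool { true }

    func events(for query: String, maxAttempts: Int = 3) -> AsyncThrowingStream<ReasoningEvent, Error> {
        AsyncThrowingStream { continuation in
            let task = Task {
                do {
                    var kept: [ConversationReasoningItem] = []
                    var attempts = 0
                    var finished = false
                    while !finished && attempts < maxAttempts {
                        attempts += 1
                        var last: ConversationReasoningItem.PartiallyGenerated?
                        // Forward in-flight partials aggressively.
                        for try await partial in try await generator.generate(from: (query, kept)) {
                            last = partial
                            continuation.yield(.partial(partial))
                        }
                        // Promote the final partial once every field is present.
                        guard let p = last,
                              let content = p.reasoningContent,
                              let title = p.title,
                              let done = p.done else { continue }
                        let item = ConversationReasoningItem(
                            reasoningContent: content, title: title, done: done)
                        if evaluate(item) {
                            kept.append(item)
                            continuation.yield(.kept(item))
                            attempts = 0        // reset the retry budget after a kept item
                            finished = done
                        } else {
                            continuation.yield(.discarded)  // UI removes it before the retry streams
                        }
                    }
                    continuation.finish()
                } catch {
                    continuation.finish(throwing: error)
                }
            }
            continuation.onTermination = { _ in task.cancel() }
        }
    }
}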
I really am having trouble figuring this out so if someone with more knowledge about asynchronous stuff in Swift, or- even better- someone who has worked on the Foundation Models framework could point me in the right direction, that would be awesome!
Hi all, I'm working on an app that utilizes the FoundationModels found in iOS 26. I updated my phone to iOS 26 beta 3 and am now receiving the following error when trying to run code that worked in beta 2:
AI Error: The operation couldn't be completed. (FoundationModels.LanguageModelSession.GenerationError error 2.)
I admit I'm a bit of a new developer, but any idea whether this is an issue with beta 3, or work that I'll need to do to adapt my code to changes in the AI API?
Thank you!
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Testing the Foundation Models framework with a health-focused recipe generation app. The on-device approach is appealing, but performance is rough: it takes 20+ seconds just to get a recipe name and description. The same content from the Claude API takes 4 seconds.
I know it's beta and on-device has different tradeoffs, but this is approaching unusable territory for a real-time user experience. The streaming helps psychologically, but it doesn't mask the underlying latency. The privacy/cost benefits are compelling, but not if users abandon the feature before it completes.
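For reference, the two levers I've tried so far (a sketch, assuming the prewarm() API as documented; the prompt is illustrative): prewarming the session before the user triggers generation, and greedy sampling to keep timings comparable across runs.
import FoundationModels
let session = LanguageModelSession()
session.prewarm()   // load model resources before the first request

let response = try await session.respond(
    to: "Name one quick, healthy breakfast.",
    options: GenerationOptions(sampling: .greedy)
)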
Anyone else seeing similar performance? Is this expected for beta, or are there optimization techniques I'm missing?
Topic:
Machine Learning & AI
SubTopic:
Foundation Models