Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and discover what's possible for your app here.

Posts under Machine Learning & AI topic

Post

Replies

Boosts

Views

Activity

What is the proper way to integrate a CoreML model into Xcode?
Hi, I have been trying to integrate a CoreML model into Xcode. The model was built with TensorFlow layers. I have included both the model info and a link to the app repository. I am mainly just confused about why it's not working: it seems to only print the result for case 1 (there are 4 labeled cases: case 0, case 1, case 2, and case 3). If someone could help walk me through this error, that would be great! (A class-selection sketch follows this post.) Here is the link to the repository: https://github.com/ShivenKhurana1/Detect-to-Protect-App The file with the model code is called SecondView.swift, and here is the model info: Input: conv2d_input -> image (color 224x224) Output: Identity -> MultiArray (Float32 1x4)
1
1
176
Apr ’25
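A common reason a four-class model appears to "always print case 1" is that the code reads one fixed element of the output instead of taking the argmax over all four scores. Below is a minimal sketch of selecting the winning class from the Float32 1x4 "Identity" MultiArray described above; it is not taken from the linked repository, and the function name is a placeholder.

```swift
import CoreML

// Sketch (placeholder names): pick the class with the highest score from the
// model's Float32 1x4 "Identity" output instead of checking a single element.
func predictedCase(from output: MLMultiArray) -> Int {
    precondition(output.count == 4, "Expected a 1x4 output")
    var bestIndex = 0
    var bestScore = -Float.infinity
    for i in 0..<output.count {
        let score = output[i].floatValue
        if score > bestScore {
            bestScore = score
            bestIndex = i
        }
    }
    return bestIndex // 0...3, matching case 0 through case 3
}
```

If SecondView.swift switches on a single score or a hard-coded index rather than the argmax, every image will appear to land in case 1.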
VideoToolbox super resolution
Hello, I'm using the VideoToolbox super-resolution API on macOS 26: https://developer.apple.com/documentation/videotoolbox/vtsuperresolutionscalerconfiguration/downloadconfigurationmodel(completionhandler:)?language=objc. It works from Swift, but from Objective-C I get an error when downloading the model with downloadConfigurationModelWithCompletionHandler: [Auto] MA-auto{_failedLockContent} | failure reported by server | error:[com.apple.MobileAssetError.AutoAsset:MissingReference(6111)] [Auto] MA-auto{_failedLockContent} | failure reported by server | error:[com.apple.MobileAssetError.AutoAsset:UnderlyingError(6107)_1_com.apple.MobileAssetError.Download:47] Download completion handler called with error: The operation couldn't be completed. (VTFrameProcessorErrorDomain error -19743.)
3
1
718
3w
Is it possible to read Japanese tategaki with the Vision framework?
We are building an app that reads text. It reads English and standard (horizontal) Japanese text successfully, but in some cases we need to read Japanese tategaki (vertically aligned text), and then the same code gives no output. Is there any configuration change needed to read Japanese tategaki, and is it actually possible with the Vision framework? (A configuration sketch follows this post.) lazy var detectTextRequest = VNRecognizeTextRequest { request, error in self.resStr="\n" self.words = [:] // Get OCR result guard let res = request.results as? [VNRecognizedTextObservation] else { return } // separate the words by space let text = res.compactMap({$0.topCandidates(1).first?.string}).joined(separator: " ") var n = 0 self.wordArr=[[]] self.xs = 1 self.ys = 1 var hs = 0.0 // To compare the heights of the words // To get the original axis (top most word's axis), only once for r in res { var word = r.topCandidates(1).first?.string self.words[word ?? ""] = [r.topLeft.x, r.topLeft.y] if(self.cartLabelType == 1){ if(word?.components(separatedBy: CharacterSet(charactersIn: "//")).count ?? 0>2){ self.xs = r.topLeft.x self.ys = r.topLeft.y } } } }
2
1
612
Jan ’25
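A minimal configuration sketch for the request is below. It is an assumption rather than a confirmed fix: whether vertical tategaki layouts are recognized depends on the Vision revision and OS version, so the sketch also checks the supported languages on the target device.

```swift
import Vision

// Sketch: configure the text request explicitly for Japanese with the
// accurate recognition path. Vertical (tategaki) support still depends on
// the Vision revision available on the device's OS version.
let request = VNRecognizeTextRequest { request, error in
    guard let results = request.results as? [VNRecognizedTextObservation] else { return }
    let lines = results.compactMap { $0.topCandidates(1).first?.string }
    print(lines.joined(separator: "\n"))
}
request.recognitionLevel = .accurate
request.recognitionLanguages = ["ja-JP"]
request.usesLanguageCorrection = true

// Confirm on-device that Japanese is supported for this configuration.
if let supported = try? request.supportedRecognitionLanguages() {
    print("Supported languages: \(supported)")
}
```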
Problem running NLContextualEmbedding model in the simulator
Environment: macOS 26, Xcode Version 26.0 beta 7 (17A5305k), simulator: iPhone 16 Pro, iOS 26. Problem: NLContextualEmbedding.load() fails in the simulator, inside a #Playground, with the following error: Failed to load embedding from MIL representation: filesystem error: in create_directories: Permission denied ["/var/db/com.apple.naturallanguaged/com.apple.e5rt.e5bundlecache"] filesystem error: in create_directories: Permission denied ["/var/db/com.apple.naturallanguaged/com.apple.e5rt.e5bundlecache"] Failed to load embedding model 'mul_Latn' - '5C45D94E-BAB4-4927-94B6-8B5745C46289' assetRequestFailed(Optional(Error Domain=NLNaturalLanguageErrorDomain Code=7 "Embedding model requires compilation" UserInfo={NSLocalizedDescription=Embedding model requires compilation})) I'm new to this embedding model and not sure if it's caused by my code or my environment. (An asset-request sketch follows this post.) Code snippet: import Foundation import NaturalLanguage import Playgrounds #Playground { // Prefer initializing by script for broader coverage; returns NLContextualEmbedding? guard let embeddingModel = NLContextualEmbedding(script: .latin) else { print("Failed to create NLContextualEmbedding") return } print(embeddingModel.hasAvailableAssets) do { try embeddingModel.load() print("Model loaded") } catch { print("Failed to load model: \(error)") } }
1
1
654
2d
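One thing worth trying before load(), sketched below, is requesting the embedding assets explicitly so they are downloaded and compiled first. This is only an assumption about the failure; if the root cause is the simulator's filesystem permissions, it will not help.

```swift
import NaturalLanguage

// Sketch: ask the system for the Latin-script embedding assets before load().
// If the simulator's sandbox is what blocks compilation, this won't fix it.
if let embedding = NLContextualEmbedding(script: .latin) {
    if embedding.hasAvailableAssets {
        try? embedding.load()
    } else {
        embedding.requestAssets { result, error in
            switch result {
            case .available:
                try? embedding.load()
            default:
                print("Assets not available: \(String(describing: error))")
            }
        }
    }
}
```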
GenerationError -1 / 1026
Hi, I was using Foundation Models in my app, and suddenly it stopped working from one moment to the next. To double-check, I created a small test in Playgrounds, but I'm getting the exact same error there too. (An availability-check sketch follows this post.) #Playground { let session = LanguageModelSession() let prompt = "please answer a word" do { let response = try await session.respond(to: prompt) } catch { print("error is \(error)") } } error is Error Domain=FoundationModels.LanguageModelSession.GenerationError Code=-1 "(null)" UserInfo={NSMultipleUnderlyingErrorsKey=( "Error Domain=ModelManagerServices.ModelManagerError Code=1026 \"(null)\" UserInfo={NSMultipleUnderlyingErrorsKey=(\n)}" )} I'm no longer able to get any response from the framework anywhere, even in a fresh project. It's been 5 days. Has anyone else experienced this issue or know what could be causing it? Thanks in advance! Tahoe 26.2 beta 1, Xcode 26.1.1, iPhone Air simulator 26.1
6
1
526
Nov ’25
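For anyone triaging the same failure, a small sketch of checking model availability before opening a session is below. It will not fix error 1026 by itself, but it distinguishes "the model is unavailable on this device or simulator" from a genuine generation failure; it assumes the documented SystemLanguageModel availability API.

```swift
import FoundationModels
import Playgrounds

#Playground {
    // Sketch: confirm the on-device model is usable before prompting.
    // An .unavailable reason (assets not ready, Apple Intelligence disabled,
    // unsupported device) often explains sessions that suddenly stop responding.
    switch SystemLanguageModel.default.availability {
    case .available:
        let session = LanguageModelSession()
        do {
            let response = try await session.respond(to: "please answer a word")
            print(response.content)
        } catch {
            print("Generation failed: \(error)")
        }
    case .unavailable(let reason):
        print("Model unavailable: \(reason)")
    }
}
```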
FoundationModels guardrailViolation on Beta 3
Hello everybody! I'm encountering an unexpected guardrailViolation error when using Foundation Models on macOS Beta 3 (Tahoe) with an Apple M2 Pro chip. This issue didn't occur on Beta 1 or Beta 2 using the same codebase. Reproduction context: I'm developing an app that leverages Foundation Models for structured generation, paired with a local database tool. After upgrading to macOS Beta 3, I started receiving this error consistently, despite no changes in the generation logic. To isolate the issue, I opened the official WWDC sample project, "Adding intelligent app features with generative models", and the same guardrailViolation error appeared without any modifications. Simplified working example: I attempted to narrow down the issue by starting with a minimal prompt structure. This basic case works fine: import Foundation import Playgrounds import FoundationModels @Generable struct GeneableLandmark { @Guide(description: "Name of the landmark to visit") var name: String } final class LandmarkSuggestionGenerator { var landmarkSuggestion: GeneableLandmark.PartiallyGenerated? private var session: LanguageModelSession init(){ self.session = LanguageModelSession( instructions: Instructions { """ generate a list of landmarks to visit """ } ) } func createLandmarkSuggestion(location: String) async throws { let stream = session.streamResponse( generating: GeneableLandmark.self, options: GenerationOptions(sampling: .greedy), includeSchemaInPrompt: false ) { """ Generate a list of landmarks to visit in \(location) """ } for try await partialResponse in stream { landmarkSuggestion = partialResponse } } } #Playground { let generator = LandmarkSuggestionGenerator() Task { do { try await generator.createLandmarkSuggestion(location: "New York") if let suggestion = generator.landmarkSuggestion { print("Suggested landmark: \(suggestion)") } else { print("No suggestion generated.") } } catch { print("Error generating landmark suggestion: \(error)") } } } But as soon as I use the sample ItineraryPlanner: #Playground { // Example landmark for demonstration let exampleLandmark = Landmark( id: 1, name: "San Francisco", continent: "North America", description: "A vibrant city by the bay known for the Golden Gate Bridge.", shortDescription: "Iconic Californian city.", latitude: 37.7749, longitude: -122.4194, span: 0.2, placeID: nil ) let planner = ItineraryPlanner(landmark: exampleLandmark) Task { do { try await planner.suggestItinerary(dayCount: 3) if let itinerary = planner.itinerary { print("Suggested itinerary: \(itinerary)") } else { print("No itinerary generated.") } } catch { print("Error generating itinerary: \(error)") } } } The error pops up: Error generating itinerary: guardrailViolation(FoundationModels.LanguageModelSession.GenerationError.Context(debugDescription: "May contain sensitive or unsafe content", underlyingErrors: [FoundationModels.LanguageModelSession.GenerationError.guardrailViolation(FoundationModels.LanguageModelSession.GenerationError.Context(debugDescription: "May contain unsafe content", underlyingErrors: []))])) Based on my tests: the error may not be tied to structure complexity (since more nested structures work); the issue may stem from the tools or prompt content used inside the ItineraryPlanner; the guardrail sensitivity may have increased or changed in Beta 3, affecting models that worked in earlier betas. Thank you in advance for your help. Let me know if more details or reproducible code samples are needed; I'm happy to provide them. (A small error-handling sketch follows this post.)
Best, Sasha Morozov
2
1
400
Jul ’25
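For readers hitting the same thing, below is a minimal sketch of catching the guardrail case specifically (the case name and its Context payload match the error output quoted above), so the app can rephrase the prompt or surface a message instead of failing silently. The underlying sensitivity change itself is something only Apple can address.

```swift
import FoundationModels

// Sketch: separate guardrail rejections from other generation failures.
func generate(_ prompt: String) async {
    let session = LanguageModelSession()
    do {
        let response = try await session.respond(to: prompt)
        print(response.content)
    } catch LanguageModelSession.GenerationError.guardrailViolation(let context) {
        print("Guardrail rejected the content: \(context.debugDescription)")
        // Rephrase the prompt or show a user-facing message here.
    } catch {
        print("Other generation error: \(error)")
    }
}
```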
Training an adapter: it won't call my tool
Hi all. My adapter model just won't invoke my tool. The problem I am having is covered in an older post: https://developer.apple.com/forums/thread/794839?answerId=852262022#852262022 Sadly the thread dies there and no resolution is seen in that thread. It's worth noting that I have developed an AI chatbot built around LanguageModelSession to which I feed the exact same system prompt that I feed to my training set (pasted further in this post). The AI chatbot works perfectly, the tool is invoked when needed. I am training the adapter model because the base model whilst capable doesn't produce the quality I'm looking for. So here's the template of an item in my training set: [ { 'role': 'system', 'content': systemPrompt, 'tools': [TOOL_DEFINITION] }, { 'role': 'user', 'content': entry['prompt'] }, { 'role': 'assistant', 'content': entry['code'] } ] where TOOL_DEFINITION = { 'type': 'function', 'function': { 'name': 'WriteUbersichtWidgetToFileSystem', 'description': 'Writes an Übersicht Widget to the file system. Call this tool as the last step in processing a prompt that generates a widget.', 'parameters': { 'type': 'object', 'properties': { 'jsxContent': { 'type': 'string', 'description': 'Complete JSX code for an Übersicht widget. This should include all required exports: command, refreshFrequency, render, and className. The JSX should be a complete, valid Übersicht widget file.' } }, 'required': ['jsxContent'] } } ... and systemPrompt = A conversation between a user and a helpful assistant. You are an Übersicht widget designer. Create Übersicht widgets when requested by the user. IMPORTANT: You have access to a tool called WriteUbersichtWidgetToFileSystem. When asked to create a widget, you MUST call this tool. ### Tool Usage: Call WriteUbersichtWidgetToFileSystem with complete JSX code that implements the Übersicht Widget API. Generate custom JSX based on the user's specific request - do not copy the example below. ### Übersicht Widget API (REQUIRED): Every Übersicht widget MUST export these 4 items: - export const command: The bash command to execute (string) - export const refreshFrequency: Refresh rate in milliseconds (number) - export const render: React component function that receives {output} prop (function) - export const className: CSS positioning for absolute placement (string) Example format (customize for each request): WriteUbersichtWidgetToFileSystem({jsxContent: `export const command = "echo hello"; export const refreshFrequency = 1000; export const render = ({output}) => { return <div>{output}</div>; }; export const className = "top: 20px; left: 20px;"`}) ### Rules: - The terms "ubersicht widget", "widget", "a widget", "the widget" must all be interpreted as "Übersicht widget" - Generate complete, valid JSX code that follows the Übersicht widget API - When you generate a widget, don't just show JSON or code - you MUST call the WriteUbersichtWidgetToFileSystem tool - Report the results to the user after calling the tool ### Examples: - "Generate a Übersicht widget" → Use WriteUbersichtWidgetToFileSystem tool - "Can you add a widget that shows the time" → Use WriteUbersichtWidgetToFileSystem tool - "Create a widget with a button" → Use WriteUbersichtWidgetToFileSystem tool When the script that I use to compose the full training set is executed, entry['prompt'] and entry['code'] contain the prompt and the resulting JSX code for one of the examples I'm feeding to the training session. This is repeated for about 60 such examples that I have in my sample data collection. 
Thanks for any help. Michael
6
0
936
Nov ’25
NLModel won't initialize in MessageFilterExtension
I'm trying to create an NLModel within a MessageFilterExtension handler. The code works fine in the main app, but when I try to use it in the extension it fails to initialize. Even the single line below fails with the error shown; SMS_Classifier is the class Xcode generated for my model, and this line works fine in the main app. let mlModel = try SMS_Classifier(configuration: MLModelConfiguration()).model Error: Unable to locate Asset for contextual word embedding model for local en. MLModelAsset: load failed with error Error Domain=com.apple.CoreML Code=0 "initialization of text classifier model with model data failed" UserInfo={NSLocalizedDescription=initialization of text classifier model with model data failed} Any ideas?
3
1
1k
Jan ’25
Rate limit exceeded when using Foundation Model framework
When I use the Foundation Models framework to generate long text, it always hits an error, "Passing along Client rate limit exceeded, try again later in response to ExecuteRequest", and stops generating. E.g., for the prompt "Write a long story", it almost certainly hits that error after about 17 seconds of generation. do{ let session = LanguageModelSession() let prompt: String = "Write a long story" let response = try await session.respond(to: prompt) }catch{} If possible, I want to know how to prevent that error, or at least how to handle it. (A retry sketch follows this post.)
2
1
704
Jul ’25
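Until the limit itself is documented, the usual handling is to keep prompts and expected responses short and to retry with backoff when the session throws. A generic sketch is below; it deliberately does not match on a specific rate-limit error case, since that case is not documented in the post.

```swift
import FoundationModels

// Sketch: retry a prompt a few times with exponential backoff when the
// session throws (for example on "Client rate limit exceeded").
func respondWithRetry(_ prompt: String, attempts: Int = 3) async throws -> String {
    var delay: UInt64 = 2_000_000_000 // 2 seconds, in nanoseconds
    for attempt in 1...attempts {
        do {
            let session = LanguageModelSession()
            return try await session.respond(to: prompt).content
        } catch {
            if attempt == attempts { throw error }
            try await Task.sleep(nanoseconds: delay)
            delay *= 2
        }
    }
    fatalError("Unreachable: the loop either returns or rethrows")
}
```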
Is Image Playground on-device + Private Cloud Compute?
Apple's Image Playground primarily performs image generation on-device, but can use secure Private Cloud Compute for more complex requests that require larger models. Private Cloud Compute (PCC) For more complex tasks that require greater computational power than the device can provide, Image Playground leverages Apple's Private Cloud Compute. This system extends the privacy and security of the device to the cloud: Secure Environment: PCC runs on Apple silicon servers and uses a secure enclave to protect data, ensuring requests are processed in a verified, secure environment. No Data Storage: Data is never stored or made accessible to Apple when using PCC; it is used only to fulfill the specific request. Independent Verification: Independent experts are able to inspect the code running on these servers to verify Apple's privacy promises.
3
0
648
4d
Is MCP (Model Context Protocol) supported on iOS/macOS?
Hi team, I'm exploring the Model Context Protocol (MCP), which is used to connect LLMs/AI agents to external tools in a structured way. It's becoming a common standard for automation and agent workflows. Before I go deeper, I want to confirm: does Apple currently provide any official MCP server, API surface, or SDK on iOS/macOS? From what I see, only third-party MCP servers exist for iOS simulators/devices, and Apple's own frameworks (Foundation Models, Apple Intelligence) don't expose MCP endpoints. Is there any chance Apple might introduce MCP support, or publish recommended patterns for safely integrating MCP inside apps or developer tools? I would like to see whether I can expose my app's data through an MCP server so that other third-party apps/services can integrate easily.
0
1
177
1w
Defining instructions for the Content Tagging model
Hello, the content tagging model doesn't seem to obey when I define the type of tag I want in the instructions parameter; the output is always the main topics. The only way I've found to get other types of tags, like emotions, is to use Generable + guided types. The documentation says instructions are recommended but not mandatory. Maybe I'm setting the instructions incorrectly; take a look at the attached snapshot. I copied the definition for tagging emotions from the official documentation. The upper example uses Generable and it works, but in the example at the bottom I set the same description of emotions as instructions and it doesn't work. I tried other statements, more and less verbose, and it never outputs emotions. Could you provide an example using instructions where this works? Is the current version of the model not working with instructions? (A Generable-based sketch follows this post.)
1
0
372
Oct ’25
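Since the Generable route is the one that reliably produces emotions, here is a minimal sketch of that approach; the contentTagging use case and the guide wording are assumptions based on the documentation the post refers to, not a workaround for the instructions behavior itself.

```swift
import FoundationModels

// Sketch: drive the content tagging model with a Generable output type whose
// guide asks for emotions, rather than relying on Instructions alone.
@Generable
struct EmotionTags {
    @Guide(description: "Emotions expressed in the input text, as short tags")
    var emotions: [String]
}

func tagEmotions(in text: String) async throws -> [String] {
    let model = SystemLanguageModel(useCase: .contentTagging)
    let session = LanguageModelSession(model: model)
    let response = try await session.respond(to: text, generating: EmotionTags.self)
    return response.content.emotions
}
```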
no tensorflow-metal past tf 2.18?
Hi, we're on TensorFlow 2.20, which now (finally!) supports Python 3.13. tensorflow-metal still only supports TF 2.18, which is over a year old. When can we expect tensorflow-metal support for TF 2.20 (or later)? I bought a Mac thinking I would get great performance from the M-series processors, but here I am using my CPU for my ML projects. If it's taking this long to release, why not open-source it so the community can keep it up to date? Cheers, Matt
1
1
294
Nov ’25
CreateML
I'm trying to use the Spatial model to perform Object Tracking on a .usdz file that I created. After loading the file, which I can view correctly in the console, I start the training. Initially, I notice that disk usage on my machine increases. After several GB, the usage stops, but the training progress remains at 0.00% for hours with the message "About 8hr." How can I understand what the issue is? Has anyone else experienced the same problem? Thanks, Diego
1
1
627
Jan ’25
Insufficient memory for Foundation Models adapter training
I have a MacBook Pro M3 Pro with 18GB of RAM and was following the instructions to fine tune the foundational model given here: https://developer.apple.com/apple-intelligence/foundation-models-adapter/ However, while following the code sample in the example Jupyter notebook, my Mac hangs on the second code cell. Specifically: from examples.generate import generate_content, GenerationConfiguration from examples.data import Message output = generate_content( [[ Message.from_system("A conversation between a user and a helpful assistant. Taking the role as a play writer assistant for a kids' play."), Message.from_user("Write a script about penguins.") ]], GenerationConfiguration(temperature=0.0, max_new_tokens=128) ) output[0].response After some debugging, I was getting the following error: RuntimeError: MPS backend out of memory (MPS allocated: 22.64 GB, other allocations: 5.78 MB, max allowed: 22.64 GB). Tried to allocate 52.00 MB on private pool. Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit for memory allocations (may cause system failure). So is my machine not capable enough to adapter train Apple's Foundation Model? And if so, what's the recommended spec and could this be specified somewhere? Thanks!
8
1
257
Jul ’25
Using #Preview with a PartiallyGenerated model
I have an app that streams in data from the Foundation Model and I have a card that shows one of the outputs. I want my card to accept a partially generated model but I keep getting a nonsensical error. The error I get on line 59 is: Cannot convert value of type 'FrostDate.VegetableSuggestion.PartiallyGenerated' (aka 'FrostDate.VegetableSuggestion') to expected argument type 'FrostDate.VegetableSuggestion.PartiallyGenerated' Here is my card with preview: import SwiftUI import FoundationModels struct VegetableSuggestionCard: View { let vegetableSuggestion: VegetableSuggestion.PartiallyGenerated init(vegetableSuggestion: VegetableSuggestion.PartiallyGenerated) { self.vegetableSuggestion = vegetableSuggestion } var body: some View { VStack(alignment: .leading, spacing: 8) { if let name = vegetableSuggestion.vegetableName { Text(name) .font(.headline) .frame(maxWidth: .infinity, alignment: .leading) } if let startIndoors = vegetableSuggestion.startSeedsIndoors { Text("Start indoors: \(startIndoors)") .frame(maxWidth: .infinity, alignment: .leading) } if let startOutdoors = vegetableSuggestion.startSeedsOutdoors { Text("Start outdoors: \(startOutdoors)") .frame(maxWidth: .infinity, alignment: .leading) } if let transplant = vegetableSuggestion.transplantSeedlingsOutdoors { Text("Transplant: \(transplant)") .frame(maxWidth: .infinity, alignment: .leading) } if let tips = vegetableSuggestion.tips { Text("Tips: \(tips)") .foregroundStyle(.secondary) .frame(maxWidth: .infinity, alignment: .leading) } } .padding(16) .frame(maxWidth: .infinity, alignment: .leading) .background( RoundedRectangle(cornerRadius: 16, style: .continuous) .fill(.background) .overlay( RoundedRectangle(cornerRadius: 16, style: .continuous) .strokeBorder(.quaternary, lineWidth: 1) ) .shadow(color: Color.black.opacity(0.05), radius: 6, x: 0, y: 2) ) } } #Preview("Vegetable Suggestion Card") { let sample = VegetableSuggestion.PartiallyGenerated( vegetableName: "Tomato", startSeedsIndoors: "6–8 weeks before last frost", startSeedsOutdoors: "After last frost when soil is warm", transplantSeedlingsOutdoors: "1–2 weeks after last frost", tips: "Harden off seedlings; provide full sun and consistent moisture." ) VegetableSuggestionCard(vegetableSuggestion: sample) .padding() .previewLayout(.sizeThatFits) }
1
0
78
Oct ’25
Siri 2.0 (suggestions and future updates)
Hey dear developers! This post is meant for future Siri updates and improvements, and also for wishes, so that everyone can share their opinions and ideas here. Please stay friendly, and have fun! I had already thought about developing a demo app to demonstrate my idea for a better Siri. One of my many wishes: Siri's language recognition capabilities are significantly enhanced. Instead of manually setting the language, Siri automatically recognizes the language you intend to use, making language switching much more efficient. Simply speak the language you want to communicate in, and Siri recognizes it and responds accordingly. Whether you speak English, German, or Japanese, Siri will respond in the language you choose.
1
1
781
Oct ’25