Explore the power of machine learning and Apple Intelligence within apps. Discuss feature integration, share best practices, and find out what's possible for your app here.

Posts under Machine Learning & AI topic

Post

Replies

Boosts

Views

Activity

Will Apple Intelligence Support Third-Party LLMs or Custom AI Agent Integrations?
Hi everyone, I’m an AI engineer working on autonomous AI agents and exploring ways to integrate them into the Apple ecosystem, especially via Siri and Apple Intelligence. I was impressed by Apple’s integration of ChatGPT and its privacy-first design, but I’m curious to know:
• Are there plans to support third-party LLMs?
• Could Siri or Apple Intelligence call external AI agents, or allow extensions to plug in alternative models for reasoning, scheduling, or proactive suggestions?
I’m particularly interested in building event-driven, voice-triggered workflows where Apple Intelligence could act as a front end for more complex autonomous systems (possibly local or cloud-based). This kind of extensibility would open up incredible opportunities for personalized, privacy-friendly use cases while aligning with Apple’s system architecture. Is anything like this on the roadmap? Or is there a suggested way to prototype such integrations today? Thanks in advance for any thoughts or pointers!
4
0
500
May ’25
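On prototyping something like this today: one approach is to expose an App Intent that Siri can invoke by phrase and have it forward the request to your own agent backend. A minimal sketch under that assumption; the intent name and the endpoint URL are hypothetical placeholders, not anything Apple provides:

import AppIntents
import Foundation

// Hypothetical intent: Siri invokes it, and perform() forwards the request
// to an external (local or cloud) autonomous agent.
struct AskAgentIntent: AppIntent {
    static var title: LocalizedStringResource = "Ask My Agent"

    @Parameter(title: "Request")
    var request: String

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // Placeholder endpoint for the external agent.
        var urlRequest = URLRequest(url: URL(string: "https://example.com/agent")!)
        urlRequest.httpMethod = "POST"
        urlRequest.httpBody = request.data(using: .utf8)

        let (data, _) = try await URLSession.shared.data(for: urlRequest)
        let answer = String(data: data, encoding: .utf8) ?? "No response"
        return .result(dialog: "\(answer)")
    }
}

This does not plug an alternative model into Apple Intelligence itself; it only lets a voice-triggered workflow hand the heavy reasoning to an external system and speak back the result.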
ML models failed to decrypt and load
We have suddenly encountered a serious issue: our local ML models are no longer being decrypted. Everything was set up according to the guide at https://developer.apple.com/documentation/coreml/generating-a-model-encryption-key and had been working in production, but yesterday we started receiving the following error:

Error Domain=com.apple.CoreML Code=8 "Fetching decryption key from server failed: noEntryFound("No records found"). Make sure the encryption key was generated with correct team ID." UserInfo={NSLocalizedDescription=Fetching decryption key from server failed: noEntryFound("No records found"). Make sure the encryption key was generated with correct team ID.}

We haven’t changed anything in our code. This started spontaneously affecting users of the release version as of yesterday. It also no longer works locally; we receive the same error the moment the autogenerated function is called:

    class func load(configuration: MLModelConfiguration = MLModelConfiguration(), completionHandler handler: @escaping (Swift.Result<ZingPDModel, Error>) -> Void)

I assume that I can generate a new key through Xcode, integrate it in place of the old one, and it might start working again. However, this won’t help existing users until they update the app. Could the issue be on Apple’s infrastructure side?
1
0
367
Jul ’25
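While the server-side key question is being investigated, one defensive pattern is to catch the failed load and fall back to an unencrypted model shipped in the bundle. A rough sketch under that assumption; ZingPDModel is the encrypted model from the post, and FallbackModel is a hypothetical unencrypted model class:

import CoreML

// Try the encrypted model first; if key retrieval fails, fall back to a
// hypothetical unencrypted model bundled with the app.
func loadRecognitionModel() async -> MLModel? {
    do {
        return try await ZingPDModel.load().model
    } catch {
        print("Encrypted model failed to load: \(error)")
        return try? await FallbackModel.load().model
    }
}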
Is there an API to check if a Core ML compiled model is already cached?
Hello Apple Developer Community, I'm investigating Core ML model loading behavior and noticed that even when the compiled model path remains unchanged after an app update, the first run still triggers an "uncached load". This seems to hurt the user experience with unnecessary delays. Question: does Core ML provide any public API to check whether a compiled model (from a specific .mlmodelc path) is already cached by the system? If such an API exists, we'd like to use it in our pre-loading decision logic, performing a background pre-load only when the model isn't cached. Has anyone encountered similar scenarios or found official solutions? Any insights would be greatly appreciated!
2
0
242
May ’25
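As far as I know there is no public "is this model cached?" query, but one workaround is to warm the cache by loading the model once in the background shortly after launch, so the first user-facing prediction hits a cached load. A minimal sketch; the model name and bundle location are assumptions:

import CoreML

// Warm Core ML's compiled-model cache off the critical path.
func warmModelCache() {
    Task.detached(priority: .background) {
        guard let url = Bundle.main.url(forResource: "MyModel", withExtension: "mlmodelc") else { return }
        let config = MLModelConfiguration()
        _ = try? await MLModel.load(contentsOf: url, configuration: config)
    }
}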
Difference between compiling a Model using CoreML and Swift-Transformers
Hello, I was successfully able to compile TKDKid1000/TinyLlama-1.1B-Chat-v0.3-CoreML using Core ML, and it's working well. However, I’m now trying to compile the same model using Swift Transformers. With the limited documentation available on the swift-chat and Hugging Face repositories, I’m finding it difficult to understand the correct process for compiling a model via Swift Transformers. I attempted the following approach, but I’m fairly certain it’s not the recommended or correct method. Could someone guide me on the proper way to compile and use models like TinyLlama with Swift Transformers? Any official workflow, example, or best practice would be very helpful. Thanks in advance! This is the approach I have used:

import Foundation
import CoreML
import Tokenizers

@main
struct HopeApp {
    static func main() async {
        print(" Running custom decoder loop...")
        do {
            let tokenizer = try await AutoTokenizer.from(pretrained: "PY007/TinyLlama-1.1B-Chat-v0.3")
            var inputIds = tokenizer("this is the test of the prompt")
            print("🧠 Prompt token IDs:", inputIds)
            let model = try float16_model(configuration: .init())
            let maxTokens = 30
            for _ in 0..<maxTokens {
                let input = try MLMultiArray(shape: [1, 128], dataType: .int32)
                let mask = try MLMultiArray(shape: [1, 128], dataType: .int32)
                for i in 0..<inputIds.count {
                    input[i] = NSNumber(value: inputIds[i])
                    mask[i] = 1
                }
                for i in inputIds.count..<128 {
                    input[i] = 0
                    mask[i] = 0
                }
                let output = try model.prediction(input_ids: input, attention_mask: mask)
                let logits = output.logits // shape: [1, seqLen, vocabSize]
                let lastIndex = inputIds.count - 1
                let lastLogitsStart = lastIndex * 32003 // vocab size = 32003
                var nextToken = 0
                var maxLogit: Float32 = -Float.greatestFiniteMagnitude
                for i in 0..<32003 {
                    let logit = logits[lastLogitsStart + i].floatValue
                    if logit > maxLogit {
                        maxLogit = logit
                        nextToken = i
                    }
                }
                inputIds.append(nextToken)
                if nextToken == 32002 { break }
                let partialText = try await tokenizer.decode(tokens: inputIds)
                print(partialText)
            }
        } catch {
            print("❌ Error: \(error)")
        }
    }
}
1
0
193
Jun ’25
Ways I can leverage AI when the user asks Siri, "What does this word mean"
I'm the creator of an app that helps users learn Arabic. Inside the app, users can save words, engage in lessons specific to certain grammar concepts, etc. I'm looking for a way for Siri to 'suggest' my app when the user asks to define an Arabic word. There are other questions I would like Siri to suggest my app for, but I figure that's a good start. What framework am I looking for here? I think App Intents? I remember I played with it for a bit last year but didn't get far. Any suggestions would be great. Would the new Foundation Models framework be any help here?
2
0
138
Jun ’25
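App Intents is indeed the framework to look at for this. A minimal sketch of an intent plus a Siri phrase; the intent name, lookup logic, and phrasing are hypothetical placeholders:

import AppIntents

// Hypothetical intent that defines an Arabic word using the app's own data.
struct DefineArabicWordIntent: AppIntent {
    static var title: LocalizedStringResource = "Define Arabic Word"

    @Parameter(title: "Word")
    var word: String

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // Look the word up in the app's saved vocabulary (placeholder logic).
        let definition = "Definition of \(word) from the app's dictionary."
        return .result(dialog: "\(definition)")
    }
}

// Registers phrases so Siri can route matching requests to the app.
struct LearnerShortcuts: AppShortcutsProvider {
    static var appShortcuts: [AppShortcut] {
        AppShortcut(
            intent: DefineArabicWordIntent(),
            phrases: ["What does \(\.$word) mean in \(.applicationName)"],
            shortTitle: "Define Word",
            systemImageName: "character.book.closed"
        )
    }
}

Note that App Shortcut phrases must include the app name, so this covers "...in <your app>" style requests rather than intercepting every generic "what does this word mean" query. The Foundation Models framework could then be used inside perform() to generate richer explanations on device.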
Issue with #Playground and Foundation Model
Hi all, I’m encountering an issue when trying to run Apple Foundation Models in a blank project targeting iOS 26. Below are the details:
Xcode: Latest version with iOS 26 SDK
macOS: macOS 26 Tahoe (installed on main disk)
Mac: 16” MacBook Pro with M2 Pro chip
Apple Intelligence: Available and functional on this machine

Problem: I created a new blank iOS project, set the deployment target to iOS 26, and ran the following minimal code using Foundation Models. However, I get no response at all in the output, not even an error. The app runs, but the model does not produce any output.

#Playground {
    let session = LanguageModelSession()
    let response = try await session.respond(to: "Tell me a story")
}

Then, I tried to catch an error with this code:

#Playground {
    let session = LanguageModelSession()
    do {
        let response = try await session.respond(to: "Tell me a story")
        print(response)
    } catch {
        print("Failed to get response:", error)
    }
    print("This line, never gets executed")
}

And got these results: I’ve done further testing and discovered something important. I tried running the Code Along sample project, and there the #Playground macro worked without issues. The only significant difference I noticed was the Canvas run destination:
In my original project, I was using iPhone 16 Pro (iOS 26) as the run target in Canvas. Apple Intelligence was enabled on the simulator, but no response was returned when executing the prompt.
In the sample project, the Canvas was running on My Mac. I attempted to match that setup, but at first my destination was My Mac (Designed for iPad), which still didn’t work. The macro finally executed properly once I switched to My Mac (AppKit).
So the question is: does it seem that, for now, Foundation Models and the #Playground macro only run correctly when the canvas or destination is set to “My Mac (AppKit)”?
7
0
533
Jul ’25
Is it allowed for an iOS app to download machine learning model files (e.g., .mlmodel, .onnx) from a separate cloud server?
Hello, I am developing an iOS app that uses machine learning models. To improve accuracy and user experience, I would like to download .mlmodel files (compiled and compressed as zip files) from our own server after the app is installed, and use them for inference within the app. No executable code, scripts, or dynamic libraries will be downloaded—only model data files are used. According to App Store Review Guideline 2.5.2, I understand that apps may not download or execute code which introduces or changes features or functionality. In this case, are compiled and zip-compressed .mlmodel files considered "data" rather than "code", and is it allowed to download and use them in the app? If there are any restrictions or best practices related to this, please let me know. Thank you.
1
0
378
Jul ’25
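On the technical side, a downloaded .mlmodel does need to be compiled on device before it can be loaded. A minimal sketch of that step, assuming the zip has already been unpacked to a local .mlmodel URL:

import CoreML

// Compile a downloaded .mlmodel on device and move the resulting
// .mlmodelc into a permanent location for reuse across launches.
func installDownloadedModel(at downloadedURL: URL) async throws -> URL {
    let compiledURL = try await MLModel.compileModel(at: downloadedURL)

    let fileManager = FileManager.default
    let supportDir = try fileManager.url(for: .applicationSupportDirectory,
                                         in: .userDomainMask,
                                         appropriateFor: nil,
                                         create: true)
    let permanentURL = supportDir.appendingPathComponent(compiledURL.lastPathComponent)
    if fileManager.fileExists(atPath: permanentURL.path) {
        try fileManager.removeItem(at: permanentURL)
    }
    try fileManager.moveItem(at: compiledURL, to: permanentURL)
    return permanentURL
}

Whether the downloaded files count as "data" under guideline 2.5.2 is a review question only Apple can answer definitively, so treat the above purely as the mechanics of using such a file once downloaded.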
Proposal: Develop a Token Estimation Tool for Foundation Models
Dear Apple Foundation Models Development Team,
I am a developer integrating Apple Foundation Models (AFM) into my app and encountered the exceededContextWindowSize error when exceeding the 4096-token limit.
Proposal: I suggest Apple develop a tool to estimate the token count of a prompt before sending it to the model. This tool could be integrated into the FoundationModels framework for ease of use.
Benefits: A token estimation tool would help developers manage the context window limit and optimize performance.
I hope Apple considers this proposal soon. Thank you!
6
0
361
Aug ’25
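Until something official exists, a rough client-side heuristic can at least flag prompts likely to exceed the window. A sketch below; the characters-per-token ratio is a guess, not the Foundation Models tokenizer:

// Rough token estimate: assumes about 4 characters per token on average,
// which is a heuristic, not the model's real tokenization.
func estimatedTokenCount(for prompt: String) -> Int {
    max(1, prompt.count / 4)
}

// Leaves headroom for the model's own instructions and response tokens.
func fitsContextWindow(_ prompt: String, limit: Int = 4096, safetyMargin: Int = 256) -> Bool {
    estimatedTokenCount(for: prompt) <= limit - safetyMargin
}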
Unified Use Case Mail Categories & Spam
Hi Apple product owners. I am missing a unified concept that ties together the use cases for mail categories and mail spam in the Mail app on Mac. I need a recommendation on how to use categories in combination with the spam filter to get the most out of both. I was looking for the intended use cases of these two feature areas in order to figure out how to organise my mail with as much automation as possible before I start creating intelligent folders on top. Where would you recommend I get this information? I don't want to guess or read a lot of forum contributions that are themselves based on guesses.
1
0
87
Apr ’25
Vision Framework - Testing RecognizeDocumentsRequest
How do I test the new RecognizeDocumentsRequest API? Reference: https://www.youtube.com/watch?v=H-GCNsXdKzM I am running the Xcode beta; however, I only have one primary device, and I cannot install beta software on it. Please suggest a strategy for testing. Will the simulator work? The new capability is critical to my application, exactly what I need for structuring document scans and extraction. Thank you.
1
0
241
Jun ’25
ModelManager received unentitled request. Expected entitlement com.apple.modelmanager.inference
Just tried to write a very simple test using Foundation Models, but it gave me an error like this:

"ModelManager received unentitled request. Expected entitlement com.apple.modelmanager.inference
establishment of session failed with Missing entitlement: com.apple.modelmanager.inference"

The simple code is listed below:

let session: LanguageModelSession = LanguageModelSession()
let response = try? await session.respond(to: "What is the capital of France?")
print("Response: \(response)")

So what's the problem here?
2
0
258
Jul ’25
FoundationModels and Core Data
Hi, I have an app that uses Core Data to store user information and display it in various views. I want to know whether it's possible to easily integrate this setup with FoundationModels to make it easier for the user to query and manipulate the information, and if so, how I would go about it. Can the model be pointed at the database schema file and the SQLite file sitting in the user's app group container to parse out the information it needs? And/or should the NSManagedObjects be made @Generable for better output? Any guidance about this would be useful.
1
0
213
Jun ’25
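The model cannot read a schema or SQLite file directly; the usual pattern is to expose your Core Data queries to the session as a Tool and return the fetched values as text, with @Generable types reserved for structured output. A rough sketch under those assumptions; the entity and attribute names are placeholders:

import CoreData
import FoundationModels

// Hypothetical tool that lets the model query the app's Core Data store.
struct ContactLookupTool: Tool {
    let name = "lookupContact"
    let description = "Finds a stored contact by name and returns its details."

    @Generable
    struct Arguments {
        @Guide(description: "The contact's name to search for.")
        var contactName: String
    }

    let context: NSManagedObjectContext

    func call(arguments: Arguments) async throws -> String {
        try await context.perform {
            let request = NSFetchRequest<NSManagedObject>(entityName: "Contact")
            request.predicate = NSPredicate(format: "name CONTAINS[cd] %@", arguments.contactName)
            request.fetchLimit = 5
            let results = try context.fetch(request)
            // Flatten the managed objects into plain text the model can reason over.
            let lines = results.map { object in
                "\(object.value(forKey: "name") ?? "?"): \(object.value(forKey: "notes") ?? "")"
            }
            return lines.isEmpty ? "No matching contacts found." : lines.joined(separator: "\n")
        }
    }
}

The tool would then be passed when creating the session, e.g. LanguageModelSession(tools: [ContactLookupTool(context: viewContext)]), so the model calls it only when the user's request needs stored data.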
Dynamically Create Tool Argument Type
According to the Tool documentation, the arguments to the tool are specified as a static struct type T, which is passed to tool.call(arguments: T). However, if the arguments are not known until runtime, is it still possible to create a Tool object with the proper parameters? For example, if a JSON-style dictionary is passed into the Tool's init function to specify T, is that achievable?
1
0
472
Jul ’25
Unavailable error is wrong?
This is my code:

switch SystemLanguageModel.default.availability {
case .available:
    ContentView()
        .popover(isPresented: $showSettings) {
            SettingsView().presentationCompactAdaptation(.popover)
        }
case .unavailable(.modelNotReady):
    ContentUnavailableView("Apple Intelligence is unavailable",
                           systemImage: "apple.intelligence.badge.xmark",
                           description: Text("Please come back later."))
case .unavailable(.appleIntelligenceNotEnabled):
    ContentUnavailableView("Apple Intelligence is unavailable",
                           systemImage: "apple.intelligence.badge.xmark",
                           description: Text("Please turn on Apple Intelligence."))
case .unavailable(.deviceNotEligible):
    ContentUnavailableView("Apple Intelligence is unavailable",
                           systemImage: "apple.intelligence.badge.xmark",
                           description: Text("This device is not eligible for Apple Intelligence."))
case .unavailable:
    ContentUnavailableView("Apple Intelligence is unavailable",
                           systemImage: "apple.intelligence.badge.xmark")
}

When I switch off Apple Intelligence, I expected "Please turn on Apple Intelligence.", but instead I get "Please come back later." This seems to be the wrong error?
1
0
281
Jul ’25
Image Playground Error: Unable to Generate Images Using externalProvider Style
I’m working on generating images using Image Playground. The code works fine for other styles but fails when using an external provider. I don’t see any other requirements mentioned in the documentation: https://developer.apple.com/documentation/imageplayground/imageplaygroundstyle/externalprovider?changes=_2 Has anyone else encountered a similar issue? The error message is also not very helpful; it simply states that the creation failed. Note: I have enabled ChatGPT Plus, and image generation using ChatGPT styles works fine in the Playground app. Here’s the relevant code snippet:

do {
    let creator = try await ImageCreator()
    let concept = ImagePlaygroundConcept.text("Love")
    let images = creator.images(for: [concept], style: .externalProvider, limit: 1)
    for try await image in images {
        // Handle image
        break
    }
} catch {
    // Handle error
}

I’m using the iOS 26 RC, and when I print creator.availableStyles, it doesn’t include the external provider:

[ImagePlayground.ImagePlaygroundStyle(id: "animation", _representationInfo: nil), ImagePlayground.ImagePlaygroundStyle(id: "emoji", _representationInfo: nil), ImagePlayground.ImagePlaygroundStyle(id: "illustration", _representationInfo: nil), ImagePlayground.ImagePlaygroundStyle(id: "sketch", _representationInfo: nil), ImagePlayground.ImagePlaygroundStyle(id: "messages-background", _representationInfo: nil)]
1
0
904
Sep ’25
Cannot find type ToolOutput in scope
My sample app has been working with the following code:

func call(arguments: Arguments) async throws -> ToolOutput {
    var temp: Int
    switch arguments.city {
    case .singapore:
        temp = Int.random(in: 30..<40)
    case .china:
        temp = Int.random(in: 10..<30)
    }
    let content = GeneratedContent(temp)
    let output = ToolOutput(content)
    return output
}

However, in 26 beta 5, ToolOutput is no longer available. Please advise what has changed.
3
0
247
Aug ’25
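For reference, in recent seeds the Tool protocol appears to use an associated Output type constrained to PromptRepresentable rather than the concrete ToolOutput wrapper, so call can return a value such as a String directly. A hedged sketch of the same method adapted that way, assuming the Arguments type and its city enum stay as in the original tool:

// Return a PromptRepresentable value (here a String) instead of the
// removed ToolOutput wrapper.
func call(arguments: Arguments) async throws -> String {
    var temp: Int
    switch arguments.city {
    case .singapore:
        temp = Int.random(in: 30..<40)
    case .china:
        temp = Int.random(in: 10..<30)
    }
    return "The temperature is \(temp) degrees."
}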