Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and explore the possibilities for your app here.

Posts under Machine Learning & AI topic

Post

Replies

Boosts

Views

Activity

Selecting GPU for TensorFlow-Metal on Mac Pro (2013) with v0.8.0
Hi everyone, I'm a Mac enthusiast experimenting with tensorflow-metal on my Mac Pro (2013). My question is about GPU selection in tensorflow-metal (v0.8.0), which still supports Intel-based Macs, including my machine. I've noticed that when running TensorFlow with Metal, it automatically selects a GPU regardless of what I specify using device indices like "gpu:0", "gpu:1", or "gpu:2". I'm wondering if there's a way to manually specify which GPU should be used, via an environment variable or another method.

For reference, I've tried the example from TensorFlow's guide on multi-GPU selection: https://www.tensorflow.org/guide/gpu#using_a_single_gpu_on_a_multi-gpu_system

My goal is to explore performance optimizations by using MirroredStrategy in TensorFlow to leverage multiple GPUs: https://www.tensorflow.org/guide/distributed_training#mirroredstrategy

Interestingly, I discovered that the metalcompute Python library (https://pypi.org/project/metalcompute/) lets me manually select a GPU on my system, allowing proper multi-GPU computation. This makes me wonder:

• Is there a hidden environment variable or setting that allows manual GPU selection in tensorflow-metal?
• Has anyone successfully used MirroredStrategy on multiple GPUs with tensorflow-metal?
• Would a bridge between metalcompute and tensorflow-metal be necessary for this use case, or is there a more direct approach?

I'd love to hear if anyone else has experimented with this or has insights on getting finer control over GPU selection. Any thoughts or suggestions would be greatly appreciated! Thanks!
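As a side note, a quick Swift check with Metal can confirm which GPUs the system exposes, independent of TensorFlow. This is only a minimal sketch for enumerating devices; it doesn't change how tensorflow-metal picks its GPU.

import Metal

// Sketch: enumerate the Metal devices macOS exposes, to confirm which GPUs
// are visible to Metal itself, independent of tensorflow-metal.
let devices = MTLCopyAllDevices()
for (index, device) in devices.enumerated() {
    print("gpu:\(index)", device.name, "low power:", device.isLowPower)
}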
3
0
263
Mar ’25
LLM size for fine-tuning using MLX in MacBook
Hi, I recently tried to fine-tune the Gemma-2-2b MLX model on my MacBook (24 GB UMA). The code started running, but after a few seconds I saw the swap size reach 50 GB and RAM usage around 23 GB, and then it stopped. I ran Gemma-2-2b (CUDA) on Colab; it ran fine, occupying 27 GB on an A100 GPU, with no swap issue there. So my question is: if my UMA had been more than 27 GB, would I also have avoided the swap-disk issue? Thanks.
1
0
365
Oct ’25
Easy way to implement facial recognition
Hi everyone 😊, I want to implement facial recognition in my app. I was planning to use Create ML's image classification, but there seems to be a lot of hassle involved (the JSON file, etc.). Are there other easy-to-implement options that don't involve advanced coding? Thanks, Oliver
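For comparison, a low-code starting point is Apple's Vision framework. The sketch below only detects face rectangles in an image; it does not identify who the faces belong to, and the image is a placeholder supplied by the caller.

import Vision
import UIKit

// Sketch: count face rectangles in a UIImage using Vision.
// This is face detection only; recognizing a specific person would need
// additional work (e.g., a classifier or feature comparison).
func detectFaces(in image: UIImage, completion: @escaping (Int) -> Void) {
    guard let cgImage = image.cgImage else {
        completion(0)
        return
    }
    let request = VNDetectFaceRectanglesRequest { request, _ in
        let faces = request.results as? [VNFaceObservation] ?? []
        completion(faces.count)
    }
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([request])
    }
}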
2
0
612
Feb ’25
Will Apple Intelligence Support Third-Party LLMs or Custom AI Agent Integrations?
Hi everyone, I'm an AI engineer working on autonomous AI agents and exploring ways to integrate them into the Apple ecosystem, especially via Siri and Apple Intelligence. I was impressed by Apple's integration of ChatGPT and its privacy-first design, but I'm curious to know:

• Are there plans to support third-party LLMs?
• Could Siri or Apple Intelligence call external AI agents, or allow extensions to plug in alternative models for reasoning, scheduling, or proactive suggestions?

I'm particularly interested in building event-driven, voice-triggered workflows where Apple Intelligence could act as a front end for more complex autonomous systems (possibly local or cloud-based). This kind of extensibility would open up incredible opportunities for personalized, privacy-friendly use cases while aligning with Apple's system architecture.

Is anything like this on the roadmap? Or is there a suggested way to prototype such integrations today? Thanks in advance for any thoughts or pointers!
4
0
497
May ’25
Issue with #Playground and Foundation Model
Hi all, I'm encountering an issue when trying to run Apple Foundation Models in a blank project targeting iOS 26. Below are the details:

Xcode: Latest version with the iOS 26 SDK
macOS: macOS 26 Tahoe (installed on the main disk)
Mac: 16" MacBook Pro with M2 Pro chip
Apple Intelligence: Available and functional on this machine

Problem: I created a new blank iOS project, set the deployment target to iOS 26, and ran the following minimal code using Foundation Models. However, I get no response at all in the output, not even an error. The app runs, but the model does not produce any output.

#Playground {
    let session = LanguageModelSession()
    let response = try await session.respond(to: "Tell me a story")
}

Then I tried to catch an error with this code:

#Playground {
    let session = LanguageModelSession()
    do {
        let response = try await session.respond(to: "Tell me a story")
        print(response)
    } catch {
        print("Failed to get response:", error)
    }
    print("This line never gets executed")
}

And got these results:

I've done further testing and discovered something important: I tried running the Code Along sample project, and there the #Playground macro worked without issues. The only significant difference I noticed was the Canvas run destination:

• In my original project, I was using iPhone 16 Pro (iOS 26) as the run target in Canvas. Apple Intelligence was enabled on the simulator, but no response was returned when executing the prompt.
• In the sample project, the Canvas was running on My Mac.

I attempted to match that setup, but at first my destination was My Mac (Designed for iPad), which still didn't work. The macro finally executed properly once I switched to My Mac (AppKit).

So the question is: does it seem that, for now, Foundation Models and the #Playground macro only run correctly when the canvas or destination is set to "My Mac (AppKit)"?
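As a point of comparison, a minimal availability check before prompting looks roughly like this. It's only a sketch (assuming import Playgrounds for the macro); it just confirms whether the model is reachable from the current run destination.

import Playgrounds
import FoundationModels

#Playground {
    // Sketch: confirm the on-device model is available before prompting.
    switch SystemLanguageModel.default.availability {
    case .available:
        let session = LanguageModelSession()
        let response = try await session.respond(to: "Tell me a story")
        print(response)
    case .unavailable(let reason):
        print("Model unavailable:", String(describing: reason))
    }
}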
7
0
531
Jul ’25
Is it allowed for an iOS app to download machine learning model files (e.g., .mlmodel, .onnx) from a separate cloud server?
Hello, I am developing an iOS app that uses machine learning models. To improve accuracy and user experience, I would like to download .mlmodel files (compiled and compressed as zip files) from our own server after the app is installed, and use them for inference within the app. No executable code, scripts, or dynamic libraries will be downloaded; only model data files are used.

According to App Store Review Guideline 2.5.2, I understand that apps may not download or execute code which introduces or changes features or functionality. In this case, are compiled and zip-compressed .mlmodel files considered "data" rather than "code", and is it allowed to download and use them in the app? If there are any restrictions or best practices related to this, please let me know. Thank you.
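For context, the on-device flow I have in mind looks roughly like this. It's a sketch only: the download and unzip steps are assumed to have already produced a local .mlmodel file at downloadedURL.

import CoreML
import Foundation

// Sketch: compile a downloaded .mlmodel file and load it for inference.
// `downloadedURL` is assumed to point at an unzipped .mlmodel file on disk.
func loadDownloadedModel(at downloadedURL: URL) throws -> MLModel {
    // Compile the raw .mlmodel into an .mlmodelc bundle.
    let compiledURL = try MLModel.compileModel(at: downloadedURL)

    // Move the compiled bundle into Application Support so it can be reused.
    let supportDir = try FileManager.default.url(for: .applicationSupportDirectory,
                                                 in: .userDomainMask,
                                                 appropriateFor: nil,
                                                 create: true)
    let destination = supportDir.appendingPathComponent(compiledURL.lastPathComponent)
    if FileManager.default.fileExists(atPath: destination.path) {
        try FileManager.default.removeItem(at: destination)
    }
    try FileManager.default.moveItem(at: compiledURL, to: destination)

    return try MLModel(contentsOf: destination)
}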
1
0
378
Jul ’25
Image Playground Error: Unable to Generate Images Using externalProvider Style
I'm working on generating images using Image Playground. The code works fine for other styles but fails when using an external provider, and I don't see any other requirements mentioned in the documentation: https://developer.apple.com/documentation/imageplayground/imageplaygroundstyle/externalprovider?changes=_2

Has anyone else encountered a similar issue? The error message is also not very helpful; it simply states that the creation failed.

Note: I have ChatGPT Plus enabled, and image generation using the ChatGPT styles works fine in the Playground app.

Here's the relevant code snippet:

do {
    let creator = try await ImageCreator()
    let concept = ImagePlaygroundConcept.text("Love")
    let images = creator.images(for: [concept], style: .externalProvider, limit: 1)
    for try await image in images {
        // Handle image
        break
    }
} catch {
    // Handle error
}

I'm using the iOS 26 RC, and when I print creator.availableStyles, it doesn't include the external provider:

[ImagePlayground.ImagePlaygroundStyle(id: "animation", _representationInfo: nil),
 ImagePlayground.ImagePlaygroundStyle(id: "emoji", _representationInfo: nil),
 ImagePlayground.ImagePlaygroundStyle(id: "illustration", _representationInfo: nil),
 ImagePlayground.ImagePlaygroundStyle(id: "sketch", _representationInfo: nil),
 ImagePlayground.ImagePlaygroundStyle(id: "messages-background", _representationInfo: nil)]
1
0
903
Sep ’25
Swift playgrounds (.swiftpm) and CoreML
Hey guys, I've been having difficulties transferring my Xcode project to a Swift playground (.swiftpm) for the Swift Student Challenge. I keep getting the errors below, and none of my views can find the model in scope:

"TrashDetector 1.mlmodel: No predominant language detected. Set COREML_CODEGEN_LANGUAGE to preferred language."

Unexpected duplicate tasks: Target 'TrashQuest' (project 'TrashQuest') has write command with output /Users/kmcph3/Library/Developer/Xcode/DerivedData/TrashQuest-glvzskunedgtakfrdmsxdoplondj/Build/Intermediates.noindex/TrashQuest.build/Debug-iphonesimulator/TrashQuest.build/0a4ef2429d66360920ddb4f16e65e233.sb

I've gone through multiple posts with these exact problems, but they all seem to be about ".playground" files because of the "Resources" folder (mind you, I did try exactly what they suggested). Is there anyone who can help?

(Quick side note: why does it need to be a .swiftpm file for the SSC? Why can't we just send the zip of our Xcode project?)
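One workaround I've seen suggested, sketched below under the assumption that the model is pre-compiled to TrashDetector.mlmodelc and bundled as a resource, is to skip the generated class and load the model by URL instead:

import CoreML
import Vision

// Sketch: load a compiled Core ML model by URL instead of relying on Xcode's
// generated class (which is what triggers the COREML_CODEGEN_LANGUAGE error).
// Assumes the model has been compiled to TrashDetector.mlmodelc and added as a
// resource; depending on how the .swiftpm project packages resources, it may
// live in Bundle.main rather than Bundle.module.
func loadTrashDetector() throws -> VNCoreMLModel {
    guard let url = Bundle.main.url(forResource: "TrashDetector",
                                    withExtension: "mlmodelc") else {
        throw CocoaError(.fileNoSuchFile)
    }
    let mlModel = try MLModel(contentsOf: url)
    return try VNCoreMLModel(for: mlModel)
}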
2
0
852
Feb ’25
ModelManager received unentitled request. Expected entitlement com.apple.modelmanager.inference
Just tried to write a very simple test using Foundation Models, but it gave me an error like this:

"ModelManager received unentitled request. Expected entitlement com.apple.modelmanager.inference
establishment of session failed with Missing entitlement: com.apple.modelmanager.inference"

The simple code is listed below:

let session: LanguageModelSession = LanguageModelSession()
let response = try? await session.respond(to: "What is the capital of France?")
print("Response: \(response)")

So what's the problem here?
2
0
256
Jul ’25
When applied to a nested struct, @Generable macro results in infinite nested response from Foundation Model
When @Generable is applied to a Swift struct declared within another struct, and that nested struct is used as the type of a property of another @Generable type, which is in turn used as the output format of a Foundation Models session, the model can get stuck in a loop trying to create an infinitely nested response, until the context-window-exceeded error is triggered. I have filed feedback FB19987191 with a demo project. Is this expected behavior?
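For illustration, the shape of the setup described above is roughly this (hypothetical type names; the demo project attached to FB19987191 contains the actual reproduction):

import FoundationModels

struct Catalog {
    // A nested struct declared inside another struct, marked @Generable.
    @Generable
    struct Item {
        @Guide(description: "A short label for the item")
        var label: String
    }
}

@Generable
struct ModelOutput {
    @Guide(description: "A title for the response")
    var title: String
    // The nested type is used as a property of the session's output format.
    var item: Catalog.Item
}

// ModelOutput is then used as the output format of a session response, e.g.:
// let session = LanguageModelSession()
// let result = try await session.respond(to: "Describe an item",
//                                         generating: ModelOutput.self)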
1
0
571
Sep ’25
Ways I can leverage AI when the user asks Siri, "What does this word mean"
I'm the creator of an app that helps users learn Arabic. Inside the app, users can save words, take lessons on specific grammar concepts, and so on. I'm looking for a way for Siri to 'suggest' my app when the user asks to define an Arabic word. There are other questions I would like Siri to suggest my app for, but I figure that's a good start. What framework am I looking for here? I think App Intents? I remember I played with it a bit last year but didn't get far. Any suggestions would be great. Would the new Foundation Models framework be any help here?
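As a rough sketch of the App Intents direction (DefineArabicWordIntent and lookUpDefinition(for:) are hypothetical names; whether Siri surfaces this for a free-form "what does this word mean" query is a separate question):

import AppIntents

// Sketch: an App Intent that defines an Arabic word using the app's own data.
struct DefineArabicWordIntent: AppIntent {
    static var title: LocalizedStringResource = "Define Arabic Word"
    static var description = IntentDescription("Looks up the meaning of an Arabic word.")

    @Parameter(title: "Word")
    var word: String

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // Replace with a real lookup against the app's saved vocabulary.
        let definition = lookUpDefinition(for: word) ?? "No definition found."
        return .result(dialog: "\(word): \(definition)")
    }
}

// Placeholder for the app's dictionary lookup.
func lookUpDefinition(for word: String) -> String? {
    nil
}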
2
0
135
Jun ’25
Foundation Model Always modelNotReady
I'm testing Foundation Models on my iPad Pro (5th gen) running iOS 26. As of late this morning, I can no longer load SystemLanguageModel.default. I'm not doing anything interesting; something as basic as this only reports unavailable, specifically unavailable reason: modelNotReady.

let model = SystemLanguageModel.default
...
switch model.availability {
case .available:
    print("LM available")
case .unavailable(let reason):
    print("unavailable reason: ", String(describing: reason))
}

I also ran the FoundationModelsTripPlanner app: same thing. It was working yesterday, and I have not modified that project either. Why is the model not ready, and how do I fix this? Yes, I tried restarting both my laptop and iPad; no luck.
3
0
278
Jul ’25
Dynamically Create Tool Argument Type
According to the Tool documentation, the arguments to the tool are specified as a static struct type T, which is given to tool.call(argument: T). However, if the arguments are not known until runtime, is it still possible to create a Tool object with the proper parameters? Let's say a JSON-style dictionary is passed into the Tool's init function to specify T; is this achievable?
1
0
471
Jul ’25
Unavailable error is wrong?
This is my code:

switch SystemLanguageModel.default.availability {
case .available:
    ContentView()
        .popover(isPresented: $showSettings) {
            SettingsView().presentationCompactAdaptation(.popover)
        }
case .unavailable(.modelNotReady):
    ContentUnavailableView("Apple Intelligence is unavailable",
                           systemImage: "apple.intelligence.badge.xmark",
                           description: Text("Please come back later."))
case .unavailable(.appleIntelligenceNotEnabled):
    ContentUnavailableView("Apple Intelligence is unavailable",
                           systemImage: "apple.intelligence.badge.xmark",
                           description: Text("Please turn on Apple Intelligence."))
case .unavailable(.deviceNotEligible):
    ContentUnavailableView("Apple Intelligence is unavailable",
                           systemImage: "apple.intelligence.badge.xmark",
                           description: Text("This device is not eligible for Apple Intelligence."))
case .unavailable:
    ContentUnavailableView("Apple Intelligence is unavailable",
                           systemImage: "apple.intelligence.badge.xmark")
}

When I switch off Apple Intelligence, I expected "Please turn on Apple Intelligence.", but instead I get "Please come back later." Is the reported unavailability reason wrong?
1
0
280
Jul ’25
Can I give additional context to Foundation Models?
I'm interested in using Foundation Models to act as an AI support agent for our extensive in-app documentation. We have many pages of in-app documents, which the user can currently search, but it would be great to use Foundation Models to let the user get answers to arbitrary questions. Is this possible with the current version of Foundation Models?

It seems like the way to add new context to the model is with the instructions parameter on LanguageModelSession. As I understand it, the combined instructions and prompt need to consume less than 4096 tokens. That definitely wouldn't be enough for the amount of documentation I want the agent to be able to refer to.

Is there another way of doing this, maybe as a series of recursive queries? If there is a solution based on multiple queries, should I expect this to be fast enough for interactive use?
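One pattern worth sketching is a simple retrieval step: search the documentation first, then pass only the top-matching excerpts to the session as instructions. This is only a sketch; searchDocs(for:limit:) is a hypothetical stand-in for the app's existing search, and the token budget still applies to whatever is passed in.

import FoundationModels

// Hypothetical stand-in for the app's existing documentation search.
func searchDocs(for query: String, limit: Int) -> [String] {
    // Return the most relevant documentation passages for the query.
    []
}

// Sketch: retrieve a handful of relevant passages, then ask the model to
// answer using only that excerpted context.
func answer(question: String) async throws -> String {
    let passages = searchDocs(for: question, limit: 3).joined(separator: "\n\n")
    let instructions = """
    Answer the user's question using only the documentation excerpts below.
    If the excerpts do not contain the answer, say so.

    \(passages)
    """
    let session = LanguageModelSession(instructions: instructions)
    let response = try await session.respond(to: question)
    return response.content
}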
4
0
335
Jul ’25
TAMM toolkit v0.2.0 is for base model older than base model in macOS 26 beta 4
Problem: We trained a LoRA adapter for Apple's FoundationModels framework using their TAMM (Training Adapter for Model Modification) toolkit v0.2.0 on macOS 26 beta 4. The adapter trains successfully but fails to load with: "Adapter is not compatible with the current system base model."

TAMM v0.2.0 contains export/constants.py with:

BASE_SIGNATURE = "9799725ff8e851184037110b422d891ad3b92ec1"

Findings:

Adapter export process, in export_fmadapter.py:

def write_metadata(...):
    self_dict[MetadataKeys.BASE_SIGNATURE] = BASE_SIGNATURE  # Hardcoded value

The compatibility check:
• When loading an adapter, Apple's system compares the adapter's baseModelSignature with the current system base model.
• If they don't match, loading fails with a compatibleAdapterNotFound error.
• The error doesn't reveal the expected signature.

Questions:
• How is BASE_SIGNATURE derived from the base model?
• Is it a SHA-1 of base-model.pt or some other computation?
• Can we compute the correct signature for beta 4?
• Or do we need Apple to release TAMM v0.3.0 with an updated signature?
1
0
572
Aug ’25