Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and explore the possibilities for your app here.

Posts under Machine Learning & AI topic

Will Apple Intelligence Support Third-Party LLMs or Custom AI Agent Integrations?
Hi everyone, I’m an AI engineer working on autonomous AI agents and exploring ways to integrate them into the Apple ecosystem, especially via Siri and Apple Intelligence. I was impressed by Apple’s integration of ChatGPT and its privacy-first design, but I’m curious to know:

• Are there plans to support third-party LLMs?
• Could Siri or Apple Intelligence call external AI agents or allow extensions to plug in alternative models for reasoning, scheduling, or proactive suggestions?

I’m particularly interested in building event-driven, voice-triggered workflows where Apple Intelligence could act as a front end for more complex autonomous systems (possibly local or cloud-based). This kind of extensibility would open up incredible opportunities for personalized, privacy-friendly use cases while aligning with Apple’s system architecture. Is anything like this on the roadmap? Or is there a suggested way to prototype such integrations today? Thanks in advance for any thoughts or pointers!

4 replies · 0 boosts · 528 views · May ’25
Can I give additional context to Foundation Models?
I'm interested in using Foundation Models to act as an AI support agent for our extensive in-app documentation. We have many pages of in-app documents, which the user can currently search, but it would be great to use Foundation Models to let the user get answers to arbitrary questions. Is this possible with the current version of Foundation Models? It seems like the way to add new context to the model is with the instructions parameter on LanguageModelSession. As I understand it, the combined instructions and prompt need to consume less than 4096 tokens. That definitely wouldn't be enough for the amount of documentation I want the agent to be able to refer to. Is there another way of doing this, maybe as a series of recursive queries? If there is a solution based on multiple queries, should I expect this to be fast enough for interactive use?
4 replies · 0 boosts · 364 views · Jul ’25
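One common way around a small context window here (a general retrieval pattern, not something the Foundation Models documentation prescribes) is to search the documentation first and put only the few relevant excerpts into the prompt, keeping the combined instructions and prompt under the limit. A minimal sketch, assuming relevantExcerpts has already been produced by whatever search the app currently offers:

import FoundationModels

// Assumption: the app's existing search already returns the handful of
// documentation passages most relevant to the user's question.
func answer(question: String, relevantExcerpts: [String]) async throws -> String {
    let session = LanguageModelSession(
        instructions: """
        Answer the user's question using only the documentation excerpts \
        included in the prompt. If the excerpts do not contain the answer, \
        say that the documentation does not cover it.
        """
    )

    let context = relevantExcerpts.joined(separator: "\n\n")
    let prompt = """
    Documentation excerpts:
    \(context)

    Question: \(question)
    """

    // One short, focused request like this is usually interactive-speed;
    // chaining many recursive queries would multiply the latency.
    return try await session.respond(to: prompt).content
}

Because only the retrieved excerpts are sent, latency stays close to that of a single request rather than growing with the size of the documentation.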
LanguageModelStream and collecting the final output
I have a Generable type with many elements. I am using a stream() to incrementally process the output (Generable.PartiallyGenerated?) content. At the end, I want to pass the final version (not partially generated) to another function. I cannot seem to find a good way to convert from a MyGenerable.PartiallyGenerated to a MyGenerable. Am I missing some functionality in the APIs?
4 replies · 0 boosts · 598 views · Jul ’25
Crashed: AXSpeech
Hello, My app is crashing a lot with this issue. I can't reproduce the problem, but I can see it occurs on users' devices. The Crashlytics report shows the following lines:

Crashed: AXSpeech
0  libsystem_pthread.dylib 0x1824386bc pthread_mutex_lock$VARIANT$mp + 278
1  CoreFoundation 0x1826d3a34 CFRunLoopSourceSignal + 68
2  Foundation 0x18319ec90 performQueueDequeue + 468
3  Foundation 0x18325a020 __NSThreadPerformPerform + 136
4  CoreFoundation 0x1827b7404 __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__ + 24
5  CoreFoundation 0x1827b6ce0 __CFRunLoopDoSources0 + 456
6  CoreFoundation 0x1827b479c __CFRunLoopRun + 1204
7  CoreFoundation 0x1826d4da8 CFRunLoopRunSpecific + 552
8  Foundation 0x183149674 -[NSRunLoop(NSRunLoop) runMode:beforeDate:] + 304
9  libAXSpeechManager.dylib 0x192852830 -[AXSpeechThread main] + 284
10 Foundation 0x183259efc __NSThread__start__ + 1040
11 libsystem_pthread.dylib 0x182435220 _pthread_body + 272
12 libsystem_pthread.dylib 0x182435110 _pthread_body + 290
13 libsystem_pthread.dylib 0x182433b10 thread_start + 4

The crash occurs on different threads (never the main thread). It is driving me crazy... Can anybody help me? Thanks a lot.

5 replies · 0 boosts · 3.5k views · 14h
Crashed: AXSpeech EXC_BAD_ACCESS KERN_INVALID_ADDRESS 0x000056f023efbeb0
The application is crashing with: Crashed: AXSpeech EXC_BAD_ACCESS KERN_INVALID_ADDRESS 0x000056f023efbeb0

Crashed: AXSpeech
0  libobjc.A.dylib 0x4820 objc_msgSend + 32
1  libsystem_trace.dylib 0x6c34 _os_log_fmt_flatten_object + 116
2  libsystem_trace.dylib 0x5344 _os_log_impl_flatten_and_send + 1884
3  libsystem_trace.dylib 0x4bd0 _os_log + 152
4  libsystem_trace.dylib 0x9c48 _os_log_error_impl + 24
5  TextToSpeech 0xd0a8c _pcre2_xclass_8
6  TextToSpeech 0x3bc04 TTSSpeechUnitTestingMode
7  TextToSpeech 0x3f128 TTSSpeechUnitTestingMode
8  AXCoreUtilities 0xad38 -[NSArray(AXExtras) ax_flatMappedArrayUsingBlock:] + 204
9  TextToSpeech 0x3eb18 TTSSpeechUnitTestingMode
10 TextToSpeech 0x3c948 TTSSpeechUnitTestingMode
11 TextToSpeech 0x48824 AXAVSpeechSynthesisVoiceFromTTSSpeechVoice
12 TextToSpeech 0x49804 AXAVSpeechSynthesisVoiceFromTTSSpeechVoice
13 Foundation 0xf6064 __NSThreadPerformPerform + 264
14 CoreFoundation 0x37acc CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION + 28
15 CoreFoundation 0x36d48 __CFRunLoopDoSource0 + 176
16 CoreFoundation 0x354fc __CFRunLoopDoSources0 + 244
17 CoreFoundation 0x34238 __CFRunLoopRun + 828
18 CoreFoundation 0x33e18 CFRunLoopRunSpecific + 608
19 Foundation 0x2d4cc -[NSRunLoop(NSRunLoop) runMode:beforeDate:] + 212
20 TextToSpeech 0x24b88 TTSCFAttributedStringCreateStringByBracketingAttributeWithString
21 Foundation 0xb3154 NSThread__start + 732
22 libsystem_pthread.dylib 0x24d4 _pthread_start + 136
23 libsystem_pthread.dylib 0x1a10 thread_start + 8

(attached: com.livingMedia.AajTakiPhone_issue_3ceba855a8ad2d1af83655803dc13f70_crash_session_9081fa41ced440ae9a57c22cb432f312_DNE_0_v2_stacktrace.txt)

4 replies · 1 boost · 1.3k views · 14h
Missing module 'coremltools.libmilstoragepython'
Hello! I'm following the Foundation Models adapter training guide (https://developer.apple.com/apple-intelligence/foundation-models-adapter/) on my NVIDIA DGX Spark box. I'm able to train on my own data, but the example notebook fails when I try to export the artifact as an fmadapter. I get the following error for the code block I'm trying to run. I haven't touched any of the code in the export folder. I tried exporting it on my Mac too and got the same error (given below). Would appreciate some more clarity around this. Thank you.

Code block:

from export.export_fmadapter import Metadata, export_fmadapter

metadata = Metadata(
    author="3P developer",
    description="An adapter that writes play scripts.",
)

export_fmadapter(
    output_dir="./",
    adapter_name="myPlaywritingAdapter",
    metadata=metadata,
    checkpoint="adapter-final.pt",
    draft_checkpoint="draft-model-final.pt",
)

Error:

---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
Cell In[10], line 1
----> 1 from export.export_fmadapter import Metadata, export_fmadapter
      3 metadata = Metadata(
      4     author="3P developer",
      5     description="An adapter that writes play scripts.",
      6 )
      8 export_fmadapter(
      9     output_dir="./",
     10     adapter_name="myPlaywritingAdapter",
    (...)
     13     draft_checkpoint="draft-model-final.pt",
     14 )

File /workspace/export/export_fmadapter.py:11
      8 from typing import Any
     10 from .constants import BASE_SIGNATURE, MIL_PATH
---> 11 from .export_utils import AdapterConverter, AdapterSpec, DraftModelConverter, camelize
     13 logger = logging.getLogger(__name__)
     16 class MetadataKeys(enum.StrEnum):

File /workspace/export/export_utils.py:15
     13 import torch
     14 import yaml
---> 15 from coremltools.libmilstoragepython import _BlobStorageWriter as BlobWriter
     16 from coremltools.models.neural_network.quantization_utils import _get_kmeans_lookup_table_and_weight
     17 from coremltools.optimize._utils import LutParams

ModuleNotFoundError: No module named 'coremltools.libmilstoragepython'

4 replies · 0 boosts · 767 views · Oct ’25
All generations in #Playground macro are throwing "unsafe" Generation Errors
I'm using Xcode 26 Beta 5 and get errors on any generation I try, however harmless, when wrapped in the #Playground macro.

#Playground {
    let session = LanguageModelSession()
    let topic = "pandas"
    let prompt = "Write a safe and respectful story about \(topic)."
    let response = try await session.respond(to: prompt)
}

Not seeing any issues on simulator or device. Anyone else seeing this or have any ideas? Thanks for any help!

Version 26.0 beta 5 (17A5295f)
macOS 26.0 Beta (25A5316i)

4 replies · 0 boosts · 154 views · Aug ’25
Translation framework use in Swift 6
I’m trying to integrate Apple’s Translation framework in a Swift 6 project with Approachable Concurrency enabled. I’m following the code here: https://developer.apple.com/documentation/translation/translating-text-within-your-app#Offer-a-custom-translation

Specifically, inside the following code:

.translationTask(configuration) { session in
    do {
        // Use the session the task provides to translate the text.
        let response = try await session.translate(sourceText)
        // Update the view with the translated result.
        targetText = response.targetText
    } catch {
        // Handle any errors.
    }
}

On the try await session.translate(…) line, the compiler complains that “Sending ‘session’ risks causing data races”. The extended error message is:

Sending main actor-isolated 'session' to @concurrent instance method 'translate' risks causing data races between @concurrent and main actor-isolated uses

I’ve downloaded Apple’s sample code (at the top of the linked webpage); it compiles fine as-is on Xcode 26.4, but fails with the same error as soon as I switch the Swift Language Mode to Swift 6 in the project. How can I fix this?

4 replies · 0 boosts · 261 views · 2w
Error in Xcode console
Lately I am getting this error:

GenerativeModelsAvailability.Parameters: Initialized with invalid language code: en-GB. Expected to receive two-letter ISO 639 code. e.g. 'zh' or 'en'. Falling back to: en

Does anyone know what this is and how it can be resolved? The error does not crash the app.

4 replies · 2 boosts · 1.7k views · Feb ’26
Avoid hallucinations and information from training data
Hi, For certain tasks, such as qualitative analysis or tagging, it is advisable to give the AI the option to respond with a joker / wild-card answer when it has trouble tagging or scoring. For instance, you can include this slot in the prompt as follows: output must be "not data to score" when there isn't information to score. Without this kind of slot, the AI tends to provide an answer even when there is insufficient information. Foundation Models are meant to be prompted with simple prompts, so I wonder: Is it recommended to keep this slot even though it adds verbosity and complexity? Is the best place for it the comment (description) of a guided attribute? Any other tips?

Another use case is when you want the AI to stick to the information provided in the prompt and not pull information from its training data. What is the best approach for this? Thanks in advance for any suggestions.

4 replies · 0 boosts · 853 views · Oct ’25
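One place a wild-card slot like this can live is in the guided output itself rather than in free-form prose, so the model has to choose an explicit "no data" value instead of inventing a score. A minimal sketch of that idea (the type, field names, and wording below are hypothetical, not an Apple-documented pattern):

import FoundationModels

@Generable
struct ScoringResult {
    @Guide(description: "Score from 1 to 5, or 0 when there is no data to score")
    var score: Int

    @Guide(description: "True only when the passage lacks enough information to score")
    var notEnoughInformation: Bool

    @Guide(description: "One-sentence justification that quotes only the provided passage")
    var rationale: String
}

func score(_ passage: String) async throws -> ScoringResult {
    // The instructions restrict the model to the passage, which also speaks to the
    // second question in the post (not pulling facts from training data).
    let session = LanguageModelSession(
        instructions: "Score the passage in the prompt using only the passage itself. If it does not contain enough information, set notEnoughInformation to true and score to 0."
    )
    let response = try await session.respond(
        to: "Passage to score:\n\(passage)",
        generating: ScoringResult.self
    )
    return response.content
}

Putting the escape hatch in a @Guide description keeps the prompt itself short, and the rationale field makes it easier to spot answers that drift beyond the provided text.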
Using RAG on local documents from Foundation Model
I am watching a few WWDC sessions on Foundation Models and their usage, and it looks pretty cool. I was wondering if it is possible to perform RAG on the user's documents on the device, and eventually on iCloud. Let's say I have many pages of documents about me, and I want the foundation model to access the information in those documents to answer questions about me that can be retrieved from them. How can this be done? Thanks

4 replies · 2 boosts · 489 views · Jun ’25
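There is no built-in on-device RAG pipeline in the Foundation Models framework that I am aware of, but a rough do-it-yourself sketch is to split the documents into paragraph-sized chunks, pick the chunks that overlap the question, and hand only those to the model. Everything below (file layout, the keyword-overlap heuristic, function names) is illustrative rather than an Apple API:

import Foundation
import FoundationModels

// Split plain-text documents in a folder into paragraph-sized chunks.
func loadChunks(from folder: URL) throws -> [String] {
    let files = try FileManager.default
        .contentsOfDirectory(at: folder, includingPropertiesForKeys: nil)
        .filter { $0.pathExtension == "txt" }
    return try files.flatMap { url -> [String] in
        let text = try String(contentsOf: url, encoding: .utf8)
        return text.components(separatedBy: "\n\n").filter { !$0.isEmpty }
    }
}

// Very naive relevance score: how many of the question's words appear in a chunk.
// A real implementation might use NLEmbedding or another vector index instead.
func rank(_ chunks: [String], for question: String, keep: Int = 4) -> [String] {
    let queryWords = Set(question.lowercased().split(separator: " "))
    return chunks
        .sorted { lhs, rhs in
            let l = Set(lhs.lowercased().split(separator: " ")).intersection(queryWords).count
            let r = Set(rhs.lowercased().split(separator: " ")).intersection(queryWords).count
            return l > r
        }
        .prefix(keep)
        .map { $0 }
}

func answerAboutMe(question: String, documentsFolder: URL) async throws -> String {
    let chunks = try loadChunks(from: documentsFolder)
    let context = rank(chunks, for: question).joined(separator: "\n\n")
    let session = LanguageModelSession(
        instructions: "Answer using only the personal documents quoted in the prompt."
    )
    return try await session.respond(to: "Documents:\n\(context)\n\nQuestion: \(question)").content
}

The same idea extends to iCloud Drive documents as long as the app can read them as text; the retrieval step is what keeps the prompt within the model's context window.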
Python 3.13
Hello, Are there any plans to compile a Python 3.13 version of tensorflow-metal? I just got my new Mac mini, and the version of Python automatically installed by brew is Python 3.13. If I were in a hurry I could manage to get Python 3.12 installed and use the corresponding tensorflow-metal version, but I'm not in a hurry. Many thanks, Alan

4 replies · 5 boosts · 1.8k views · Dec ’25
Model Rate Limits?
I'm trying the Foundation Models framework, and when I run several sessions in a loop, I get a thrown error saying I'm hitting a rate limit. Are these rate limits documented? What's the best practice here? I'm trying to run the models against new content downloaded from a web service, where I might get ~200 items in a given download. They're relatively small, but there can be that many items to process in a loop.

4 replies · 1 boost · 661 views · Jun ’25
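The limits do not appear to be documented, so a defensive pattern is to process the items sequentially and retry with exponential backoff when a request throws. The sketch below treats any error as possibly transient because I am not certain which GenerationError case corresponds to rate limiting; the retry counts and delays are arbitrary placeholders:

import Foundation
import FoundationModels

// Process downloaded items one at a time, retrying with exponential backoff
// when a request throws (for example, a rate-limit error).
func summarizeAll(_ items: [String]) async throws -> [String] {
    var results: [String] = []

    for item in items {
        var attempt = 0
        while true {
            do {
                // A fresh session per item keeps each context window small.
                let session = LanguageModelSession(
                    instructions: "Summarize the provided text in one sentence."
                )
                let response = try await session.respond(to: item)
                results.append(response.content)
                break
            } catch {
                attempt += 1
                guard attempt <= 5 else { throw error }
                // Back off 1s, 2s, 4s, 8s, 16s before giving up on this item.
                let delay = UInt64(pow(2.0, Double(attempt - 1)) * 1_000_000_000)
                try await Task.sleep(nanoseconds: delay)
            }
        }
    }
    return results
}

Whether one long-lived session or a session per item is better probably depends on whether the items share context; the per-item variant above is simply the more conservative choice when hundreds of small, independent items arrive at once.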
Does Generable support recursive schemas?
I've run into an issue with a small Foundation Models test with Generable. I'm getting a strange error message with this Generable; I was able to get simpler ones to work. Is this because the Generable is recursive, with a property of [HTMLDiv]?

The error message is:

FoundationModels/SchemaAugmentor.swift:209: Fatal error: 'try!' expression unexpectedly raised an error: FoundationModels.GenerationSchema.SchemaError.undefinedReferences(schema: Optional("SafeResponse<HTMLDiv>"), references: ["HTMLDiv"], context: FoundationModels.GenerationSchema.SchemaError.Context(debugDescription: "Undefined types: [HTMLDiv]", underlyingErrors: []))

The code is:

import FoundationModels
import Playgrounds

@Generable
struct HTMLDiv {
    @Guide(description: "Optional named ID, useful for nicknames")
    var id: String? = nil

    @Guide(description: "Optional visible HTML text")
    var textContent: String? = nil

    @Guide(description: "Any child elements", .count(0...10))
    var children: [HTMLDiv] = []

    static var sample: HTMLDiv {
        HTMLDiv(
            id: "profileToolbar",
            children: [
                HTMLDiv(textContent: "Log in"),
                HTMLDiv(textContent: "Sign up"),
            ]
        )
    }
}

#Playground {
    do {
        let session = LanguageModelSession {
            "Your job is to generate simple HTML markup"
            "Here is an example response to the prompt: 'Make a profile toolbar':"
            HTMLDiv.sample
        }
        let response = try await session.respond(
            to: "Make a sign up form",
            generating: HTMLDiv.self
        )
        print(response.content)
    } catch {
        print(error)
    }
}

4 replies · 0 boosts · 175 views · Jul ’25
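If the recursion is indeed what SchemaAugmentor is rejecting, one workaround to try is bounding the nesting explicitly so that no type refers to itself. The split into a separate child type below is only an illustration of that idea (the names are made up), not a confirmed fix:

import FoundationModels

// Children use a separate, non-recursive type, so the generation schema
// contains no self-reference. Deeper nesting would need another level.
@Generable
struct HTMLLeaf {
    @Guide(description: "Optional visible HTML text")
    var textContent: String?
}

@Generable
struct FlatHTMLDiv {
    @Guide(description: "Optional named ID, useful for nicknames")
    var id: String?

    @Guide(description: "Optional visible HTML text")
    var textContent: String?

    @Guide(description: "Any child elements", .count(0...10))
    var children: [HTMLLeaf]
}

The obvious cost is that arbitrary nesting depth is lost, which may or may not matter for the HTML use case; if recursive schemas are supposed to be supported, the SchemaError text above seems worth filing as feedback.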
Massive CoreML latency spike on live AVFoundation camera feed vs. offline inference (CPU+ANE)
Hello, I’m experiencing a severe performance degradation when running CoreML models on a live AVFoundation video feed compared to offline or synthetic inference. This happens across multiple models I've converted (including SCI, RTMPose, and RTMW) and affects multiple devices.

The Environment
OS: macOS 26.3, iOS 26.3, iPadOS 26.3
Hardware: Mac14,6 (M2 Max), iPad Pro 11 M1, iPhone 13 mini
Compute Units: cpuAndNeuralEngine

The Numbers
When testing my SCI_output_image_int8.mlpackage model, the inference timings are drastically different:
Synthetic/Offline Inference: ~1.34 ms
Live Camera Inference: ~15.96 ms
Preprocessing is completely ruled out as the bottleneck. My profiling shows total preprocessing (nearest-neighbor resize + feature provider creation) takes only ~0.4 ms in camera mode. Furthermore, no frames are being dropped.

What I've Tried
I am building a latency-critical app and have implemented almost every recommended optimization to try and fix this, but the camera-feed penalty remains:
• Matched the AVFoundation camera output format exactly to the model input (640x480 at 30/60 fps).
• Used IOSurface-backed pixel buffers for everything (camera output, synthetic buffer, and resize buffer).
• Enabled outputBackings.
• Loaded the model once and reused it for all predictions.
• Configured MLModelConfiguration with reshapeFrequency = .frequent and specializationStrategy = .fastPrediction.
• Wrapped inference in ProcessInfo.processInfo.beginActivity(options: .latencyCritical, reason: "CoreML_Inference").
• Set the DispatchQueue to qos: .userInteractive.
• Disabled the idle timer and enabled iOS Game Mode.
• Exported models using coremltools 9.0 (deployment target iOS 26) with ImageType inputs/outputs and INT8 quantization.

Reproduction
To completely rule out UI or rendering overhead, I wrote a standalone Swift CLI script that isolates the AVFoundation and CoreML pipeline. The script clearly demonstrates the ~15 ms latency on live camera frames versus the ~1 ms latency on synthetic buffers. (I have attached camera_coreml_benchmark.swift and the CoreML model (a very light low-light enhancement model) to this repo on GitHub: https://github.com/pzoltowski/apple-coreml-camera-latency-repro)

My Question
Is this massive overhead expected behavior for AVFoundation + Core ML on live feeds, or is this a framework/runtime bug? If expected, what is the Apple-recommended pattern to bypass this camera-only inference slowdown?

One thing I found interesting: when running in debug mode, the model was faster (not as fast as in the performance benchmark, but faster than 16 ms). Also, if I did some dummy calculation on a different DispatchQueue, the model seemed to get slightly faster. So maybe it's related to ANE power-state issues (jitter / SoC wake), with the ANE going to sleep too fast and taking a long time to wake up? Doing a dummy calculation in the background, though, is probably not a solution. Thanks in advance for any insights!

4 replies · 0 boosts · 509 views · 2w
Support for Content Exclusion Files in Apple Intelligence
I am writing to inquire about content exclusion capabilities within Apple Intelligence, particularly regarding the use of configuration files such as .aiignore or .aiexclude, similar to what exists in other AI-assisted coding tools. These mechanisms are highly valuable in managing what content AI systems can access, especially in environments that involve sensitive code or proprietary frameworks. I would appreciate it if anyone could clarify whether Apple Intelligence currently supports any exclusion configuration for AI-assisted features. If so, could you kindly provide documentation or guidance on how developers can implement these controls? If not, is there any plan to include such a feature in future updates?

4 replies · 0 boosts · 892 views · Nov ’25