I am trying to benchmark whether the Qwen3 1.7B model can run on an iPhone SE 3 (4 GB RAM).
My core problem: even with weight quantization, the SE 3 is not able to load the model into memory.
What I've tried:
I am converting a Torch model to the Core ML format using coremltools. I have tried the following combinations of quantization and context length:
8-bit + 1024
8-bit + 2048
4-bit + 1024
4-bit + 2048
All of the above quantizations use a dynamic input shape with a default of [1, 1], in the hope that the full context length does not get allocated in memory.
The 4-bit model is approximately 865 MB on disk.
The 8-bit model is approximately 1.7 GB on disk.
During load:
With int4 quantization, memory spikes a lot during the initial load. Could this be because many operations are converted to int8 or fp16, since Core ML does not perform operations natively on int4?
With int8, the profiler shows memory staying below 2 GB (only around 900 MB), but the model still fails to load and shows the following error. 2 GB is the limit at which jetsam kills the app on the iPhone SE 3:
E5RT: Error(s) occurred compiling MIL to BNNS graph:
[CreateBnnsGraphProgramFromMIL]: BNNS Graph Compile:
failed to preallocate file with error: No space left on device
for path: /var/mobile/Containers/Data/Application/
5B8BB7D2-06A6-4BAE-A042-407B6D805E7C/Library/Caches
/com.tss.qwen3-coreml/
com.apple.e5rt.e5bundlecache/
23A341/<long key>.tmp.12586_4362093968.bundle/
H14.bundle/main/main_bnns/bnns_program.bnnsir
Some online sources have suggested activation quantization, but I am unsure whether that will have any impact on loading (the spike happens during load, not inference).
The model spec also suggests that there is no dequantization happening (e.g., from 4-bit to fp16).
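For completeness, this is roughly how I load the model on device. The bundled model name below is illustrative, and the compute-unit setting is just something I am experimenting with, since the BNNS path in the error suggests a CPU graph is being compiled; whether it avoids the compile-time spike is unverified.

import CoreML

func loadQwen() async throws -> MLModel {
    // Illustrative bundled model name.
    guard let url = Bundle.main.url(forResource: "Qwen3_1_7B_int4", withExtension: "mlmodelc") else {
        throw CocoaError(.fileNoSuchFile)
    }
    let config = MLModelConfiguration()
    // Steering work away from the CPU/BNNS backend is one experiment;
    // it may or may not change the load-time behavior above.
    config.computeUnits = .cpuAndNeuralEngine
    return try await MLModel.load(contentsOf: url, configuration: config)
}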
So I had a couple of queries:
Has anyone faced similar issues?
What could be the reasons for the temporary memory spike during load?
What are approaches that can be adopted to deal with this issue?
Any help would be greatly appreciated. Thank you.
Originally, I set my app's iOS deployment target to 18.1, but now that I'm integrating the Foundation Models framework I have set it to iOS 26.0. Is this OK?
I am trying to test FoundationModels in a Swift Playground in Xcode 26.2, macOS 26.3, and am running into an issue. The following simple code generates an error:
import FoundationModels
@Generable
struct Specifications {
    @Guide(description: "Search for color")
    var color: String
}
I see the following error message in the console:
error: AIPlayground.playground:4:8: external macro implementation type 'FoundationModelsMacros.GenerableMacro' could not be found for macro 'Generable(description:)'; plugin for module 'FoundationModelsMacros' not found
The Xcode editor does not appear to recognize the @Generable or @Guide macros, despite importing FoundationModels. What step/setting am I missing?
I have built a macOS machine intelligence application that uses Apple Intelligence. Part of the application preprocesses text, and for longer text content I have implemented chunking to get around the token limit. However, performance is now limited by the fact that Apple Intelligence operates sequentially, which has a large impact on the application.
Is there any way to operate Apple Intelligence in parallel, or via a streaming interface? Since Apple Intelligence offers Private Cloud Compute, I was hoping to send multiple chunks in parallel, which would significantly improve performance.
Any suggestions would be welcome. This could also be considered a request for a future enhancement.
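For what it's worth, the direction I am experimenting with (assuming the app uses the Foundation Models framework, and assuming separate sessions may run concurrently, which the docs do not promise) looks like this sketch:

import FoundationModels

/// Processes text chunks concurrently, one session per chunk, and returns the
/// results in the original order. Whether the work actually runs in parallel
/// on device or in Private Cloud Compute is not documented; this only removes
/// the sequential structure from my own code.
func summarizeChunks(_ chunks: [String]) async throws -> [String] {
    try await withThrowingTaskGroup(of: (Int, String).self) { group in
        for (index, chunk) in chunks.enumerated() {
            group.addTask {
                let session = LanguageModelSession()
                let response = try await session.respond(to: "Summarize: \(chunk)")
                return (index, response.content)
            }
        }
        var results = [String](repeating: "", count: chunks.count)
        for try await (index, summary) in group {
            results[index] = summary
        }
        return results
    }
}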
Topic: Machine Learning & AI
SubTopic: Apple Intelligence
Things I did:
created an Intents Extension target
added "Supported Intents" to both my main app target and the intent extension, with "INAddTasksIntent" and "INCreateNoteIntent"
created the AppIntentVocabulary in my main app target
created the handlers in the code in the Intents Extension target
class AddTaskIntentHandler: INExtension, INAddTasksIntentHandling {
    func resolveTaskTitles(for intent: INAddTasksIntent) async -> [INSpeakableStringResolutionResult] {
        if let taskTitles = intent.taskTitles {
            return taskTitles.map { INSpeakableStringResolutionResult.success(with: $0) }
        } else {
            return [INSpeakableStringResolutionResult.needsValue()]
        }
    }

    func handle(intent: INAddTasksIntent) async -> INAddTasksIntentResponse {
        // my code to handle this...
        let response = INAddTasksIntentResponse(code: .success, userActivity: nil)
        response.addedTasks = tasksCreated.map {
            INTask(
                title: INSpeakableString(spokenPhrase: $0.name),
                status: .notCompleted,
                taskType: .completable,
                spatialEventTrigger: nil,
                temporalEventTrigger: intent.temporalEventTrigger,
                createdDateComponents: DateHelper.localCalendar().dateComponents([.year, .month, .day, .minute, .hour], from: Date.now),
                modifiedDateComponents: nil,
                identifier: $0.id
            )
        }
        return response
    }
}

class AddItemIntentHandler: INExtension, INCreateNoteIntentHandling {
    func resolveTitle(for intent: INCreateNoteIntent) async -> INSpeakableStringResolutionResult {
        if let title = intent.title {
            return INSpeakableStringResolutionResult.success(with: title)
        } else {
            return INSpeakableStringResolutionResult.needsValue()
        }
    }

    func resolveGroupName(for intent: INCreateNoteIntent) async -> INSpeakableStringResolutionResult {
        if let groupName = intent.groupName {
            return INSpeakableStringResolutionResult.success(with: groupName)
        } else {
            return INSpeakableStringResolutionResult.needsValue()
        }
    }

    func handle(intent: INCreateNoteIntent) async -> INCreateNoteIntentResponse {
        do {
            // my code for handling this...
            let response = INCreateNoteIntentResponse(code: .success, userActivity: nil)
            response.createdNote = INNote(
                title: INSpeakableString(spokenPhrase: itemName),
                contents: itemNote.map { [INTextNoteContent(text: $0)] } ?? [],
                groupName: INSpeakableString(spokenPhrase: list.name),
                createdDateComponents: DateHelper.localCalendar().dateComponents([.day, .month, .year, .hour, .minute], from: Date.now),
                modifiedDateComponents: nil,
                identifier: newItem.id
            )
            return response
        } catch {
            return INCreateNoteIntentResponse(code: .failure, userActivity: nil)
        }
    }
}
uninstalled my app
restarted my physical device and simulator
Yet, when I say "Remind me to buy dog food in Index" (Index is the name of my app), as shown in the examples for INAddTasksIntent, Siri says that a list named "Index" doesn't exist in the Apple Reminders app, instead of routing the request to my app.
Am I missing something?
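One thing I have not added is a resolution method for the target task list. I'm not sure whether its absence is what routes the request to Reminders, but for reference this is what I would try next (the field values are illustrative):

extension AddTaskIntentHandler {
    func resolveTargetTaskList(for intent: INAddTasksIntent) async -> INTaskListResolutionResult {
        guard let spokenList = intent.targetTaskList?.title else {
            return .needsValue()
        }
        // Map the spoken list name onto one of my app's lists (lookup elided).
        let list = INTaskList(
            title: spokenList,
            tasks: [],
            groupName: nil,
            createdDateComponents: nil,
            modifiedDateComponents: nil,
            identifier: spokenList.spokenPhrase
        )
        return .success(with: list)
    }
}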
What's the code to warm it up once? I saw this in a developer video but cannot find it again. The goal is to prevent a cold run within the application.
Thank you in advance!
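If it helps, what I think the video showed (an assumption on my part, since I can't find it again) is the prewarm call on the Foundation Models session, roughly:

import FoundationModels

// Create the session once (for example at launch) and warm it up so the
// first real request doesn't pay the cold-start cost.
let session = LanguageModelSession()
session.prewarm()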
Environment:
macOS 26.2 (Tahoe)
Xcode 16.3
Apple Silicon (M4)
Sandboxed Mac App Store app
Description:
Repeated use of VNRecognizeTextRequest causes permanent memory growth in the host process. The physical footprint increases by approximately 3-15 MB per OCR call and never returns to baseline, even after all references to the request, handler, observations, and image are released.
private func selectAndProcessImage() {
    let panel = NSOpenPanel()
    panel.allowedContentTypes = [.image]
    panel.allowsMultipleSelection = false
    panel.canChooseDirectories = false
    panel.message = "Select an image for OCR processing"
    guard panel.runModal() == .OK, let url = panel.url else { return }

    selectedImageURL = url
    isProcessing = true
    recognizedText = "Processing..."

    // Run OCR on a background thread to keep UI responsive
    let workItem = DispatchWorkItem {
        let result = performOCR(on: url)
        DispatchQueue.main.async {
            recognizedText = result
            isProcessing = false
        }
    }
    DispatchQueue.global(qos: .userInitiated).async(execute: workItem)
}

private func performOCR(on url: URL) -> String {
    // Wrap EVERYTHING in autoreleasepool so all ObjC objects are drained immediately
    let resultText: String = autoreleasepool {
        // Load image and convert to CVPixelBuffer for explicit memory control
        guard let imageData = try? Data(contentsOf: url) else {
            return "Error: Could not read image file."
        }
        guard let nsImage = NSImage(data: imageData) else {
            return "Error: Could not create image from file data."
        }
        guard let cgImage = nsImage.cgImage(forProposedRect: nil, context: nil, hints: nil) else {
            return "Error: Could not create CGImage."
        }

        let width = cgImage.width
        let height = cgImage.height

        // Create a CVPixelBuffer from the CGImage
        var pixelBuffer: CVPixelBuffer?
        let attrs: [String: Any] = [
            kCVPixelBufferCGImageCompatibilityKey as String: true,
            kCVPixelBufferCGBitmapContextCompatibilityKey as String: true
        ]
        let status = CVPixelBufferCreate(
            kCFAllocatorDefault,
            width,
            height,
            kCVPixelFormatType_32ARGB,
            attrs as CFDictionary,
            &pixelBuffer
        )
        guard status == kCVReturnSuccess, let buffer = pixelBuffer else {
            return "Error: Could not create CVPixelBuffer (status: \(status))."
        }

        // Draw the CGImage into the pixel buffer
        CVPixelBufferLockBaseAddress(buffer, [])
        guard let context = CGContext(
            data: CVPixelBufferGetBaseAddress(buffer),
            width: width,
            height: height,
            bitsPerComponent: 8,
            bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
            space: CGColorSpaceCreateDeviceRGB(),
            bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue
        ) else {
            CVPixelBufferUnlockBaseAddress(buffer, [])
            return "Error: Could not create CGContext for pixel buffer."
        }
        context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
        CVPixelBufferUnlockBaseAddress(buffer, [])

        // Run OCR
        let requestHandler = VNImageRequestHandler(cvPixelBuffer: buffer, options: [:])
        let request = VNRecognizeTextRequest()
        request.recognitionLevel = .accurate
        request.usesLanguageCorrection = true

        do {
            try requestHandler.perform([request])
        } catch {
            return "Error during OCR: \(error.localizedDescription)"
        }

        guard let observations = request.results, !observations.isEmpty else {
            return "No text found in image."
        }

        let lines = observations.compactMap { observation in
            observation.topCandidates(1).first?.string
        }

        // Explicitly nil out the pixel buffer before the pool drains
        pixelBuffer = nil

        return lines.joined(separator: "\n")
    }
    // Everything — Data, NSImage, CGImage, CVPixelBuffer, VN objects — released here
    return resultText
}
We're in the process of migrating our app's custom intents from the older SiriKit Custom Intents framework to App Intents. The migration has been straightforward for our app-specific actions, and we appreciate the improved discoverability and Apple Intelligence integration that App Intents provides.
However, we also implement SiriKit domain intents for calling and messaging:
INStartCallIntent / INStartCallIntentHandling
INSendMessageIntent / INSendMessageIntentHandling
These require us to maintain an Intents Extension to handle contact resolution and the actual call/message operations.
Our questions:
Is there a planned App Intents equivalent for these SiriKit domains (calling, messaging), or is the Intents Extension approach still the recommended path?
If we want to support phrases like "Call [contact] on [AppName]" or "Send a message to [contact] on [AppName]" with Apple Intelligence integration, is there any way to achieve this with App Intents today?
Are there any WWDC sessions or documentation we may have missed that addresses the migration path for SiriKit domain intents?
What we've reviewed:
"Migrate custom intents to App Intents" Tech Talk
"Bring your app's core features to users with App Intents" (WWDC24)
App Intents documentation
These resources clearly explain custom intent migration but don't seem to address the system domain intents.
Our current understanding:
Based on our research, it appears SiriKit domain intents should remain on the older framework, while custom intents should migrate to App Intents. We'd like to confirm this is correct and understand if there's a future direction we should be planning for.
Thank you!
Hi all, I spent the last few months developing an MLX/Ollama local AI benchmarking suite for Apple Silicon, written in pure Swift, signed with an Apple Developer certificate, open source (GPL), and free. I would love some feedback to continue development. It is the only benchmarking suite I know of that natively supports live power metrics and MLX, as well as quick exports of benchmark results and an arena mode (Model A vs. B with history). I really want this project to succeed and see widespread use; getting 75 stars on the GitHub repo makes it eligible for Homebrew/Cask distribution.
Github Repo
Topic: Machine Learning & AI
SubTopic: Core ML
I get the following error when running these commands in a Jupyter notebook:
v = tf.Variable(initial_value=tf.random.normal(shape=(3, 1)))
v[0, 0].assign(3.)
Environment:
python == 3.11.14
tensorflow==2.19.1
tensorflow-metal==1.2.0
{
"name": "InvalidArgumentError",
"message": "Cannot assign a device for operation ResourceStridedSliceAssign: Could not satisfy explicit device specification '/job:localhost/replica:0/task:0/device:GPU:0' because no supported kernel for GPU devices is available.\nColocation Debug Info:\nColocation group had the following types and supported devices: \nRoot Member(assigned_device_name_index_=1 requested_device_name_='/job:localhost/replica:0/task:0/device:GPU:0' assigned_device_name_='/job:localhost/replica:0/task:0/device:GPU:0' resource_device_name_='/job:localhost/replica:0/task:0/device:GPU:0' supported_device_types_=[CPU] possible_devices_=[]\nResourceStridedSliceAssign: CPU \n_Arg: GPU CPU \n\nColocation members, user-requested devices, and framework assigned devices, if any:\n ref (_Arg) framework assigned device=/job:localhost/replica:0/task:0/device:GPU:0\n ResourceStridedSliceAssign (ResourceStridedSliceAssign) /job:localhost/replica:0/task:0/device:GPU:0\n\nOp: ResourceStridedSliceAssign\n
[...]
[[{{node ResourceStridedSliceAssign}}]] [Op:ResourceStridedSliceAssign] name: strided_slice/_assign"
}
It seems like the ResourceStridedSliceAssign operation is not implemented for the GPU.
Hi
From https://developer.apple.com/metal/jax/
I checked all active workflows on https://github.com/jax-ml/jax and all open issues tagged Metal, and it seems that in December 2025 the JAX maintainers closed those issues, citing no active development on jax-metal; the project appears dead.
We need to know how we can leverage Apple silicon for accelerated projects using popular academic libraries and tools.
Is the JAX backend still going to be supported, or does Apple plan to bring something of its own that might be platform agnostic?
Thanks
Topic: Machine Learning & AI
SubTopic: Create ML
A Foundation Models bug I keep running into during the preview phase of testing: the error never occurs or breaks the app when I am testing in the simulator or on a device, but it sometimes appears during a longer session while in a preview.
The error breaks and crashes the preview, and the warning on it is labeled: "Assert in LanguageModelFeedback.swift".
This is something I keep running into while using Foundation Models in my project.
Hello All,
I’m working on a computer-vision–heavy iOS application that uses the camera, LiDAR depth maps, and semantic segmentation to reason about the environment (object identification, localization and measurement - not just visualization).
Current architecture
I initially built the image pipeline around CIImage as a unifying abstraction. It seemed like a good idea because:
CIImage integrates cleanly with Vision, ARKit, AVFoundation, Metal, Core Graphics, etc.
It provides a rich set of out-of-the-box transforms and filters.
It is immutable and thread-safe, which significantly simplified concurrency in a multi-queue pipeline.
The LiDAR depth maps, semantic segmentation masks, etc. were treated as CIImages, with conversion to CVPixelBuffer or MTLTexture only at the edges when required.
Problem
I’ve run into cases where Core Image transformations do not preserve numeric fidelity for non-visual data.
Example:
Rendering a CIImage-backed segmentation mask into a larger CVPixelBuffer can cause label values to change in predictable but incorrect ways.
This occurs even when:
using nearest-neighbor sampling
disabling color management (workingColorSpace / outputColorSpace = NSNull)
applying identity or simple affine transforms
I’ve confirmed via controlled tests that:
Metal → CVPixelBuffer paths preserve values correctly
CIImage → CVPixelBuffer paths can introduce value changes when resampling or expanding the render target
This makes CIImage unsafe as a source of numeric truth for segmentation masks and depth-based logic, even though it works well for visualization, and I should have realized this much sooner.
Direction I’m considering
I’m now considering refactoring toward more intent-based abstractions instead of a single image type, for example:
Visual images: CIImage (camera frames, overlays, debugging, UI)
Scalar fields: depth / confidence maps backed by CVPixelBuffer + Metal
Label maps: segmentation masks backed by integer-preserving buffers (no interpolation, no transforms)
In this model, CIImage would still be used extensively — but primarily for visualization and perceptual processing, not as the container for numerically sensitive data.
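For what it's worth, the "label maps backed by integer-preserving buffers" idea can be as simple as reading the bytes directly, with no Core Image in the path. A minimal sketch, assuming a kCVPixelFormatType_OneComponent8 mask (the helper is illustrative, not code from my pipeline):

import CoreVideo

// Read a class label from a OneComponent8 segmentation mask without routing
// the buffer through Core Image, so values are never resampled or color-managed.
func label(at x: Int, _ y: Int, in mask: CVPixelBuffer) -> UInt8? {
    guard CVPixelBufferGetPixelFormatType(mask) == kCVPixelFormatType_OneComponent8,
          x >= 0, x < CVPixelBufferGetWidth(mask),
          y >= 0, y < CVPixelBufferGetHeight(mask) else { return nil }

    CVPixelBufferLockBaseAddress(mask, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(mask, .readOnly) }

    guard let base = CVPixelBufferGetBaseAddress(mask) else { return nil }
    // One byte per pixel; bytesPerRow may include padding, so index by row stride.
    let bytesPerRow = CVPixelBufferGetBytesPerRow(mask)
    return base.assumingMemoryBound(to: UInt8.self)[y * bytesPerRow + x]
}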
Thread safety concern
One of the original advantages of CIImage was that it is thread-safe by design, and that was my biggest incentive.
For CVPixelBuffer / MTLTexture–backed data, I’m considering enforcing thread safety explicitly via:
Swift Concurrency (actor-owned data, explicit ownership)
Questions
For those who may have experience with CV / AR / imaging-heavy iOS apps, I was hoping to learn the following:
Is this separation of image intent (visual vs numeric vs categorical) a reasonable architectural direction?
Do you generally keep CIImage at the heart of your pipeline, or push it to the edges (visualization only)?
How do you manage thread safety and ownership when working heavily with CVPixelBuffer and Metal? Actor-based abstractions, GCD, or something ad hoc?
Are there any best practices or gotchas around using Core Image with depth maps or segmentation masks that I should be aware of?
I'd really appreciate any guidance or experience-based advice. I suspect I've hit a boundary of Core Image's design, and I'm trying to refactor in a way that doesn't create too much immediate tech debt and remains robust and maintainable long-term.
Thank you in advance!
We are trying to implement a custom encryption scheme for our Core ML models. Our goal is to bundle encrypted models, decrypt them into memory at runtime, and instantiate the MLModel without the unencrypted model file ever touching the disk.
We have looked into the native Apple encryption described at https://developer.apple.com/documentation/coreml/encrypting-a-model-in-your-app, but it has limitations: it doesn't work on Intel Macs or with SIP disabled, and it doesn't work when loading from a dylib.
It seems like most of the Core ML APIs require a file path. There are the MLModelAsset APIs, but I think they just write a compiled modelc back to disk when compiling; I can't find any information confirming that (and I'm also concerned that this seems to be an older API and would mean compiling at runtime).
I am aware that the native encryption will be much more secure, but I would prefer not to have the models readable on disk.
Does anyone know if this is possible, or of any alternatives for obfuscating Core ML models? Thanks.
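For reference, the in-memory path we are hoping for would look roughly like the sketch below. The decrypted bytes are assumed to be an uncompiled .mlmodel spec produced by our own (hypothetical) decryption step, and we have not been able to confirm whether Core ML itself writes a compiled copy to disk when loading an asset this way.

import CoreML

/// Loads a Core ML model from decrypted, in-memory specification bytes,
/// without our code ever handing Core ML a plaintext file path.
func loadDecryptedModel(from specData: Data) async throws -> MLModel {
    let asset = try MLModelAsset(specificationData: specData)
    let configuration = MLModelConfiguration()
    return try await MLModel.load(asset: asset, configuration: configuration)
}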
Problem: CoreML produces NaN on GPU (works fine on CPU) when running transformer attention with fused QKV projection on macOS 26.2.
Root cause: The common::fuse_transpose_matmul optimization pass triggers a Metal kernel bug when sliced tensors feed into matmul(transpose_y=True).
Workaround:
pipeline = ct.PassPipeline.DEFAULT
pipeline.remove_passes(['common::fuse_transpose_matmul'])
mlmodel = ct.convert(model, ..., pass_pipeline=pipeline)
Minimal repro: https://github.com/imperatormk/coreml-birefnet/blob/main/apple_bug_repro.py
Affected: Any ViT/Swin/transformer with fused QKV attention (BiRefNet, etc.)
Has anyone else hit this? I filed an FB report too.
Topic: Machine Learning & AI
SubTopic: Core ML
Hi everyone,
I’m exploring ideas around on-device analysis of user typing behavior on iPhone, and I’d love input from others who’ve worked in this area or thought about similar problems.
Conceptually, I’m interested in things like:
High-level sentiment or tone inferred from what a user types over time, using ML models
Identifying a user’s most important or frequent topics over a recent window (e.g., “last week”)
Aggregated insights rather than raw text (privacy-preserving summaries: e.g., your typo-rate by hour to infer highly efficient time slots or "take-a-break" warning typing errors increase)
I understand the significant privacy restrictions around keyboard input on iOS, especially for third-party keyboards and system text fields. I’m not trying to bypass those constraints—rather, I’m curious about what’s realistically possible within Apple’s frameworks and policies. (For instance, Grammarly as a correction tool includes some information about tone)
Questions I’m thinking through:
Are there any recommended approaches for on-device text analysis that don’t rely on capturing raw keystrokes?
Has anyone used NLP / Core ML / Natural Language successfully for similar summarization or sentiment tasks, scoped only to user-explicit input?
For custom keyboards, what kinds of derived or transient signals (if any) are acceptable to process and summarize locally?
Any design patterns that balance usefulness with Apple’s privacy expectations?
If you’ve built something adjacent—journaling, writing analytics, well-being apps, etc.—I’d appreciate hearing what worked, what didn’t, and what Apple reviewers were comfortable with.
Thanks in advance for any ideas or references 🙏
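For concreteness, the kind of on-device scoring I have in mind for the sentiment question above is something like this sketch, using the Natural Language framework and only text the user explicitly provides (the example strings are made up):

import NaturalLanguage

/// Returns a sentiment score in -1.0...1.0 for user-provided text, entirely on device.
func sentimentScore(for text: String) -> Double? {
    guard !text.isEmpty else { return nil }
    let tagger = NLTagger(tagSchemes: [.sentimentScore])
    tagger.string = text
    let (tag, _) = tagger.tag(at: text.startIndex, unit: .paragraph, scheme: .sentimentScore)
    return tag.flatMap { Double($0.rawValue) }
}

// Example: aggregate a privacy-preserving daily average instead of storing raw text.
let entries = ["Had a great writing session today", "This bug is driving me crazy"]
let scores = entries.compactMap(sentimentScore(for:))
let dailyAverage = scores.reduce(0, +) / Double(max(scores.count, 1))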
Topic: Machine Learning & AI
SubTopic: General
I have a series of shortcuts that I’ve written that use the “Use Model” action to do various things. For example, I have a shortcut “Clipboard Markdown to Notes” that takes the content of the clipboard, creates a new note in Notes, converts the markdown content to rich text, adds it to the note etc.
One key step is to analyze the markdown content with “Use Model” and generate a short descriptive title for the note.
I use the on-device model for this, but sometimes the content and prompt exceed the context window size and the action fails with an error message to that effect.
In that case, I’d like to either repeat the action using the Cloud model, or, if the error was a refusal, to prompt the user to enter a title to use.
I've tried using an IF action based on whether the response had any text in it, but that didn't work. No matter what I've tried, I can't seem to find a way to catch the error from Use Model, determine what the error was, and take appropriate action.
Is there a way to do this?
(And by the way, a huge ”thank you” to whoever had the idea of making AppIntents visible in Shortcuts and adding the Use Model action — has made a huge difference already, and it lets us see what Siri will be able to use as well.)
I am writing an app that parses text and conducts some actions. I don't want to give too much away ;)
However, I am having a huge problem with token sizes. LanguageModelSession will of course give me the on-device model's 4,096-token context window, but when I go over 4,096 tokens my code doesn't seem to fall back to Private Cloud Compute, or even to the system-configured ChatGPT. Can anyone assist me with this? After reading the docs, it's very unclear how the transition between the three takes place.
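For reference, my current understanding (an assumption on my part, not something the docs state) is that nothing falls back automatically; the session just throws, and my code has to catch that and decide what to do, roughly like this:

import FoundationModels

func summarize(_ text: String) async throws -> String {
    let session = LanguageModelSession()
    do {
        return try await session.respond(to: "Summarize: \(text)").content
    } catch LanguageModelSession.GenerationError.exceededContextWindowSize {
        // No automatic PCC / ChatGPT handoff seems to happen here; the app has
        // to shrink the input (or route to its own fallback) itself.
        let truncated = String(text.prefix(8_000))   // crude illustrative fallback
        return try await LanguageModelSession().respond(to: "Summarize: \(truncated)").content
    }
}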
Topic: Machine Learning & AI
SubTopic: Apple Intelligence
Hello everyone,
I’m currently working with the Message Filtering Extension and would really appreciate some clarification around its performance and operational constraints. While the extension is extremely powerful and useful, I’ve found that some important details are either unclear or not well covered in the available documentation.
There are two main areas I’m trying to understand better:
Machine learning model constraints within the extension
In our case, we already have an existing ML model that classifies messages (we are not dependent on Apple's built-in models). We're evaluating whether and how it can be used inside the extension.
Specifically, I’m trying to understand:
Are there documented limits on the size of an ML model (e.g., maximum bundle size or model file size in MB)?
What are the memory constraints for a model once loaded into memory by the extension?
Under what conditions would the system terminate or “kick out” the extension due to memory or performance pressure?
Message processing timeouts and execution constraints
What is the timeout for processing a single received message?
At what point will the OS stop waiting for the extension’s response and allow the message by default (for example, if the extension does not respond in time)?
Any guidance, official references, or practical experience from Apple engineers or other developers would be greatly appreciated.
Thanks in advance for your help,
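For context, the rough shape of what we are evaluating inside the extension looks like the sketch below; the model name and classification logic are placeholders, and this is exactly the code whose memory and timeout behavior the questions above are about.

import IdentityLookup
import CoreML

final class MessageFilterExtension: ILMessageFilterExtension, ILMessageFilterQueryHandling {

    // Load the bundled, compiled model once; keeping it small matters because
    // the extension runs under tight (and, as far as we can tell, undocumented) limits.
    private lazy var classifier: MLModel? = {
        guard let url = Bundle.main.url(forResource: "SpamClassifier", withExtension: "mlmodelc") else { return nil }
        return try? MLModel(contentsOf: url)
    }()

    func handle(_ queryRequest: ILMessageFilterQueryRequest,
                context: ILMessageFilterExtensionContext,
                completion: @escaping (ILMessageFilterQueryResponse) -> Void) {
        let response = ILMessageFilterQueryResponse()
        response.action = .none   // default: let the system decide

        if let body = queryRequest.messageBody, isLikelySpam(body) {
            response.action = .junk
        }
        completion(response)
    }

    private func isLikelySpam(_ body: String) -> Bool {
        // Placeholder for real feature extraction + classifier.prediction(from:).
        _ = classifier
        return false
    }
}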