After exporting a custom model with nms=True.
In Xcode, the outputs show as:
confidence: MultiArray (0 × 5)
coordinates: MultiArray (0 × 4)
I want to set fixed shapes (e.g., 100 × 5, 100 × 4), but Xcode does not allow editing—the shape fields are locked. The model graph shows both outputs come directly from a NonMaximumSuppression layer.
Is it possible to set fixed output dimensions for NMS outputs in CoreML?
I am trying to benchmark whether the Qwen3 1.7B model can run on an iPhone SE 3 (4 GB RAM).
My core problem: even with weight quantization, the SE 3 is not able to load the model into memory.
What I've tried:
I am converting a Torch model to the Core ML format using coremltools. I have tried the following combinations of quantization and context length:
8 bit + 1024
8 bit + 2048
4 bit + 1024
4 bit + 2048
All of the above quantizations use a dynamic input shape, with a default of [1, 1], in the hope that the whole context length is not allocated in memory up front.
The 4-bit model is approximately 865 MB on disk.
The 8-bit model is approximately 1.7 GB on disk.
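For reference, the conversion and quantization path looks roughly like this (a sketch rather than my exact script; traced_model and the input name are placeholders):
import numpy as np
import coremltools as ct
import coremltools.optimize.coreml as cto

# "traced_model" is a placeholder for the traced Torch export.
mlmodel = ct.convert(
    traced_model,
    inputs=[ct.TensorType(
        name="input_ids",
        shape=(1, ct.RangeDim(lower_bound=1, upper_bound=2048, default=1)),
        dtype=np.int32,
    )],
    minimum_deployment_target=ct.target.iOS18,
)

# 4-bit weight-only linear quantization.
config = cto.OptimizationConfig(
    global_config=cto.OpLinearQuantizerConfig(
        mode="linear_symmetric",
        dtype="int4",
        granularity="per_block",
        block_size=32,
    )
)
mlmodel_int4 = cto.linear_quantize_weights(mlmodel, config=config)
mlmodel_int4.save("qwen3_int4.mlpackage")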
During load:
With int4 quantization, memory spikes significantly during the initial load. Could this be because many operations are converted to int8 or fp16, since Core ML does not perform operations natively on int4?
With int8, the profiler shows memory staying below 2 GB (around 900 MB), but the model still fails to load and shows the following error. (2 GB is the limit at which jetsam kills the app on the iPhone SE 3.)
E5RT: Error(s) occurred compiling MIL to BNNS graph:
[CreateBnnsGraphProgramFromMIL]: BNNS Graph Compile:
failed to preallocate file with error: No space left on device
for path: /var/mobile/Containers/Data/Application/
5B8BB7D2-06A6-4BAE-A042-407B6D805E7C/Library/Caches
/com.tss.qwen3-coreml/
com.apple.e5rt.e5bundlecache/
23A341/<long key>.tmp.12586_4362093968.bundle/
H14.bundle/main/main_bnns/bnns_program.bnnsir
Some online sources have suggested activation quantization, but I am unsure whether that will have any impact on loading (the spike is during load, not inference).
The model spec also suggests that there is no dequantization happening (e.g., from 4-bit to fp16).
So I have a couple of queries:
Has anyone faced similar issues?
What could be the reasons for the temporary memory spike during load?
What are approaches that can be adopted to deal with this issue?
Any help would be greatly appreciated. Thank you.
I’m trying to integrate Apple’s Translation framework in a Swift 6 project with Approachable Concurrency enabled.
I’m following the code here: https://developer.apple.com/documentation/translation/translating-text-within-your-app#Offer-a-custom-translation
And, specifically, inside the following code:
.translationTask(configuration) { session in
    do {
        // Use the session the task provides to translate the text.
        let response = try await session.translate(sourceText)
        // Update the view with the translated result.
        targetText = response.targetText
    } catch {
        // Handle any errors.
    }
}
On the try await session.translate(…) line, the compiler complains that “Sending ‘session’ risks causing data races”.
Extended error message:
Sending main actor-isolated 'session' to @concurrent instance method 'translate' risks causing data races between @concurrent and main actor-isolated uses
I’ve downloaded Apple’s sample code (at the top of linked webpage), it compiles fine as-is on Xcode 26.4, but fails with the same error as soon as I switch the Swift Language Mode to Swift 6 in the project.
How can I fix this?
Originally, I set my iOS deployment target to 18.1, but now that I'm integrating Foundation Models, I set it to iOS 26.0. Is this ok?
I have built a macOS machine-intelligence application that uses Apple Intelligence. Part of the application preprocesses text, and for longer content I have implemented chunking to get around the token limit. However, application performance is now limited by the fact that Apple Intelligence operates sequentially, which has a large impact. The sequential path looks roughly like the sketch below.
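A minimal sketch of that sequential flow, assuming the FoundationModels LanguageModelSession API (my real prompts and session setup differ):
import FoundationModels

func preprocess(chunks: [String]) async throws -> [String] {
    var results: [String] = []
    for chunk in chunks {
        // One request at a time; each chunk waits for the previous one.
        let session = LanguageModelSession()
        let response = try await session.respond(to: "Preprocess this text: \(chunk)")
        results.append(response.content)
    }
    return results
}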
Is there any way to operate Apple Intelligence in parallel, or even through a streaming interface? Since Apple Intelligence has Private Cloud Compute, I was hoping to send multiple chunks in parallel, as that would significantly improve performance.
Any suggestions would be welcome. This could also be considered a request for a future enhancement.
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
Optimal Precision
• Current Precision: Mixed (Float32, int32)
• Optimal Precision: Not specified in the image, but typically involves using the most efficient data type for the model's operations to balance speed and memory usage without significant loss of accuracy.
Comparison:
• Mixed Precision: Utilizes both Float32 and int32 to optimize performance. Float32 provides high precision, while int32 reduces memory usage and increases computational speed.
• Optimal Precision: Aimed at achieving the best trade-off between performance and accuracy, potentially using other data types such as Float16 or bfloat16 for even greater efficiency in certain hardware environments.
Operation Distribution
• Current Distribution:
• iOS18.mul: 168
• iOS18.transpose: 126
• iOS18.linear: 98
• iOS18.add: 97
• iOS18.sliceByIndex: 96
• iOS18.expandDims: 74
• iOS18.concat: 72
• iOS18.squeeze: 72
• iOS18.reshape: 67
• iOS18.layerNorm: 49
• iOS18.matmul: 48
• iOS18.gelu: 26
• iOS18.softmax: 24
• Split: 24
• conv: 1
• iOS18.conv: 1
Comparison:
• Operation Count: Indicates how frequently each operation is executed. High counts for operations like mul, transpose, and linear suggest these are computationally intensive parts of the model.
• Optimization Opportunities: Reducing the count of high-frequency operations or optimizing their execution can improve performance. This might involve pruning unnecessary operations, optimizing algorithms, or leveraging hardware acceleration.
General Recommendations
• Precision Tuning: Experiment with different precision levels to find the best balance for your specific hardware and accuracy requirements (see the sketch after this list).
• Operation Optimization: Focus on optimizing the most frequent operations. Techniques include using more efficient algorithms, parallelizing computations, or utilizing specialized hardware like GPUs or TPUs.
• Benchmarking: Regularly benchmark the model to assess the impact of changes and ensure that optimizations lead to meaningful performance improvements.
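A minimal sketch of precision tuning at conversion time with coremltools (source_model is a placeholder for whatever model is being converted):
import coremltools as ct

# Convert the same source model at two compute precisions.
model_fp32 = ct.convert(source_model, compute_precision=ct.precision.FLOAT32)
model_fp16 = ct.convert(source_model, compute_precision=ct.precision.FLOAT16)
# Benchmark both variants and compare accuracy before committing to one.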
By focusing on these areas, you can potentially enhance the efficiency and performance of your ML model.
Topic:
Machine Learning & AI
SubTopic:
Core ML
After running a performance test on my Core ML Qwen3 vision model, I appreciated the update where results were viewable... On the Mac it mentions iOS 18 and I'm not sure if or how to change that.
That bottleneck led to rebuilding the Core ML view.
I woke up and realized I have all the pieces together... and ended up with a working Swift package demo of Clawbot.
The current issue is that I'm trying to use a 3B GGUF model to code it. I have become well aware that everything I create using the big models soon becomes the default themes/layouts for everyone else simply asking for this or that (I apologize).
So here I am asking (while looking to schedule a meeting with a developer) if it's possible to speak with anyone about the thousands of Apple Intelligence PCC, Xcode, and Vision reports and feedback I've sent, in terms of general ways I can work more efficiently without the crashes...
I've already built a TUI for MLX, but the tools for Core ML, while they seem promising, are not intuitive; the vision format instruction was nice to see.
Anyway my question is:
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
Hello All,
I’m working on a computer-vision–heavy iOS application that uses the camera, LiDAR depth maps, and semantic segmentation to reason about the environment (object identification, localization and measurement - not just visualization).
Current architecture
I initially built the image pipeline around CIImage as a unifying abstraction. It seemed like a good idea because:
CIImage integrates cleanly with Vision, ARKit, AVFoundation, Metal, Core Graphics, etc.
It provides a rich set of out-of-the-box transforms and filters.
It is immutable and thread-safe, which significantly simplified concurrency in a multi-queue pipeline.
The LiDAR depth maps, semantic segmentation masks, etc. were treated as CIImages, with conversion to CVPixelBuffer or MTLTexture only at the edges when required.
Problem
I’ve run into cases where Core Image transformations do not preserve numeric fidelity for non-visual data.
Example:
Rendering a CIImage-backed segmentation mask into a larger CVPixelBuffer can cause label values to change in predictable but incorrect ways.
This occurs even when (see the sketch after this list):
using nearest-neighbor sampling
disabling color management (workingColorSpace / outputColorSpace = NSNull)
applying identity or simple affine transforms
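Concretely, the render path looks roughly like this (a sketch of the minimal failing case; maskImage and the destination buffer stand in for my pipeline's objects):
import CoreImage
import CoreVideo

// Render a label-map CIImage into a destination buffer with color
// management disabled and nearest-neighbor sampling.
func renderMask(_ maskImage: CIImage, into destination: CVPixelBuffer) {
    let context = CIContext(options: [
        .workingColorSpace: NSNull(),
        .outputColorSpace: NSNull()
    ])
    // Identity transform for the minimal case; affine transforms behave the same.
    let sampled = maskImage.samplingNearest().transformed(by: .identity)
    context.render(sampled, to: destination)
}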
I’ve confirmed via controlled tests that:
Metal → CVPixelBuffer paths preserve values correctly
CIImage → CVPixelBuffer paths can introduce value changes when resampling or expanding the render target
This makes CIImage unsafe as a source of numeric truth for segmentation masks and depth-based logic, even though it works well for visualization, and I should have realized this much sooner.
Direction I’m considering
I’m now considering refactoring toward more intent-based abstractions instead of a single image type, for example:
Visual images: CIImage (camera frames, overlays, debugging, UI)
Scalar fields: depth / confidence maps backed by CVPixelBuffer + Metal
Label maps: segmentation masks backed by integer-preserving buffers (no interpolation, no transforms)
In this model, CIImage would still be used extensively — but primarily for visualization and perceptual processing, not as the container for numerically sensitive data.
Thread safety concern
One of the original advantages of CIImage was that it is thread-safe by design, and that was my biggest incentive.
For CVPixelBuffer / MTLTexture–backed data, I'm considering enforcing thread safety explicitly via Swift Concurrency (actor-owned data, explicit ownership), as in the sketch below.
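A minimal sketch of what I mean; the type is made up, and it assumes labels are copied out of the pixel buffer as plain bytes:
actor LabelMapStore {
    private var labels: [UInt8] = []
    private var width = 0
    private var height = 0

    // Replace the whole map atomically; [UInt8] is Sendable, so this
    // crosses isolation cleanly under Swift 6.
    func replace(labels: [UInt8], width: Int, height: Int) {
        self.labels = labels
        self.width = width
        self.height = height
    }

    // Integer-preserving lookup: no interpolation, no transforms.
    func label(atX x: Int, y: Int) -> UInt8? {
        guard x >= 0, x < width, y >= 0, y < height else { return nil }
        return labels[y * width + x]
    }
}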
Questions
For those who may have experience with CV / AR / imaging-heavy iOS apps, I was hoping to learn the following:
Is this separation of image intent (visual vs numeric vs categorical) a reasonable architectural direction?
Do you generally keep CIImage at the heart of your pipeline, or push it to the edges (visualization only)?
How do you manage thread safety and ownership when working heavily with CVPixelBuffer and Metal? Actor-based abstractions, GCD, or something ad hoc?
Are there any best practices or gotchas around using Core Image with depth maps or segmentation masks that I should be aware of?
I’d really appreciate any guidance or experience-based advice. I suspect I’ve hit a boundary of Core Image’s design, and I’m trying to refactor in a way that doesn't incur too much immediate tech debt and remains robust and maintainable long-term.
Thank you in advance!
Also submitted as feedback (ID: FB20612561).
tensorflow-metal fails on TensorFlow versions above 2.18.1 but works fine on TensorFlow 2.18.1.
In a new Python 3.12 virtual environment:
pip install tensorflow
pip install tensorflow-metal
python -c "import tensorflow as tf"
Prints error:
Traceback (most recent call last):
File "", line 1, in
File "/Users//pt/venv/lib/python3.12/site-packages/tensorflow/init.py", line 438, in
_ll.load_library(_plugin_dir)
File "/Users//pt/venv/lib/python3.12/site-packages/tensorflow/python/framework/load_library.py", line 151, in load_library
py_tf.TF_LoadLibrary(lib)
tensorflow.python.framework.errors_impl.NotFoundError: dlopen(/Users//pt/venv/lib/python3.12/site-packages/tensorflow-plugins/libmetal_plugin.dylib, 0x0006): Library not loaded: @rpath/_pywrap_tensorflow_internal.so
Referenced from: <8B62586B-B082-3113-93AB-FD766A9960AE> /Users//pt/venv/lib/python3.12/site-packages/tensorflow-plugins/libmetal_plugin.dylib
Reason: tried: '/Users//pt/venv/lib/python3.12/site-packages/tensorflow-plugins/../_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/_pywrap_tensorflow_internal.so' (no such file), '/Users//pt/venv/lib/python3.12/site-packages/tensorflow-plugins/../_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/_pywrap_tensorflow_internal.so' (no such file), '/opt/homebrew/lib/_pywrap_tensorflow_internal.so' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/opt/homebrew/lib/_pywrap_tensorflow_internal.so' (no such file)
Topic:
Machine Learning & AI
SubTopic:
General
Tags:
Developer Tools
Metal
Machine Learning
tensorflow-metal
I am trying to test FoundationModels in a Swift Playground in Xcode 26.2, macOS 26.3, and am running into an issue. The following simple code generates an error:
import FoundationModels
@Generable
struct Specifications {
@Guide(description: "Search for color")
var color: String
}
I see the following error message in the console:
error: AIPlayground.playground:4:8: external macro implementation type 'FoundationModelsMacros.GenerableMacro' could not be found for macro 'Generable(description:)'; plugin for module 'FoundationModelsMacros' not found
The Xcode editor does not appear to recognize the @Generable or @Guide macros, despite importing FoundationModels. What step/setting am I missing?
Things I did:
created an Intents Extension target
added "Supported Intents" to both my main app target and the intent extension, with "INAddTasksIntent" and "INCreateNoteIntent"
created the AppIntentVocabulary in my main app target
created the handlers in the code in the Intents Extension target
class AddTaskIntentHandler: INExtension, INAddTasksIntentHandling {
    func resolveTaskTitles(for intent: INAddTasksIntent) async -> [INSpeakableStringResolutionResult] {
        if let taskTitles = intent.taskTitles {
            return taskTitles.map { INSpeakableStringResolutionResult.success(with: $0) }
        } else {
            return [INSpeakableStringResolutionResult.needsValue()]
        }
    }

    func handle(intent: INAddTasksIntent) async -> INAddTasksIntentResponse {
        // my code to handle this...
        let response = INAddTasksIntentResponse(code: .success, userActivity: nil)
        response.addedTasks = tasksCreated.map {
            INTask(
                title: INSpeakableString(spokenPhrase: $0.name),
                status: .notCompleted,
                taskType: .completable,
                spatialEventTrigger: nil,
                temporalEventTrigger: intent.temporalEventTrigger,
                createdDateComponents: DateHelper.localCalendar().dateComponents([.year, .month, .day, .minute, .hour], from: Date.now),
                modifiedDateComponents: nil,
                identifier: $0.id
            )
        }
        return response
    }
}

class AddItemIntentHandler: INExtension, INCreateNoteIntentHandling {
    func resolveTitle(for intent: INCreateNoteIntent) async -> INSpeakableStringResolutionResult {
        if let title = intent.title {
            return INSpeakableStringResolutionResult.success(with: title)
        } else {
            return INSpeakableStringResolutionResult.needsValue()
        }
    }

    func resolveGroupName(for intent: INCreateNoteIntent) async -> INSpeakableStringResolutionResult {
        if let groupName = intent.groupName {
            return INSpeakableStringResolutionResult.success(with: groupName)
        } else {
            return INSpeakableStringResolutionResult.needsValue()
        }
    }

    func handle(intent: INCreateNoteIntent) async -> INCreateNoteIntentResponse {
        do {
            // my code for handling this...
            let response = INCreateNoteIntentResponse(code: .success, userActivity: nil)
            response.createdNote = INNote(
                title: INSpeakableString(spokenPhrase: itemName),
                contents: itemNote.map { [INTextNoteContent(text: $0)] } ?? [],
                groupName: INSpeakableString(spokenPhrase: list.name),
                createdDateComponents: DateHelper.localCalendar().dateComponents([.day, .month, .year, .hour, .minute], from: Date.now),
                modifiedDateComponents: nil,
                identifier: newItem.id
            )
            return response
        } catch {
            return INCreateNoteIntentResponse(code: .failure, userActivity: nil)
        }
    }
}
uninstalled my app
restarted my physical device and simulator
Yet, when I say "Remind me to buy dog food in Index" (Index is the name of my app), as stated in the examples for INAddTasksIntent, Siri says that a list named "Index" doesn't exist in Apple's Reminders app, instead of routing the request to my app.
Am I missing something?
What's the code to warm it up once? I saw this in a developer video but cannot find it. The goal is to prevent a cold run within an application.
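If it helps, what I half-remember looked roughly like this (a sketch, assuming the video was about the Foundation Models session; I may be misremembering the context):
import FoundationModels

let session = LanguageModelSession()
// Call once, e.g. when the view appears, so the first real request
// doesn't pay the model's cold-start cost.
session.prewarm()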
Thank you in advance!
Environment:
macOS 26.2 (Tahoe)
Xcode 16.3
Apple Silicon (M4)
Sandboxed Mac App Store app
Description:
Repeated use of VNRecognizeTextRequest causes permanent memory growth in the host process. The physical footprint increases by approximately 3-15 MB per OCR call and never returns to baseline, even after all references to the request, handler, observations, and image are released.
private func selectAndProcessImage() {
    let panel = NSOpenPanel()
    panel.allowedContentTypes = [.image]
    panel.allowsMultipleSelection = false
    panel.canChooseDirectories = false
    panel.message = "Select an image for OCR processing"
    guard panel.runModal() == .OK, let url = panel.url else { return }
    selectedImageURL = url
    isProcessing = true
    recognizedText = "Processing..."
    // Run OCR on a background thread to keep UI responsive
    let workItem = DispatchWorkItem {
        let result = performOCR(on: url)
        DispatchQueue.main.async {
            recognizedText = result
            isProcessing = false
        }
    }
    DispatchQueue.global(qos: .userInitiated).async(execute: workItem)
}

private func performOCR(on url: URL) -> String {
    // Wrap EVERYTHING in autoreleasepool so all ObjC objects are drained immediately
    let resultText: String = autoreleasepool {
        // Load image and convert to CVPixelBuffer for explicit memory control
        guard let imageData = try? Data(contentsOf: url) else {
            return "Error: Could not read image file."
        }
        guard let nsImage = NSImage(data: imageData) else {
            return "Error: Could not create image from file data."
        }
        guard let cgImage = nsImage.cgImage(forProposedRect: nil, context: nil, hints: nil) else {
            return "Error: Could not create CGImage."
        }
        let width = cgImage.width
        let height = cgImage.height
        // Create a CVPixelBuffer from the CGImage
        var pixelBuffer: CVPixelBuffer?
        let attrs: [String: Any] = [
            kCVPixelBufferCGImageCompatibilityKey as String: true,
            kCVPixelBufferCGBitmapContextCompatibilityKey as String: true
        ]
        let status = CVPixelBufferCreate(
            kCFAllocatorDefault,
            width,
            height,
            kCVPixelFormatType_32ARGB,
            attrs as CFDictionary,
            &pixelBuffer
        )
        guard status == kCVReturnSuccess, let buffer = pixelBuffer else {
            return "Error: Could not create CVPixelBuffer (status: \(status))."
        }
        // Draw the CGImage into the pixel buffer
        CVPixelBufferLockBaseAddress(buffer, [])
        guard let context = CGContext(
            data: CVPixelBufferGetBaseAddress(buffer),
            width: width,
            height: height,
            bitsPerComponent: 8,
            bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
            space: CGColorSpaceCreateDeviceRGB(),
            bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue
        ) else {
            CVPixelBufferUnlockBaseAddress(buffer, [])
            return "Error: Could not create CGContext for pixel buffer."
        }
        context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
        CVPixelBufferUnlockBaseAddress(buffer, [])
        // Run OCR
        let requestHandler = VNImageRequestHandler(cvPixelBuffer: buffer, options: [:])
        let request = VNRecognizeTextRequest()
        request.recognitionLevel = .accurate
        request.usesLanguageCorrection = true
        do {
            try requestHandler.perform([request])
        } catch {
            return "Error during OCR: \(error.localizedDescription)"
        }
        guard let observations = request.results, !observations.isEmpty else {
            return "No text found in image."
        }
        let lines = observations.compactMap { observation in
            observation.topCandidates(1).first?.string
        }
        // Explicitly nil out the pixel buffer before the pool drains
        pixelBuffer = nil
        return lines.joined(separator: "\n")
    }
    // Everything — Data, NSImage, CGImage, CVPixelBuffer, VN objects — released here
    return resultText
}
We're in the process of migrating our app's custom intents from the older SiriKit Custom Intents framework to App Intents. The migration has been straightforward for our app-specific actions, and we appreciate the improved discoverability and Apple Intelligence integration that App Intents provides.
However, we also implement SiriKit domain intents for calling and messaging:
INStartCallIntent / INStartCallIntentHandling
INSendMessageIntent / INSendMessageIntentHandling
These require us to maintain an Intents Extension to handle contact resolution and the actual call/message operations.
Our questions:
Is there a planned App Intents equivalent for these SiriKit domains (calling, messaging), or is the Intents Extension approach still the recommended path?
If we want to support phrases like "Call [contact] on [AppName]" or "Send a message to [contact] on [AppName]" with Apple Intelligence integration, is there any way to achieve this with App Intents today?
Are there any WWDC sessions or documentation we may have missed that addresses the migration path for SiriKit domain intents?
What we've reviewed:
"Migrate custom intents to App Intents" Tech Talk
"Bring your app's core features to users with App Intents" (WWDC24)
App Intents documentation
These resources clearly explain custom intent migration but don't seem to address the system domain intents.
Our current understanding:
Based on our research, it appears SiriKit domain intents should remain on the older framework, while custom intents should migrate to App Intents. We'd like to confirm this is correct and understand if there's a future direction we should be planning for.
Thank you!
I have been able to train an adapter on Google's Colaboratory.
I am able to start a LanguageModelSession and load it with my adapter.
The problem is that after one simple prompt, the context window is 90% full.
If I start the session without the adapter, the same simple prompt consumes only 1% of the context window.
Has anyone encountered this? I asked Claude AI and it seems to think that my training script needs adjusting. Grok, on the other hand, is (wrongly, I tried) convinced that I just need to tweak some parameters of LanguageModelSession or SystemLanguageModel.
Thanks for any tips.
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Hi all, I spent the last few months developing an MLX/Ollama local AI benchmarking suite for Apple Silicon, written in pure Swift, signed with an Apple Developer Certificate, open source (GPL), and free. I would love some feedback to continue development. It is the only benchmarking suite I know of that natively supports live power metrics and MLX, as well as quick exports of benchmark results and an arena mode (Model A vs. B, with history). I really want this project to succeed and see widespread use; reaching 75 stars on the GitHub repo makes it eligible for Homebrew/Cask distribution.
Github Repo
Topic:
Machine Learning & AI
SubTopic:
Core ML
I get the following error when running this code in a Jupyter notebook:
v = tf.Variable(initial_value=tf.random.normal(shape=(3, 1)))
v[0, 0].assign(3.)
Environment:
python == 3.11.14
tensorflow==2.19.1
tensorflow-metal==1.2.0
{
"name": "InvalidArgumentError",
"message": "Cannot assign a device for operation ResourceStridedSliceAssign: Could not satisfy explicit device specification '/job:localhost/replica:0/task:0/device:GPU:0' because no supported kernel for GPU devices is available.\nColocation Debug Info:\nColocation group had the following types and supported devices: \nRoot Member(assigned_device_name_index_=1 requested_device_name_='/job:localhost/replica:0/task:0/device:GPU:0' assigned_device_name_='/job:localhost/replica:0/task:0/device:GPU:0' resource_device_name_='/job:localhost/replica:0/task:0/device:GPU:0' supported_device_types_=[CPU] possible_devices_=[]\nResourceStridedSliceAssign: CPU \n_Arg: GPU CPU \n\nColocation members, user-requested devices, and framework assigned devices, if any:\n ref (_Arg) framework assigned device=/job:localhost/replica:0/task:0/device:GPU:0\n ResourceStridedSliceAssign (ResourceStridedSliceAssign) /job:localhost/replica:0/task:0/device:GPU:0\n\nOp: ResourceStridedSliceAssign\n
[...]
[[{{node ResourceStridedSliceAssign}}]] [Op:ResourceStridedSliceAssign] name: strided_slice/_assign"
}
It seems like the ResourceStridedSliceAssign operation is not implemented for the GPU.
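If that's the case, enabling soft device placement or pinning the assignment to the CPU seems like a plausible workaround (a sketch):
import tensorflow as tf

# Let TF fall back to a supported device instead of failing on an
# explicit GPU placement...
tf.config.set_soft_device_placement(True)

v = tf.Variable(initial_value=tf.random.normal(shape=(3, 1)))
# ...or pin just the sliced assignment to the CPU:
with tf.device("/CPU:0"):
    v[0, 0].assign(3.)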
Hello,
I am interested in using jax-metal to train ML models using Apple Silicon. I understand this is experimental.
After installing jax-metal according to https://developer.apple.com/metal/jax/, my Python code fails with the following error:
JaxRuntimeError: UNKNOWN: -:0:0: error: unknown attribute code: 22
-:0:0: note: in bytecode version 6 produced by: StableHLO_v1.12.1
My issue is identical to the one reported at https://github.com/jax-ml/jax/issues/26968#issuecomment-2733120325, and is fixed by pinning jax-metal to 0.1.1, jax to 0.5.0, and jaxlib to 0.5.0.
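For anyone else hitting this, the pinned install that worked for me:
pip install jax-metal==0.1.1 jax==0.5.0 jaxlib==0.5.0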
Thank you!
Hi,
From https://developer.apple.com/metal/jax/ I checked all active workflows on https://github.com/jax-ml/jax and any open issues tagged Metal, and it seems that in December 2025 the JAX maintainers closed all of those issues, citing no active development on jax-metal; the project appears dead.
We need to know how we can leverage Apple silicon for accelerated projects using popular academic libraries and tools.
Is jax-metal still going to be supported, or does Apple plan to bring something of its own that might be platform agnostic?
Thanks
Topic:
Machine Learning & AI
SubTopic:
Create ML