Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and explore the possibilities for your app here.

Posts under the Machine Learning & AI topic. Each post below is followed by its replies · boosts · views · latest activity.
Efficient Clustering of Images Using VNFeaturePrintObservation.computeDistance
Hi everyone, I'm working with VNFeaturePrintObservation in Swift to compute the similarity between images. The computeDistance function allows me to calculate the distance between two images, and I want to cluster similar images based on these distances.

Current approach: right now, I'm using a brute-force approach where I compare every image against every other image in the dataset. This results in an O(n^2) complexity, which quickly becomes a bottleneck. With 5000 images, it takes around 10 seconds to complete, which is too slow for my use case.

Question: are there any efficient algorithms or data structures I can use to improve performance? If anyone has experience with optimizing feature vector clustering or has suggestions on how to scale this efficiently, I'd really appreciate your insights. Thanks!
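One possible direction, sketched below purely as an illustration under stated assumptions: request each VNFeaturePrintObservation once, read its float vector out of the data property (worth verifying that elementType is .float first), and cluster those vectors with a small k-means instead of comparing every pair with computeDistance. That turns the O(n^2) pairwise pass into roughly O(n · k · iterations). The featureVector and kMeans helpers below are hypothetical names, not Vision API.

import Vision
import Foundation

// Hypothetical helper: pull the raw float vector out of a feature print.
// Assumes the observation's elementType is .float, which is worth verifying.
func featureVector(of observation: VNFeaturePrintObservation) -> [Float] {
    observation.data.withUnsafeBytes { raw in
        Array(raw.bindMemory(to: Float.self))
    }
}

// Minimal k-means sketch over feature vectors (squared Euclidean distance).
func kMeans(_ vectors: [[Float]], k: Int, iterations: Int = 10) -> [Int] {
    guard !vectors.isEmpty, k > 0 else { return [] }
    var centroids = Array(vectors.shuffled().prefix(k))
    var assignment = [Int](repeating: 0, count: vectors.count)

    func distance(_ a: [Float], _ b: [Float]) -> Float {
        zip(a, b).reduce(0) { $0 + ($1.0 - $1.1) * ($1.0 - $1.1) }
    }

    for _ in 0..<iterations {
        // Assign each vector to its nearest centroid.
        for (i, v) in vectors.enumerated() {
            assignment[i] = centroids.indices.min(by: {
                distance(v, centroids[$0]) < distance(v, centroids[$1])
            })!
        }
        // Recompute each centroid as the mean of its members.
        for c in centroids.indices {
            let members = vectors.indices.filter { assignment[$0] == c }
            guard !members.isEmpty else { continue }
            var mean = [Float](repeating: 0, count: vectors[0].count)
            for m in members {
                for d in mean.indices { mean[d] += vectors[m][d] }
            }
            centroids[c] = mean.map { $0 / Float(members.count) }
        }
    }
    return assignment
}

Cosine distance or a few k-means restarts may track computeDistance's notion of similarity more closely, and once centroids exist, each new image needs only k comparisons instead of n.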
0 replies · 0 boosts · 524 views · Feb ’25
CMake build unable to 'find' Foundation framework
I'm trying to build llama.cpp, a popular tool for running LLMs locally, on macOS Sequoia 15.1.1 (24B91) using CMake, but am encountering errors. Here is the Stack Overflow post regarding the issue: https://stackoverflow.com/questions/79304015/cmake-unable-to-find-foundation-framework-on-macos-15-1-1-24b91?noredirect=1#comment139853319_79304015
0 replies · 0 boosts · 562 views · Dec ’24
Accessing Apple Intelligence APIs: Custom Prompt Support and Inference Capabilities
Hello Apple Developer Community, I'm exploring the integration of Apple Intelligence features into my mobile application and have a couple of questions regarding the current and upcoming API capabilities:

1. Custom prompt support: is there a way to pass custom prompts to Apple Intelligence to generate specific inferences? For instance, can we provide a unique prompt to the Writing Tools or Image Playground APIs to obtain tailored outputs?

2. Direct inference capabilities: beyond the predefined functionalities like text rewriting or image generation, does Apple Intelligence offer APIs that allow for more generalized inference tasks based on custom inputs?

I understand that Apple has provided APIs such as Writing Tools, Image Playground, and Genmoji. However, I'm interested in understanding the extent of customization and flexibility these APIs offer, especially concerning custom prompts and generalized inference. Additionally, are there any plans or timelines for expanding these capabilities, perhaps with the introduction of new SDKs or frameworks that allow deeper integration and customization? Any insights, documentation links, or experiences shared would be greatly appreciated. Thank you in advance for your assistance!
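On the custom-prompt question: as far as I can tell, the predefined Writing Tools and Image Playground surfaces don't take free-form prompts, but the Foundation Models framework announced at WWDC25 (used in several other posts in this list) does accept arbitrary prompts to the on-device model. A minimal sketch, assuming the iOS 26 SDK and hedging on exact API details:

import FoundationModels

// Minimal sketch: send a custom prompt to the on-device system model.
// Availability checks (SystemLanguageModel.default) are omitted here.
func tailoredSummary(of text: String) async throws -> String {
    let session = LanguageModelSession(
        instructions: "You summarize text in exactly two sentences."
    )
    let response = try await session.respond(to: "Summarize this:\n\(text)")
    return response.content
}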
3 replies · 0 boosts · 312 views · Jun ’25
Keras on Mac (M4) is giving inconsistent results compared to running on NVIDIA GPUs
I have seen inconsistent results for my Colab machine learning notebooks running locally on a Mac M4, compared to running the same notebook code on either a T4 (in Colab) or an RTX 3090 locally. To illustrate the problems I have set up a notebook that implements two simple CNN models that solve the Fashion-MNIST problem: https://colab.research.google.com/drive/11BhtHhN079-BWqv9QvvcSD9U4mlVSocB?usp=sharing

For the good model with 2M parameters I get the following results:
T4 (Colab, JAX): test accuracy 0.925
RTX 3090 (local PC via ssh tunnel, JAX): test accuracy 0.925
Mac M4 (local, JAX): test accuracy 0.893
Mac M4 (local, TensorFlow): test accuracy 0.893

That is, I see a significant drop in performance when I run on the Mac M4 compared to the NVIDIA machines, and it seems to be independent of backend. I however do not know how to pinpoint this to either Keras or Apple's Metal implementation. I have reported this to Keras: https://colab.research.google.com/drive/11BhtHhN079-BWqv9QvvcSD9U4mlVSocB?usp=sharing but as this can be (likely is?) an Apple Metal issue, I wanted to report this here as well.

On the Mac I am running the following Python libraries:
keras 3.9.1
tensorflow 2.19.0
tensorflow-metal 1.2.0
jax 0.5.3
jax-metal 0.1.1
jaxlib 0.5.3
0 replies · 0 boosts · 118 views · Mar ’25
Is there anywhere to get precompiled WhisperKit models for Swift?
If I try to dynamically load WhisperKit's models, as below, the download never occurs. No error or anything. At the same time I can still get to the huggingface.co hosting site without any headaches, so it's not a blocking issue.

let config = WhisperKitConfig(
    model: "openai_whisper-large-v3",
    modelRepo: "argmaxinc/whisperkit-coreml"
)

So I have to default to the tiny model, as seen below. I have tried so many ways, using ChatGPT and others, to build the models on my Mac, but with too many failures, because I have never dealt with builds like that before. Are there any hosting sites that have the models (small, medium, large) already built, where I can download them and just bundle them into my project? I've wasted quite a large amount of time trying to get this done.

import Foundation
import WhisperKit

@MainActor
class WhisperLoader: ObservableObject {
    var pipe: WhisperKit?

    init() {
        Task {
            await self.initializeWhisper()
        }
    }

    private func initializeWhisper() async {
        do {
            Logging.shared.logLevel = .debug
            Logging.shared.loggingCallback = { message in
                print("[WhisperKit] \(message)")
            }

            let pipe = try await WhisperKit() // defaults to "tiny"
            self.pipe = pipe
            print("initialized. Model state: \(pipe.modelState)")

            guard let audioURL = Bundle.main.url(forResource: "44pf", withExtension: "wav") else {
                fatalError("not in bundle")
            }

            let result = try await pipe.transcribe(audioPath: audioURL.path)
            print("result: \(result)")
        } catch {
            print("Error: \(error)")
        }
    }
}
0 replies · 0 boosts · 98 views · Jun ’25
Train adapter with tool calling
The documentation on adapter training lacks any details about training on a dataset that includes tool calling, and the page about tool calling itself only explains how to use it from Swift, without any internal details useful for training. The question is: what should the schema look like to include tool calling in a training dataset?
1 reply · 0 boosts · 234 views · Jun ’25
Can I Perform Hybrid Execution on Neural Engine and CPU with 16-bit Precision?
Hello, I have a question regarding hybrid execution for deep learning models on Apple's Neural Engine and CPU. I am aware that setting the precision of some layers to 32-bit allows hybrid execution across both the Neural Engine and the CPU. However, I would like to know if it is possible to achieve the same with 16-bit precision. Is there any specific configuration or workaround to enable hybrid execution in this case? Any guidance or documentation references would be greatly appreciated. Thank you!
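For reference, the runtime side of a CPU + Neural Engine split is selected through MLModelConfiguration, as sketched below. The per-layer precision itself is fixed when the model is converted (for example with coremltools compute_precision), so this sketch only shows where the dispatch preference is expressed, not an FP16-specific switch.

import CoreML

// Minimal sketch: ask Core ML to schedule the model on the Neural Engine and
// CPU only (no GPU). Which layers actually land on the ANE is decided by
// Core ML from the layer types and their precision.
func loadModel(at url: URL) throws -> MLModel {
    let config = MLModelConfiguration()
    config.computeUnits = .cpuAndNeuralEngine
    return try MLModel(contentsOf: url, configuration: config)
}

Xcode's Core ML performance report is the easiest way I know of to see which layers end up on which compute unit for a given configuration.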
0 replies · 0 boosts · 437 views · Jan ’25
How to implement a CoreML model into an iOS app properly?
I am working on a lung cancer scanning app for iOS with a Core ML model, and when I test my app on a physical device, the model returns the same prediction 100% of the time. I even changed the names around and it still resulted in the same case. I have listed my labels in cases, and it's just stuck on the same case (case 1). My code is here: https://github.com/ShivenKhurana1/Detect-to-Protect-App/blob/main/DetectToProtect/SecondView.swift (I couldn't add the code inline as it was too long, so I hope the GitHub link is fine!)
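Since the linked code isn't inline here, below is a generic sketch of the classification path that usually avoids the stuck-on-one-class symptom; wrong orientation or preprocessing of the input image is a common cause. The LungCancerClassifier class name is hypothetical, standing in for whatever class Xcode generated for the model.

import Vision
import CoreML
import CoreGraphics
import ImageIO

// Hypothetical model class name; swap in the class Xcode generates for the .mlmodel.
func classify(cgImage: CGImage,
              orientation: CGImagePropertyOrientation) throws -> String? {
    let coreMLModel = try LungCancerClassifier(configuration: MLModelConfiguration()).model
    let vnModel = try VNCoreMLModel(for: coreMLModel)

    var top: String?
    let request = VNCoreMLRequest(model: vnModel) { request, _ in
        // Inspect the whole distribution, not just the winner: if every class
        // sits near 1/N, the input probably isn't reaching the model in the
        // form it was trained on (size, orientation, color space).
        let results = request.results as? [VNClassificationObservation] ?? []
        results.prefix(3).forEach { print($0.identifier, $0.confidence) }
        top = results.first?.identifier
    }
    request.imageCropAndScaleOption = .centerCrop

    // Passing the real orientation matters; camera images are often not .up.
    let handler = VNImageRequestHandler(cgImage: cgImage,
                                        orientation: orientation,
                                        options: [:])
    try handler.perform([request])
    return top
}

If every class really does come back near 1/N, that points at the input pipeline; if one class always wins with high confidence, the model itself is the more likely culprit.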
1 reply · 0 boosts · 137 views · Mar ’25
Is it possible to pass the streaming output of Foundation Models down a function chain
I am writing a custom package wrapping Foundation Models which provides a chain-of-thought with intermittent self-evaluation, among other things. At first I was designing this package with the command line in mind, but after seeing how well it augments the models and makes them more intelligent, I wanted to try to build a SwiftUI wrapper around the package.

When I started I was using synchronous generation rather than streaming, but to give the best user experience (as I've seen in the WWDC sessions) it is necessary to provide constant feedback to the user that something is happening. I have created a super simplified example of my setup so it's easier to understand.

First, there is the Reasoning conversation item, which can be converted to an XML representation which is then fed back into the model (I've found XML works best for structured input):

public typealias ConversationContext = XMLDocument

extension ConversationContext {
    public func toPlainText() -> String {
        return xmlString(options: [.nodePrettyPrint])
    }
}

/// Represents a reasoning item in a conversation, which includes a title and reasoning content.
/// Reasoning items are used to provide detailed explanations or justifications for certain decisions or responses within a conversation.
@Generable(description: "A reasoning item in a conversation, containing content and a title.")
struct ConversationReasoningItem: ConversationItem {
    @Guide(description: "The content of the reasoning item, which is your thinking process or explanation")
    public var reasoningContent: String
    @Guide(description: "A short summary of the reasoning content, digestible in an interface.")
    public var title: String
    @Guide(description: "Indicates whether reasoning is complete")
    public var done: Bool
}

extension ConversationReasoningItem: ConversationContextProvider {
    public func toContext() -> ConversationContext {
        // <ReasoningItem title="${title}">
        //   ${reasoningContent}
        // </ReasoningItem>
        let root = XMLElement(name: "ReasoningItem")
        root.addAttribute(XMLNode.attribute(withName: "title", stringValue: title) as! XMLNode)
        root.stringValue = reasoningContent
        return ConversationContext(rootElement: root)
    }
}

Then there is the generator, which creates a reasoning item from a user query and previously generated items:

struct ReasoningItemGenerator {
    var instructions: String {
        """
        <omitted for brevity>
        """
    }

    func generate(from input: (String, [ConversationReasoningItem])) async throws
        -> sending LanguageModelSession.ResponseStream<ConversationReasoningItem> {
        let session = LanguageModelSession(instructions: instructions)

        // build the context for the reasoning item out of the user's query and the previous reasoning items
        let userQuery = "User's query: \(input.0)"
        let reasoningItemsText = input.1.map { $0.toContext().toPlainText() }.joined(separator: "\n")
        let context = userQuery + "\n" + reasoningItemsText

        let reasoningItemResponse = try await session.streamResponse(
            to: context,
            generating: ConversationReasoningItem.self)
        return reasoningItemResponse
    }
}

I'm not sure if returning LanguageModelSession.ResponseStream<ConversationReasoningItem> is the right move; I am just trying to imitate what session.streamResponse returns.

Then there is the orchestrator, which I can't figure out. It receives the streamed ConversationReasoningItems from the generator and is responsible for streaming those to SwiftUI later, and also for evaluating each reasoning item after it is complete to see if it needs to be regenerated (to keep the model on track).

I want the users of the orchestrator to receive partially generated reasoning items as they are being generated by the generator. Later, when they finish, if the evaluation passes, the item is kept, but if it fails, the reasoning item should be removed from the stream before a new one is generated. So in-flight reasoning items should be output aggressively. I'm really having trouble figuring this out, so if someone with more knowledge about asynchronous work in Swift, or, even better, someone who has worked on the Foundation Models framework, could point me in the right direction, that would be awesome!
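Not a Foundation Models answer per se, just a sketch of the plumbing I'd reach for: wrap the whole loop in an AsyncThrowingStream and re-yield the generator's partial snapshots as they arrive. This assumes iterating the generator's stream yields progressively filled ConversationReasoningItem.PartiallyGenerated snapshots; the OrchestratorEvent type, the evaluate closure, and the completedItem(from:) helper are all hypothetical, and how you turn the final snapshot back into a complete ConversationReasoningItem depends on the framework's ResponseStream API, which I'm not assuming here.

// Hypothetical event type the SwiftUI layer consumes.
enum OrchestratorEvent {
    case partial(ConversationReasoningItem.PartiallyGenerated)
    case accepted(ConversationReasoningItem)
    case rejected            // UI should drop the in-flight item it was showing
}

struct ReasoningOrchestrator {
    let generator: ReasoningItemGenerator
    // Hypothetical hooks, both assumptions:
    var evaluate: (ConversationReasoningItem) async -> Bool
    var completedItem: (ConversationReasoningItem.PartiallyGenerated) -> ConversationReasoningItem?

    func events(query: String,
                previous: [ConversationReasoningItem]) -> AsyncThrowingStream<OrchestratorEvent, Error> {
        AsyncThrowingStream { continuation in
            let task = Task {
                do {
                    var kept = previous
                    var finishedConversation = false
                    while !finishedConversation {
                        let stream = try await generator.generate(from: (query, kept))
                        var lastSnapshot: ConversationReasoningItem.PartiallyGenerated?
                        // Re-yield every partial snapshot immediately so the UI can animate it.
                        for try await snapshot in stream {
                            lastSnapshot = snapshot
                            continuation.yield(.partial(snapshot))
                        }
                        // Once the stream ends, evaluate the completed item.
                        guard let lastSnapshot, let item = completedItem(lastSnapshot) else { break }
                        if await evaluate(item) {
                            kept.append(item)
                            continuation.yield(.accepted(item))
                            finishedConversation = item.done
                        } else {
                            continuation.yield(.rejected)   // loop again to regenerate
                        }
                    }
                    continuation.finish()
                } catch {
                    continuation.finish(throwing: error)
                }
            }
            continuation.onTermination = { _ in task.cancel() }
        }
    }
}

A SwiftUI view model can then for try await over events(query:previous:), appending on .partial and .accepted and removing the in-flight item on .rejected.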
0 replies · 0 boosts · 251 views · Jul ’25
Converting TF2 object detection to CoreML
I've spent way too long today trying to convert an Object Detection TensorFlow 2 model to a Core ML object detector (with bounding boxes, labels and probability scores). The 'SSD MobileNet v2 320x320' is here: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md

And I've been following all sorts of posts and ChatGPT to convert it:
https://apple.github.io/coremltools/docs-guides/source/tensorflow-2.html#convert-a-tensorflow-concrete-function
https://developer.apple.com/videos/play/wwdc2020/10153/?time=402

I keep hitting the same errors though, mostly around:

NotImplementedError: Expected model format: [SavedModel | concrete_function | tf.keras.Model | .h5 | GraphDef], got <ConcreteFunction signature_wrapper(input_tensor) at 0x366B87790>

I've had varying success, including missing output labels/predictions. But I simply want to create the Core ML model with all the right inputs and outputs (including correct names) as detailed in the docs here: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_on_mobile_tf2.md

It goes without saying I don't have much (any) experience with this stuff, including Python, so the whole thing's been a bit of a headache. If anyone is able to help, that would be great. FWIW I'm not attached to any one specific model, but what I do need at minimum is a Core ML model that can detect objects (it has to at least include lights and lamps) within a live video image, detecting where in the image the object is.

The simplest script I have looks like this:

import coremltools as ct
import tensorflow as tf

model = tf.saved_model.load("~/tf_models/ssd_mobilenet_v2_320x320_coco17_tpu-8/saved_model")
concrete_func = model.signatures[tf.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY]

mlmodel = ct.convert(
    concrete_func,
    source="tensorflow",
    inputs=[ct.TensorType(shape=(1, 320, 320, 3))]
)

mlmodel.save("YourModel.mlpackage", save_format="mlpackage")
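For the consumption side, once a conversion succeeds, this is roughly what the Swift detection path looks like when the exported model is a Vision-compatible object detector (i.e. it includes the decoding/NMS outputs the linked doc describes). The ObjectDetector class name is hypothetical, and a model converted without those outputs will not produce VNRecognizedObjectObservation results like this.

import Vision
import CoreML
import CoreVideo
import CoreGraphics

// Hypothetical generated class for the converted .mlpackage.
func detectObjects(in pixelBuffer: CVPixelBuffer) throws -> [(label: String, box: CGRect)] {
    let model = try VNCoreMLModel(for: ObjectDetector(configuration: MLModelConfiguration()).model)

    var detections: [(label: String, box: CGRect)] = []
    let request = VNCoreMLRequest(model: model) { request, _ in
        let observations = request.results as? [VNRecognizedObjectObservation] ?? []
        for observation in observations where observation.confidence > 0.5 {
            guard let top = observation.labels.first else { continue }
            // boundingBox is normalized (0 to 1) with the origin at the bottom-left.
            detections.append((label: top.identifier, box: observation.boundingBox))
        }
    }
    request.imageCropAndScaleOption = .scaleFill

    try VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:]).perform([request])
    return detections
}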
1 reply · 0 boosts · 422 views · Jul ’25
The answer of "apple" goes to guardrailViolation?
I have been using "apple" to test Foundation Models. I thought this was local, but today the answer changed: halfway through an explanation, a guardrailViolation error was suddenly raised! And since yesterday, all references to "Apple II" and "Apple III" now refer me to consult apple.com! Do Foundation Models connect to the Internet for answers? Using beta 3.
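For what it's worth, a minimal sketch of catching the failure instead of letting it surface raw, assuming the error arrives as the guardrailViolation case of LanguageModelSession.GenerationError (the case name is taken from the behavior described above and should be checked against the current beta SDK):

import FoundationModels

func ask(_ prompt: String) async {
    let session = LanguageModelSession()
    do {
        let response = try await session.respond(to: prompt)
        print(response.content)
    } catch LanguageModelSession.GenerationError.guardrailViolation {
        // The safety layer blocked the prompt or the partial output.
        print("Blocked by guardrails; rephrase the prompt.")
    } catch {
        print("Generation failed:", error)
    }
}

My understanding is that the default system model and its guardrails both run on device, so a guardrail block by itself shouldn't indicate a network round trip.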
3 replies · 0 boosts · 167 views · Jul ’25
Getting CoreML to run inference on already allocated gpu buffers
I am running some experiments with WebGPU using the wgpu crate in Rust. I have some buffers already allocated on the GPU. Is it possible to use those existing buffers directly as inputs to a predict call in Core ML? I want to avoid GPU-to-CPU download time as much as possible. Or are there any other ways to do something like this? Is this only possible using the new Tensor object which came out with Metal 4?
0 replies · 0 boosts · 424 views · 2w
SpeechTranscriber time indexes - detect pauses?
I'm experimenting with the new SpeechTranscriber in macOS/iOS 26, transcribing speech from a prerecorded mp4 file. Speed and quality are amazing!

I've told the transcriber to include time indexes. Each run is always exactly one word, which can be very useful. When I look at the indexes, the end of one run is always identical to the start of the next run, even if there's a pause. I'd like to identify pauses, perhaps to generate something like phrases for subtitling. With each run of text going into the next I can't do this, other than using punctuation, which might be rather rough.

Any suggestions on detecting pauses, or getting that kind of metadata from the transcriber?

Here's a short sample, showing each run with the start, end, and characters in the run:

105.9 --> 107.04 I
107.04 --> 107.16 think
107.16 --> 108.0 more
108.0 --> 108.42 lighting
108.42 --> 108.6 is
108.6 --> 108.72 definitely
108.72 --> 109.2 needed,
109.2 --> 109.92 downtown.
109.98 --> 110.4 My
110.4 --> 110.52 only
110.52 --> 110.7 question
110.7 --> 111.06 is,
111.06 --> 111.48 poll
111.48 --> 111.78 five,
111.78 --> 111.84 that
111.84 --> 112.08 you're
112.08 --> 112.38 increasing
112.38 --> 112.5 the
112.5 --> 113.34 50,000?
113.4 --> 113.58 Where
113.58 --> 113.88 exactly
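In case it helps, a small post-processing sketch that groups word runs into phrases whenever the gap between one run's end and the next run's start exceeds a threshold, with sentence punctuation as a fallback. The Run struct is just a stand-in for whatever the transcriber's timed results map to.

import Foundation

// Stand-in for one timed word run from the transcriber.
struct Run {
    let start: TimeInterval
    let end: TimeInterval
    let text: String
}

struct Phrase {
    let start: TimeInterval
    let end: TimeInterval
    let text: String
}

// Split runs into phrases on silent gaps (and, as a fallback, on ./?/! endings).
func phrases(from runs: [Run], minimumGap: TimeInterval = 0.35) -> [Phrase] {
    var result: [Phrase] = []
    var current: [Run] = []

    func flush() {
        guard let first = current.first, let last = current.last else { return }
        let text = current.map(\.text).joined(separator: " ")
        result.append(Phrase(start: first.start, end: last.end, text: text))
        current.removeAll()
    }

    for (index, run) in runs.enumerated() {
        current.append(run)
        let endsSentence = run.text.hasSuffix(".") || run.text.hasSuffix("?") || run.text.hasSuffix("!")
        let gapFollows = index + 1 < runs.count && runs[index + 1].start - run.end >= minimumGap
        if gapFollows || endsSentence {
            flush()
        }
    }
    flush()
    return result
}

With the sample above the silent gaps are only a few hundredths of a second, so the threshold needs tuning, and punctuation may still end up doing most of the work.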
0 replies · 0 boosts · 171 views · Jun ’25
Named Entity Recognition Model for Measurements
In an under-development macOS & iOS app, I need to identify various measurements from OCR'ed text: length, weight, counts per inch, area, percentage. The unit type (e.g. UnitLength) needs to be identified, as well as the measurement's unit (e.g. .inches), in order to convert the measurement to the app's internal standard (e.g. centimetres), the value of which is stored in the relevant Core Data entity. The use of NLTagger and NLTokenizer is problematic because of the various representations of the measurements: e.g. "50g.", "50 g", "50 grams", "1 3/4 oz." Currently, I use a bespoke algorithm based on String contains and step-wise evaluation of characters, which is reasonably accurate but requires frequent updating as further representations are detected. I'm aware that the Python spaCy model is capable of NER measurement recognition, but am reluctant to incorporate a Python-based solution into a production app. (ref [https://developer.apple.com/forums/thread/30092]) My preference is for an open-source NER measurement model that can be used as, or converted to, some form of Swift-compatible machine learning model. Does anyone know of such a model?
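Not an NER model, but as a point of comparison with the bespoke algorithm, a small sketch of the regex-plus-unit-table approach using Foundation's Measurement types; the unit table and the regex are illustrative and would need the same kind of ongoing extension described above.

import Foundation

// Illustrative table mapping OCR'd unit spellings to Foundation units.
let massUnits: [String: UnitMass] = [
    "g": .grams, "g.": .grams, "gram": .grams, "grams": .grams,
    "kg": .kilograms, "oz": .ounces, "oz.": .ounces, "lb": .pounds
]

// Parse strings like "50g.", "50 g", "50 grams", "1.75 oz." into a canonical unit.
func massInGrams(from text: String) -> Measurement<UnitMass>? {
    let pattern = #"(?<value>\d+(?:[.,]\d+)?)\s*(?<unit>[A-Za-z]+\.?)"#
    guard let regex = try? NSRegularExpression(pattern: pattern),
          let match = regex.firstMatch(in: text, range: NSRange(text.startIndex..., in: text)),
          let valueRange = Range(match.range(withName: "value"), in: text),
          let unitRange = Range(match.range(withName: "unit"), in: text),
          let value = Double(text[valueRange].replacingOccurrences(of: ",", with: ".")),
          let unit = massUnits[text[unitRange].lowercased()]
    else { return nil }
    return Measurement(value: value, unit: unit).converted(to: .grams)
}

Fractions like "1 3/4" would still need a separate pass, which is exactly where this approach starts to sprawl, so I share the interest in a proper NER model.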
0 replies · 0 boosts · 114 views · Mar ’25
ActivityClassifier doesn't classify movement
I'm using a custom Create ML model to classify the movement of a user's hand in a game. The classifier has 3 different spell movements, but my code constantly predicts all of them at an equal 1/3 probability regardless of movement, which leads me to believe my code isn't correct (as opposed to the model), which in Create ML at least gives me a heavily weighted prediction.

My code is below. After adding debug prints everywhere, all the data looks good to me and matches my test CSV data, so I'm thinking my issue must be in the setup of my model code?

/// Feeds samples into the model and keeps a sliding window of the last N frames.
final class WandGestureStreamer {
    static let shared = WandGestureStreamer()

    private let model: SpellActivityClassifier
    private var samples: [Transform] = []
    private let windowSize = 100 // number of frames the model expects
    /// RNN hidden state passed between inferences
    private var stateIn: MLMultiArray
    /// Last transform dropped from the window for continuity
    private var lastDropped: Transform?

    private init() {
        let config = MLModelConfiguration()
        self.model = try! SpellActivityClassifier(configuration: config)
        // Initialize stateIn to the model's required shape
        let constraint = self.model.model.modelDescription
            .inputDescriptionsByName["stateIn"]!
            .multiArrayConstraint!
        self.stateIn = try! MLMultiArray(shape: constraint.shape, dataType: .double)
    }

    /// Call once per frame with the latest wand position (or any feature vector).
    func appendSample(_ sample: Transform) {
        samples.append(sample)
        // drop oldest frame if over capacity, retaining it for delta at window start
        if samples.count > windowSize {
            lastDropped = samples.removeFirst()
        }
    }

    func classifyIfReady(threshold: Double = 0.6) -> (label: String, confidence: Double)? {
        guard samples.count == windowSize else { return nil }
        do {
            let input = try makeInput(initialState: stateIn)
            let output = try model.prediction(input: input)
            // Save state for continuity
            stateIn = output.stateOut
            let best = output.label
            let conf = output.labelProbability[best] ?? 0
            // If you've recognized a gesture with high confidence:
            if conf > threshold {
                return (best, conf)
            } else {
                return nil
            }
        } catch {
            print("Error", error.localizedDescription, error)
            return nil
        }
    }

    /// Constructs a SpellActivityClassifierInput from recorded wand transforms.
    func makeInput(initialState: MLMultiArray) throws -> SpellActivityClassifierInput {
        let count = samples.count as NSNumber
        let shape = [count]
        let timeArr = try MLMultiArray(shape: shape, dataType: .double)
        let dxArr = try MLMultiArray(shape: shape, dataType: .double)
        let dyArr = try MLMultiArray(shape: shape, dataType: .double)
        let dzArr = try MLMultiArray(shape: shape, dataType: .double)
        let rwArr = try MLMultiArray(shape: shape, dataType: .double)
        let rxArr = try MLMultiArray(shape: shape, dataType: .double)
        let ryArr = try MLMultiArray(shape: shape, dataType: .double)
        let rzArr = try MLMultiArray(shape: shape, dataType: .double)

        for (i, sample) in samples.enumerated() {
            let previousSample = i > 0 ? samples[i - 1] : lastDropped
            let model = WandMovementRecording.DataModel(transform: sample, previous: previousSample)
            // print("model", model)
            timeArr[i] = NSNumber(value: model.timestamp)
            dxArr[i] = NSNumber(value: model.dx)
            dyArr[i] = NSNumber(value: model.dy)
            dzArr[i] = NSNumber(value: model.dz)

            let rot = model.rotation
            rwArr[i] = NSNumber(value: rot.w)
            rxArr[i] = NSNumber(value: rot.x)
            ryArr[i] = NSNumber(value: rot.y)
            rzArr[i] = NSNumber(value: rot.z)
        }

        return SpellActivityClassifierInput(
            dx: dxArr, dy: dyArr, dz: dzArr,
            rotation_w: rwArr, rotation_x: rxArr, rotation_y: ryArr, rotation_z: rzArr,
            timestamp: timeArr,
            stateIn: initialState
        )
    }
}
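One quick diagnostic that might help, since a flat 1/N distribution often means the inputs don't line up with what the compiled model expects (for instance, an activity classifier exported from Create ML typically wants a fixed prediction-window length rather than whatever [count] happens to be): print the model's declared input constraints next to the arrays makeInput builds. Nothing below is specific to this model beyond the names already used above.

import CoreML

// Compare the model's declared input shapes with the arrays being fed in.
func dumpInputExpectations(of classifier: SpellActivityClassifier,
                           builtArrays: [String: MLMultiArray]) {
    let descriptions = classifier.model.modelDescription.inputDescriptionsByName
    for (name, description) in descriptions {
        guard let constraint = description.multiArrayConstraint else { continue }
        let expected = constraint.shape.map(\.intValue)
        let actual = builtArrays[name]?.shape.map(\.intValue)
        print("input \(name): expects \(expected), got \(actual ?? [])")
    }
}

It's also worth confirming that output.stateOut actually changes between calls; if it doesn't, feeding a fresh zeroed stateIn for each full window is a cheap experiment.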
1 reply · 0 boosts · 375 views · Jul ’25
VNDetectFaceRectanglesRequest does not use the Neural Engine?
I'm on Tahoe 26.1 / M3 MacBook Air. I'm using VNDetectFaceRectanglesRequest as properly as possible, as in the minimal command line program attached below. For some reason, I always get:

MLE5Engine is disabled through the configuration

printed. I couldn't find any notes in the developer docs saying that VNDetectFaceRectanglesRequest cannot use the Apple Neural Engine. I'm assuming there is something wrong with my code, but I wasn't able to find any remarks in the documentation about where it might be. I wasn't able to find the above error message online either. I would appreciate your help a lot and thank you in advance.

The code below accesses the video from AVCaptureDevice.DeviceType.builtInWideAngleCamera. Currently it directly chooses the 0th format, which has the largest resolution (Full HD on my M3 MBA) and "4:2:0" chroma-subsampled encoding ("420v"). After accessing video, it performs a VNDetectFaceRectanglesRequest. It prints "VNDetectFaceRectanglesRequest completion Handler called" many times, then prints the error message above, then continues printing "VNDetectFaceRectanglesRequest completion Handler called" until the user quits it.

To run it in Xcode: File > New project > Mac command line tool. Paste the code below, then click on the root file > Targets > Signing & Capabilities > Hardened Runtime > Resource Access > Camera.

A possible explanation could be that either Apple's internal Core ML code for this function works on GPU/CPU only, or it doesn't accept 420v as supplied by the MacBook Air camera.

import AVKit
import Vision

var videoDataOutput: AVCaptureVideoDataOutput = AVCaptureVideoDataOutput()
var detectionRequests: [VNDetectFaceRectanglesRequest]?
var videoDataOutputQueue: DispatchQueue = DispatchQueue(label: "queue")

class XYZ: /*NSViewController or NSObject*/ NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {

    func viewDidLoad() {
        //super.viewDidLoad()
        let session = AVCaptureSession()
        let inputDevice = try! self.configureFrontCamera(for: session)
        self.configureVideoDataOutput(for: inputDevice.device, resolution: inputDevice.resolution, captureSession: session)
        self.prepareVisionRequest()
        session.startRunning()
    }

    fileprivate func highestResolution420Format(for device: AVCaptureDevice) -> (format: AVCaptureDevice.Format, resolution: CGSize)? {
        let deviceFormat = device.formats[0]
        print(deviceFormat)
        let dims = CMVideoFormatDescriptionGetDimensions(deviceFormat.formatDescription)
        let resolution = CGSize(width: CGFloat(dims.width), height: CGFloat(dims.height))
        return (deviceFormat, resolution)
    }

    fileprivate func configureFrontCamera(for captureSession: AVCaptureSession) throws -> (device: AVCaptureDevice, resolution: CGSize) {
        let deviceDiscoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [AVCaptureDevice.DeviceType.builtInWideAngleCamera], mediaType: .video, position: AVCaptureDevice.Position.unspecified)
        let device = deviceDiscoverySession.devices.first!
        let deviceInput = try! AVCaptureDeviceInput(device: device)
        captureSession.addInput(deviceInput)
        let highestResolution = self.highestResolution420Format(for: device)!
        try! device.lockForConfiguration()
        device.activeFormat = highestResolution.format
        device.unlockForConfiguration()
        return (device, highestResolution.resolution)
    }

    fileprivate func configureVideoDataOutput(for inputDevice: AVCaptureDevice, resolution: CGSize, captureSession: AVCaptureSession) {
        videoDataOutput.setSampleBufferDelegate(self, queue: videoDataOutputQueue)
        captureSession.addOutput(videoDataOutput)
    }

    fileprivate func prepareVisionRequest() {
        let faceDetectionRequest: VNDetectFaceRectanglesRequest = VNDetectFaceRectanglesRequest(completionHandler: { (request, error) in
            print("VNDetectFaceRectanglesRequest completion Handler called")
        })
        // Start with detection
        detectionRequests = [faceDetectionRequest]
    }

    // MARK: AVCaptureVideoDataOutputSampleBufferDelegate

    // Handle delegate method callback on receiving a sample buffer.
    public func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        var requestHandlerOptions: [VNImageOption: AnyObject] = [:]
        let cameraIntrinsicData = CMGetAttachment(sampleBuffer, key: kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix, attachmentModeOut: nil)
        if cameraIntrinsicData != nil {
            requestHandlerOptions[VNImageOption.cameraIntrinsics] = cameraIntrinsicData
        }

        let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!

        // No tracking object detected, so perform initial detection
        let imageRequestHandler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer,
                                                        orientation: CGImagePropertyOrientation.up,
                                                        options: requestHandlerOptions)
        try! imageRequestHandler.perform(detectionRequests!)
    }
}

let X = XYZ()
X.viewDidLoad()
sleep(9999999)
0 replies · 0 boosts · 329 views · 4w
Hardware Support for Low Precision Data Types?
Hi all, I'm trying to find out if/when we can expect mxfp8/mxfp4 support on Apple Silicon. I've noticed that mlx now has casting data types, but all computation is still done in bf16. Would be great to reduce power consumption with support for these lower precision data types since edge inference is already typically done at a lower precision! Thanks in advance.
0 replies · 0 boosts · 226 views · Nov ’25