Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating these features, share best practices, and explore what's possible for your app here.

Posts under Machine Learning & AI topic

Initializing session with transcript ignores tools
When I initialize a session with an existing transcript using this initializer:

    public convenience init(model: SystemLanguageModel = .default,
                            guardrails: LanguageModelSession.Guardrails = .default,
                            tools: [any Tool] = [],
                            transcript: Transcript)

the tools get ignored: the model never uses them. When inspecting the transcript, I can see that the instructions entry does not have any tools available to it. I tried this both for transcripts that already include an instructions entry and for ones that don't; both yield the same result. Is this the intended behavior, or am I missing something here?
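For reference, a minimal sketch of the setup being described, using the initializer quoted above (WeatherTool is a hypothetical Tool conformance; savedTranscript is a Transcript captured from a previous session):

    import FoundationModels

    let session = LanguageModelSession(
        model: .default,
        guardrails: .default,
        tools: [WeatherTool()],      // hypothetical tool; reportedly never invoked when a transcript is supplied
        transcript: savedTranscript  // previously captured Transcript
    )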
Replies: 1 · Boosts: 0 · Views: 208 · Last activity: Jul ’25
What's the best way to load adapters to try?
I'm new to Swift and was hoping a Playground would support loading adapters. When I tried, I got a permissions error; I'm guessing it's because the adapter file isn't in the project, and Playgrounds don't like reaching outside the project? A tutorial and some sample code would be helpful, along with benchmarks for how long loading is expected to take. Selfishly, I'm on an M2 Mac mini.
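A hedged sketch of what loading a custom adapter might look like, assuming the Adapter(fileURL:) initializer from the adapter-training documentation (the path is illustrative, and the file must be readable by the process, which is likely where the Playground permissions error comes in):

    import Foundation
    import FoundationModels

    do {
        // Illustrative path; assumption: Adapter exposes a fileURL-based initializer.
        let adapterURL = URL(fileURLWithPath: "/path/to/my-adapter.fmadapter")
        let adapter = try SystemLanguageModel.Adapter(fileURL: adapterURL)
        let model = SystemLanguageModel(adapter: adapter)
        let session = LanguageModelSession(model: model)
        _ = session
    } catch {
        print("Adapter load failed:", error)
    }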
Replies: 1 · Boosts: 0 · Views: 294 · Last activity: Jul ’25
My app crashes in the Portrait private framework
    Incident Identifier: 4C22F586-71FB-4644-B823-A4B52D158057
    CrashReporter Key:   adc89b7506c09c2a6b3a9099cc85531bdaba9156
    Hardware Model:      Mac16,10
    Process:             PRISMLensCore [16561]
    Path:                /Applications/PRISMLens.app/Contents/Resources/app.asar.unpacked/node_modules/core-node/PRISMLensCore.app/PRISMLensCore
    Identifier:          com.prismlive.camstudio
    Version:             (null) ((null))
    Code Type:           ARM-64
    Parent Process:      ? [16560]
    Date/Time:           (null)
    OS Version:          macOS 15.4 (24E5228e)
    Report Version:      104

    Exception Type:  EXC_CRASH (SIGABRT)
    Exception Codes: 0x00000000 at 0x0000000000000000
    Crashed Thread:  34

    Application Specific Information:
    *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[__NSArrayM insertObject:atIndex:]: object cannot be nil'

    Thread 34 Crashed:
    0   CoreFoundation          0x000000018ba4dde4 0x18b960000 + 974308 (__exceptionPreprocess + 164)
    1   libobjc.A.dylib         0x000000018b512b60 0x18b4f8000 + 109408 (objc_exception_throw + 88)
    2   CoreFoundation          0x000000018b97e69c 0x18b960000 + 124572 (-[__NSArrayM insertObject:atIndex:] + 1276)
    3   Portrait                0x0000000257e16a94 0x257da3000 + 473748 (-[PTMSRResize addAdditionalOutput:] + 604)
    4   Portrait                0x0000000257de91c0 0x257da3000 + 287168 (-[PTEffectRenderer initWithDescriptor:metalContext:useHighResNetwork:faceAttributesNetwork:humanDetections:prevTemporalState:asyncInitQueue:sharedResources:] + 6204)
    5   Portrait                0x0000000257dab21c 0x257da3000 + 33308 (__33-[PTEffect updateEffectDelegate:]_block_invoke.241 + 164)
    6   libdispatch.dylib       0x000000018b739b2c 0x18b738000 + 6956 (_dispatch_call_block_and_release + 32)
    7   libdispatch.dylib       0x000000018b75385c 0x18b738000 + 112732 (_dispatch_client_callout + 16)
    8   libdispatch.dylib       0x000000018b742350 0x18b738000 + 41808 (_dispatch_lane_serial_drain + 740)
    9   libdispatch.dylib       0x000000018b742e2c 0x18b738000 + 44588 (_dispatch_lane_invoke + 388)
    10  libdispatch.dylib       0x000000018b74d264 0x18b738000 + 86628 (_dispatch_root_queue_drain_deferred_wlh + 292)
    11  libdispatch.dylib       0x000000018b74cae8 0x18b738000 + 84712 (_dispatch_workloop_worker_thread + 540)
    12  libsystem_pthread.dylib 0x000000018b8ede64 0x18b8eb000 + 11876 (_pthread_wqthread + 292)
    13  libsystem_pthread.dylib 0x000000018b8ecb74 0x18b8eb000 + 7028 (start_wqthread + 8)
Replies: 1 · Boosts: 0 · Views: 81 · Last activity: Mar ’25
Converting TF2 object detection to CoreML
I've spent way too long today trying to convert a TensorFlow 2 object-detection model to a Core ML object detector (with bounding boxes, labels, and probability scores). The model is 'SSD MobileNet v2 320x320' from the TF2 detection zoo: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md

I've been following all sorts of posts and ChatGPT, plus:
https://apple.github.io/coremltools/docs-guides/source/tensorflow-2.html#convert-a-tensorflow-concrete-function
https://developer.apple.com/videos/play/wwdc2020/10153/?time=402

I keep hitting the same errors though, mostly around:

    NotImplementedError: Expected model format: [SavedModel | concrete_function | tf.keras.Model | .h5 | GraphDef], got <ConcreteFunction signature_wrapper(input_tensor) at 0x366B87790>

I've had varying success, including missing output labels/predictions. But I simply want to create the Core ML model with all the right inputs and outputs (including correct names) as detailed in the docs here: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_on_mobile_tf2.md

It goes without saying I don't have much (any) experience with this stuff, including Python, so the whole thing's been a bit of a headache. If anyone is able to help, that would be great. FWIW I'm not attached to any one specific model, but what I do need at minimum is a Core ML model that can detect objects (it has to at least include lights and lamps) within a live video image, detecting where in the image each object is. The simplest script I have looks like this:

    import os
    import coremltools as ct
    import tensorflow as tf

    # tf.saved_model.load does not expand "~", so expand it explicitly
    model = tf.saved_model.load(
        os.path.expanduser("~/tf_models/ssd_mobilenet_v2_320x320_coco17_tpu-8/saved_model"))
    concrete_func = model.signatures[tf.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY]

    # coremltools expects concrete functions wrapped in a list; passing the bare
    # function is what triggers the NotImplementedError quoted above
    mlmodel = ct.convert(
        [concrete_func],
        source="tensorflow",
        inputs=[ct.TensorType(shape=(1, 320, 320, 3))],
    )
    # the save format is inferred from the .mlpackage extension
    mlmodel.save("YourModel.mlpackage")
Replies: 1 · Boosts: 0 · Views: 422 · Last activity: Jul ’25
tensorflow-metal broken with TensorFlow 2.20
Hi, testing the latest tensorflow-metal plugin with TensorFlow 2.20 doesn't work. Using Python:

    Python 3.12.11 (main, Jun 3 2025, 15:41:47) [Clang 17.0.0 (clang-1700.0.13.3)] on darwin

Simple testing shows the error:

    >>> import tensorflow as tf
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow/__init__.py", line 438, in <module>
        _ll.load_library(_plugin_dir)
      File "/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow/python/framework/load_library.py", line 151, in load_library
        py_tf.TF_LoadLibrary(lib)
    tensorflow.python.framework.errors_impl.NotFoundError: dlopen(/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/libmetal_plugin.dylib, 0x0006): Library not loaded: @rpath/_pywrap_tensorflow_internal.so
      Referenced from: <8B62586B-B082-3113-93AB-FD766A9960AE> /Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/libmetal_plugin.dylib
      Reason: tried: '/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/../_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/_pywrap_tensorflow_internal.so' (no such file), '/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/../_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/_pywrap_tensorflow_internal.so' (no such file), '/opt/homebrew/lib/_pywrap_tensorflow_internal.so' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/opt/homebrew/lib/_pywrap_tensorflow_internal.so' (no such file)

    >>> tf.config.experimental.list_physical_devices('GPU')
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    NameError: name 'tf' is not defined

I worked around this error by copying _pywrap_tensorflow_internal.so to where it is searched for:

    1) mkdir /Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/../_solib_darwin_arm64
    2) mkdir /Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/../_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/
    3) cp /Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so /Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/../_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/

It then fails with a missing symbol: Symbol not found: __ZN10tensorflow28_AttrValue_default_instance_E in libmetal_plugin.dylib. Full log with import tensorflow as tf:

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow/__init__.py", line 438, in <module>
        _ll.load_library(_plugin_dir)
      File "/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow/python/framework/load_library.py", line 151, in load_library
        py_tf.TF_LoadLibrary(lib)
    tensorflow.python.framework.errors_impl.NotFoundError: dlopen(/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/libmetal_plugin.dylib, 0x0006): Symbol not found: __ZN10tensorflow28_AttrValue_default_instance_E
      Referenced from: <8B62586B-B082-3113-93AB-FD766A9960AE> /Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/libmetal_plugin.dylib
      Expected in: <2FF91C8B-0CB6-3E66-96B7-092FDF36772E> /Users/obg/npu/venv-tf/lib/python3.12/site-packages/_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/_pywrap_tensorflow_internal.so
Replies: 1 · Boosts: 0 · Views: 625 · Last activity: Oct ’25
Help Needed: SwiftUI View with Camera Integration and Core ML Object Recognition
Hi everyone, I'm working on a SwiftUI app and need help building a view that integrates the device's camera and uses a pre-trained Core ML model for real-time object recognition. Here's what I want to achieve:

- Open the device's camera from a SwiftUI view.
- Capture frames from the camera feed and analyze them using a Create ML-trained Core ML model.
- If a specific figure/object is recognized, automatically close the camera view and navigate to another screen in my app.

I'm looking for guidance on:

- Setting up live camera capture in SwiftUI.
- Using the Core ML and Vision frameworks for real-time object recognition in this context.
- Managing navigation between views when the recognition condition is met.

Any advice, code snippets, or examples would be greatly appreciated! Thanks in advance!
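One possible shape for the recognition step, as a hedged sketch: MySketchClassifier, the "targetFigure" label, and the 0.8 threshold are all hypothetical, and the AVCaptureSession setup that delivers pixel buffers is omitted.

    import CoreML
    import Vision

    // Runs the Core ML classifier on one camera frame and calls onMatch when the
    // target object is recognized with high confidence. (In a real app, build the
    // VNCoreMLModel and request once and reuse them, rather than per frame.)
    func classify(pixelBuffer: CVPixelBuffer, onMatch: @escaping () -> Void) {
        guard let mlModel = try? MySketchClassifier(configuration: MLModelConfiguration()).model,
              let vnModel = try? VNCoreMLModel(for: mlModel) else { return }

        let request = VNCoreMLRequest(model: vnModel) { request, _ in
            guard let top = (request.results as? [VNClassificationObservation])?.first else { return }
            if top.identifier == "targetFigure" && top.confidence > 0.8 {
                // e.g. flip a @State/@Published flag that dismisses the camera view
                DispatchQueue.main.async { onMatch() }
            }
        }
        try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:]).perform([request])
    }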
Replies: 1 · Boosts: 0 · Views: 732 · Last activity: Jan ’25
Create ML model shows wrong outputs or predictions in Xcode
I am working on a Core ML image-classification model in Xcode that takes a 299x299 image and attempts to classify hand-drawn sketches. The model was trained using Create ML and works perfectly when tested in the Create ML preview. However, when used in the Xcode application, the classification results are incorrect. I have already verified that the image is correctly resized to 299x299 pixels, matching the model's input size. The classification always returns incorrect results, even for images that were classified correctly during training. I originally used kCVPixelFormatType_32ARGB, but I read that Core ML typically expects BGRA format. I updated my conversion function to use kCVPixelFormatType_32BGRA and CGImageAlphaInfo.premultipliedLast, but the issue persists. This makes me suspect that either the pixel format is still incorrect or that something went wrong during the .mlmodelc compilation.
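One detail worth checking here: kCVPixelFormatType_32BGRA pairs with premultipliedFirst plus byteOrder32Little in a CGContext, not premultipliedLast. A hedged sketch of a conversion along those lines (the 299x299 size is from the post; the function itself is an illustration, not the poster's code):

    import CoreGraphics
    import CoreVideo

    // CGImage -> CVPixelBuffer for a BGRA model input.
    func makePixelBuffer(from image: CGImage, width: Int = 299, height: Int = 299) -> CVPixelBuffer? {
        let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
                     kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
        var buffer: CVPixelBuffer?
        CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                            kCVPixelFormatType_32BGRA, attrs, &buffer)
        guard let pixelBuffer = buffer else { return nil }

        CVPixelBufferLockBaseAddress(pixelBuffer, [])
        defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }

        // BGRA in memory = alpha-first, 32-bit little-endian from CoreGraphics' view
        guard let context = CGContext(
            data: CVPixelBufferGetBaseAddress(pixelBuffer),
            width: width, height: height,
            bitsPerComponent: 8,
            bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
            space: CGColorSpaceCreateDeviceRGB(),
            bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue
                      | CGBitmapInfo.byteOrder32Little.rawValue
        ) else { return nil }

        context.draw(image, in: CGRect(x: 0, y: 0, width: width, height: height))
        return pixelBuffer
    }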
Replies: 1 · Boosts: 2 · Views: 469 · Last activity: Jan ’25
Image Playground Error: Unable to Generate Images Using externalProvider Style
I’m working on generating images using Image Playground. The code works fine for other styles but fails when using an external provider. I don’t see any other requirements mentioned in the documentation: https://developer.apple.com/documentation/imageplayground/imageplaygroundstyle/externalprovider?changes=_2 Has anyone else encountered a similar issue? The error message is also not very helpful; it simply states that the creation failed. Note: I have ChatGPT Plus enabled, and image generation using ChatGPT styles works fine in the Playground app. Here’s the relevant code snippet:

    do {
        let creator = try await ImageCreator()
        let concept = ImagePlaygroundConcept.text("Love")
        let images = creator.images(for: [concept], style: .externalProvider, limit: 1)
        for try await image in images {
            // Handle image
            break
        }
    } catch {
        // Handle error
    }

I’m using the iOS 26 RC, and when I print creator.availableStyles, it doesn’t include the externalProvider style:

    [ImagePlayground.ImagePlaygroundStyle(id: "animation", _representationInfo: nil),
     ImagePlayground.ImagePlaygroundStyle(id: "emoji", _representationInfo: nil),
     ImagePlayground.ImagePlaygroundStyle(id: "illustration", _representationInfo: nil),
     ImagePlayground.ImagePlaygroundStyle(id: "sketch", _representationInfo: nil),
     ImagePlayground.ImagePlaygroundStyle(id: "messages-background", _representationInfo: nil)]
Replies: 1 · Boosts: 0 · Views: 849 · Last activity: Sep ’25
What is the proper way to integrate a Core ML model into an Xcode app?
Hi, I have been trying to integrate a Core ML model into Xcode. The model was made using TensorFlow layers. I have included both the model info and a link to the app repository. I am mainly just really confused about why it's not working: it seems to only ever print the result for case 1 (there are 4 cases, labeled case 0 through case 3). If someone could help work me through this error, that would be great! Here is the link to the repository: https://github.com/ShivenKhurana1/Detect-to-Protect-App The file with the model code is called SecondView.swift, and here is the model info:

    Input:  conv2d_input -> Image (color, 224x224)
    Output: Identity     -> MultiArray (Float32, 1x4)
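Since the output is a raw Float32 1x4 MultiArray rather than a labeled classifier output, one common pitfall is reading a single fixed index instead of taking the argmax across all four scores. A hedged sketch of that step (the surrounding model and prediction names are hypothetical; only the 1x4 "Identity" output shape comes from the post):

    import CoreML

    // Returns the index (0...3) of the highest-scoring case in the 1x4 output.
    func predictedCase(from output: MLMultiArray) -> Int {
        var best = 0
        for i in 1..<output.count where output[i].floatValue > output[best].floatValue {
            best = i
        }
        return best
    }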
Replies: 1 · Boosts: 1 · Views: 176 · Last activity: Apr ’25
Is there an API that lets iOS app developers use a user's Apple Intelligence ChatGPT extension (logged-in ChatGPT account) with Apple Foundation Models?
Is there an API that allows iOS app developers to use Apple Foundation Models together with the user's Apple Intelligence ChatGPT extension and its logged-in ChatGPT account? I'm trying to build a real-time question feature for ChatGPT that uses the user's logged-in extension account while leveraging Apple Intelligence's LLM. Is there an API that can also make use of that extension's login account?
Replies: 1 · Boosts: 0 · Views: 177 · Last activity: Nov ’25
Error with Xcode beta 3: decodingFailure(FoundationModels.LanguageModelSession.GenerationError.Context...)
I want to test the @Generable demo:

    import FoundationModels

    @Generable
    enum Breakfast {
        case waffles
        case pancakes
        case bagels
        case eggs
    }

    do {
        let session = LanguageModelSession()
        let userInput = "I want something sweet."
        let prompt = "Pick the ideal breakfast for request: \(userInput)"
        let response = try await session.respond(to: prompt, generating: Breakfast.self)
        print(response.content)
    } catch let error {
        print(error)
    }

but I get the error below:

    decodingFailure(FoundationModels.LanguageModelSession.GenerationError.Context(debugDescription: "Failed to convert text into into GeneratedContent\nText: waffles", underlyingErrors: [Swift.DecodingError.dataCorrupted(Swift.DecodingError.Context(codingPath: [], debugDescription: "The given data was not valid JSON.", underlyingError: Optional(Error Domain=NSCocoaErrorDomain Code=3840 "Unexpected character 'w' around line 1, column 1." UserInfo={NSJSONSerializationErrorIndex=0, NSDebugDescription=Unexpected character 'w' around line 1, column 1.})))]))
Replies: 1 · Boosts: 0 · Views: 120 · Last activity: Jul ’25
CreateML
I'm trying to use the Spatial template to perform Object Tracking on a .usdz file that I create. After loading the file, which I can view correctly in the console, I start the training. Initially, I notice that disk usage on my machine increases; after several GB, the usage stops, but the training progress remains at 0.00% for hours, with the message "About 8hr." How can I work out what the issue is? Has anyone else experienced the same problem? Thanks, Diego
Replies: 1 · Boosts: 1 · Views: 627 · Last activity: Jan ’25
Foundation Models framework dyld symbol errors after macOS 26 Beta 2 - LanguageModelSession constructor missing
The Foundation Models framework worked perfectly on macOS 26 Beta 2, but starting from Beta 3 and continuing through Beta 6 (latest), I get dyld symbol errors even with the exact code from Apple's documentation.

Environment:
- macOS 26.0 Beta 6 (25A5351b)
- Xcode 26 Beta 6
- M4 Max MacBook Pro
- Apple Intelligence enabled and downloaded

Error details:

    dyld[Process]: Symbol not found: _$s16FoundationModels20LanguageModelSessionC5model10guardrails5tools12instructionsAcA06SystemcD0C_AC10GuardrailsVSayAA4Tool_pGAA12InstructionsVSgtcfC
    Referenced from: /path/to/app.debug.dylib
    Expected in: /System/Library/Frameworks/FoundationModels.framework/Versions/A/FoundationModels

Code used (exact from the documentation):

    import FoundationModels

    // This worked on Beta 2, crashes on Beta 3+
    let model = SystemLanguageModel.default
    let session = LanguageModelSession(model: model)
    let response = try await session.respond(to: "Hello")

What I've verified:
- FoundationModels.framework exists in /System/Library/Frameworks/
- The framework is properly linked in the Xcode project
- Apple Intelligence is enabled and working
- The same code works in older beta versions
- The issue persists even with completely fresh Xcode projects

Analysis: The dyld error suggests the LanguageModelSession(model:) constructor is missing. The symbol shows it's looking for a constructor with parameters (model:guardrails:tools:instructions:), but the documentation still shows the simple (model:) constructor.

Questions:
- Has the LanguageModelSession API changed since Beta 2?
- Should we now use the constructor with guardrails/tools/instructions parameters?
- Is this a known issue with recent betas?
- Are there updated code samples for the current API?

Additional context: This affects both basic SystemLanguageModel usage and custom adapter loading; the same dyld symbol errors occur when trying to create SystemLanguageModel(adapter: adapter) as well. Any guidance on the correct API usage for current betas would be greatly appreciated. The documentation appears to be out of sync with the actual framework implementation.
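For what it's worth, the mangled symbol in the error demangles to LanguageModelSession.init(model:guardrails:tools:instructions:) with an optional Instructions parameter. A hedged sketch of one experiment, after rebuilding against the matching SDK, is to spell that full initializer out explicitly (whether this resolves the beta mismatch is an open question):

    import FoundationModels

    // Every parameter the missing symbol refers to; instructions is
    // Optional<Instructions> per the mangled name, so nil is allowed.
    let session = LanguageModelSession(
        model: .default,
        guardrails: .default,
        tools: [],
        instructions: nil
    )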
Replies: 1 · Boosts: 0 · Views: 643 · Last activity: Sep ’25
Memory stride warning when loading CoreML models on ANE
When I do an uncached load of a Core ML model on the ANE, I receive this warning in the Xcode console:

    Type of hiddenStates in function main's I/O contains unknown strides. Using unknown strides for MIL tensor buffers with unknown shapes is not recommended in E5ML. Please use row_alignment_in_bytes property instead. Refer to https://e5-ml.apple.com/more-info/memory-layouts.html for more information.

However, the linked page does not seem to be working. Where can I find more information about this, and how can I fix it?
Replies: 1 · Boosts: 0 · Views: 209 · Last activity: Jul ’25
Developing an app in Europe using Image Playground
Hi, I want to develop an app that makes use of Image Playground. However, I am located in Europe, which makes this impossible, as Image Playground is not available to me, even though I would like to distribute the app in the US. Both the simulator and a physical device always report that Image Playground is not supported (@Environment(\.supportsImagePlayground) private var supportsImagePlayground). How do I set up my environment so that I can test the feature in my iOS application?
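For reference, a minimal sketch of gating UI on the environment value mentioned above (the view name and strings are illustrative):

    import SwiftUI

    struct GenerateButton: View {
        @Environment(\.supportsImagePlayground) private var supportsImagePlayground

        var body: some View {
            if supportsImagePlayground {
                Button("Create image") {
                    // present the Image Playground sheet here
                }
            } else {
                Text("Image Playground isn't available on this device or in this region.")
            }
        }
    }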
Replies: 1 · Boosts: 0 · Views: 690 · Last activity: Jan ’25
iOS 26 beta breaking my model
I just recently updated to the iOS 26 beta (23A5336a) to test an app I am developing that runs an MLModel loaded from a .mlmodelc file. On the current iOS 18.6.2 the model runs as expected with no issues. However, on iOS 26 I now get an error when performing an inference, where I pass a camera frame into the model. Below is the error I see when I attempt to run an inference; at the bottom it says:

    Failed with status=0x1d : statusType=0x9: Program Inference error
    status=-1 Unable to compute the prediction using a neural network model. It can be an invalid input data or broken/unsupported model

Does this indicate I need to convert my model or something? I don't understand, since it runs as normal on iOS 18. Any help getting this to run again would be greatly appreciated. Thank you.

    processRequest:model:qos:qIndex:modelStringID:options:returnValue:error:: Could not process request ret=0x1d
    lModel=_ANEModel: { modelURL=file:///var/containers/Bundle/Application/04F01BF5-D48B-44EC-A5F6-3C7389CF4856/RizzCanvas.app/faceParsing.mlmodelc/ : sourceURL=(null) : UUID=46228BFC-19B0-45BF-B18D-4A2942EEC144 : key={"isegment":0,"inputs":{"input":{"shape":[512,512,1,3,1]}},"outputs":{"var_633":{"shape":[512,512,1,19,1]},"94_argmax_out_value":{"shape":[512,512,1,1,1]},"argmax_out":{"shape":[512,512,1,1,1]},"var_637":{"shape":[512,512,1,19,1]}}} : identifierSource=1 : cacheURLIdentifier=01EF2D3DDB9BA8FD1FDE18C7CCDABA1D78C6BD02DC421D37D4E4A9D34B9F8181_93D03B87030C23427646D13E326EC55368695C3F61B2D32264CFC33E02FFD9FF : string_id=0x00000000 : program=_ANEProgramForEvaluation: { programHandle=259022032430 : intermediateBufferHandle=13949 : queueDepth=127 } : state=3 :
    [Espresso::ANERuntimeEngine::__forward_segment 0] evaluate[RealTime]WithModel returned 0; code=8 err=Error Domain=com.apple.appleneuralengine Code=8 "processRequest:model:qos:qIndex:modelStringID:options:returnValue:error:: ANEProgramProcessRequestDirect() Failed with status=0x1d : statusType=0x9: Program Inference error" UserInfo={NSLocalizedDescription=processRequest:model:qos:qIndex:modelStringID:options:returnValue:error:: ANEProgramProcessRequestDirect() Failed with status=0x1d : statusType=0x9: Program Inference error}
    [Espresso::handle_ex_plan] exception=Espresso exception: "Generic error": ANEF error: /private/var/containers/Bundle/Application/04F01BF5-D48B-44EC-A5F6-3C7389CF4856/RizzCanvas.app/faceParsing.mlmodelc/model.espresso.net, processRequest:model:qos:qIndex:modelStringID:options:returnValue:error:: ANEProgramProcessRequestDirect() Failed with status=0x1d : statusType=0x9: Program Inference error status=-1
    Unable to compute the prediction using a neural network model. It can be an invalid input data or broken/unsupported model (error code: -1).
    Error Domain=com.apple.Vision Code=3 "The VNCoreMLTransform request failed" UserInfo={NSLocalizedDescription=The VNCoreMLTransform request failed, NSUnderlyingError=0x114d92940 {Error Domain=com.apple.CoreML Code=0 "Unable to compute the prediction using a neural network model. It can be an invalid input data or broken/unsupported model (error code: -1)." UserInfo={NSLocalizedDescription=Unable to compute the prediction using a neural network model. It can be an invalid input data or broken/unsupported model (error code: -1).}}}
Replies: 1 · Boosts: 0 · Views: 1k · Last activity: Sep ’25
Foundational Model - Image as Input? Timeline
Hi all, I am interested in unlocking unique applications with the new foundation models. I have a few questions regarding the availability of the following features:

- Image input: the June 2025 update mentions "image" 44 times (https://machinelearning.apple.com/research/apple-foundation-models-2025-updates), but I can't find any information about using images as the input/prompt for the foundation models. When will this be available? I understand that there are existing Vision ML APIs, but I want image input to a multimodal on-device LLM (a VLM) instead, for image-understanding features like "Which player is holding the ball in the image?"
- Cloud foundation model: when will this be available?

Thanks! Clement :)
Replies: 1 · Boosts: 0 · Views: 495 · Last activity: Sep ’25