The What’s New in Create ML session at WWDC24 went into great depth on time-series forecasting models (beginning at 15:14) and mentioned these new models, capabilities, and tools for iOS 18. So far, all I can find is the API documentation. I don’t see any other WWDC24 session covering these new time-series forecasting Create ML features.
Is there more substance/documentation on how to use these with Create ML? Maybe I am looking in the wrong place but I am fairly new with ML.
Is there any food truck / donut shop demo or sample code like in the video?
It would be of great interest to get ahead of the curve on this for business applications that could take advantage of it with inventory / ordering data.
Issue type: Bug
TensorFlow metal version: 1.1.1
TensorFlow version: 2.18
OS platform and distribution: macOS 15.2
Python version: 3.11.11
GPU model and memory: Apple M2 Max GPU 38-cores
Standalone code to reproduce the issue:
import tensorflow as tf

if __name__ == '__main__':
    gpus = tf.config.experimental.list_physical_devices('GPU')
    print(gpus)
Current behavior
Apple silicon GPU with tensorflow-metal==1.1.0 and Python 3.11 works fine with tensorboard==2.17.0.
This is normal output:
/Users/mspanchenko/anaconda3/envs/cryptoNN_ml_core/bin/python /Users/mspanchenko/VSCode/cryptoNN/ml/core_second_window/test_tensorflow_gpus.py
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
Process finished with exit code 0
But if I upgrade TensorFlow to 2.18, I get this error:
/Users/mspanchenko/anaconda3/envs/cryptoNN_ml_core/bin/python /Users/mspanchenko/VSCode/cryptoNN/ml/core_second_window/test_tensorflow_gpus.py
Traceback (most recent call last):
File "/Users/mspanchenko/VSCode/cryptoNN/ml/core_second_window/test_tensorflow_gpus.py", line 1, in <module>
import tensorflow as tf
File "/Users/mspanchenko/anaconda3/envs/cryptoNN_ml_core/lib/python3.11/site-packages/tensorflow/__init__.py", line 437, in <module>
_ll.load_library(_plugin_dir)
File "/Users/mspanchenko/anaconda3/envs/cryptoNN_ml_core/lib/python3.11/site-packages/tensorflow/python/framework/load_library.py", line 151, in load_library
py_tf.TF_LoadLibrary(lib)
tensorflow.python.framework.errors_impl.NotFoundError: dlopen(/Users/mspanchenko/anaconda3/envs/cryptoNN_ml_core/lib/python3.11/site-packages/tensorflow-plugins/libmetal_plugin.dylib, 0x0006): Symbol not found: __ZN3tsl8internal10LogMessageC1EPKcii
Referenced from: <D2EF42E3-3A7F-39DD-9982-FB6BCDC2853C> /Users/mspanchenko/anaconda3/envs/cryptoNN_ml_core/lib/python3.11/site-packages/tensorflow-plugins/libmetal_plugin.dylib
Expected in: <2814A58E-D752-317B-8040-131217E2F9AA> /Users/mspanchenko/anaconda3/envs/cryptoNN_ml_core/lib/python3.11/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so
Process finished with exit code 1
We’ve encountered what appears to be a CoreML regression between macOS 26.0.1 and macOS 26.1 Beta.
In macOS 26.0.1, CoreML models run and produce correct results. However, in macOS 26.1 Beta, the same models produce scrambled or corrupted outputs, suggesting that tensor memory is being read or written incorrectly. The behavior is consistent with a low-level stride or pointer arithmetic issue — for example, using 16-bit strides on 32-bit data or other mismatches in tensor layout handling.
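To illustrate what we mean by a stride or width mismatch, here is a purely illustrative Swift snippet (not CoreML internals): reading 32-bit floats with a 16-bit element width and stride produces exactly the kind of scrambled values we observe.

import Foundation

// Purely illustrative: these 8 bytes hold two Float32 values (1.0 and 2.0, little-endian).
let bytes: [UInt8] = [0x00, 0x00, 0x80, 0x3F,
                      0x00, 0x00, 0x00, 0x40]

bytes.withUnsafeBytes { buffer in
    // Correct: 32-bit elements read with a 4-byte stride.
    let correct = (0..<2).map { buffer.loadUnaligned(fromByteOffset: $0 * 4, as: Float32.self) }
    print(correct)   // [1.0, 2.0]

    // Wrong: the same bytes read as 16-bit elements with a 2-byte stride.
    let wrong = (0..<4).map { buffer.loadUnaligned(fromByteOffset: $0 * 2, as: Float16.self) }
    print(wrong)     // nonsense values, analogous to the corrupted model output
}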
Reproduction
Install ON1 Photo RAW 2026 or ON1 Resize 2026 on macOS 26.0.1.
Use the newest Highest Quality resize model, which is Stable Diffusion–based and runs through CoreML.
Observe correct, high-quality results.
Upgrade to macOS 26.1 Beta and run the same operation again.
The output becomes visually scrambled or corrupted.
We are also seeing similar issues with another Stable Diffusion UNet model that previously worked correctly on macOS 26.0.1. This suggests the regression may affect multiple diffusion-style architectures, likely due to a change in CoreML’s tensor stride, layout computation, or memory alignment between these versions.
Notes
The affected models are exported using standard CoreML conversion pipelines.
No custom operators or third-party CoreML runtime layers are used.
The issue reproduces consistently across multiple machines.
It would be helpful to know if there were changes to CoreML’s tensor layout, precision handling, or MLCompute backend between macOS 26.0.1 and 26.1 Beta, or if this is a known regression in the current beta.
In the name of God, please allow initializing GeneratedContent from an array of key-value pairs. It’s literally the same thing KeyValuePairs uses internally, but it would let us initialize structure-like GeneratedContent from dynamic data without resorting to unsafeBitCast hacks.
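To make the request concrete, here is a hedged sketch of the call site this would enable. The array-based initializer shown is hypothetical (it is the feature being requested), and ConvertibleToGeneratedContent is assumed to be the element type the existing KeyValuePairs-based initializer takes.

import FoundationModels

// Hypothetical: GeneratedContent has no array-based initializer today.
let dynamicFields: [(String, any ConvertibleToGeneratedContent)] = [
    ("name", "Donut Shop"),
    ("rating", 5)
]

// Desired: build structure-like GeneratedContent from runtime data
// instead of a compile-time KeyValuePairs literal.
let content = GeneratedContent(properties: dynamicFields)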
In an App Playground Xcode project there is no Targets menu in the UI. When I try to use the model, it says the model is not in scope. When I did this in a regular project, Xcode automatically generated a Swift class and there were no errors because the model belonged to a target, but I see no place to add a target in an App Playground.
Greetings! I was trying to get a response from the LanguageModelSession but I just keep getting the following:
Error getting response: Model Catalog error: Error Domain=com.apple.UnifiedAssetFramework Code=5000 "There are no underlying assets (neither atomic instance nor asset roots) for consistency token for asset set com.apple.MobileAsset.UAF.FM.Overrides" UserInfo={NSLocalizedFailureReason=There are no underlying assets (neither atomic instance nor asset roots) for consistency token for asset set com.apple.MobileAsset.UAF.FM.Overrides}
This occurs both on macOS 15.5 running the new Xcode beta with an iOS 26 simulator, and on macOS 26 with the Xcode beta. The simulators are both iPhone 16 Pros.
I was wondering if anyone had any advice?
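One thing that might help narrow this down is checking model availability before creating the session; a minimal sketch, assuming the documented SystemLanguageModel API:

import FoundationModels

// Minimal availability probe: reports why the on-device model can't be used,
// e.g. Apple Intelligence not enabled, model assets not downloaded, device not eligible.
let availability = SystemLanguageModel.default.availability
if case .unavailable(let reason) = availability {
    print("On-device model unavailable: \(reason)")
} else {
    print("On-device model is available")
}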
I'm attempting to run a basic Foundation Model prototype in Xcode 26, but I'm getting the error below, using the iPhone 16 simulator with iOS 26. Should these models be working yet? Do I need to be running macOS 26 for these to work? (I hope that's not it)
Error:
Passing along Model Catalog error: Error Domain=com.apple.UnifiedAssetFramework Code=5000 "There are no underlying assets (neither atomic instance nor asset roots) for consistency token for asset set com.apple.MobileAsset.UAF.FM.Overrides" UserInfo={NSLocalizedFailureReason=There are no underlying assets (neither atomic instance nor asset roots) for consistency token for asset set com.apple.MobileAsset.UAF.FM.Overrides} in response to ExecuteRequest
Playground to reproduce:
#Playground {
    let session = LanguageModelSession()
    do {
        let response = try await session.respond(to: "What's happening?")
    } catch {
        let error = error
    }
}
I've been trying to get some basic models to work on an M2 with tensorflow-metal 1.2 and Keras 2.15 and 2.18, and they all fail to work as expected.
I'm running models copied from common tutorials, such as Jason Brownlee's Machine Learning Mastery object classification tutorial using CIFAR-10. When running on the GPU, I can't get any reasonable results. Under Keras 2.15 the best validation accuracy ends up around 10-15%. Under Keras 2.18, validation goes off the rails around epoch 5, with wildly low accuracy and loss values reported as "nan".
Epoch 4/25
782/782: 19s 24ms/step - accuracy: 0.3450 - loss: 2.8925 - val_accuracy: 0.2992 - val_loss: 1.9869
Epoch 5/25
782/782: 19s 24ms/step - accuracy: 0.2553 - loss: nan - val_accuracy: 0.0000e+00 - val_loss: nan
Running the same code on the CPU with Keras 2.15 (using tf.config.experimental.set_visible_devices([], 'GPU')) yields a reasonable result, with validation accuracy around 75% as expected. Running the same code with Keras 2.15 on a Linux instance with just the CPU gives similar results.
The tutorial can be found here:
https://machinelearningmastery.com/object-recognition-convolutional-neural-networks-keras-deep-learning-library/
The only place I've deviated from the provided tutorial is using
sdg = tf.keras.optimizers.legacy.SGD(learning_rate=lrate, momentum=0.9, nesterov=False)
I did this on the advice of the warning:
WARNING:absl:At this time, the v2.11+ optimizer `tf.keras.optimizers.SGD` runs slowly on M1/M2 Macs, please use the legacy Keras optimizer instead, located at `tf.keras.optimizers.legacy.SGD`.
Is there something special that I need to do to make this work? I've followed the instructions here: https://developer.apple.com/metal/tensorflow-plugin/
I've purged the venv a few times and started from scratch, but all with similarly terrible results.
Here are my platform details:
Chip: Apple M2
Memory: 16 GB
macOS: Sequoia 15.2
Python venv: 3.11
Jupyter Lab Version: 4.3.3
TensorFlow versions: 2.15, 2.18
tensorflow-metal: 1.2.0
Thanks for any assistance or advice.
Hello,
Are there any plans to compile a python 3.13 version of tensorflow-metal?
I just got my new Mac mini, and the Python version automatically installed by brew is 3.13. If I were in a hurry I could get Python 3.12 installed and use the corresponding tensorflow-metal version, but I'm not in a hurry.
Many thanks,
Alan
My iOS app supports iOS 18, and I’m using an encrypted CoreML model secured with a key generated from Xcode.
Every few months (around every 3 months), the encrypted model fails to load for both me and my users. When I investigate, I find this error:
coreml Fetching decryption key from server failed: noEntryFound("No records found"). Make sure the encryption key was generated with correct team ID
To temporarily fix it, I delete the old key, generate a new one, re-encrypt the model, and submit an app update. This resolves the issue, but only for a while.
This is a terrible experience for users and obviously not a sustainable solution.
I want to understand:
Why is this happening?
Is there a known expiration or invalidation policy for CoreML encryption keys?
How can I prevent this issue permanently?
Any insights or official guidance would be really appreciated.
Problem
I have set SWIFT_UPCOMING_FEATURE_EXISTENTIAL_ANY to true under Build Settings > Swift Compiler - Upcoming Features to adopt the existential any proposal.
The following errors then appear in the MLModel class, but this is an auto-generated file, so I don't know how to deal with it.
Use of protocol 'MLFeatureProvider' as a type must be written 'any MLFeatureProvider'
Use of protocol 'Error' as a type must be written 'any Error'
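For reference, this is the shape of the change the diagnostics ask for, shown here on a hand-written wrapper rather than the generated file (which Xcode regenerates on every build); the wrapper function is purely illustrative.

import CoreML

// Illustrative only: with ExistentialAny enabled, protocol-typed values must be
// written with `any`, which the generated class currently doesn't do.
func makePrediction(with model: MLModel, input: any MLFeatureProvider) throws -> any MLFeatureProvider {
    try model.prediction(from: input)
}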
Environment
Xcode 16.0
Xcode 16.1 Beta 2
What I tried
Deleted the DerivedData cache and regenerated the MLModel class files
I also tried using DepthAnythingV2SmallF16P6.mlpackage to verify whether the problem is with my mlmodel
I tried the above after setting up Swift 6 in Xcode
I also used coremlc from the command line to generate the MLModel class files with Swift 6 specified
I am writing to inquire about content exclusion capabilities within Apple Intelligence, particularly regarding the use of configuration files such as .aiignore or .aiexclude—similar to what exists in other AI-assisted coding tools. These mechanisms are highly valuable in managing what content AI systems can access, especially in environments that involve sensitive code or proprietary frameworks.
I would appreciate it if anyone could clarify whether Apple Intelligence currently supports any exclusion configuration for AI-assisted features. If so, could you kindly provide documentation or guidance on how developers can implement these controls?
If not, is there any plan to include such a feature in future updates?
Is there an API that allows iOS app developers to leverage Apple Foundation Models together with a user's Apple Intelligence ChatGPT extension login account?
I'm trying to provide a real-time question feature that uses the logged-in ChatGPT extension account while leveraging Apple Intelligence's LLM. Is there an API that also covers the extension's login account?
Hi
For certain tasks, such as qualitative analysis or tagging, it is advisable to give the AI the option to respond with a joker / wild card answer when it has difficulty tagging or scoring. For instance, you can include this slot in the prompt as follows:
output must be "no data to score" when there isn't enough information to score.
In the absence of this kind of slot, the AI tends to provide an answer even when there is insufficient information.
Foundation Models are supposed to be given simple prompts. I wonder: is it recommended to keep this slot even though it adds verbosity and complexity? Is the best place for it the description of a guided attribute? Any other tips?
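For concreteness, here is a rough sketch of what I mean by putting the wild card in a guided attribute. It assumes the @Generable and @Guide macros and the respond(to:generating:) API from the Foundation Models framework; the type and wording are just examples.

import FoundationModels

// Sketch: the "no data to score" escape hatch lives in the schema description
// instead of extra prose in the prompt. All names here are illustrative.
@Generable
struct TaggingResult {
    @Guide(description: "The assigned tag, or the literal value \"no data to score\" when the input lacks enough information.")
    var tag: String
}

func tag(_ input: String, session: LanguageModelSession) async throws -> TaggingResult {
    try await session.respond(
        to: "Tag the following text: \(input)",
        generating: TaggingResult.self
    ).content
}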
Another use case is when you want the AI to stick to the information provided in the prompt and not pull information from its training data. What is the best approach for this?
Thanks in advance for any suggestion.
Hi, I'm looking for the best way to use MLX models, particularly those I've fine-tuned, within a React Native application on iOS devices. Is there a recommended integration path or specific API for bridging MLX's capabilities to React Native for deployment on iPhones and iPads?
Context
I trained a LoRA adapter for Apple’s on-device language model using the Foundation Models Adapter Training Toolkit v0.2.0 on macOS 26 beta 4. Although training completes successfully, loading the resulting .fmadapter package fails with:
Adapter is not compatible with the current system base model.
What I’ve Observed
Hard-coded Signature: In export/constants.py, the toolkit sets:
BASE_SIGNATURE = "9799725ff8e851184037110b422d891ad3b92ec1"
Metadata Injection: The export_fmadapter.py script writes this value into the adapter’s metadata:
self_dict[MetadataKeys.BASE_SIGNATURE] = BASE_SIGNATURE
Compatibility Check: At runtime, the Foundation Models framework compares the adapter’s baseModelSignature against the OS’s system model signature, and reports compatibleAdapterNotFound if they don’t match—without revealing the expected signature.
Questions
Signature Generation - What exactly does the toolkit hash to derive BASE_SIGNATURE? Is it a straight SHA-1 of base-model.pt, or is there an additional transformation?
Recomputing for Beta 4 - Is there a way to locally compute the correct signature for the macOS 26 beta 4 system model?
Toolkit Updates - Will Apple release Adapter Training Toolkit v0.3.0 with an updated BASE_SIGNATURE for beta 4, or is there an alternative workaround to generate it myself?
Any guidance on how the Foundation Models framework derives and verifies the base model signature—or how to regenerate it for beta 4—would be greatly appreciated.
I’m trying to group my EntityPropertyQuery selection into sections as well as making it searchable.
I know that the EntityStringQuery is used to perform the text search via entities(matching string: String). That works well enough and results in this modal:
Though, when I’m using a DynamicOptionsProvider to section my EntityPropertyQuery, it doesn’t allow for searching anymore and simply opens the sectioned list in a menu like so:
How can I combine both? I’ve seen it in other apps, but I can’t figure out why my code doesn’t let me section the results and keep them searchable. Any ideas?
My code (simplified)
struct MyIntent: AppIntent {
    @Parameter(title: "Meter", optionsProvider: MyOptionsProvider())
    var meter: MyIntentEntity?

    // …
}

struct MyOptionsProvider: DynamicOptionsProvider {
    func results() async throws -> ItemCollection<MyIntentEntity> {
        // Get All Data
        let allData = try IntentsDataHandler.shared.getEntities()

        // Create Arrays for Sections
        let fooEntities = allData.filter { $0.type == .foo }
        let barEntities = allData.filter { $0.type == .bar }

        return ItemCollection(sections: [
            ItemSection("Foo", items: fooEntities),
            ItemSection("Bar", items: barEntities)
        ])
    }
}

struct MeterIntentQuery: EntityStringQuery {
    // entities(for identifiers: [UUID]) and suggestedEntities() functions omitted

    func entities(matching string: String) async throws -> [MyIntentEntity] {
        // Fetch All Data
        let allData = try IntentsDataHandler.shared.getEntities()

        // Filter Data by String
        let matchingData = allData.filter { data in
            return data.title.localizedCaseInsensitiveContains(string)
        }

        return matchingData
    }
}
I used YOLOv5-11, and while it performs great at detecting balls, say 5-10 ft away, at 1920 resolution and even at 640, it is really taking a toll on my app's performance.
When I use Create ML, it outputs everything at 415x415, which is probably why it does not detect objects from far away.
What can I do to preserve some energy?
My model is trained with about 1K pictures, 200 each for test and validation, taken both close up and from far away.
Is there any way to ensure that iOS apps we develop using Foundation Models can only be purchased/downloaded from the App Store by folks with capable devices? I would've thought there would be a Required Capabilities value that the App Store would hook into, but I don't see it in the documentation here: https://developer.apple.com/documentation/bundleresources/information-property-list/uirequireddevicecapabilities
The closest seems to be iphone-performance-gaming-tier as that seems to target all M1 and above chips on iPhone & iPad. There is an ipad-minimum-performance-m1 that would more reasonably seem to ensure Foundation Models is likely available, but that doesn't help with iPhone. So far, it seems the only path would be to set Minimum Deployment to iOS 26 and add iphone-performance-gaming-tier as a required capability, but I'm a bit worried that capability might diverge in the future from what's Foundation Model / Apple Intelligence capable.
While I understand that the majority of apps will want to selectively add Apple Intelligence features so they remain usable by folks whose devices don't support it, the app experience I'm building doesn't make sense without Foundation Models being available, and I'd rather not have a large number of users download the app only to be told "Sorry, you're not Apple Intelligence capable."
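For reference, the combination described above would look roughly like this in the Info.plist, together with setting the minimum deployment target to iOS 26 in the build settings (sketch only; whether iphone-performance-gaming-tier stays aligned with Apple Intelligence eligibility is exactly the open question):

<!-- Info.plist sketch: restrict App Store availability to devices reporting the capability -->
<key>UIRequiredDeviceCapabilities</key>
<array>
    <string>iphone-performance-gaming-tier</string>
</array>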
I am watching a few WWDC sessions on the Foundation Models framework and its usage, and it looks pretty cool.
I was wondering if it is possible to perform RAG on the user's documents on the device, and eventually on iCloud...
Let's say I have a lot of Pages documents about me, and I want the Foundation model to access the information in those documents to answer questions about me.
How can this be done?
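To make the question concrete, here is the rough pattern I have in mind: a minimal sketch that embeds document chunks with NLEmbedding, retrieves the closest ones, and stuffs them into the prompt for LanguageModelSession. The chunking and all names are simplified assumptions, not an Apple-recommended pipeline, and extracting text from the Pages documents in the first place is left out.

import Foundation
import FoundationModels
import NaturalLanguage

struct DocumentChunk {
    let text: String
    let vector: [Double]
}

// Embed each chunk of text once, up front.
func buildIndex(from chunks: [String], embedding: NLEmbedding) -> [DocumentChunk] {
    chunks.compactMap { text in
        embedding.vector(for: text).map { DocumentChunk(text: text, vector: $0) }
    }
}

func cosineSimilarity(_ a: [Double], _ b: [Double]) -> Double {
    let dot = zip(a, b).reduce(0) { $0 + $1.0 * $1.1 }
    let magnitudeA = sqrt(a.map { $0 * $0 }.reduce(0, +))
    let magnitudeB = sqrt(b.map { $0 * $0 }.reduce(0, +))
    return dot / (magnitudeA * magnitudeB)
}

func answer(question: String, index: [DocumentChunk], embedding: NLEmbedding) async throws -> String {
    guard let queryVector = embedding.vector(for: question) else { return "" }

    // Retrieve the chunks most similar to the question.
    let topChunks = index
        .sorted { cosineSimilarity($0.vector, queryVector) > cosineSimilarity($1.vector, queryVector) }
        .prefix(3)
        .map(\.text)

    // Ground the model in the retrieved context.
    let prompt = """
    Answer using only the context below.

    Context:
    \(topChunks.joined(separator: "\n---\n"))

    Question: \(question)
    """

    let session = LanguageModelSession()
    return try await session.respond(to: prompt).content
}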
Thanks