Greetings,
I've been experimenting with the new Apple Intelligence chat. I want to use my custom LLM, and I got that working (I can chat back and forth with my server from the left panel), but I can't figure out how to change the editor contents the way ChatGPT does.
ChatGPT is able to change the current editor and, it seems, all files in the project. I tried to capture the call with Charles, with no success.
The OpenAI platform docs don't mention anything that could change the code shown.
Does anyone know how to achieve this? Is the Apple Intelligence documentation missing this feature, and will it be completed soon? Will this feature even be open to developers?
My iOS app supports iOS 18, and I’m using an encrypted CoreML model secured with a key generated from Xcode.
Every few months (around every 3 months), the encrypted model fails to load for both me and my users. When I investigate, I find this error:
coreml Fetching decryption key from server failed: noEntryFound("No records found"). Make sure the encryption key was generated with correct team ID
To temporarily fix it, I delete the old key, generate a new one, re-encrypt the model, and submit an app update. This resolves the issue, but only for a while.
This is a terrible experience for users and obviously not a sustainable solution.
I want to understand:
Why is this happening?
Is there a known expiration or invalidation policy for CoreML encryption keys?
How can I prevent this issue permanently?
Any insights or official guidance would be really appreciated.
We have suddenly encountered a serious issue: our local ML models are no longer being decrypted.
Everything was set up according to the guide at https://developer.apple.com/documentation/coreml/generating-a-model-encryption-key and had been working in production, but yesterday we started receiving the following error:
Error Domain=com.apple.CoreML Code=8 "Fetching decryption key from server failed: noEntryFound("No records found"). Make sure the encryption key was generated with correct team ID." UserInfo={NSLocalizedDescription=Fetching decryption key from server failed: noEntryFound("No records found"). Make sure the encryption key was generated with correct team ID.}
We haven’t changed anything in our code. This started spontaneously affecting users of the release version as of yesterday. It also no longer works locally — we receive the same error at the moment the autogenerated function is called:
class func load(configuration: MLModelConfiguration = MLModelConfiguration(), completionHandler handler: @escaping (Swift.Result<ZingPDModel, Error>) -> Void)
I assume that I can generate a new key through Xcode, integrate it in place of the old one, and it might start working again. However, this won’t affect existing users until they update the app.
Could the issue be on Apple’s infrastructure side?
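In the meantime, we're considering a defensive loading path so a key-server hiccup doesn't hard-fail the feature. This is only a sketch, and it assumes the async variant of the autogenerated loader is available; it obviously cannot recover if the key record is really gone on the server:
import CoreML

// Sketch: retry the autogenerated loader with a short backoff before surfacing the error.
func loadModelWithRetry(attempts: Int = 3) async throws -> ZingPDModel {
    for attempt in 0..<(attempts - 1) {
        if let model = try? await ZingPDModel.load(configuration: MLModelConfiguration()) {
            return model
        }
        // Back off 1 s, 2 s, 4 s, ... before the next try.
        try? await Task.sleep(nanoseconds: UInt64(1_000_000_000 << attempt))
    }
    // Final attempt: let the real error propagate so it can be logged or shown to the user.
    return try await ZingPDModel.load(configuration: MLModelConfiguration())
}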
Topic:
Machine Learning & AI
SubTopic:
Core ML
Hey,
When generating responses with structured output and the non-streaming API, it sometimes takes 3s and sometimes 10-20s. I'm firing the request repeatedly while testing the app.
Is this by design, or any place I can learn more about what contributes to such variation?
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Hey,
I receive GenerableContent as follows:
let response = try await session.respond(to: "", schema: generationSchema)
And it wraps GeneratedJSON which seems to be private.
What is the best way to get a string / raw value out of it? I noticed it could theoretically be accessed via transcriptEntries but it's not ideal.
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
While running the Apple Foundation Model in the iPhone simulator, I got this error:
IPC error: Underlying connection interrupted
What does this mean? Is it related to the foundation model?
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
I have been working on a small CV program, which uses a fine-tuned U2Netp model converted from PyTorch with coremltools 8.3.0.
It works well on my iPhone (iOS 18.5) and my MacBook (macOS 15.3.1), but it fails to load after I upgraded the MacBook to macOS 15.5.
I have attached console log when loading this model.
Unable to load MPSGraphExecutable from path /Users/yongzhang/Library/Caches/swiftmetal/com.apple.e5rt.e5bundlecache/24F74/E051B28C6957815C140A86134D673B5C015E79A1460E9B54B8764F659FDCE645/16FA8CF2CDE66C0C427F4B51BBA82C38ACC44A514CCA396FD7B281AAC087AB2F.bundle/H14C.bundle/main/main_mps_graph/main_mps_graph.mpsgraphpackage @ GetMPSGraphExecutable
E5RT: Unable to load MPSGraphExecutable from path /Users/yongzhang/Library/Caches/swiftmetal/com.apple.e5rt.e5bundlecache/24F74/E051B28C6957815C140A86134D673B5C015E79A1460E9B54B8764F659FDCE645/16FA8CF2CDE66C0C427F4B51BBA82C38ACC44A514CCA396FD7B281AAC087AB2F.bundle/H14C.bundle/main/main_mps_graph/main_mps_graph.mpsgraphpackage (13)
Unable to load MPSGraphExecutable from path /Users/yongzhang/Library/Caches/swiftmetal/com.apple.e5rt.e5bundlecache/24F74/E051B28C6957815C140A86134D673B5C015E79A1460E9B54B8764F659FDCE645/16FA8CF2CDE66C0C427F4B51BBA82C38ACC44A514CCA396FD7B281AAC087AB2F.bundle/H14C.bundle/main/main_mps_graph/main_mps_graph.mpsgraphpackage @ GetMPSGraphExecutable
E5RT: Unable to load MPSGraphExecutable from path /Users/yongzhang/Library/Caches/swiftmetal/com.apple.e5rt.e5bundlecache/24F74/E051B28C6957815C140A86134D673B5C015E79A1460E9B54B8764F659FDCE645/16FA8CF2CDE66C0C427F4B51BBA82C38ACC44A514CCA396FD7B281AAC087AB2F.bundle/H14C.bundle/main/main_mps_graph/main_mps_graph.mpsgraphpackage (13)
Failure translating MIL->EIR network: Espresso exception: "Network translation error": MIL->EIR translation error at /Users/yongzhang/CLionProjects/ImageSimilarity/models/compiled/u2netp.mlmodelc/model.mil:1557:12: Parameter binding for axes does not exist.
[Espresso::handle_ex_plan] exception=Espresso exception: "Network translation error": MIL->EIR translation error at /Users/yongzhang/CLionProjects/ImageSimilarity/models/compiled/u2netp.mlmodelc/model.mil:1557:12: Parameter binding for axes does not exist. status=-14
Failed to build the model execution plan using a model architecture file '/Users/yongzhang/CLionProjects/ImageSimilarity/models/compiled/u2netp.mlmodelc/model.mil' with error code: -14.
Topic:
Machine Learning & AI
SubTopic:
Create ML
Hi everyone,
I’m currently exploring the use of Foundation models on Apple platforms to build a chatbot-style assistant within an app. While the integration part is straightforward using the new FoundationModel APIs, I’m trying to figure out how to control the assistant’s responses more tightly — particularly:
Ensuring the assistant adheres to a specific tone, context, or domain (e.g. hospitality, healthcare, etc.)
Preventing hallucinations or unrelated outputs
Constraining responses based on app-specific rules, structured data, or recent interactions
I’ve experimented with prompt, systemMessage, and few-shot examples to steer outputs, but even with carefully generated prompts, the model occasionally produces incorrect or out-of-scope responses.
Additionally, when using multiple tools, I'm unsure how best to structure the setup so the model can select the correct pathway/tool and respond appropriately. Is there a recommended approach to guiding the model's decision-making when several tools or structured contexts are involved?
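For context, here's the rough shape I've been experimenting with, based on the WWDC tool-calling examples: pin the tone and domain in the session instructions, and expose app data through a Tool so the model looks things up instead of guessing. The tool name, the Arguments shape, and the ToolOutput return type below are my assumptions from the current beta and may not match every seed:
import FoundationModels

// Hypothetical tool; the name, description, and Arguments shape are illustrative.
struct RoomAvailabilityTool: Tool {
    let name = "checkRoomAvailability"
    let description = "Looks up availability for a room type and check-in date in the hotel's booking system."

    @Generable
    struct Arguments {
        @Guide(description: "Room type, e.g. single, double, suite")
        var roomType: String
        @Guide(description: "Check-in date in ISO 8601 format")
        var checkIn: String
    }

    func call(arguments: Arguments) async throws -> ToolOutput {
        // Replace with a real lookup against your own structured data.
        ToolOutput("2 \(arguments.roomType) rooms are available from \(arguments.checkIn).")
    }
}

// Instructions (rather than the user prompt) are where tone and scope seem to stick best.
let session = LanguageModelSession(
    tools: [RoomAvailabilityTool()],
    instructions: """
    You are a concierge assistant for the Grandview Hotel.
    Only answer questions about the hotel, its rooms, and its services.
    If a question is out of scope, politely decline and suggest contacting the front desk.
    Prefer calling tools over guessing; never invent availability or prices.
    """
)
With several tools in play, my impression so far is that the name and description strings are what the model uses to pick a pathway, so keeping them short, distinct, and action-oriented seems to matter at least as much as the prompt itself.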
Looking forward to hearing your thoughts or being pointed toward related WWDC sessions, Apple docs, or sample projects.
I’m developing an activity classifier that I’d like to feed with CoreMotion data in JSON format.
I am getting the error:
Unable to parse /Users/DewG/Downloads/Testing/Step1/Testing.json. It does not appear to be in JSON record format. A SequenceType of dictionaries is expected
I've verified that the format I am using is JSON via various JSON validators, so I am expecting I'm just holding it wrong. Is there an example of a JSON file with CoreMotion data that I can model after?
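In the meantime, my reading of the error message is that the top level must be a sequence (i.e. an array) of flat dictionaries, one record per sample, rather than a single object. Here's a minimal sketch of how I'd write such a file from Swift; the field names are placeholders and presumably need to match whatever features the classifier is trained on:
import Foundation

// Hypothetical record shape: one flat dictionary per CoreMotion sample.
struct MotionSample: Codable {
    var timestamp: Double
    var rotationRateX: Double
    var rotationRateY: Double
    var rotationRateZ: Double
    var userAccelerationX: Double
    var userAccelerationY: Double
    var userAccelerationZ: Double
    var label: String
}

func writeTrainingJSON(samples: [MotionSample], to url: URL) throws {
    let encoder = JSONEncoder()
    encoder.outputFormatting = [.prettyPrinted]
    // Encoding the array produces the expected top-level form: [ { ... }, { ... }, ... ]
    try encoder.encode(samples).write(to: url)
}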
Is there anywhere we can reference error codes? I'm getting this error: "The operation couldn’t be completed. (FoundationModels.LanguageModelSession.GenerationError error 4.)" and I have no idea what it means or what to try to fix it.
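For now I'm catching the typed error rather than reading the bridged NSError code, since the Swift case name is far more descriptive than "error 4". A minimal sketch, assuming this is called from an async context and that the response's content carries the generated string:
import FoundationModels

func send(_ prompt: String, to session: LanguageModelSession) async {
    do {
        let response = try await session.respond(to: prompt)
        print(response.content)
    } catch let error as LanguageModelSession.GenerationError {
        // String(describing:) prints the Swift case name, which is much more
        // readable than the numeric code in the bridged NSError description.
        print("GenerationError:", String(describing: error))
    } catch {
        print("Other error:", error)
    }
}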
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Tags:
Machine Learning
Create ML
Apple Intelligence
I am writing a custom package wrapping Foundation Models which provides a chain-of-thought with intermittent self-evaluation among other things. At first I was designing this package with the command line in mind, but after seeing how well it augments the models and makes them more intelligent I wanted to try and build a SwiftUI wrapper around the package.
When I started I was using synchronous generation rather than streaming, but to give the best user experience (as I've seen in the WWDC sessions) it is necessary to provide constant feedback to the user that something is happening.
I have created a super simplified example of my setup so it's easier to understand.
First, there is the Reasoning conversation item, which can be converted to an XML representation which is then fed back into the model (I've found XML works best for structured input)
public typealias ConversationContext = XMLDocument

extension ConversationContext {
    public func toPlainText() -> String {
        return xmlString(options: [.nodePrettyPrint])
    }
}
/// Represents a reasoning item in a conversation, which includes a title and reasoning content.
/// Reasoning items are used to provide detailed explanations or justifications for certain decisions or responses within a conversation.
@Generable(description: "A reasoning item in a conversation, containing content and a title.")
struct ConversationReasoningItem: ConversationItem {
    @Guide(description: "The content of the reasoning item, which is your thinking process or explanation")
    public var reasoningContent: String

    @Guide(description: "A short summary of the reasoning content, digestible in an interface.")
    public var title: String

    @Guide(description: "Indicates whether reasoning is complete")
    public var done: Bool
}

extension ConversationReasoningItem: ConversationContextProvider {
    public func toContext() -> ConversationContext {
        // <ReasoningItem title="${title}">
        //     ${reasoningContent}
        // </ReasoningItem>
        let root = XMLElement(name: "ReasoningItem")
        root.addAttribute(XMLNode.attribute(withName: "title", stringValue: title) as! XMLNode)
        root.stringValue = reasoningContent
        return ConversationContext(rootElement: root)
    }
}
Then there is the generator, which creates a reasoning item from a user query and previously generated items:
struct ReasoningItemGenerator {
    var instructions: String {
        """
        <omitted for brevity>
        """
    }

    func generate(from input: (String, [ConversationReasoningItem])) async throws
        -> sending LanguageModelSession.ResponseStream<ConversationReasoningItem> {
        let session = LanguageModelSession(instructions: instructions)

        // Build the context for the reasoning item out of the user's query and the previous reasoning items.
        let userQuery = "User's query: \(input.0)"
        let reasoningItemsText = input.1.map { $0.toContext().toPlainText() }.joined(separator: "\n")
        let context = userQuery + "\n" + reasoningItemsText

        let reasoningItemResponse = try await session.streamResponse(
            to: context, generating: ConversationReasoningItem.self)
        return reasoningItemResponse
    }
}
I'm not sure if returning LanguageModelSession.ResponseStream<ConversationReasoningItem> is the right move; I'm just trying to imitate what session.streamResponse returns.
Then there is the orchestrator, which I can't figure out. It receives the streamed ConversationReasoningItems from the generator and is responsible for streaming those to SwiftUI later, and also for evaluating each reasoning item once it is complete to see if it needs to be regenerated (to keep the model on track). I want users of the orchestrator to receive partially generated reasoning items while they are being generated. Later, when an item finishes, if the evaluation passes it is kept; if it fails, the reasoning item should be removed from the stream before a new one is generated. So in-flight reasoning items should be output aggressively.
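Here is the rough shape I have in mind for the orchestrator, in case it helps frame the question. The only framework assumption is that the ResponseStream can be iterated with for try await and yields ConversationReasoningItem.PartiallyGenerated values (as in the WWDC streaming examples); OrchestratorEvent and evaluate(_:) are placeholder names of mine:
enum OrchestratorEvent {
    case partial(ConversationReasoningItem.PartiallyGenerated)  // in-flight update for SwiftUI
    case accepted(ConversationReasoningItem.PartiallyGenerated) // finished and passed evaluation
    case rejected                                               // UI should drop the in-flight item
}

struct ReasoningOrchestrator {
    let generator = ReasoningItemGenerator()

    /// Stand-in for the self-evaluation step; the real version would call back into the model.
    func evaluate(_ item: ConversationReasoningItem.PartiallyGenerated) async -> Bool {
        item.done == true
    }

    func run(query: String,
             history: [ConversationReasoningItem]) -> AsyncThrowingStream<OrchestratorEvent, Error> {
        AsyncThrowingStream { continuation in
            let task = Task {
                do {
                    let stream = try await generator.generate(from: (query, history))
                    var latest: ConversationReasoningItem.PartiallyGenerated?
                    for try await partial in stream {
                        latest = partial
                        continuation.yield(.partial(partial)) // aggressive in-flight output
                    }
                    if let finished = latest, await evaluate(finished) {
                        continuation.yield(.accepted(finished))
                    } else {
                        continuation.yield(.rejected) // caller regenerates with a fresh generator call
                    }
                    continuation.finish()
                } catch {
                    continuation.finish(throwing: error)
                }
            }
            continuation.onTermination = { _ in task.cancel() }
        }
    }
}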
I'm really having trouble figuring this out, so if someone with more knowledge about asynchronous Swift, or, even better, someone who has worked on the Foundation Models framework, could point me in the right direction, that would be awesome!
Hi all, I'm working on an app that utilizes the FoundationModels found in iOS 26. I updated my phone to iOS 26 beta 3 and am now receiving the following error when trying to run code that worked in beta 2:
AI Error: The operation couldn't be completed. (FoundationModels.LanguageModelSession.GenerationError error 2.)
I admit I'm a bit of a new developer, but any idea if this is an issue with beta 3 or work that I'll need to do to adapt my code to some changes in the AI API?
Thank you!
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
I'm testing the Foundation Models framework with a health-focused recipe generation app. The on-device approach is appealing, but performance is rough: it takes 20+ seconds just to get a recipe name and description, while the same content from the Claude API takes about 4 seconds.
I know it's a beta and on-device has different tradeoffs, but this is approaching unusable territory for a real-time user experience. Streaming helps psychologically but doesn't mask the underlying latency. The privacy/cost benefits are compelling, but not if users abandon the feature before it completes.
Anyone else seeing similar performance? Is this expected for beta, or are there optimization techniques I'm missing?
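For context, the only mitigations I've tried so far, pulled from the WWDC sessions, are prewarming the session and splitting generation so the small fields stream first. A sketch below; RecipeHeader is a stand-in for my real type, prewarm() reflects my understanding of the API, and I'm assuming the stream yields partially generated values, so treat all of that as assumptions:
import FoundationModels

@Generable
struct RecipeHeader {
    @Guide(description: "Short recipe name")
    var name: String
    @Guide(description: "One-sentence description")
    var summary: String
}

struct RecipeGenerator {
    let session = LanguageModelSession(instructions: "You write healthy recipes.")

    func warmUp() {
        session.prewarm() // assumption: loads model resources ahead of the first request
    }

    func streamHeader(for request: String) async throws {
        // Ask for just the small fields first; request the full recipe in a follow-up call.
        let stream = session.streamResponse(to: request, generating: RecipeHeader.self)
        for try await partial in stream {
            print(partial.name ?? "…") // update the UI as soon as the name lands
        }
    }
}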
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
When the context window size is exceeded, this error is not thrown (a different error shows up instead), so I can't handle it by starting a new session:
LanguageModelSession.GenerationError.exceededContextWindowSize
Or am I doing something wrong?
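For reference, this is the recovery pattern I expected to work; it assumes respond(to:) returns a response whose content is the generated String and that the overflow surfaces as the case above. When a different error shows up instead, printing String(describing: error) at least reveals which case it actually is.
import FoundationModels

final class ChatController {
    private var session = LanguageModelSession(instructions: "You are a helpful assistant.")

    func send(_ prompt: String) async throws -> String {
        do {
            return try await session.respond(to: prompt).content
        } catch LanguageModelSession.GenerationError.exceededContextWindowSize {
            // Start a fresh session; optionally seed it with a condensed version
            // of the previous transcript before retrying.
            session = LanguageModelSession(instructions: "You are a helpful assistant.")
            return try await session.respond(to: prompt).content
        }
    }
}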
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
I've spent way too long today trying to convert an Object Detection TensorFlow2 model to a CoreML object classifier (with bounding boxes, labels and probability score)
The 'SSD MobileNet v2 320x320' is here: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md
And I've been following all sorts of posts and ChatGPT
https://apple.github.io/coremltools/docs-guides/source/tensorflow-2.html#convert-a-tensorflow-concrete-function
https://developer.apple.com/videos/play/wwdc2020/10153/?time=402
To convert it.
I keep hitting the same errors though, mostly around:
NotImplementedError: Expected model format: [SavedModel | concrete_function | tf.keras.Model | .h5 | GraphDef], got <ConcreteFunction signature_wrapper(input_tensor) at 0x366B87790>
I've had varying degrees of success, including missing output labels/predictions.
But I simply want to create the CoreML model with all the right inputs and outputs (including correct names), as detailed in the docs here: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_on_mobile_tf2.md
It goes without saying that I don't have much (any) experience with this stuff, including Python, so the whole thing has been a bit of a headache.
If anyone is able to help that would be great.
FWIW I'm not attached to any one specific model, but what I do need at minimum is a CoreML model that can detect objects (has to at least include lights and lamps) within a live video image, detecting where in the image the object is.
The simplest script I have looks like this:
import coremltools as ct
import tensorflow as tf

model = tf.saved_model.load("~/tf_models/ssd_mobilenet_v2_320x320_coco17_tpu-8/saved_model")
concrete_func = model.signatures[tf.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY]

mlmodel = ct.convert(
    concrete_func,
    source="tensorflow",
    inputs=[ct.TensorType(shape=(1, 320, 320, 3))]
)
mlmodel.save("YourModel.mlpackage", save_format="mlpackage")
Hi all,
I’m encountering an issue when trying to run Apple Foundation Models in a blank project targeting iOS 26.
Below are the details:
Xcode: Latest version with iOS 26 SDK
macOS: macOS 26 Tahoe (installed on main disk)
Mac: 16” MacBook Pro with M2 Pro chip
Apple Intelligence: Available and functional on this machine
Problem:
I created a new blank iOS project, set the deployment target to iOS 26, and ran the following minimal code using Foundation Models. However, I get no response at all in the output - not even an error. The app runs, but the model does not produce any output.
#Playground {
    let session = LanguageModelSession()
    let response = try await session.respond(to: "Tell me a story")
}
Then, I tried to catch an error with this code:
#Playground {
    let session = LanguageModelSession()
    do {
        let response = try await session.respond(to: "Tell me a story")
        print(response)
    } catch {
        print("Failed to get response:", error)
    }
    print("This line, never gets executed")
}
And got these results:
I’ve done further testing and discovered something important:
I tried running the Code Along sample project, and there the #Playground macro worked without issues. The only significant difference I noticed was the Canvas run destination:
In my original project, I was using iPhone 16 Pro (iOS 26) as the run target in Canvas. Apple Intelligence was enabled on the simulator, but no response was returned when executing the prompt.
In the sample project, the Canvas was running on My Mac.
I attempted to match that setup, but at first, my destination was My Mac (Designed for iPad), which still didn’t work. The macro finally executed properly once I switched to My Mac (AppKit).
So the question is ... it seems that for now, Foundation Models and the #Playground macro only run correctly when the canvas or destination is set to “My Mac (AppKit)”?
I'm new to Swift and was hoping the Playground would support loading adaptors. When I tried, I got a permissions error; I'm thinking it's because the adaptor isn't in the project and Playgrounds don't like going outside the project?
A tutorial and some sample code would be helpful.
Also, some benchmarks on how long it's expected to take would be nice. Selfishly, I'm on an M2 Mac mini.
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Download the Foundation Models Adaptor Training Toolkit
Hi, after I clicked the download button, I was redirected to this page, https://developer.apple.com, and the toolkit did not download.
Hello,
We have been encountering a persistent crash in our application, which is deployed exclusively on iPad devices. The crash occurs in the following code block:
let requestHandler = ImageRequestHandler(paddedImage)
var request = CoreMLRequest(model: model)
request.cropAndScaleAction = .scaleToFit
let results = try await requestHandler.perform(request)
The client using this code is wrapped inside an actor, following Swift concurrency principles.
The issue has been consistently reproduced across multiple iPadOS versions, including:
iPadOS 18.4.0
iPadOS 18.4.1
iPadOS 18.5.0
This is the crash log -
Crashed: com.apple.VN.detectorSyncTasksQueue.VNCoreMLTransformer
0 libobjc.A.dylib 0x7b98 objc_retain + 16
1 libobjc.A.dylib 0x7b98 objc_retain_x0 + 16
2 libobjc.A.dylib 0xbf18 objc_getProperty + 100
3 Vision 0x326300 -[VNCoreMLModel predictWithCVPixelBuffer:options:error:] + 148
4 Vision 0x3273b0 -[VNCoreMLTransformer processRegionOfInterest:croppedPixelBuffer:options:qosClass:warningRecorder:error:progressHandler:] + 748
5 Vision 0x2ccdcc __119-[VNDetector internalProcessUsingQualityOfServiceClass:options:regionOfInterest:warningRecorder:error:progressHandler:]_block_invoke_5 + 132
6 Vision 0x14600 VNExecuteBlock + 80
7 Vision 0x14580 __76+[VNDetector runSuccessReportingBlockSynchronously:detector:qosClass:error:]_block_invoke + 56
8 libdispatch.dylib 0x6c98 _dispatch_block_sync_invoke + 240
9 libdispatch.dylib 0x1b584 _dispatch_client_callout + 16
10 libdispatch.dylib 0x11728 _dispatch_lane_barrier_sync_invoke_and_complete + 56
11 libdispatch.dylib 0x7fac _dispatch_sync_block_with_privdata + 452
12 Vision 0x14110 -[VNControlledCapacityTasksQueue dispatchSyncByPreservingQueueCapacity:] + 60
13 Vision 0x13ffc +[VNDetector runSuccessReportingBlockSynchronously:detector:qosClass:error:] + 324
14 Vision 0x2ccc80 __119-[VNDetector internalProcessUsingQualityOfServiceClass:options:regionOfInterest:warningRecorder:error:progressHandler:]_block_invoke_4 + 336
15 Vision 0x14600 VNExecuteBlock + 80
16 Vision 0x2cc98c __119-[VNDetector internalProcessUsingQualityOfServiceClass:options:regionOfInterest:warningRecorder:error:progressHandler:]_block_invoke_3 + 256
17 libdispatch.dylib 0x1b584 _dispatch_client_callout + 16
18 libdispatch.dylib 0x6ab0 _dispatch_block_invoke_direct + 284
19 Vision 0x2cc454 -[VNDetector internalProcessUsingQualityOfServiceClass:options:regionOfInterest:warningRecorder:error:progressHandler:] + 632
20 Vision 0x2cd14c __111-[VNDetector processUsingQualityOfServiceClass:options:regionOfInterest:warningRecorder:error:progressHandler:]_block_invoke + 124
21 Vision 0x14600 VNExecuteBlock + 80
22 Vision 0x2ccfbc -[VNDetector processUsingQualityOfServiceClass:options:regionOfInterest:warningRecorder:error:progressHandler:] + 340
23 Vision 0x125410 __swift_memcpy112_8 + 4852
24 libswift_Concurrency.dylib 0x5c134 swift::runJobInEstablishedExecutorContext(swift::Job*) + 292
25 libswift_Concurrency.dylib 0x5d5c8 swift_job_runImpl(swift::Job*, swift::SerialExecutorRef) + 156
26 libdispatch.dylib 0x13db0 _dispatch_root_queue_drain + 364
27 libdispatch.dylib 0x1454c _dispatch_worker_thread2 + 156
28 libsystem_pthread.dylib 0x9d0 _pthread_wqthread + 232
29 libsystem_pthread.dylib 0xaac start_wqthread + 8
We found an issue similar to ours: https://developer.apple.com/forums/thread/770771.
But the crash logs are quite different; we believe this warrants further investigation to better understand the root cause and potential mitigation strategies.
Please let us know if any additional information would help diagnose this issue.
I'm using a custom Create ML model to classify the movement of a user's hand in a game.
The classifier has 3 different spell movements, but my code constantly predicts all of them at an equal 1/3 probability regardless of movement, which leads me to believe my code isn't correct (as opposed to the model), since Create ML at least gives me a heavily weighted prediction.
My code is below.
After adding debug prints everywhere, all the data looks good to me and closely matches my test CSV data.
So I'm thinking my issue must be in the setup of my model code?
/// Feeds samples into the model and keeps a sliding window of the last N frames.
final class WandGestureStreamer {
    static let shared = WandGestureStreamer()

    private let model: SpellActivityClassifier
    private var samples: [Transform] = []
    private let windowSize = 100 // number of frames the model expects

    /// RNN hidden state passed between inferences
    private var stateIn: MLMultiArray

    /// Last transform dropped from the window for continuity
    private var lastDropped: Transform?

    private init() {
        let config = MLModelConfiguration()
        self.model = try! SpellActivityClassifier(configuration: config)

        // Initialize stateIn to the model’s required shape
        let constraint = self.model.model.modelDescription
            .inputDescriptionsByName["stateIn"]!
            .multiArrayConstraint!
        self.stateIn = try! MLMultiArray(shape: constraint.shape, dataType: .double)
    }

    /// Call once per frame with the latest wand position (or any feature vector).
    func appendSample(_ sample: Transform) {
        samples.append(sample)
        // drop oldest frame if over capacity, retaining it for delta at window start
        if samples.count > windowSize {
            lastDropped = samples.removeFirst()
        }
    }

    func classifyIfReady(threshold: Double = 0.6) -> (label: String, confidence: Double)? {
        guard samples.count == windowSize else { return nil }
        do {
            let input = try makeInput(initialState: stateIn)
            let output = try model.prediction(input: input)

            // Save state for continuity
            stateIn = output.stateOut

            let best = output.label
            let conf = output.labelProbability[best] ?? 0

            // If you’ve recognized a gesture with high confidence:
            if conf > threshold {
                return (best, conf)
            } else {
                return nil
            }
        } catch {
            print("Error", error.localizedDescription, error)
            return nil
        }
    }

    /// Constructs a SpellActivityClassifierInput from recorded wand transforms.
    func makeInput(initialState: MLMultiArray) throws -> SpellActivityClassifierInput {
        let count = samples.count as NSNumber
        let shape = [count]
        let timeArr = try MLMultiArray(shape: shape, dataType: .double)
        let dxArr = try MLMultiArray(shape: shape, dataType: .double)
        let dyArr = try MLMultiArray(shape: shape, dataType: .double)
        let dzArr = try MLMultiArray(shape: shape, dataType: .double)
        let rwArr = try MLMultiArray(shape: shape, dataType: .double)
        let rxArr = try MLMultiArray(shape: shape, dataType: .double)
        let ryArr = try MLMultiArray(shape: shape, dataType: .double)
        let rzArr = try MLMultiArray(shape: shape, dataType: .double)

        for (i, sample) in samples.enumerated() {
            let previousSample = i > 0 ? samples[i - 1] : lastDropped
            let model = WandMovementRecording.DataModel(transform: sample, previous: previousSample)
            // print("model", model)

            timeArr[i] = NSNumber(value: model.timestamp)
            dxArr[i] = NSNumber(value: model.dx)
            dyArr[i] = NSNumber(value: model.dy)
            dzArr[i] = NSNumber(value: model.dz)

            let rot = model.rotation
            rwArr[i] = NSNumber(value: rot.w)
            rxArr[i] = NSNumber(value: rot.x)
            ryArr[i] = NSNumber(value: rot.y)
            rzArr[i] = NSNumber(value: rot.z)
        }

        return SpellActivityClassifierInput(
            dx: dxArr, dy: dyArr, dz: dzArr,
            rotation_w: rwArr, rotation_x: rxArr, rotation_y: ryArr, rotation_z: rzArr,
            timestamp: timeArr,
            stateIn: initialState
        )
    }
}