Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and explore the possibilities for your app here.

Posts under Machine Learning & AI topic

Post · Replies · Boosts · Views · Activity

Accessing Apple Intelligence APIs: Custom Prompt Support and Inference Capabilities
Hello Apple Developer Community, I'm exploring the integration of Apple Intelligence features into my mobile application and have a couple of questions regarding the current and upcoming API capabilities:

Custom Prompt Support: Is there a way to pass custom prompts to Apple Intelligence to generate specific inferences? For instance, can we provide a unique prompt to the Writing Tools or Image Playground APIs to obtain tailored outputs?

Direct Inference Capabilities: Beyond the predefined functionalities like text rewriting or image generation, does Apple Intelligence offer APIs that allow for more generalized inference tasks based on custom inputs?

I understand that Apple has provided APIs such as Writing Tools, Image Playground, and Genmoji. However, I'm interested in understanding the extent of customization and flexibility these APIs offer, especially concerning custom prompts and generalized inference. Additionally, are there any plans or timelines for expanding these capabilities, perhaps with the introduction of new SDKs or frameworks that allow deeper integration and customization?

Any insights, documentation links, or experiences shared would be greatly appreciated. Thank you in advance for your assistance!
3 replies · 0 boosts · 341 views · Jun ’25
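
A partial answer to the custom-prompt question above, as a hedged sketch: on OS versions that ship the FoundationModels framework, arbitrary prompts can be sent to the on-device model through LanguageModelSession. This is separate from the Writing Tools and Image Playground APIs, and the helper name below is illustrative, not an Apple API.

import FoundationModels

// Minimal sketch: pass a custom prompt to the on-device model.
// Requires a device/OS where Apple Intelligence and FoundationModels are available.
func answer(prompt: String) async throws -> String {
    let session = LanguageModelSession()
    let response = try await session.respond(to: prompt)
    return response.content
}
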
Help with dates in Foundation Model custom Tool
I have an app that stores lots of data that is of interest to the user. Analogies would be the Photos app or the Health app. I'm trying to use the Foundation Models framework to allow users to surface information they find interesting using natural language, for example, "Tell me about the widgets from yesterday" or "Tell me about the widgets for the last 3 days". Specifically, I'm trying to get a date range passed down to the Tool so that I can pull the relevant widgets from the database in the call function. What is the right way to set up the Arguments to get at a date range? (See the sketch after this post.)
3 replies · 0 boosts · 722 views · Dec ’25
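
One possible shape for the Arguments in the question above, sketched under the assumption that ISO 8601 date strings guided via @Guide are easier for the model to fill than Date values; the tool name, guide text, and return string here are illustrative, not from the original post.

import Foundation
import FoundationModels

// Sketch of a Tool whose arguments describe a date range.
// Dates are requested as ISO 8601 strings and parsed inside call(arguments:).
struct WidgetLookupTool: Tool {
    let name = "lookupWidgets"
    let description = "Finds the user's widgets recorded within a date range."

    @Generable
    struct Arguments {
        @Guide(description: "Start of the range as an ISO 8601 date, e.g. 2025-06-01")
        var startDate: String
        @Guide(description: "End of the range as an ISO 8601 date, e.g. 2025-06-03")
        var endDate: String
    }

    func call(arguments: Arguments) async throws -> String {
        let formatter = ISO8601DateFormatter()
        formatter.formatOptions = [.withFullDate]
        guard let start = formatter.date(from: arguments.startDate),
              let end = formatter.date(from: arguments.endDate) else {
            return "Could not understand the requested date range."
        }
        // Query your own store here; this placeholder just echoes the range.
        return "Found widgets between \(start) and \(end)."
    }
}

In practice the session's instructions may also need to state today's date so the model can resolve relative phrases like "yesterday" or "the last 3 days" into concrete dates.
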
Is Image Playground on-device + Private Cloud?
Apple's Image Playground primarily performs image generation on-device, but can use secure Private Cloud Compute for more complex requests that require larger models.

Private Cloud Compute (PCC): For more complex tasks that require greater computational power than the device can provide, Image Playground leverages Apple's Private Cloud Compute. This system extends the privacy and security of the device to the cloud:

Secure Environment: PCC runs on Apple silicon servers and uses a secure enclave to protect data, ensuring requests are processed in a verified, secure environment.

No Data Storage: Data is never stored or made accessible to Apple when using PCC; it is used only to fulfill the specific request.

Independent Verification: Independent experts are able to inspect the code running on these servers to verify Apple's privacy promises.
3 replies · 0 boosts · 840 views · Dec ’25
VideoToolbox super resolution
Hello, I'm using the VideoToolbox super-resolution API on macOS 26: https://developer.apple.com/documentation/videotoolbox/vtsuperresolutionscalerconfiguration/downloadconfigurationmodel(completionhandler:)?language=objc. When using Swift it works, but when using Objective-C I get an error while downloading the model with downloadConfigurationModelWithCompletionHandler:

[Auto] MA-auto{_failedLockContent} | failure reported by server | error:[com.apple.MobileAssetError.AutoAsset:MissingReference(6111)]
[Auto] MA-auto{_failedLockContent} | failure reported by server | error:[com.apple.MobileAssetError.AutoAsset:UnderlyingError(6107)_1_com.apple.MobileAssetError.Download:47]
Download completion handler called with error: The operation couldn't be completed. (VTFrameProcessorErrorDomain error -19743.)
3 replies · 1 boost · 774 views · Nov ’25
Cannot find type ToolOutput in scope
My sample app has been working with the following code:

func call(arguments: Arguments) async throws -> ToolOutput {
    var temp: Int
    switch arguments.city {
    case .singapore:
        temp = Int.random(in: 30..<40)
    case .china:
        temp = Int.random(in: 10..<30)
    }
    let content = GeneratedContent(temp)
    let output = ToolOutput(content)
    return output
}

However, in 26 beta 5 ToolOutput is no longer available. Please advise what has changed. (See the sketch after this post.)
3 replies · 0 boosts · 246 views · Aug ’25
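
For comparison, a hedged sketch of what the call requirement can look like without ToolOutput, assuming the later seeds let Tool.call return a PromptRepresentable value such as GeneratedContent or String directly; confirm the exact signature against the current FoundationModels headers.

// Sketch only: return GeneratedContent (or a plain String) instead of wrapping it in ToolOutput.
func call(arguments: Arguments) async throws -> GeneratedContent {
    let temp: Int
    switch arguments.city {
    case .singapore:
        temp = Int.random(in: 30..<40)
    case .china:
        temp = Int.random(in: 10..<30)
    }
    return GeneratedContent(temp)
}
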
Getting FoundationModels running in the Simulator
I have a Mac (M4 MacBook Pro) running the Tahoe 26.0 beta and the Xcode beta. I can run code that uses the LLM in a #Preview { }, but when I try to run the same code in the simulator I get the 'device not ready' error and I see the following in the Settings app. Is there anything I can do to get the simulator past this point and allow me to test Apple's LLM on it?
3 replies · 0 boosts · 367 views · Jul ’25
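
When debugging "device not ready" states like the one above, it can help to log why the model reports itself unavailable. A minimal sketch, assuming the SystemLanguageModel availability API in FoundationModels; exact case names may vary between seeds, and the helper name is illustrative.

import FoundationModels

// Sketch: print whether the on-device model is usable in the current environment.
func logModelAvailability() {
    let availability = SystemLanguageModel.default.availability
    if case .unavailable(let reason) = availability {
        // Typical reasons: device not eligible, Apple Intelligence not enabled,
        // or model assets not yet downloaded.
        print("Model unavailable: \(reason)")
    } else {
        print("Model is available.")
    }
}
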
Why does the answer about "apple" trigger guardrailViolation?
I have been using "apple" as a test prompt for Foundation Models. I thought this was local, but today the answer changed: halfway through the explanation, a guardrailViolation error was suddenly raised! And since yesterday, all references to "Apple II" or "Apple III" now refer me to consult apple.com! Does Foundation Models connect to the Internet for answers? Using beta 3.
3 replies · 0 boosts · 172 views · Jul ’25
Foundation Models flags 'Six Flags Great America' as unsafe
I'm working on a to-do list app that uses SpeechTranscriber and the Foundation Models framework to transcribe a user's voice into text and create to-do items based on it. After about 30 minutes looking at my code, I couldn't figure out why I was failing to generate a to-do for "I need to go to Six Flags Great America tomorrow at 3pm." It turns out I was consistently triggering the Foundation Models safety filter for unsafe content ("May contain unsafe content"). Lesson learned: comprehensively log Foundation Models error states so you can quickly identify when safety filters are unexpectedly triggered (see the sketch after this post).
3 replies · 1 boost · 508 views · Jul ’25
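
A minimal sketch of the logging idea above, assuming LanguageModelSession.GenerationError exposes a guardrailViolation case; the helper name and log messages are illustrative.

import FoundationModels

// Sketch: surface guardrail/safety failures instead of swallowing them.
func generateToDo(from transcript: String, in session: LanguageModelSession) async -> String? {
    do {
        let response = try await session.respond(to: "Create a short to-do item for: \(transcript)")
        return response.content
    } catch let error as LanguageModelSession.GenerationError {
        if case .guardrailViolation = error {
            print("Safety filter triggered for input: \(transcript)")
        } else {
            print("Generation failed: \(error)")
        }
        return nil
    } catch {
        print("Unexpected error: \(error)")
        return nil
    }
}
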
RecognizeDocumentsRequest for receipts
Hi, I'm trying to use the new RecognizeDocumentsRequest from the Vision framework to read a receipt. It looks very promising, being able to read paragraphs and lines and detect data. Unfortunately, so far it seems to read every line on the receipt as a paragraph, and when there is more space on one line it creates two paragraphs. Is there perhaps an Apple engineer who knows whether this is expected behaviour, or should I file a Feedback for this?

Code setup:

let request = RecognizeDocumentsRequest()
let observations = try await request.perform(on: image)
guard let document = observations.first?.document else { return }

for paragraph in document.paragraphs {
    print(paragraph.transcript)
    for data in paragraph.detectedData {
        switch data.match.details {
        case .phoneNumber(let data):
            print("Phone: \(data)")
        case .postalAddress(let data):
            print("Postal: \(data)")
        case .calendarEvent(let data):
            print("Calendar: \(data)")
        case .moneyAmount(let data):
            print("Money: \(data)")
        case .measurement(let data):
            print("Measurement: \(data)")
        default:
            continue
        }
    }
}

See the attached image as an example of a receipt I'd like to parse. The top 3 lines are the name, street, and postal code + city. These are all separate paragraphs. Checking detectedData does see the street (2nd line) as a PostalAddress, but not the complete address. Might that be a locale thing, since it's a Dutch address? And lower on the receipt it sees the block with "Pomp 1 95 Ongelood" and the lines below it also as separate paragraphs, first picking up the left side and after that the right side. So it comes out something like this:

* Pomp 1 Volume Prijs € TOTAAL * BTW Netto 21.00 % 95 Ongelood 41,90 l 1.949/ 1 81.66 € 14.17 67.49
3 replies · 1 boost · 510 views · Nov ’25
BNNS random number generator for Double value types
I generate an array of random floats using the code shown below. However, I would like to do this with Double instead of Float. Are there any BNNS random number generators for double values, something like BNNSRandomFillUniformDouble? If not, is there a way I can convert the BNNSNDArrayDescriptor from float to double?

import Accelerate

let n = 100_000_000

let result = Array<Float>(unsafeUninitializedCapacity: n) { buffer, initCount in
    var descriptor = BNNSNDArrayDescriptor(data: buffer, shape: .vector(n))!
    let randomGenerator = BNNSCreateRandomGenerator(BNNSRandomGeneratorMethodAES_CTR, nil)
    BNNSRandomFillUniformFloat(randomGenerator, &descriptor, 0, 1)
    initCount = n
}
3 replies · 0 boosts · 133 views · Jun ’25
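
This does not settle whether a double-precision BNNS fill routine exists, but as a workaround sketch the Float result can be widened to Double with vDSP's single-to-double conversion (vDSP_vspdp); the helper below is illustrative.

import Accelerate

// Sketch: widen a [Float] produced by BNNSRandomFillUniformFloat to [Double].
func widenToDouble(_ floats: [Float]) -> [Double] {
    return [Double](unsafeUninitializedCapacity: floats.count) { buffer, initCount in
        floats.withUnsafeBufferPointer { src in
            // vDSP_vspdp converts single precision to double precision element-wise.
            vDSP_vspdp(src.baseAddress!, 1, buffer.baseAddress!, 1, vDSP_Length(floats.count))
        }
        initCount = floats.count
    }
}
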
Creating .mlmodel with Create ML Components
I have rewatched the WWDC22 session a few times, but I still don't fully understand how to get a .mlmodel file from Create ML Components. The banana-ripeness example is cool, but what needs to be added to actually produce a .mlmodel as output? Is there a full sample of this type of modular project somewhere? The code is from https://developer.apple.com/videos/play/wwdc2022/10019:

import CoreImage
import CreateMLComponents

struct ImageRegressor {
    static let trainingDataURL = URL(fileURLWithPath: "~/Desktop/bananas")
    static let parametersURL = URL(fileURLWithPath: "~/Desktop/parameters")

    static func train() async throws -> some Transformer<CIImage, Float> {
        let estimator = ImageFeaturePrint()
            .appending(LinearRegressor())

        // File name example: banana-5.jpg
        let data = try AnnotatedFiles(labeledByNamesAt: trainingDataURL, separator: "-", index: 1, type: .image)
            .mapFeatures(ImageReader.read)
            .mapAnnotations({ Float($0)! })

        let (training, validation) = data.randomSplit(by: 0.8)
        let transformer = try await estimator.fitted(to: training, validateOn: validation)
        try estimator.write(transformer, to: parametersURL)
        return transformer
    }
}

I have tried to run it in a macOS command-line app and in SwiftUI, but the most I got as output was a .pkg containing "pipeline.json, parameters, optimizer.json, optimizer".
3 replies · 0 boosts · 620 views · Mar ’25
Error when opening mlpackage with Xcode
Hello, I'm trying to write a model with PyTorch and convert it to Core ML. I've written other models that convert successfully, and even the one that gives this problem converts, but I can't visualize it with Xcode to see where it runs. The error that appears is:

There was a problem decoding this Core ML document
validator error: unable to open file for read

Does anyone know why this is happening? Thanks a lot, Álvaro Corrochano
3 replies · 0 boosts · 242 views · Apr ’25
Request for Agentic AI Mode (MCP Protocol) Support in Future Versions of iOS or Xcode
Hello Apple Team, Thank you for the recent Group Lab and for your continued work on advancing Xcode and developer tools. I’d like to submit a feature request: Are there any plans to introduce support for Agentic AI Mode (MCP protocol) in future versions of iOS or Xcode? As developer tools evolve toward more intelligent and context-aware environments, the integration of agentic AI capabilities could significantly enhance productivity and unlock new creative workflows. Looking forward to your consideration, and thank you again for the excellent session. Best regards
3 replies · 0 boosts · 204 views · Jun ’25
Error in Xcode console
Lately I am getting this error:

GenerativeModelsAvailability.Parameters: Initialized with invalid language code: en-GB. Expected to receive two-letter ISO 639 code. e.g. 'zh' or 'en'. Falling back to: en

Does anyone know what this is and how it can be resolved? The error does not crash the app.
3 replies · 1 boost · 1.2k views · Nov ’25
Vision and iOS 18 - Failed to create espresso context.
I'm playing with the new Vision API for iOS 18, specifically the new CalculateImageAestheticsScoresRequest API. When I try to perform the image observation request I get this error:

internalError("Error Domain=NSOSStatusErrorDomain Code=-1 \"Failed to create espresso context.\" UserInfo={NSLocalizedDescription=Failed to create espresso context.}")

The code is pretty straightforward:

if let image = image {
    let request = CalculateImageAestheticsScoresRequest()

    Task {
        do {
            let cgImg = image.cgImage!
            let observations = try await request.perform(on: cgImg)
            let description = observations.description
            let score = observations.overallScore
            print(description)
            print(score)
        } catch {
            print(error)
        }
    }
}

I'm running it on an M2 using the simulator. Is it a bug? What's wrong?
3 replies · 1 boost · 1.6k views · Sep ’25
Apple Intelligence crashed/stopped working
Hi everyone, I’m currently using macOS Version 15.3 Beta (24D5034f), and I’m encountering an issue with Apple Intelligence. The image generation tools seem to work fine, but everything else shows a message saying that it’s “not available at this time.” I’ve tried restarting my Mac and double-checked my settings, but the problem persists. Is anyone else experiencing this issue on the beta version? Are there any fixes or settings I might be overlooking? Any help or insights would be greatly appreciated! Thanks in advance!
3 replies · 1 boost · 1.2k views · 3d
I need some clarifications about FoundationModels
Hello, I'm experimenting with Apple's on-device language model via the FoundationModels framework in Xcode (using LanguageModelSession in my code). I'd like to confirm a few points:

• Is the language model provided by FoundationModels designed and trained by Apple? Or is it based on an open-source model?
• Is this on-device model available on iOS (and iPadOS), or is it limited to macOS?
• When I write code in Xcode, is code completion powered by this same local model? If so, why isn't the same model available in the left-hand chat sidebar in Xcode (so that I can use it there instead of relying on ChatGPT)?
• Can I grant this local model access to my personal data (photos, contacts, SMS, emails) so it can answer questions based on that information? If yes, what APIs, permission prompts, and privacy constraints apply?

Thanks
3 replies · 0 boosts · 625 views · Oct ’25