I'm attempting to run a basic Foundation Model prototype in Xcode 26, but I'm getting the error below, using the iPhone 16 simulator with iOS 26. Should these models be working yet? Do I need to be running macOS 26 for these to work? (I hope that's not it)
Error:
Passing along Model Catalog error: Error Domain=com.apple.UnifiedAssetFramework Code=5000 "There are no underlying assets (neither atomic instance nor asset roots) for consistency token for asset set com.apple.MobileAsset.UAF.FM.Overrides" UserInfo={NSLocalizedFailureReason=There are no underlying assets (neither atomic instance nor asset roots) for consistency token for asset set com.apple.MobileAsset.UAF.FM.Overrides} in response to ExecuteRequest
Playground to reproduce:
import FoundationModels
import Playgrounds

#Playground {
    let session = LanguageModelSession()
    do {
        let response = try await session.respond(to: "What's happening?")
        print(response.content)
    } catch {
        // Break here to inspect the thrown error.
        let error = error
    }
}
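If it helps with the diagnosis: my understanding is that availability can be checked before creating the session, which at least reports why the model can't be used. This is just a sketch based on my reading of the FoundationModels docs:

import FoundationModels

func checkAvailability() {
    // SystemLanguageModel.default is the on-device foundation model.
    switch SystemLanguageModel.default.availability {
    case .available:
        print("Model is available")
    case .unavailable(let reason):
        // e.g. device not eligible, Apple Intelligence not enabled, model assets not downloaded yet
        print("Model unavailable: \(reason)")
    }
}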
The specific context is that I would like to build an agent that monitors my phone call (with customer support, for example), simply identifies whether or not I'm still on hold, and notifies me when I'm not.
Currently, after reading the docs, I don't think this is possible yet, but I'm so annoyed by customer support calls that I'm willing to go the distance and see if there's any way.
Hi everyone,
I’m an AI engineer working on autonomous AI agents and exploring ways to integrate them into the Apple ecosystem, especially via Siri and Apple Intelligence.
I was impressed by Apple’s integration of ChatGPT and its privacy-first design, but I’m curious to know:
• Are there plans to support third-party LLMs?
• Could Siri or Apple Intelligence call external AI agents or allow extensions to plug in alternative models for reasoning, scheduling, or proactive suggestions?
I’m particularly interested in building event-driven, voice-triggered workflows where Apple Intelligence could act as a front-end for more complex autonomous systems (possibly local or cloud-based).
This kind of extensibility would open up incredible opportunities for personalized, privacy-friendly use cases — while aligning with Apple’s system architecture.
Is anything like this on the roadmap? Or is there a suggested way to prototype such integrations today?
Thanks in advance for any thoughts or pointers!
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
Tags:
SiriKit
Machine Learning
Apple Intelligence
Hello Apple Developer Community,
I'm exploring the integration of Apple Intelligence features into my mobile application and have a couple of questions regarding the current and upcoming API capabilities:
Custom Prompt Support: Is there a way to pass custom prompts to Apple Intelligence to generate specific inferences? For instance, can we provide a unique prompt to the Writing Tools or Image Playground APIs to obtain tailored outputs?
Direct Inference Capabilities: Beyond the predefined functionalities like text rewriting or image generation, does Apple Intelligence offer APIs that allow for more generalized inference tasks based on custom inputs?
I understand that Apple has provided APIs such as Writing Tools, Image Playground, and Genmoji. However, I'm interested in understanding the extent of customization and flexibility these APIs offer, especially concerning custom prompts and generalized inference.
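To make the first question concrete, the kind of custom-prompt flow I have in mind for Image Playground would look roughly like this (a sketch based on my reading of the ImagePlayground framework; the view and the prompt string are just placeholders):

import SwiftUI
import ImagePlayground

struct PromptedImageView: View {
    @State private var isPresented = false
    @State private var imageURL: URL?

    var body: some View {
        Button("Generate from a custom prompt") {
            isPresented = true
        }
        // Concepts act as the "prompt": free-form text and/or extracted keywords.
        .imagePlaygroundSheet(
            isPresented: $isPresented,
            concepts: [.text("a lighthouse on a rocky coast at sunset")]
        ) { url in
            imageURL = url
        }
    }
}

Whether a comparable prompt-level hook exists for Writing Tools is part of what I'm asking.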
Additionally, are there any plans or timelines for expanding these capabilities, perhaps with the introduction of new SDKs or frameworks that allow deeper integration and customization?
Any insights, documentation links, or experiences shared would be greatly appreciated.
Thank you in advance for your assistance!
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
Hi everyone,
I'm trying to use VNDetectTextRectanglesRequest to detect text rectangles in an image. Here's my current code:
guard let cgImage = image.cgImage(forProposedRect: nil, context: nil, hints: nil) else {
    return
}

let textDetectionRequest = VNDetectTextRectanglesRequest { request, error in
    if let error = error {
        print("Text detection error: \(error)")
        return
    }
    guard let observations = request.results as? [VNTextObservation] else {
        print("No text rectangles detected.")
        return
    }
    print("Detected \(observations.count) text rectangles.")
    for observation in observations {
        print(observation.boundingBox)
    }
}
textDetectionRequest.revision = VNDetectTextRectanglesRequestRevision1
textDetectionRequest.reportCharacterBoxes = true

let handler = VNImageRequestHandler(cgImage: cgImage, orientation: .up, options: [:])
do {
    try handler.perform([textDetectionRequest])
} catch {
    print("Vision request error: \(error)")
}
The request completes without error, but no text rectangles are detected — the observations array is empty (count = 0). Here's a sample image I'm testing with:
I expected VNTextObservation results, but I'm not getting any. Is there something I'm missing in how this API works? Or could it be a limitation of this request or revision?
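In case it matters, the next thing I plan to try is the newer text recognition request, which I understand also reports bounding boxes. Roughly this (a sketch, reusing the same cgImage as above):

let recognizeRequest = VNRecognizeTextRequest { request, error in
    guard let observations = request.results as? [VNRecognizedTextObservation] else { return }
    print("Detected \(observations.count) text regions.")
    for observation in observations {
        // boundingBox is normalized to the image, like VNTextObservation
        print(observation.boundingBox, observation.topCandidates(1).first?.string ?? "")
    }
}
recognizeRequest.recognitionLevel = .accurate

let recognizeHandler = VNImageRequestHandler(cgImage: cgImage, orientation: .up, options: [:])
try? recognizeHandler.perform([recognizeRequest])

But I'd still like to understand why VNDetectTextRectanglesRequest returns nothing here.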
Thanks for any help!
I'm sure someone has thought about this already, but let's have an ecosystem where Apple Intelligence uses your most capable (Apple) hardware first and the cloud service second.
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
Hi,
I've been modifying the Camera sample app found here: https://developer.apple.com/tutorials/sample-apps/capturingphotos-camerapreview ... In processPreviewImages, I am calling into the Vision APIs to either detect a person or an object, then I'm using the segmentation mask to extract the person and composite them onto a different background with some other filters. I am using Core Image to filter the CIImages, then converting and displaying the result as a SwiftUI Image. When running on my iPhone, it works fine. When running on my iPhone with the debugger, it crashes within a few seconds... Attached is a screenshot. At the top is an EXC_BAD_ACCESS in libRPAC.dylib`std::__1::__hash_table<std::__1::__hash_value_type<long, qos_info_t>, std::__1::__unordered_map_hasher<long, std::__1::__hash_value_type<long, qos_info_t>, std::__1::hash, std::__1::equal_to, true>, std::__1::__unordered_map_equal<long, std::__1::__hash_value_type<long, qos_info_t>, std::__1::equal_to, std::__1::hash, true>, std::__1::allocator<std::__1::__hash_value_type<long, qos_info_t>>>::__emplace_unique_key_args<long, std::__1::piecewise_construct_t const&, std::__1::tuple<long const&>, std::__1::tuple<>>:
This was working fine a couple of days ago, and I'm not sure why it's popping up now. Am I correct in interpreting this as an LLDB issue? How do I fix it?
Hi,
I am modifying the sample camera app that is here: https://developer.apple.com/tutorials/sample-apps/capturingphotos-camerapreview ... In the processPreviewImages, I am using the Vision APIs to generate a segmentation mask for a person/object, then compositing that person onto a different background (with some other filtering). The filtering and compositing is done via CoreImage. At the end, I convert the CIImage to a CGImage then to a SwiftUI Image. When I run it on my iPhone, it works fine, and has not crashed. When I run it on the iPhone with the debugger, it crashes within a few seconds with:
EXC_BAD_ACCESS in libRPAC.dylib`std::__1::__hash_table<std::__1::__hash_value_type<long, qos_info_t>, std::__1::__unordered_map_hasher<long, std::__1::__hash_value_type<long, qos_info_t>, std::__1::hash, std::__1::equal_to, true>, std::__1::__unordered_map_equal<long, std::__1::__hash_value_type<long, qos_info_t>, std::__1::equal_to, std::__1::hash, true>, std::__1::allocator<std::__1::__hash_value_type<long, qos_info_t>>>::__emplace_unique_key_args<long, std::__1::piecewise_construct_t const&, std::__1::tuple<long const&>, std::__1::tuple<>>:
It had previously been working fine with the debugger, so I'm not sure what has changed. Is there a difference in how the Vision APIs are executed if the debugger is attached vs. not?
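For context, the segmentation and compositing step is essentially this pattern (a simplified sketch, not the exact code from my project):

import Vision
import CoreImage
import CoreImage.CIFilterBuiltins
import CoreVideo

func composite(person pixelBuffer: CVPixelBuffer, over background: CIImage) -> CIImage? {
    // Generate a person segmentation mask with Vision.
    let request = VNGeneratePersonSegmentationRequest()
    request.qualityLevel = .balanced
    request.outputPixelFormat = kCVPixelFormatType_OneComponent8

    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try? handler.perform([request])
    guard let maskBuffer = request.results?.first?.pixelBuffer else { return nil }

    let original = CIImage(cvPixelBuffer: pixelBuffer)
    var mask = CIImage(cvPixelBuffer: maskBuffer)
    // Scale the mask up to the source image size before blending.
    mask = mask.transformed(by: CGAffineTransform(
        scaleX: original.extent.width / mask.extent.width,
        y: original.extent.height / mask.extent.height))

    // Composite the person over the new background using the mask.
    let blend = CIFilter.blendWithMask()
    blend.inputImage = original
    blend.backgroundImage = background
    blend.maskImage = mask
    return blend.outputImage
}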
I've been struggling to add Siri support to an application I have developed. I keep getting this error when I run it on macOS:
Failed to refresh AppShortcut parameters with error: Error Domain=Foundation._GenericObjCError Code=0 "(null)"
So I found the AppIntentsSampleApp, downloaded and built it, and I get a similar, but larger, error:
Failed to refresh AppShortcut parameters with error: Error Domain=RBSServiceErrorDomain Code=1 "(originator doesn't have entitlement com.apple.private.xpc.launchd.app-server AND originator doesn't have entitlement com.apple.assertiond.system-shell AND originator doesn't have entitlement com.apple.runningboard.launchprocess)" UserInfo={NSLocalizedFailureReason=(originator doesn't have entitlement com.apple.private.xpc.launchd.app-server AND originator doesn't have entitlement com.apple.assertiond.system-shell AND j
And it goes on and on.
What am I missing? I'm using Xcode 16. I don't see an option to add a Siri framework. I have tried adding both the Intents and App Intents frameworks, which does not seem to make a difference.
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
Hi! I'm trying to use the ImagePlayground API in SwiftUI with the .imagePlaygroundSheet modifier. However, when the sheet is shown (in the preview or in the simulator) it displays the following message: "Image Playground is not available. Image Playground is not available on this iPhone.".
I'm using an iPhone 16 Pro with iOS 18.3.1 in the Xcode (16.2) Simulator.
Anyone else having this problem? How can I fix it?
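For reference, I assume the right pattern is to gate the feature on the environment value, roughly like this (sketch):

import SwiftUI
import ImagePlayground

struct GeneratorButton: View {
    // False on devices/simulators where Image Playground can't run.
    @Environment(\.supportsImagePlayground) private var supportsImagePlayground
    @State private var isPresented = false

    var body: some View {
        if supportsImagePlayground {
            Button("Create Image") { isPresented = true }
                .imagePlaygroundSheet(isPresented: $isPresented, concepts: [.text("a cat")]) { _ in }
        } else {
            Text("Image Playground is not available on this device.")
        }
    }
}

Even with that check in place, I'd still like to understand whether the simulator is expected to support it at all.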
Hi Apple product owners,
I am missing a unified concept that connects the use cases for mail categories and mail spam filtering in the Mail app on the Mac.
I need a recommendation on how to use categories in combination with the spam filter to get the most out of both.
I was looking at the use cases of the two functional areas in order to figure out how to organise my mail with as much automation as possible, before I additionally start creating intelligent folders.
Where can I get this information? I don't want to guess or read a lot of forum contributions that are based on guesses.
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
Hey dear developers!
This post is meant as a place for future Siri updates and improvements, and also for wishes, so that everyone can share their opinions and ideas. Please stay friendly and have fun! I had already thought about developing a demo app to demonstrate my idea for a better Siri.
One of my many suggested changes:
Wish: Siri's language recognition capabilities are significantly enhanced. Instead of manually setting the language, Siri automatically recognizes the language you intend to use, making language switching much more efficient. Simply speak the language you want to communicate in, and Siri recognizes it and responds accordingly. Whether you speak English, German, or Japanese, Siri responds in the language you choose.
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
Tags:
iPhone
Siri Event Suggestions Markup
Siri and Voice
Apple Intelligence
Hi team,
We have implemented a writing tool inside a WebView that allows users to type content in a textarea. When the "Show Writing Tools" button is clicked, an AI-powered editor opens. After clicking the "Rewrite" button, the AI modifies the text. However, when clicking the "Replace" button, the rewritten text does not update the original textarea.
Kindly check and help me resolve this.
// Present Writing Tools when the button is tapped.
showButton.addTarget(self, action: #selector(showWritingTools(_:)), for: .touchUpInside)

// Standard edit action that presents Writing Tools (declaration as in the SDK, iOS 18.2+).
@available(iOS 18.2, *)
optional func showWritingTools(_ sender: Any)
Note: the same flow works correctly in a UITextView. Please find the details attached.
Hello, I have to create an app in Swift that scans an NFC identity card, extracts its data, and converts it to human-readable data. I am doing it with the code below:
import CoreNFC

class NFCIdentityCardReader: NSObject, NFCTagReaderSessionDelegate {

    var session: NFCTagReaderSession?

    func tagReaderSessionDidBecomeActive(_ session: NFCTagReaderSession) {
        print("\(session.description)")
    }

    func tagReaderSession(_ session: NFCTagReaderSession, didInvalidateWithError error: any Error) {
        print("NFC Error: \(error.localizedDescription)")
    }

    func beginScanning() {
        guard NFCTagReaderSession.readingAvailable else {
            print("NFC is not supported on this device")
            return
        }
        session = NFCTagReaderSession(pollingOption: .iso14443, delegate: self, queue: nil)
        session?.alertMessage = "Hold your NFC identity card near the device."
        session?.begin()
    }

    func tagReaderSession(_ session: NFCTagReaderSession, didDetect tags: [NFCTag]) {
        guard let tag = tags.first else {
            session.invalidate(errorMessage: "No tag detected")
            return
        }
        session.connect(to: tag) { (error) in
            if let error = error {
                session.invalidate(errorMessage: "Connection error: \(error.localizedDescription)")
                return
            }
            switch tag {
            case .miFare(let miFareTag):
                self.readMiFareTag(miFareTag, session: session)
            case .iso7816(let iso7816Tag):
                self.readISO7816Tag(iso7816Tag, session: session)
            case .iso15693, .feliCa:
                session.invalidate(errorMessage: "Unsupported tag type")
            @unknown default:
                session.invalidate(errorMessage: "Unknown tag type")
            }
        }
    }

    private func readMiFareTag(_ tag: NFCMiFareTag, session: NFCTagReaderSession) {
        // Read from MiFare card, assuming it's formatted as an identity card
        let command: [UInt8] = [0x30, 0x04] // Example: Read command for block 4
        let requestData = Data(command)
        tag.sendMiFareCommand(commandPacket: requestData) { (response, error) in
            if let error = error {
                session.invalidate(errorMessage: "Error reading MiFare: \(error.localizedDescription)")
                return
            }
            let readableData = String(data: response, encoding: .utf8) ?? response.map { String(format: "%02X", $0) }.joined()
            session.alertMessage = "ID Card Data: \(readableData)"
            session.invalidate()
        }
    }

    private func readISO7816Tag(_ tag: NFCISO7816Tag, session: NFCTagReaderSession) {
        let selectAppCommand = NFCISO7816APDU(instructionClass: 0x00, instructionCode: 0xA4, p1Parameter: 0x04, p2Parameter: 0x00, data: Data([0xA0, 0x00, 0x00, 0x02, 0x47, 0x10, 0x01]), expectedResponseLength: -1)
        tag.sendCommand(apdu: selectAppCommand) { (response, sw1, sw2, error) in
            if let error = error {
                session.invalidate(errorMessage: "Error reading ISO7816: \(error.localizedDescription)")
                return
            }
            let readableData = response.map { String(format: "%02X", $0) }.joined()
            session.alertMessage = "ID Card Data: \(readableData)"
            session.invalidate()
        }
    }
}
But I get null. I think the data is encrypted. How can I convert it to readable data without the MRZ? Is that even possible? I need to get the personal information from the identity card via Core NFC.
Thanks in advance.
Best regards
May I know the bundle identifier for Apple Intelligence?
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
I’m building an app that generates images based on text input from a specific text field. However, I’m encountering a problem:
For short prompts like "a cat and a dog", the entire string is sent to the Image Playground, even when I use the extracted method. For longer inputs, the behavior is inconsistent. Sometimes it extracts keywords correctly, but other times it doesn’t extract anything at all.
Since my app relies on generating images based on the extracted keywords, this inconsistency negatively impacts the user experience in my app. How can I make sure that keywords are always extracted from the input string?
Button("Generate", systemImage: "apple.intelligence") {
isPresented = true
}
.imagePlaygroundSheet(isPresented: $isPresented, concepts: [ImagePlaygroundConcept.extracted(from: text, title: textTitle)]) { url in
imageURL = url
}
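One workaround I'm considering is passing the string through as a plain text concept, instead of or alongside the extracted one; as far as I understand, a text concept doesn't go through keyword extraction (sketch):

.imagePlaygroundSheet(
    isPresented: $isPresented,
    concepts: [
        // Assumption: .text passes the string through as-is,
        // while .extracted asks the system to pull keywords out of it.
        ImagePlaygroundConcept.text(text),
        ImagePlaygroundConcept.extracted(from: text, title: textTitle)
    ]
) { url in
    imageURL = url
}

I'd still prefer to know how to make the extraction itself reliable.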
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
In this thread, I asked about adding parameters to App Shortcuts. The conclusion that I've drawn so far is that for App Shortcuts, there cannot be any parameters in the prompt, otherwise the system cannot find the AppShortcutsProvider. While this is fine for Shortcuts and non-voice interaction, I'd like to find a way to add parameters to the prompt. Here is the scenario:
My app controls a device that displays some content on "pages." The pages are defined in an AppEnum, which I use for Shortcuts integration via App Intents. The App Intent functions as expected and is able to change the page based on the user's selection within Shortcuts (or a prompt, if using the App Shortcut). What I'd like to do is allow the user to be able to say "Siri, open [app name] with [page]."
So far, the closest I've come to understanding how this works is through the .intentsdefinition file you can create (and SiriKit in general); however, the part that really confused me there is a button in the File Editor that says "Convert to App Intent." To me, this means I should be able to use the App Intent I've already authored and hook that into Siri, rather than making an entirely new function/code block that does exactly the same thing. Ideally, that's what I want to do.
What's the right way to define this behavior?
p.s. If I had to pick an intent schema in the context of AssistantSchemas, I'd say it's closest to the "Open File" one, if that helps. I'd ultimately like to make the "pages" user-customizable so in the long run, that would be what I'd do.
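For what it's worth, the parameterized-phrase pattern I've been experimenting with looks like this (a sketch; the type names are placeholders for my real intent and enum, and as I understand it the phrase parameter has to be an AppEnum for Siri to resolve it by voice):

import AppIntents

enum Page: String, CaseIterable, AppEnum {
    case home, settings

    static let typeDisplayRepresentation = TypeDisplayRepresentation(name: "Page")
    static let caseDisplayRepresentations: [Page: DisplayRepresentation] = [
        .home: DisplayRepresentation(title: "Home"),
        .settings: DisplayRepresentation(title: "Settings")
    ]
}

struct OpenPageIntent: AppIntent {
    static let title: LocalizedStringResource = "Open Page"

    @Parameter(title: "Page")
    var page: Page

    func perform() async throws -> some IntentResult {
        // ... tell the device to show the selected page ...
        return .result()
    }
}

struct DeviceShortcuts: AppShortcutsProvider {
    static var appShortcuts: [AppShortcut] {
        AppShortcut(
            intent: OpenPageIntent(),
            // The \(\.$page) interpolation is what lets Siri accept the page by voice.
            phrases: ["Open \(\.$page) in \(.applicationName)"],
            shortTitle: "Open Page",
            systemImageName: "doc.text.image"
        )
    }
}

If this is the intended way, I'm still unclear how it relates to the "Convert to App Intent" button in the .intentsdefinition editor.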
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
Tags:
Siri and Voice
SiriKit
App Intents
Apple Intelligence
I have an image based app with albums, except in my app, albums are known as galleries.
When I tried to conform my existing OpenGalleryIntent with @AssistantIntent(schema: .photos.openAlbum), I had to change my existing gallery parameter to be called target in order to fit the predefined shape of this domain.
Previously, my intent was configured to display as “Open Gallery” with the description “Opens the selected Gallery” in the Shortcuts app. After conforming to the photos domain, it displays as “Open Album” with a description “Opens the Provided Album”.
Shortcuts is ignoring my configured title and description now. My code builds, but with the following build warnings:
Parameter argument title of a required Assistant schema intent parameter target should not be overridden
Implementation of the property title of an AppIntent conforming to AssistantSchemaIntent should not be overridden
Implementation of the property description of an AppIntent conforming to AssistantSchemaIntent should not be overridden
Is my only option to change the concept of a Gallery inside of my app into an Album? I don't want to do this... Conceptually, my app aligns well with what this domain does, but I didn't consider that conforming to the shape of an AI schema intent would also dictate exactly how it's presented to the user.
FB16283840
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
Tags:
Siri and Voice
Shortcuts
App Intents
Apple Intelligence
Hi,
I want to develop an app which makes use of Image Playground.
However, I am located in Europe, which makes this impossible, because Image Playground is not available here, even though I would like to distribute the app in the US.
Neither the simulator nor a physical device works; both always report that Image Playground is not supported
(checked via @Environment(\.supportsImagePlayground) private var supportsImagePlayground).
How can I set up my environment so that I can test this feature in my iOS application?
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
I live in the EU (Ireland) and I don't have access to Apple Intelligence. I have iOS 18 running on an iPhone XR. Please make Apple Intelligence available in the EU.
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence