Apple Intelligence

Apple Intelligence is the personal intelligence system that puts powerful generative models right at the core of your iPhone, iPad, and Mac and powers incredible new features to help users communicate, work, and express themselves.

Posts under Apple Intelligence subtopic

Post · Replies · Boosts · Views · Activity

Accessing Apple Intelligence APIs: Custom Prompt Support and Inference Capabilities
Hello Apple Developer Community,

I'm exploring the integration of Apple Intelligence features into my mobile application and have a couple of questions regarding the current and upcoming API capabilities:

1. Custom Prompt Support: Is there a way to pass custom prompts to Apple Intelligence to generate specific inferences? For instance, can we provide a unique prompt to the Writing Tools or Image Playground APIs to obtain tailored outputs?
2. Direct Inference Capabilities: Beyond the predefined functionalities like text rewriting or image generation, does Apple Intelligence offer APIs that allow for more generalized inference tasks based on custom inputs?

I understand that Apple has provided APIs such as Writing Tools, Image Playground, and Genmoji. However, I'm interested in understanding the extent of customization and flexibility these APIs offer, especially concerning custom prompts and generalized inference. Additionally, are there any plans or timelines for expanding these capabilities, perhaps with the introduction of new SDKs or frameworks that allow deeper integration and customization?

Any insights, documentation links, or experiences shared would be greatly appreciated. Thank you in advance for your assistance!
3 replies · 0 boosts · 252 views · Jun ’25
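For the custom-prompt question above: since WWDC25, the Foundation Models framework exposes the on-device model to apps directly, which covers the custom-prompt case. A minimal sketch, assuming the iOS 26 SDK on Apple Intelligence-capable hardware; the helper name, error type, and instructions string are illustrative, not part of any shipping sample:

    import FoundationModels

    enum PromptError: Error { case modelUnavailable }

    // Hypothetical helper: send one custom prompt to the on-device model.
    func respond(toPrompt prompt: String) async throws -> String {
        // Bail out if the on-device model (Apple Intelligence) is not available.
        guard case .available = SystemLanguageModel.default.availability else {
            throw PromptError.modelUnavailable
        }
        // A session keeps conversation state; instructions steer the model's behavior.
        let session = LanguageModelSession(instructions: "Answer briefly and factually.")
        let response = try await session.respond(to: prompt)
        return response.content
    }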
Ways I can leverage AI when the user asks Siri, "What does this word mean"
I'm the creator of an app that helps users learn Arabic. Inside the app, users can save words, engage in lessons specific to certain grammar concepts, etc. I'm looking for a way for Siri to 'suggest' my app when the user asks to define any Arabic word. There are other questions that I would like Siri to suggest my app for, but I figure that's a good start. What framework am I looking for here? I think App Intents? I remember I played with it for a bit last year but didn't get far. Any suggestions would be great. Would the new Foundation Models framework be any help here?
2 replies · 0 boosts · 89 views · Jun ’25
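For the question above, the App Intents framework plus an AppShortcutsProvider is the usual route: expose a "define word" intent and give Siri phrases that mention your app by name. A minimal sketch; the intent name, phrases, and the inline dictionary are hypothetical stand-ins for the app's own code:

    import AppIntents

    struct DefineArabicWordIntent: AppIntent {
        static var title: LocalizedStringResource = "Define Arabic Word"

        @Parameter(title: "Word")
        var word: String

        func perform() async throws -> some IntentResult & ProvidesDialog {
            // Stand-in for the app's real saved-word lookup.
            let definitions = ["kitab": "book", "qalam": "pen"]
            let definition = definitions[word.lowercased()] ?? "No saved definition yet."
            return .result(dialog: "\(word): \(definition)")
        }
    }

    struct LearnArabicShortcuts: AppShortcutsProvider {
        static var appShortcuts: [AppShortcut] {
            AppShortcut(
                intent: DefineArabicWordIntent(),
                phrases: [
                    "Define a word in \(.applicationName)",
                    "What does this word mean in \(.applicationName)"
                ],
                shortTitle: "Define Word",
                systemImageName: "character.book.closed"
            )
        }
    }

Embedding the word itself in a phrase would additionally require the parameter to be an AppEntity or AppEnum; with a plain String parameter, as here, Siri prompts for the word after the phrase is matched.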
Is it possible to let Siri monitor phone calls and notify me when a certain trigger happens?
The specific context is that I would like to build an agent that monitors my phone call (with customer support, for example), simply identifies whether or not I'm still on hold, and notifies me when I'm not. After reading the docs, I don't think it's possible yet, but I'm so annoyed by customer support calls that I'm willing to go the distance and see if there's any way.
0 replies · 0 boosts · 105 views · Jun ’25
Will Apple Intelligence Support Third-Party LLMs or Custom AI Agent Integrations?
Hi everyone, I’m an AI engineer working on autonomous AI agents and exploring ways to integrate them into the Apple ecosystem, especially via Siri and Apple Intelligence. I was impressed by Apple’s integration of ChatGPT and its privacy-first design, but I’m curious to know:

• Are there plans to support third-party LLMs?
• Could Siri or Apple Intelligence call external AI agents or allow extensions to plug in alternative models for reasoning, scheduling, or proactive suggestions?

I’m particularly interested in building event-driven, voice-triggered workflows where Apple Intelligence could act as a front end for more complex autonomous systems (possibly local or cloud-based). This kind of extensibility would open up incredible opportunities for personalized, privacy-friendly use cases while aligning with Apple’s system architecture.

Is anything like this on the roadmap? Or is there a suggested way to prototype such integrations today? Thanks in advance for any thoughts or pointers!
4 replies · 0 boosts · 408 views · May ’25
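On the prototyping question above: one way to experiment today, without any commitment from Apple about third-party LLM support, is to front an external agent with an App Intent so it can be invoked by voice through Siri or Shortcuts. A rough sketch; the endpoint URL and JSON shape are invented for illustration:

    import AppIntents
    import Foundation

    struct AskAgentIntent: AppIntent {
        static var title: LocalizedStringResource = "Ask My Agent"

        @Parameter(title: "Request")
        var request: String

        func perform() async throws -> some IntentResult & ProvidesDialog {
            // Forward the spoken request to a hypothetical cloud or local agent.
            var urlRequest = URLRequest(url: URL(string: "https://example.com/agent")!)
            urlRequest.httpMethod = "POST"
            urlRequest.setValue("application/json", forHTTPHeaderField: "Content-Type")
            urlRequest.httpBody = try JSONEncoder().encode(["prompt": request])

            let (data, _) = try await URLSession.shared.data(for: urlRequest)
            let reply = String(decoding: data, as: UTF8.self)
            // Siri speaks/shows whatever the agent returned.
            return .result(dialog: "\(reply)")
        }
    }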
LLDB issues with Vision
Hi, I've been modifying the Camera sample app found here: https://developer.apple.com/tutorials/sample-apps/capturingphotos-camerapreview ... In processPreviewImages, I am calling into the Vision APIs to either detect a person or object, then I'm using the segmentation mask to extract the person and composite them onto a different background with some other filters. I am using Core Image to filter the CIImages, and converting and displaying the result as a SwiftUI Image.

When running on my iPhone, it works fine. When running on my iPhone with the debugger, it crashes within a few seconds... Attached is a screenshot. At the top is an EXC_BAD_ACCESS in:

libRPAC.dylib`std::__1::__hash_table<std::__1::__hash_value_type<long, qos_info_t>, std::__1::__unordered_map_hasher<long, std::__1::__hash_value_type<long, qos_info_t>, std::__1::hash, std::__1::equal_to, true>, std::__1::__unordered_map_equal<long, std::__1::__hash_value_type<long, qos_info_t>, std::__1::equal_to, std::__1::hash, true>, std::__1::allocator<std::__1::__hash_value_type<long, qos_info_t>>>::__emplace_unique_key_args<long, std::__1::piecewise_construct_t const&, std::__1::tuple<long const&>, std::__1::tuple<>>

This was working fine a couple of days ago, so I'm not sure why it's popping up now. Am I correct in interpreting this as an LLDB issue? How do I fix it?
3 replies · 1 boost · 86 views · May ’25
VNDetectTextRectanglesRequest not detecting text rectangles (includes image)
Hi everyone, I'm trying to use VNDetectTextRectanglesRequest to detect text rectangles in an image. Here's my current code:

    guard let cgImage = image.cgImage(forProposedRect: nil, context: nil, hints: nil) else { return }

    let textDetectionRequest = VNDetectTextRectanglesRequest { request, error in
        if let error = error {
            print("Text detection error: \(error)")
            return
        }
        guard let observations = request.results as? [VNTextObservation] else {
            print("No text rectangles detected.")
            return
        }
        print("Detected \(observations.count) text rectangles.")
        for observation in observations {
            print(observation.boundingBox)
        }
    }
    textDetectionRequest.revision = VNDetectTextRectanglesRequestRevision1
    textDetectionRequest.reportCharacterBoxes = true

    let handler = VNImageRequestHandler(cgImage: cgImage, orientation: .up, options: [:])
    do {
        try handler.perform([textDetectionRequest])
    } catch {
        print("Vision request error: \(error)")
    }

The request completes without error, but no text rectangles are detected — the observations array is empty (count = 0). Here's a sample image I'm testing with:

I expected VNTextObservation results, but I'm not getting any. Is there something I'm missing in how this API works? Or could it be a limitation of this request or revision? Thanks for any help!
2 replies · 0 boosts · 112 views · May ’25
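A possible direction for the text-rectangle question above: VNDetectTextRectanglesRequest is one of the older Vision requests, and the newer VNRecognizeTextRequest is generally more robust on photographs and also returns bounding boxes. A small sketch of the swapped-in request (this substitutes a different API, it is not the original code):

    import Vision

    func detectText(in cgImage: CGImage) throws {
        let request = VNRecognizeTextRequest { request, error in
            guard error == nil,
                  let observations = request.results as? [VNRecognizedTextObservation] else {
                print("No text found: \(String(describing: error))")
                return
            }
            for observation in observations {
                // boundingBox is normalized (0...1, origin at bottom-left).
                let text = observation.topCandidates(1).first?.string ?? ""
                print(observation.boundingBox, text)
            }
        }
        request.recognitionLevel = .accurate

        let handler = VNImageRequestHandler(cgImage: cgImage, orientation: .up, options: [:])
        try handler.perform([request])
    }

If VNDetectTextRectanglesRequest specifically is required, it is also worth double-checking the orientation passed to the handler, since a wrong orientation commonly yields zero observations.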
Threading issues when using debugger
Hi, I am modifying the sample camera app that is here: https://developer.apple.com/tutorials/sample-apps/capturingphotos-camerapreview ... In processPreviewImages, I am using the Vision APIs to generate a segmentation mask for a person/object, then compositing that person onto a different background (with some other filtering). The filtering and compositing is done via Core Image. At the end, I convert the CIImage to a CGImage, then to a SwiftUI Image.

When I run it on my iPhone, it works fine and has not crashed. When I run it on the iPhone with the debugger, it crashes within a few seconds with an EXC_BAD_ACCESS in:

libRPAC.dylib`std::__1::__hash_table<std::__1::__hash_value_type<long, qos_info_t>, std::__1::__unordered_map_hasher<long, std::__1::__hash_value_type<long, qos_info_t>, std::__1::hash, std::__1::equal_to, true>, std::__1::__unordered_map_equal<long, std::__1::__hash_value_type<long, qos_info_t>, std::__1::equal_to, std::__1::hash, true>, std::__1::allocator<std::__1::__hash_value_type<long, qos_info_t>>>::__emplace_unique_key_args<long, std::__1::piecewise_construct_t const&, std::__1::tuple<long const&>, std::__1::tuple<>>

It had previously been working fine with the debugger, so I'm not sure what has changed. Is there a difference in how the Vision APIs are executed if the debugger is attached vs. not?
0 replies · 0 boosts · 56 views · Apr ’25
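For this segmentation pipeline (also described in the "LLDB issues with Vision" post above), here is a minimal sketch of the mask-and-composite step the posters describe, using VNGeneratePersonSegmentationRequest and Core Image's blendWithMask filter. It illustrates the general pipeline only, not the sample app's actual code, and says nothing about the debugger-only crash itself:

    import Vision
    import CoreVideo
    import CoreImage
    import CoreImage.CIFilterBuiltins

    func compositePerson(from pixelBuffer: CVPixelBuffer, over background: CIImage) throws -> CIImage? {
        // Ask Vision for a person-segmentation mask.
        let request = VNGeneratePersonSegmentationRequest()
        request.qualityLevel = .balanced
        request.outputPixelFormat = kCVPixelFormatType_OneComponent8

        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
        try handler.perform([request])
        guard let maskBuffer = request.results?.first?.pixelBuffer else { return nil }

        let original = CIImage(cvPixelBuffer: pixelBuffer)
        var mask = CIImage(cvPixelBuffer: maskBuffer)
        // The mask usually comes back at a lower resolution; scale it to the frame.
        mask = mask.transformed(by: CGAffineTransform(
            scaleX: original.extent.width / mask.extent.width,
            y: original.extent.height / mask.extent.height))

        // Keep the person where the mask is white, show the background elsewhere.
        // (background is assumed to already match the frame's extent.)
        let blend = CIFilter.blendWithMask()
        blend.inputImage = original
        blend.backgroundImage = background
        blend.maskImage = mask
        return blend.outputImage
    }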
ImagePlayground API not working on Xcode Simulator Devices
Hi! I'm trying to use the ImagePlayground API in SwiftUI with the .imagePlaygroundSheet modifier. However, when the sheet is shown (in the preview or in the simulator) it displays the following message: "Image Playground is not available. Image Playground is not available on this iPhone.". I'm using an iPhone 16 Pro with iOS 18.3.1 in the Xcode (16.2) Simulator. Anyone else having this problem? How can I fix it?
1 reply · 0 boosts · 121 views · Apr ’25
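On the Simulator question above: Image Playground depends on Apple Intelligence, which the Simulator does not currently provide, so testing on a supported physical device is the usual route. SwiftUI also exposes an environment value for gating the feature; a minimal sketch, assuming iOS 18.1 or later (the .text concept here is only a placeholder prompt):

    import SwiftUI
    import ImagePlayground

    struct GenerateButton: View {
        @Environment(\.supportsImagePlayground) private var supportsImagePlayground
        @State private var isPresented = false
        @State private var imageURL: URL?

        var body: some View {
            Button("Generate") { isPresented = true }
                // Disable the entry point where Image Playground is unavailable.
                .disabled(!supportsImagePlayground)
                .imagePlaygroundSheet(isPresented: $isPresented,
                                      concepts: [.text("a cat")]) { url in
                    imageURL = url
                }
        }
    }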
AppIntentsSampleApp Failed to refresh AppShortcut parameters
I've been struggling to add Siri support to an application I have developed. I kept getting this error when I run it on macOS:

Failed to refresh AppShortcut parameters with error: Error Domain=Foundation._GenericObjCError Code=0 "(null)"

So I found AppIntentsSampleApp, downloaded and built it, and I get a similar, but larger, error:

Failed to refresh AppShortcut parameters with error: Error Domain=RBSServiceErrorDomain Code=1 "(originator doesn't have entitlement com.apple.private.xpc.launchd.app-server AND originator doesn't have entitlement com.apple.assertiond.system-shell AND originator doesn't have entitlement com.apple.runningboard.launchprocess)" UserInfo={NSLocalizedFailureReason=(originator doesn't have entitlement com.apple.private.xpc.launchd.app-server AND originator doesn't have entitlement com.apple.assertiond.system-shell AND ...

And it goes on and on. What am I missing? I'm using Xcode 16. I don't see an option to add a Siri framework. I have tried adding both the Intents and App Intents frameworks, which does not seem to make a difference.
2 replies · 0 boosts · 92 views · Apr ’25
Unified Use Case Mail Categories & Spam
Hi Apple product owners. I am missing a unified concept that might be derived from the use cases for mail categories and mail spam in the Mail app on Mac. I need a recommendation on how to use categories in combination with the spam filter to get the most out of them. I was looking for the use cases of the two functionality areas in order to figure out how to organize my mail with as much automation as possible before I start creating intelligent folders in addition. Where would you recommend I get this information from? I don't want to guess or read a lot of forum contributions that are based on guesses.
1 reply · 0 boosts · 48 views · Apr ’25
Siri 2.0 (suggestions and future updates)
Hey dear developers! This post is meant to collect suggestions for future Siri updates and improvements, as well as wishes, so that everyone can share their opinions and ideas in this forum. Please stay friendly and have fun! I had already thought about developing a demo app to demonstrate my idea for a better Siri. One wish of many:

Wish Update: Siri's language recognition capabilities have been significantly enhanced. Instead of manually setting the language, Siri can now automatically recognize the language you intend to use, making language switching much more efficient. Simply speak the language you want to communicate in, and Siri will automatically recognize it and respond accordingly. Whether you speak English, German, or Japanese, Siri will respond in the language you choose.
0 replies · 1 boost · 136 views · Mar ’25
Writing tools options
Hi team, we have implemented Writing Tools support inside a WebView that allows users to type content in a textarea. When the "Show Writing Tools" button is clicked, an AI-powered editor opens. After clicking the "Rewrite" button, the AI modifies the text. However, when clicking the "Replace" button, the rewritten text does not update the original textarea. Kindly check and help me.

    showButton.addTarget(self, action: #selector(showWritingTools(_:)), for: .touchUpInside)

    @available(iOS 18.2, *)
    optional func showWritingTools(_ sender: Any)

Note: the same case works in a TextView. PFA.
0 replies · 0 boosts · 128 views · Mar ’25
Get NFC data from an identity card
Hello, I have to create an app in Swift that scans an NFC identity card, extracts its data, and converts it to human-readable form. I do it with the code below:

    import CoreNFC

    class NFCIdentityCardReader: NSObject, NFCTagReaderSessionDelegate {

        func tagReaderSessionDidBecomeActive(_ session: NFCTagReaderSession) {
            print("\(session.description)")
        }

        func tagReaderSession(_ session: NFCTagReaderSession, didInvalidateWithError error: any Error) {
            print("NFC Error: \(error.localizedDescription)")
        }

        var session: NFCTagReaderSession?

        func beginScanning() {
            guard NFCTagReaderSession.readingAvailable else {
                print("NFC is not supported on this device")
                return
            }
            session = NFCTagReaderSession(pollingOption: .iso14443, delegate: self, queue: nil)
            session?.alertMessage = "Hold your NFC identity card near the device."
            session?.begin()
        }

        func tagReaderSession(_ session: NFCTagReaderSession, didDetect tags: [NFCTag]) {
            guard let tag = tags.first else {
                session.invalidate(errorMessage: "No tag detected")
                return
            }
            session.connect(to: tag) { (error) in
                if let error = error {
                    session.invalidate(errorMessage: "Connection error: \(error.localizedDescription)")
                    return
                }
                switch tag {
                case .miFare(let miFareTag):
                    self.readMiFareTag(miFareTag, session: session)
                case .iso7816(let iso7816Tag):
                    self.readISO7816Tag(iso7816Tag, session: session)
                case .iso15693, .feliCa:
                    session.invalidate(errorMessage: "Unsupported tag type")
                @unknown default:
                    session.invalidate(errorMessage: "Unknown tag type")
                }
            }
        }

        private func readMiFareTag(_ tag: NFCMiFareTag, session: NFCTagReaderSession) {
            // Read from MiFare card, assuming it's formatted as an identity card
            let command: [UInt8] = [0x30, 0x04] // Example: Read command for block 4
            let requestData = Data(command)
            tag.sendMiFareCommand(commandPacket: requestData) { (response, error) in
                if let error = error {
                    session.invalidate(errorMessage: "Error reading MiFare: \(error.localizedDescription)")
                    return
                }
                let readableData = String(data: response, encoding: .utf8) ?? response.map { String(format: "%02X", $0) }.joined()
                session.alertMessage = "ID Card Data: \(readableData)"
                session.invalidate()
            }
        }

        private func readISO7816Tag(_ tag: NFCISO7816Tag, session: NFCTagReaderSession) {
            let selectAppCommand = NFCISO7816APDU(
                instructionClass: 0x00,
                instructionCode: 0xA4,
                p1Parameter: 0x04,
                p2Parameter: 0x00,
                data: Data([0xA0, 0x00, 0x00, 0x02, 0x47, 0x10, 0x01]),
                expectedResponseLength: -1)
            tag.sendCommand(apdu: selectAppCommand) { (response, sw1, sw2, error) in
                if let error = error {
                    session.invalidate(errorMessage: "Error reading ISO7816: \(error.localizedDescription)")
                    return
                }
                let readableData = response.map { String(format: "%02X", $0) }.joined()
                session.alertMessage = "ID Card Data: \(readableData)"
                session.invalidate()
            }
        }
    }

But I get null. I think this data is encrypted. Is it possible to convert it to readable data without the MRZ? I need to get personal information from the identity card via Core NFC. Thanks in advance. Best regards
0 replies · 0 boosts · 114 views · Mar ’25
Apple Intelligence stuck on "preparing" for 6 days.
On October 28, the release day of Apple Intelligence, I opted in. My iPhone and iPad immediately went to "waitlist" and within 2 to 3 hours were ready to initialize Apple Intelligence. My MacBook Pro 14" with an M3 Pro processor and 18 GB of RAM has been stuck on "preparing" since release day (6 days now). I've tried numerous workarounds that I found on forums, as well as talking to Apple Support, who basically had me repeat the workarounds that I found on forums. I've tried changing the region to an area that does not have Apple Intelligence and then back to the US, I've changed the Siri language to an unsupported one and back to a supported one, I have tried disabling background/startup apps, and I've disabled and re-enabled Siri. Oh, and I've restarted a bunch and left the Mac alone for hours at a time. I've noticed that my selected Siri voice seems to not download. Finally, after several chats and calls with Apple Support, I was told that it's beta software, they can't help me, and I should try the developer forums... so here I am. Any advice?
9 replies · 1 boost · 2.9k views · Feb ’25
How does the extracted method from ImagePlaygroundConcept work?
I’m building an app that generates images based on text input from a specific text field. However, I’m encountering a problem:

For short prompts like "a cat and a dog", the entire string is sent to the Image Playground, even when I use the extracted method.
For longer inputs, the behavior is inconsistent. Sometimes it extracts keywords correctly, but other times it doesn’t extract anything at all.

Since my app relies on generating images based on the extracted keywords, this inconsistency negatively impacts the user experience in my app. How can I make sure that keywords are always extracted from the input string?

    Button("Generate", systemImage: "apple.intelligence") {
        isPresented = true
    }
    .imagePlaygroundSheet(isPresented: $isPresented, concepts: [ImagePlaygroundConcept.extracted(from: text, title: textTitle)]) { url in
        imageURL = url
    }
1 reply · 0 boosts · 511 views · Feb ’25
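One possible workaround for the extraction question above, assuming ImagePlaygroundConcept.text(_:) is available in your SDK (it passes the string through as-is instead of extracting keywords): choose the concept type yourself based on the input length, so short prompts are never subject to extraction. A sketch with an arbitrary word-count threshold:

    import ImagePlayground

    // Hypothetical helper: pick concept types based on how long the input is.
    func concepts(for text: String, title: String?) -> [ImagePlaygroundConcept] {
        // Short prompts: send the text verbatim as a single concept.
        if text.split(separator: " ").count <= 6 {
            return [.text(text)]
        }
        // Longer text: let the system extract the key concepts.
        return [.extracted(from: text, title: title)]
    }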
Conforming an existing AppIntent to the photos domain schema
I have an image-based app with albums, except in my app, albums are known as galleries. When I tried to conform my existing OpenGalleryIntent with @AssistantIntent(schema: .photos.openAlbum), I had to change my existing gallery parameter to be called target in order to fit the predefined shape of this domain. Previously, my intent was configured to display as “Open Gallery” with the description “Opens the selected Gallery” in the Shortcuts app. After conforming to the photos domain, it displays as “Open Album” with the description “Opens the Provided Album”. Shortcuts is ignoring my configured title and description now. My code builds, but with the following build warnings:

Parameter argument title of a required Assistant schema intent parameter target should not be overridden
Implementation of the property title of an AppIntent conforming to AssistantSchemaIntent should not be overridden
Implementation of the property description of an AppIntent conforming to AssistantSchemaIntent should not be overridden

Is my only option to change the concept of a Gallery inside of my app into an Album? I don't want to do this... Conceptually, my app aligns well with what this domain does, but I didn't consider that conforming to the shape of an AI schema intent would also dictate exactly how it's presented to the user. FB16283840
2 replies · 2 boosts · 741 views · Jan ’25