Apple Intelligence


Apple Intelligence is the personal intelligence system that puts powerful generative models right at the core of your iPhone, iPad, and Mac and powers incredible new features to help users communicate, work, and express themselves.

Posts under Apple Intelligence subtopic

Post · Replies · Boosts · Views · Activity

App Shortcuts Limit (10 per app) — Can This Be Increased?
Hi Apple team, When using AppShortcutsProvider, I hit the hard limit: Each app may have at most 10 App Shortcuts. This feels limiting for apps that offer multiple workflows and would benefit from deeper Siri integration. Could this cap be raised — ideally to 30 — to support broader use of AppIntents, enhance Siri automation, and unlock more system-level capabilities? AppShortcuts are a fantastic tool. Increasing the limit would make them even more powerful. Thanks!
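A minimal sketch of the provider shape that runs into this limit (the intent types here are hypothetical, not taken from the post):

import AppIntents

struct SampleAppShortcuts: AppShortcutsProvider {
    static var appShortcuts: [AppShortcut] {
        // Every AppShortcut listed here counts toward the 10-per-app cap described above.
        AppShortcut(
            intent: StartWorkflowIntent(),   // hypothetical intent
            phrases: ["Start a workflow in \(.applicationName)"],
            shortTitle: "Start Workflow",
            systemImageName: "play.circle"
        )
        AppShortcut(
            intent: ShowHistoryIntent(),     // hypothetical intent
            phrases: ["Show my \(.applicationName) history"],
            shortTitle: "Show History",
            systemImageName: "clock"
        )
        // ...at most 10 entries total per app.
    }
}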
Replies: 1 · Boosts: 0 · Views: 180 · Activity: Jun ’25
Xcode 26 Intelligence editor modifications
Greetings, I've been experimenting with the new Apple Intelligence chat in Xcode. I want to be able to use my custom LLM, and I made that work (I can chat back and forth with my server from the left panel), but I cannot find out how to change the editor contents the way ChatGPT does. ChatGPT is able to change the current editor and, it seems, all files in the project (pbxproj). I tried to catch the call with Charles with no success. The OpenAI platform docs don't mention anything that could change the code shown. Does anyone know how to achieve this? Is the Apple Intelligence documentation lacking these features, and will it be completed soon? Will these features even be open to developers?
Replies: 1 · Boosts: 0 · Views: 289 · Activity: Jul ’25
Does ImageRequestHandler(data:) include depth data from AVCapturePhoto?
Hi all, I'm capturing a photo using AVCapturePhotoOutput, and I've set:

let photoSettings = AVCapturePhotoSettings()
photoSettings.isDepthDataDeliveryEnabled = true

Then I create the handler like this:

let data = photo.fileDataRepresentation()
let handler = try ImageRequestHandler(data: data, orientation: .right)

Now I'm wondering: if depth data delivery is enabled, is it actually included and used when I pass the Data to ImageRequestHandler? Or do I need to explicitly pass the depth data using the other initializer?

let handler = try ImageRequestHandler(
    cvPixelBuffer: photo.pixelBuffer!,
    depthData: photo.depthData,
    orientation: .right
)

In short: does ImageRequestHandler(data:) make use of embedded depth info from AVCapturePhoto.fileDataRepresentation(), or are the pixel buffer and explicit depth data required? Thanks for any clarification!
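For context while the question stands, a capture-side sketch of the depth prerequisites using standard AVFoundation API (the session setup and capture delegate are assumed, not from the post):

// Depth delivery must be enabled on the output before it can be requested per photo.
let output = AVCapturePhotoOutput()
// ...add `output` to a configured AVCaptureSession with a depth-capable camera first...
output.isDepthDataDeliveryEnabled = output.isDepthDataDeliverySupported

let settings = AVCapturePhotoSettings()
settings.isDepthDataDeliveryEnabled = output.isDepthDataDeliveryEnabled
settings.embedsDepthDataInPhoto = true  // request that depth be embedded in fileDataRepresentation()
output.capturePhoto(with: settings, delegate: photoCaptureDelegate)  // `photoCaptureDelegate` is assumed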
Replies: 1 · Boosts: 0 · Views: 264 · Activity: Jul ’25
UI Guidelines for Apple Intelligence?
Are there any guidelines for using Foundation Models to generate text for users in response to some canned queries? Should we use a special icon or text to let the user know that Apple Intelligence is generating the text? Should there be a disclaimer like "Apple Intelligence can make mistakes, please check for accuracy," etc.?
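For reference, a minimal sketch of the kind of generation being asked about, assuming the FoundationModels framework on the OS 26 SDKs; how (or whether) to label the output in the UI is exactly the open question:

import FoundationModels

func answerCannedQuery(_ prompt: String) async throws -> String {
    // Bail out if the on-device model isn't available on this device or OS version.
    guard case .available = SystemLanguageModel.default.availability else {
        return "Apple Intelligence is not available on this device."
    }
    let session = LanguageModelSession()
    let response = try await session.respond(to: prompt)
    return response.content  // the caller decides how to attribute or disclaim this text
}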
Replies: 1 · Boosts: 0 · Views: 668 · Activity: Sep ’25
Image Playground Error: Unable to Generate Images Using externalProvider Style
I'm working on generating images using Image Playground. The code works fine for other styles but fails when using an external provider. I don't see any other requirements mentioned in the documentation: https://developer.apple.com/documentation/imageplayground/imageplaygroundstyle/externalprovider?changes=_2 Has anyone else encountered a similar issue? The error message is also not very helpful; it simply states that the creation failed. Note: I have ChatGPT Plus enabled, and image generation using the ChatGPT styles works fine in the Playground app. Here's the relevant code snippet:

do {
    let creator = try await ImageCreator()
    let concept = ImagePlaygroundConcept.text("Love")
    let images = creator.images(for: [concept], style: .externalProvider, limit: 1)
    for try await image in images {
        // Handle image
        break
    }
} catch {
    // Handle error
}

I'm using the iOS 26 RC, and when I print creator.availableStyles, it doesn't include the externalProvider style:

[ImagePlayground.ImagePlaygroundStyle(id: "animation", _representationInfo: nil),
 ImagePlayground.ImagePlaygroundStyle(id: "emoji", _representationInfo: nil),
 ImagePlayground.ImagePlaygroundStyle(id: "illustration", _representationInfo: nil),
 ImagePlayground.ImagePlaygroundStyle(id: "sketch", _representationInfo: nil),
 ImagePlayground.ImagePlaygroundStyle(id: "messages-background", _representationInfo: nil)]
Replies: 1 · Boosts: 0 · Views: 892 · Activity: Sep ’25
Visual Intelligence API SemanticContentDescriptor labels are empty
I'm trying to use Apple's new Visual Intelligence API for recommending content through screenshot image search. The problem I've encountered is that the SemanticContentDescriptor labels are either completely empty or super misleading, making it impossible to query for similar content in my app. Even the closest matching example was inaccurate, returning a single label ["cardigan"] for a Supreme T-shirt. I see other apps using this API, Etsy for example, and I'm wondering if they're using the input pixel buffer to query for similar content rather than using the labels. If anyone has had a similar experience, or knows something that wasn't called out in the documentation, please lmk! Thanks.
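The general shape of the integration, as a sketch based on the visual intelligence flow shown at WWDC25 (the entity type and lookup calls are placeholders; whether to rely on the labels or the pixel buffer is exactly the open question here):

import AppIntents

struct ProductIntentValueQuery: IntentValueQuery {
    // Visual intelligence hands the app a SemanticContentDescriptor for the selected screenshot region.
    func values(for input: SemanticContentDescriptor) async throws -> [ProductEntity] {
        // Labels can be sparse or off-target, as described above...
        let labelMatches = try await ProductSearch.lookup(labels: input.labels)    // placeholder search service
        if !labelMatches.isEmpty { return labelMatches }
        // ...so falling back to the captured pixels may work better.
        guard let pixels = input.pixelBuffer else { return [] }
        return try await ProductSearch.lookup(imagePixelBuffer: pixels)            // placeholder search service
    }
}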
Replies: 1 · Boosts: 0 · Views: 390 · Activity: Oct ’25
tensorflow 2.20 broken support
Hi, testing the latest tensorflow-metal plugin with TensorFlow 2.20 doesn't work.

Python: Python 3.12.11 (main, Jun 3 2025, 15:41:47) [Clang 17.0.0 (clang-1700.0.13.3)] on darwin

A simple test shows the error:

>>> import tensorflow as tf
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow/__init__.py", line 438, in <module>
    _ll.load_library(_plugin_dir)
  File "/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow/python/framework/load_library.py", line 151, in load_library
    py_tf.TF_LoadLibrary(lib)
tensorflow.python.framework.errors_impl.NotFoundError: dlopen(/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/libmetal_plugin.dylib, 0x0006): Library not loaded: @rpath/_pywrap_tensorflow_internal.so
  Referenced from: <8B62586B-B082-3113-93AB-FD766A9960AE> /Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/libmetal_plugin.dylib
  Reason: tried: '/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/../_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/_pywrap_tensorflow_internal.so' (no such file), '/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/../_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/_pywrap_tensorflow_internal.so' (no such file), '/opt/homebrew/lib/_pywrap_tensorflow_internal.so' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/opt/homebrew/lib/_pywrap_tensorflow_internal.so' (no such file)

>>> tf.config.experimental.list_physical_devices('GPU')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'tf' is not defined

I worked around this error by copying _pywrap_tensorflow_internal.so to where it's searched for:

1) mkdir /Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/../_solib_darwin_arm64
2) mkdir /Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/../_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/
3) cp /Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so /Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/../_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/

It then fails with Symbol not found: __ZN10tensorflow28_AttrValue_default_instance_E in libmetal_plugin.dylib.

Full log with import tensorflow as tf:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow/__init__.py", line 438, in <module>
    _ll.load_library(_plugin_dir)
  File "/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow/python/framework/load_library.py", line 151, in load_library
    py_tf.TF_LoadLibrary(lib)
tensorflow.python.framework.errors_impl.NotFoundError: dlopen(/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/libmetal_plugin.dylib, 0x0006): Symbol not found: __ZN10tensorflow28_AttrValue_default_instance_E
  Referenced from: <8B62586B-B082-3113-93AB-FD766A9960AE> /Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/libmetal_plugin.dylib
  Expected in: <2FF91C8B-0CB6-3E66-96B7-092FDF36772E> /Users/obg/npu/venv-tf/lib/python3.12/site-packages/_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/_pywrap_tensorflow_internal.so
Replies: 1 · Boosts: 0 · Views: 684 · Activity: Oct ’25
Xcode 26.1 RC (RC1?) Apple Intelligence using GPT (with account or without) or Sonnet (via OpenRouter) much slower
I didn't run benchmarks before the update, but it seems at least 5x slower. Of course all the LLM work happens on remote servers, so it's not intuitive to me why this should be happening. I updated macOS and Xcode to 26.1 RC at the same time, so I can't even say whether I think it's macOS or Xcode. Before the update, the progress indicator for each piece of code might seem to get stuck at the very end, and toggling between the Navigators and the Coding Assistant in the Xcode UI seemed to refresh the UI and confirm the coding was complete... but now progress races to 50%, then often gets stuck at 75%, well earlier than it used to get stuck. And it seems like something is legitimately processing, not just a UI glitch. I'm wondering if this is somehow tied to visual rendering of the code in the little white window? Cmd-Tab into Xcode seems laggy, and Xcode is pinning a CPU. Why? This is all remote LLM work. MacBook Pro 2021 M1, 64 GB RAM. Went from 26.0.1 to 26.1 RC. Didn't touch any of the betas until RC1.
Replies: 1 · Boosts: 1 · Views: 283 · Activity: Oct ’25
Conforming an existing AppIntent to the photos domain schema
I have an image based app with albums, except in my app, albums are known as galleries. When I tried to conform my existing OpenGalleryIntent with @AssistantIntent(schema: .photos.openAlbum), I had to change my existing gallery parameter to be called target in order to fit the predefined shape of this domain. Previously, my intent was configured to display as "Open Gallery" with the description "Opens the selected Gallery" in the Shortcuts app. After conforming to the photos domain, it displays as "Open Album" with the description "Opens the Provided Album". Shortcuts is ignoring my configured title and description now. My code builds, but with the following build warnings:

Parameter argument title of a required Assistant schema intent parameter target should not be overridden
Implementation of the property title of an AppIntent conforming to AssistantSchemaIntent should not be overridden
Implementation of the property description of an AppIntent conforming to AssistantSchemaIntent should not be overridden

Is my only option to change the concept of a Gallery inside of my app into an Album? I don't want to do this... Conceptually, my app aligns well with what this domain does, but I didn't consider that conforming to the shape of an AI schema intent would also dictate exactly how it's presented to the user. FB16283840
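Roughly the conformance being described, as a sketch (the entity type is assumed; the target parameter name is dictated by the schema):

import AppIntents

@AssistantIntent(schema: .photos.openAlbum)
struct OpenGalleryIntent: OpenIntent {
    // The schema fixes this parameter's name and display metadata; overriding
    // the title or description is what produces the warnings quoted above.
    var target: GalleryEntity   // assumed entity type

    @MainActor
    func perform() async throws -> some IntentResult {
        // Navigate to the gallery in the app's UI.
        return .result()
    }
}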
Replies: 2 · Boosts: 2 · Views: 817 · Activity: Jan ’25
AppIntentsSampleApp Failed to refresh AppShortcut parameters
I've been struggling to add Siri support to an application I have developed. I kept getting this error when I run it on macOS:

Failed to refresh AppShortcut parameters with error: Error Domain=Foundation._GenericObjCError Code=0 "(null)"

So I found AppIntentsSampleApp, downloaded and built it, and I get a similar, but larger, error:

Failed to refresh AppShortcut parameters with error: Error Domain=RBSServiceErrorDomain Code=1 "(originator doesn't have entitlement com.apple.private.xpc.launchd.app-server AND originator doesn't have entitlement com.apple.assertiond.system-shell AND originator doesn't have entitlement com.apple.runningboard.launchprocess)" UserInfo={NSLocalizedFailureReason=(originator doesn't have entitlement com.apple.private.xpc.launchd.app-server AND originator doesn't have entitlement com.apple.assertiond.system-shell AND ...

And it goes on and on. What am I missing? I'm using Xcode 16. I don't see an option to add a Siri framework. I have tried adding both the Intents and App Intents frameworks, which does not seem to make a difference.
Replies: 2 · Boosts: 0 · Views: 150 · Activity: Apr ’25
VNDetectTextRectanglesRequest not detecting text rectangles (includes image)
Hi everyone, I'm trying to use VNDetectTextRectanglesRequest to detect text rectangles in an image. Here's my current code:

guard let cgImage = image.cgImage(forProposedRect: nil, context: nil, hints: nil) else { return }

let textDetectionRequest = VNDetectTextRectanglesRequest { request, error in
    if let error = error {
        print("Text detection error: \(error)")
        return
    }
    guard let observations = request.results as? [VNTextObservation] else {
        print("No text rectangles detected.")
        return
    }
    print("Detected \(observations.count) text rectangles.")
    for observation in observations {
        print(observation.boundingBox)
    }
}
textDetectionRequest.revision = VNDetectTextRectanglesRequestRevision1
textDetectionRequest.reportCharacterBoxes = true

let handler = VNImageRequestHandler(cgImage: cgImage, orientation: .up, options: [:])
do {
    try handler.perform([textDetectionRequest])
} catch {
    print("Vision request error: \(error)")
}

The request completes without error, but no text rectangles are detected — the observations array is empty (count = 0). Here's a sample image I'm testing with: I expected VNTextObservation results, but I'm not getting any. Is there something I'm missing in how this API works? Or could it be a limitation of this request or revision? Thanks for any help!
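Not an answer from the thread, but for comparison, a sketch using the newer VNRecognizeTextRequest, which also reports bounding boxes and may pick up text that the older rectangles request misses (reuses the cgImage from the post above):

import Vision

let recognizeRequest = VNRecognizeTextRequest { request, error in
    guard let observations = request.results as? [VNRecognizedTextObservation] else { return }
    for observation in observations {
        // Each observation carries a normalized bounding box plus recognized string candidates.
        print(observation.boundingBox, observation.topCandidates(1).first?.string ?? "")
    }
}
recognizeRequest.recognitionLevel = .accurate

let recognizeHandler = VNImageRequestHandler(cgImage: cgImage, orientation: .up, options: [:])
do {
    try recognizeHandler.perform([recognizeRequest])
} catch {
    print("Vision request error: \(error)")
}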
Replies: 2 · Boosts: 0 · Views: 147 · Activity: May ’25