Is the face and body detection service in the Vision framework a local model or a cloud model? Is there a performance report?
https://developer.apple.com/documentation/vision
Hi there,
I have a custom keypoint detection model and want to use it via Vision's CoreMLRequest API. Here are some complications for input and output:
For input: my model expects a 512x512 image, which would be resized and padded from a 1920x1080 frame. I use the .scaleToFit option, but can I also specify the color used for padding?
For output:
My model outputs a CoreMLFeatureValueObservation. Can I have it output in a format Vision recognizes, such as joints/keypoints?
If my model is able to output in a format Vision recognizes, would Vision take care of restoring the coordinates back to the original frame (undoing the padding)? If not, how do I restore them from the .scaleToFit option?
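For reference, this is roughly how I'd undo the letterboxing by hand if Vision doesn't do it for me (a sketch that assumes the padding is centered, which I haven't verified):

import CoreGraphics

/// Maps a point given in normalized coordinates of the padded 512x512 model input
/// back to normalized coordinates of the original frame.
/// Assumes .scaleToFit scales to fit and centers the image with symmetric padding.
func unpad(_ point: CGPoint,
           originalSize: CGSize = CGSize(width: 1920, height: 1080),
           modelSide: CGFloat = 512) -> CGPoint {
    let scale = min(modelSide / originalSize.width, modelSide / originalSize.height)
    let scaledSize = CGSize(width: originalSize.width * scale,
                            height: originalSize.height * scale)
    let padX = (modelSide - scaledSize.width) / 2
    let padY = (modelSide - scaledSize.height) / 2

    // From normalized model-input space to padded pixel space,
    // remove the padding, undo the scale, then renormalize to the original frame.
    let x = (point.x * modelSide - padX) / scale / originalSize.width
    let y = (point.y * modelSide - padY) / scale / originalSize.height
    return CGPoint(x: x, y: y)
}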
Best,
I've downloaded the Xcode beta and run the sample project "FoundationModelsTripPlanner", but I got this error when trying to generate the response:
InferenceError::inferenceFailed::Error Domain=com.apple.UnifiedAssetFramework Code=5000 "There are no underlying assets (neither atomic instance nor asset roots) for consistency token for asset set com.apple.modelcatalog" UserInfo={NSLocalizedFailureReason=There are no underlying assets (neither atomic instance nor asset roots) for consistency token for asset set com.apple.modelcatalog}
Device: M1 Pro
Question:
Is it because the M1 doesn't support this feature?
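In case it helps the diagnosis, this is the minimal availability check I'm running before creating a session (a sketch; I'm assuming the availability property is the right thing to inspect for this error):

import FoundationModels

/// Minimal check run before creating a LanguageModelSession.
func checkModelAvailability() {
    let model = SystemLanguageModel.default
    if case .unavailable(let reason) = model.availability {
        print("Model unavailable: \(reason)")
    } else {
        print("Model is available")
    }
}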
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Hi Apple team,
When using AppShortcutsProvider, I hit the hard limit:
Each app may have at most 10 App Shortcuts.
This feels limiting for apps that offer multiple workflows and would benefit from deeper Siri integration.
Could this cap be raised — ideally to 30 — to support broader use of AppIntents, enhance Siri automation, and unlock more system-level capabilities?
AppShortcuts are a fantastic tool. Increasing the limit would make them even more powerful.
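For context, the cap applies to the entries listed in the provider, roughly like this (a minimal sketch; the intent is hypothetical):

import AppIntents

// Hypothetical intent, just to illustrate where the cap applies.
struct StartWorkflowIntent: AppIntent {
    static var title: LocalizedStringResource = "Start Workflow"
    func perform() async throws -> some IntentResult { .result() }
}

struct MyAppShortcuts: AppShortcutsProvider {
    static var appShortcuts: [AppShortcut] {
        AppShortcut(
            intent: StartWorkflowIntent(),
            phrases: ["Start a workflow in \(.applicationName)"],
            shortTitle: "Start Workflow",
            systemImageName: "bolt"
        )
        // ...at most 10 AppShortcut entries are allowed here per app today.
    }
}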
Thanks!
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
Tags:
Shortcuts
App Intents
Apple Intelligence
I get the following dyld error on an iPad Pro with Xcode 26 beta 4:
Symbol not found: _$s16FoundationModels20LanguageModelSessionC7prewarm12promptPrefixyAA6PromptVSg_tF
Any advice?
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Hello
It seems the Content Tagging model doesn't obey when I define the type of tag I want in the instructions parameter; the output is always the main topics.
The only way to get other types of tags, like emotions, is using Generable + Guided types. The documentation says that using instructions is recommended but not mandatory.
Maybe I'm setting up the instructions wrongly, but take a look at the attached snapshot. I copied the definition of tagging emotions from the official documentation. The upper example uses Generable and it works, but in the example at the bottom I set the same description of emotions as the instructions and it doesn't work. I tried other statements, more and less verbose, and it never outputs emotions.
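For comparison, this is the Generable route that does work for me (a sketch; the type and property names are mine):

import FoundationModels

@Generable
struct EmotionTags {
    @Guide(description: "The most important emotions expressed in the input text")
    let emotions: [String]
}

func tagEmotions(in text: String) async throws -> [String] {
    // Content-tagging use case of the system model, decoded into the Generable type above.
    let session = LanguageModelSession(model: SystemLanguageModel(useCase: .contentTagging))
    let response = try await session.respond(to: text, generating: EmotionTags.self)
    return response.content.emotions
}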
Could you provide an example using instructions where it works? Is the current version of the model not working with instructions?
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Is there an API that allows iOS app developers to leverage Apple Foundation Models together with the ChatGPT account a user has signed into via the Apple Intelligence extension?
I'm trying to provide a real-time question feature backed by ChatGPT through the user's logged-in extension account, while leveraging Apple Intelligence's LLM. Is there an API that also covers the extension login account?
Good morning all, has anyone encountered the issue of Siri returning to her original user interface on iOS 26? I'm trying to figure out the cause. I've sent feedback via the Feedback app. Just seeing if anyone else has the same issue.
Hi everyone,
I am using Xcode 16.4 on macOS Sequoia 15.5 with Apple Intelligence turned on.
The following code gives the error message in the title:
import NaturalLanguage

@available(iOS 18.0, *)
func testSystemModel() {
    let model = SystemLanguageModel.default
    print(model)
}
What am I missing?
Hi! I'm trying to use the ImagePlayground API in SwiftUI with the .imagePlaygroundSheet modifier. However, when the sheet is shown (in the preview or in the simulator) it displays the following message: "Image Playground is not available. Image Playground is not available on this iPhone.".
I'm using an iPhone 16 Pro with iOS 18.3.1 in the Xcode (16.2) Simulator.
Anyone else having this problem? How can I fix it?
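For what it's worth, this is roughly how I'm presenting the sheet and gating it on the supportsImagePlayground environment value (a sketch of my setup; I'd expect that flag to be false whenever the sheet shows this message, but I haven't confirmed it):

import SwiftUI
import ImagePlayground

struct GenerateImageButton: View {
    // Reflects whether Image Playground is supported on the current device.
    @Environment(\.supportsImagePlayground) private var supportsImagePlayground
    @State private var isShowingPlayground = false

    var body: some View {
        Button("Generate") { isShowingPlayground = true }
            .disabled(!supportsImagePlayground)
            .imagePlaygroundSheet(isPresented: $isShowingPlayground) { url in
                // Handle the generated image URL here.
                print("Generated image at \(url)")
            }
    }
}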
Hi,
I'm testing DockKit with a very simple setup:
I use VNDetectFaceRectanglesRequest to detect a face and then call dockAccessory.track(...) using the detected bounding box.
The stand is correctly docked (state == .docked) and dockAccessory is valid.
I'm calling .track(...) with a single observation and valid CameraInformation (including size, device, orientation, etc.). No errors are thrown.
To monitor this, I added a logging utility – track(...) is being called 10–30 times per second, as recommended in the documentation.
However: the stand does not move at all.
There is no visible reaction to the tracking calls.
Is there anything I'm missing or doing wrong?
Is VNDetectFaceRectanglesRequest supported for DockKit tracking, or are there hidden requirements?
Would really appreciate any help or pointers – thanks!
That's my complete code:
extension VideoFeedViewController: AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        guard let frame = CMSampleBufferGetImageBuffer(sampleBuffer) else {
            return
        }
        detectFace(image: frame)

        func detectFace(image: CVPixelBuffer) {
            let faceDetectionRequest = VNDetectFaceRectanglesRequest() { vnRequest, error in
                guard let results = vnRequest.results as? [VNFaceObservation] else {
                    return
                }
                guard let observation = results.first else {
                    return
                }
                let boundingBoxHeight = observation.boundingBox.size.height * 100

                #if canImport(DockKit)
                if let dockAccessory = self.dockAccessory {
                    Task {
                        try? await trackObservation(
                            observation.boundingBox,
                            dockAccessory,
                            frame,
                            sampleBuffer
                        )
                    }
                }
                #endif
            }

            let imageResultHandler = VNImageRequestHandler(cvPixelBuffer: image, orientation: .up)
            try? imageResultHandler.perform([faceDetectionRequest])

            func combineBoundingBoxes(_ box1: CGRect, _ box2: CGRect) -> CGRect {
                let minX = min(box1.minX, box2.minX)
                let minY = min(box1.minY, box2.minY)
                let maxX = max(box1.maxX, box2.maxX)
                let maxY = max(box1.maxY, box2.maxY)

                let combinedWidth = maxX - minX
                let combinedHeight = maxY - minY
                return CGRect(x: minX, y: minY, width: combinedWidth, height: combinedHeight)
            }

            #if canImport(DockKit)
            func trackObservation(_ boundingBox: CGRect, _ dockAccessory: DockAccessory, _ pixelBuffer: CVPixelBuffer, _ cmSampleBuffer: CMSampleBuffer) throws {
                // Count this call
                TrackMonitor.shared.trackCalled()

                // Vision bounding boxes have a bottom-left origin; flip to top-left.
                let invertedBoundingBox = CGRect(
                    x: boundingBox.origin.x,
                    y: 1.0 - boundingBox.origin.y - boundingBox.height,
                    width: boundingBox.width,
                    height: boundingBox.height
                )

                guard let device = captureDevice else {
                    fatalError("Camera not available")
                }

                let size = CGSize(width: Double(CVPixelBufferGetWidth(pixelBuffer)),
                                  height: Double(CVPixelBufferGetHeight(pixelBuffer)))

                var cameraIntrinsics: matrix_float3x3? = nil
                if let cameraIntrinsicsUnwrapped = CMGetAttachment(
                    sampleBuffer,
                    key: kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix,
                    attachmentModeOut: nil
                ) as? Data {
                    cameraIntrinsics = cameraIntrinsicsUnwrapped.withUnsafeBytes { $0.load(as: matrix_float3x3.self) }
                }

                Task {
                    let orientation = getCameraOrientation()
                    let cameraInfo = DockAccessory.CameraInformation(
                        captureDevice: device.deviceType,
                        cameraPosition: device.position,
                        orientation: orientation,
                        cameraIntrinsics: cameraIntrinsics,
                        referenceDimensions: size
                    )
                    let observation = DockAccessory.Observation(
                        identifier: 0,
                        type: .object,
                        rect: invertedBoundingBox
                    )
                    let observations = [observation]

                    guard let image = CMSampleBufferGetImageBuffer(sampleBuffer) else {
                        print("no image")
                        return
                    }

                    do {
                        try await dockAccessory.track(observations, cameraInformation: cameraInfo)
                    } catch {
                        print(error)
                    }
                }
            }
            #endif

            func clearDrawings() {
                boundingBoxLayer?.removeFromSuperlayer()
                boundingBoxSizeLayer?.removeFromSuperlayer()
            }
        }
    }
}

@MainActor
private func getCameraOrientation() -> DockAccessory.CameraOrientation {
    switch UIDevice.current.orientation {
    case .portrait:
        return .portrait
    case .portraitUpsideDown:
        return .portraitUpsideDown
    case .landscapeRight:
        return .landscapeRight
    case .landscapeLeft:
        return .landscapeLeft
    case .faceDown:
        return .faceDown
    case .faceUp:
        return .faceUp
    default:
        return .corrected
    }
}
Hi, I just upgraded to macOS Tahoe Beta 2 and now I'm getting this error when I try to initialize my Foundation Models' session:
Error Resource (Local Sanitizer Asset) unavailable error.
import FoundationModels

#Playground {
    let session = LanguageModelSession()
    do {
        let result = try await session.respond(to: "Tell me 3 colors")
        print(result.content)
    } catch {
        print("Error", error)
    }
}
I couldn't find any resource guiding me on how to solve this. Any help/workaround?
Thank you!
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Our app is downloading a zip of an .mlpackage file, which is then compiled into an .mlmodelc file using MLModel.compileModel(at:). This model is then run using a VNCoreMLRequest.
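For reference, the flow is roughly this (a simplified sketch with my own placeholder names, not our exact code):

import CoreML
import CoreVideo
import Vision

/// Simplified sketch of the download/compile/run flow; caching and error handling omitted.
func runModel(packageURL: URL, on pixelBuffer: CVPixelBuffer) async throws {
    // Compile the downloaded .mlpackage into an .mlmodelc on device.
    let compiledURL = try await MLModel.compileModel(at: packageURL)
    let mlModel = try MLModel(contentsOf: compiledURL)
    let vnModel = try VNCoreMLModel(for: mlModel)

    let request = VNCoreMLRequest(model: vnModel) { request, error in
        // The users' failure surfaces here, when the handler performs the request.
        if let error { print("Request failed:", error) }
    }

    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try handler.perform([request])
}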
Two users – and this after a very small rollout – are reporting issues running the VNCoreMLRequest. The error message from their logs:
Error Domain=com.apple.CoreML Code=0 "Failed to build the model execution plan using a model architecture file '/private/var/mobile/Containers/Data/Application/F93077A5-5508-4970-92A6-03A835E3291D/Documents/SKDownload/Identify-image-iOS/mobile_img_eu_v210.mlmodelc/model.mil' with error code: -5."
The URL there is to a file inside the compiled model. The error is happening when the perform function of VNImageRequestHandler is run. (i.e. the model compiled without an error.)
Has anyone else seen this issue? It's only picked up in a few web results, and none of them are directly relevant or have a fix.
I know that a Core ML error Code=0 is a generic error, but does anyone know what error code -5 is? I'm not even sure which framework it's coming from.
Hi, recently I tried to fine-tune the Gemma-2-2b MLX model on my MacBook (24 GB UMA). The code started running; after a few seconds I saw the swap size reach 50 GB and RAM usage around 23 GB, and then it stopped. I ran Gemma-2-2b (CUDA) on Colab, where it occupied 27 GB on an A100 GPU and worked fine, and I didn't experience the swap issue there.
Now my question: if my UMA were more than 27 GB, would I also have avoided the swap-disk issue?
Thanks.
Topic:
Machine Learning & AI
SubTopic:
General
Hi all, I am interested in unlocking unique applications with the new foundational models. I have a few questions regarding the availability of the following features:
Image input: The June 2025 update mentions "image" 44 times (https://machinelearning.apple.com/research/apple-foundation-models-2025-updates); however, I can't seem to find any information about using images as the input/prompt for the foundation models. When will this be available? I understand that there are existing Vision ML APIs, but I want image input into a multimodal on-device LLM (VLM) instead, for features like "Which player is holding the ball in the image?", etc. (image understanding).
Cloud foundation model: when will this be available?
Thanks!
Clement :)
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Tags:
Vision
Machine Learning
Core ML
Apple Intelligence
I am using the iPhone 17 Pro simulator that was included with Xcode 26.0.1. My Mac is running macOS 26. When I started the simulator for the first time I got the "Ready for Apple Intelligence" notification but when I access Image Playground in my app it says it is not available on this iPhone. Any solution to get it working on the simulator?
Pretty much as per the title, and I suspect I know the answer. Given that Foundation Models run on device, is it possible to use the Foundation Models framework inside of a DeviceActivityReport? I've been tinkering with it, and all I get is errors and "Sandbox restrictions". Am I missing something? It seems like a missed trick not to be able to utilise on-device AI/ML alongside other frameworks.
Hi team,
I’m exploring the Model Context Protocol (MCP), which is used to connect LLMs/AI agents to external tools in a structured way. It's becoming a common standard for automation and agent workflows.
Before I go deeper, I want to confirm:
Does Apple currently provide any official MCP server, API surface, or SDK on iOS/macOS?
From what I see, only third-party MCP servers exist for iOS simulators/devices, and Apple’s own frameworks (Foundation Models, Apple Intelligence) don’t expose MCP endpoints.
Is there any chance Apple might introduce MCP support—or publish recommended patterns for safely integrating MCP inside apps or developer tools?
I would like to see if I can share my app's data to the MCP server to enable other third-party apps/services to integrate easily
Topic:
Machine Learning & AI
SubTopic:
General
On macOS Tahoe 26.0 and iOS 26.0 (23A5287g) (a real device, not the simulator), with Xcode 26.0 beta 3 (17A5276g).
I followed the tutorial "Testing your asset packs locally". To start the test server I use this command line: xcrun ba-serve --host 192.168.0.109 test.aar
The terminal displays:
Loading asset packs…
Loading the asset pack at "test.aar"…
Listening on port 63125…
Choose an identity in the panel to continue.
Listening on port 63125…
When running the project, Xcode reports an error: Download failed: Could not connect to the server. When I use Safari on the iPhone to visit https://192.168.0.109:63125, the page displays "Hello, world!"
There are too few error messages in both of the above issues, and I have no idea what the specific reasons are. I hope someone can offer some guidance. Best Regards.
{
    "assetPackID": "testVideoAssetPack",
    "downloadPolicy": {
        "prefetch": {
            "installationEventTypes": ["firstInstallation", "subsequentUpdate"]
        }
    },
    "fileSelectors": [
        {
            "file": "video/test.mp4"
        }
    ],
    "platforms": [
        "iOS"
    ]
}
This is my Manifest.json.
I'm using a custom Create ML model to classify the movement of a user's hand in a game.
The classifier has 3 different spell movements, but my code constantly predicts all of them at an equal 1/3 probability regardless of movement, which leads me to believe my code isn't correct (as opposed to the model), since in Create ML at least the same model gives me a heavily weighted prediction.
My code is below.
After adding debug prints everywhere, all the data looks good to me and closely matches my test CSV data.
So I'm thinking my issue must be in the setup of my model code?
/// Feeds samples into the model and keeps a sliding window of the last N frames.
final class WandGestureStreamer {
    static let shared = WandGestureStreamer()

    private let model: SpellActivityClassifier
    private var samples: [Transform] = []
    private let windowSize = 100 // number of frames the model expects

    /// RNN hidden state passed between inferences
    private var stateIn: MLMultiArray
    /// Last transform dropped from the window for continuity
    private var lastDropped: Transform?

    private init() {
        let config = MLModelConfiguration()
        self.model = try! SpellActivityClassifier(configuration: config)

        // Initialize stateIn to the model’s required shape
        let constraint = self.model.model.modelDescription
            .inputDescriptionsByName["stateIn"]!
            .multiArrayConstraint!
        self.stateIn = try! MLMultiArray(shape: constraint.shape, dataType: .double)
    }

    /// Call once per frame with the latest wand position (or any feature vector).
    func appendSample(_ sample: Transform) {
        samples.append(sample)
        // drop oldest frame if over capacity, retaining it for delta at window start
        if samples.count > windowSize {
            lastDropped = samples.removeFirst()
        }
    }

    func classifyIfReady(threshold: Double = 0.6) -> (label: String, confidence: Double)? {
        guard samples.count == windowSize else { return nil }
        do {
            let input = try makeInput(initialState: stateIn)
            let output = try model.prediction(input: input)

            // Save state for continuity
            stateIn = output.stateOut

            let best = output.label
            let conf = output.labelProbability[best] ?? 0

            // If you’ve recognized a gesture with high confidence:
            if conf > threshold {
                return (best, conf)
            } else {
                return nil
            }
        } catch {
            print("Error", error.localizedDescription, error)
            return nil
        }
    }

    /// Constructs a SpellActivityClassifierInput from recorded wand transforms.
    func makeInput(initialState: MLMultiArray) throws -> SpellActivityClassifierInput {
        let count = samples.count as NSNumber
        let shape = [count]
        let timeArr = try MLMultiArray(shape: shape, dataType: .double)
        let dxArr = try MLMultiArray(shape: shape, dataType: .double)
        let dyArr = try MLMultiArray(shape: shape, dataType: .double)
        let dzArr = try MLMultiArray(shape: shape, dataType: .double)
        let rwArr = try MLMultiArray(shape: shape, dataType: .double)
        let rxArr = try MLMultiArray(shape: shape, dataType: .double)
        let ryArr = try MLMultiArray(shape: shape, dataType: .double)
        let rzArr = try MLMultiArray(shape: shape, dataType: .double)

        for (i, sample) in samples.enumerated() {
            let previousSample = i > 0 ? samples[i - 1] : lastDropped
            let model = WandMovementRecording.DataModel(transform: sample, previous: previousSample)
            // print("model", model)

            timeArr[i] = NSNumber(value: model.timestamp)
            dxArr[i] = NSNumber(value: model.dx)
            dyArr[i] = NSNumber(value: model.dy)
            dzArr[i] = NSNumber(value: model.dz)

            let rot = model.rotation
            rwArr[i] = NSNumber(value: rot.w)
            rxArr[i] = NSNumber(value: rot.x)
            ryArr[i] = NSNumber(value: rot.y)
            rzArr[i] = NSNumber(value: rot.z)
        }

        return SpellActivityClassifierInput(
            dx: dxArr, dy: dyArr, dz: dzArr,
            rotation_w: rwArr, rotation_x: rxArr, rotation_y: ryArr, rotation_z: rzArr,
            timestamp: timeArr,
            stateIn: initialState
        )
    }
}