Reply to CoreML Inference Acceleration
Instruments is your friend. Check out this WWDC video: https://developer.apple.com/videos/play/wwdc2023/10049. Core ML used to serialize predictions per MLModel instance. In recent OS releases this per-instance lock has been relaxed, but the optimization is often available only with the newer model type (ML Program) and the newer API usage (async predictions). Using Instruments, you can see which activities are serialized and make an informed decision about how to utilize the compute resources.
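As a hedged sketch of the "async predictions" usage mentioned above (iOS 17+/macOS 14+; model and inputs are placeholders), concurrent calls through the async API are where the relaxed per-instance serialization can pay off:

```swift
import CoreML

// Sketch: issue several predictions concurrently via the async
// prediction API. With an ML Program model, Core ML may overlap
// these calls instead of serializing them on one internal lock.
// Results arrive in completion order, not submission order.
func runBatch(model: MLModel,
              inputs: [MLFeatureProvider]) async throws -> [MLFeatureProvider] {
    try await withThrowingTaskGroup(of: MLFeatureProvider.self) { group in
        for input in inputs {
            group.addTask {
                try await model.prediction(from: input,
                                           options: MLPredictionOptions())
            }
        }
        var results: [MLFeatureProvider] = []
        for try await result in group {
            results.append(result)
        }
        return results
    }
}
```

Profiling this under the Core ML template in Instruments should show whether the predictions actually overlap on your model type.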
Topic: Machine Learning & AI SubTopic: Core ML Tags:
Sep ’25
Reply to Crash inside of Vision predictWithCVPixelBuffer - Crashed: com.apple.VN.detectorSyncTasksQueue.VNCoreMLTransformer
On the crash, can you tell what Vision.framework is doing? On my iOS 18.5 device (I don't have an iPad, but I suppose it's similar, if not the same), the crash point (-[VNCoreMLModel predictWithCVPixelBuffer:options:error:] + 148) is attempting to access the .model property of the VNCoreMLModel object.

(lldb) dis -s 0x00000001bc61626c -e 0x00000001bc61626c+148
Vision`-[VNCoreMLModel predictWithCVPixelBuffer:options:error:]:
:
;; x0 is self, which is the `VNCoreMLModel` object.
0x1bc616290 <+36>: mov x23, x0
:
;; Accessing the `.model` property of the `VNCoreMLModel` object.
0x1bc6162f8 <+140>: mov x0, x23
0x1bc6162fc <+144>: bl 0x1bc857f60 ; objc_msgSend$model
;; Return address
0x1bc616300 <+148>: bl 0x1bdf7d200

If you see the same, I would suspect the MLModel object stored in CoreMLModelContainer is somehow broken, perhaps over-released. (It's hard to tell whether it's a framework bug or the application's bug.)
Topic: Machine Learning & AI SubTopic: Core ML Tags:
Jul ’25
Reply to Is there an API to check if a Core ML compiled model is already cached?
No, there is no such API as of iOS 18. The model cache is reasonably sticky as long as the absolute path of the compiled model (.mlmodelc) stays the same. However, on an app update, the system often assigns a new app sandbox directory, which changes the path to the model. (I wish Apple would make the cache logic less sensitive to such a change.) If it is an option, I would run the background pre-load right after an app update, because that seems to be a major cause of cache invalidation.
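One way to keep the path as stable as possible is to compile the model yourself and park the .mlmodelc at a location you control. A minimal sketch, assuming a bundled .mlmodel and a hypothetical file name (note the container path can still change across updates, which is the limitation described above):

```swift
import CoreML
import Foundation

// Sketch: compile once, then keep the compiled .mlmodelc at a
// stable Application Support path instead of the temporary
// directory that compileModel(at:) writes to.
func stableCompiledModelURL(for modelURL: URL) throws -> URL {
    let fm = FileManager.default
    let support = try fm.url(for: .applicationSupportDirectory,
                             in: .userDomainMask,
                             appropriateFor: nil,
                             create: true)
    // "MyModel.mlmodelc" is a placeholder name.
    let destination = support.appendingPathComponent("MyModel.mlmodelc")
    if !fm.fileExists(atPath: destination.path) {
        let temp = try MLModel.compileModel(at: modelURL)
        try fm.moveItem(at: temp, to: destination)
    }
    return destination
}
```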
Topic: Machine Learning & AI SubTopic: Core ML Tags:
May ’25
Reply to Regression in EnumeratedShaped support in recent MacOS release
Could it be that your model uses an enumeratedShapes message with just a single entry for the "static shape" input, instead of a normal static shape? According to the coremltools documentation (https://apple.github.io/coremltools/docs-guides/source/flexible-inputs.html): "For a multi-input model, only one of the inputs can be marked with EnumeratedShapes; the rest must have fixed single shapes. If you require multiple inputs to be flexible, set the range for each dimension."
Topic: Machine Learning & AI SubTopic: Core ML Tags:
May ’25
Reply to A specific mlmodelc model runs on iPhone 15, but not on iPhone 16
I think it's worth filing a report with Apple through Feedback Assistant, with the model attached. Also, you may want to tweak the .specializationStrategy setting (https://developer.apple.com/documentation/coreml/mloptimizationhints-swift.struct/specializationstrategy-swift.property), because it affects how the Apple Neural Engine specializes your model into hardware-specific executable code.
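For reference, a minimal sketch of setting that hint at load time (iOS 17.4+/macOS 14.4+; the model URL is a placeholder, and whether .fastPrediction helps on the affected device is something to A/B test):

```swift
import CoreML

// Sketch: opt into the .fastPrediction specialization strategy
// via MLOptimizationHints when loading the model.
func loadModel(at url: URL) throws -> MLModel {
    let configuration = MLModelConfiguration()
    configuration.optimizationHints.specializationStrategy = .fastPrediction
    return try MLModel(contentsOf: url, configuration: configuration)
}
```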
Topic: Machine Learning & AI SubTopic: Core ML Tags:
May ’25
Reply to KV-Cache MLState Not Updating During Prefill Stage in Core ML LLM Inference
The Core ML framework is agnostic to an LLM's specific setup, such as the pre-filling step vs. single-token generation steps. It just executes whatever the model (ML Program) says. Given that the state (KV-cache) is updated in the single-token generation steps, I would suspect that the ML Program for the prompt step has a bug where it does not update the KV-cache. Another thing worth verifying: you need to use the same MLState instance in both pre-filling and token generation. As for your questions: (1) Not that I know of. (2) No. Each prediction call fully completes the state update. After the pre-filling prediction call, you can examine the contents of the state with withMultiArray(for:), but I suppose you already did that.
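The pattern above can be sketched as follows, assuming a stateful ML Program whose state tensor is named "keyCache" (a hypothetical name; use whatever your model declares):

```swift
import CoreML

// Sketch: one MLState instance must carry through pre-fill and
// every decode step; each prediction call fully updates it.
func generate(model: MLModel,
              prefillInput: MLFeatureProvider,
              stepInput: MLFeatureProvider) throws {
    let state = model.makeState()

    // Pre-fill: the state should be fully updated when this returns.
    _ = try model.prediction(from: prefillInput, using: state)

    // Inspect the KV-cache contents right after pre-fill.
    state.withMultiArray(for: "keyCache") { array in
        print("keyCache[0] after prefill:", array[0])
    }

    // Token generation re-uses the SAME state instance.
    _ = try model.prediction(from: stepInput, using: state)
}
```

If the printout after pre-fill is still all zeros, that points at the ML Program's prompt-step function rather than at the framework.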
Topic: Machine Learning & AI SubTopic: Core ML Tags:
May ’25
Reply to Memory stride warning when loading CoreML models on ANE
This particular message appears to come from one of the underlying private frameworks. (You can check the metadata of the message using the `log stream` command.) You can safely ignore it unless you see a real problem.
Jul ’25
Reply to ActivityClassifier doesn't classify movement
I don't see any obvious error in the code, but is it OK to use zeros as the initial self.stateIn value? (The default initializer of MLMultiArray does that.) Some models require the initial state values to be non-zero (e.g. random values between 0.0 and 1.0).
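If you want to try that, here is a minimal sketch of building a randomly initialized state array (the shape and data type are placeholders; match them to your model's stateIn description):

```swift
import CoreML

// Sketch: fill a state input with random values in 0.0..<1.0
// instead of the all-zeros default.
func makeRandomState(shape: [NSNumber]) throws -> MLMultiArray {
    let array = try MLMultiArray(shape: shape, dataType: .double)
    for i in 0..<array.count {
        array[i] = NSNumber(value: Double.random(in: 0.0..<1.0))
    }
    return array
}
```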
Topic: Machine Learning & AI SubTopic: Core ML Tags:
Jul ’25
Reply to Is there an API to check if a Core ML compiled model is already cached?
Sorry, I missed the part about "even when the compiled model path remains unchanged after an app update". How did you verify that the model path stayed the same through the app update? We should look at the URL object's file-system representation (https://developer.apple.com/documentation/foundation/url/withunsafefilesystemrepresentation(_:)).
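For example, a small sketch of logging the canonical path so it can be compared before and after the update:

```swift
import Foundation

// Sketch: print the exact file-system path behind a URL.
// withUnsafeFileSystemRepresentation hands us the canonical
// C-string path the file system actually sees.
func logCanonicalPath(of url: URL) {
    url.withUnsafeFileSystemRepresentation { cPath in
        if let cPath {
            print("model path:", String(cString: cPath))
        }
    }
}
```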
Topic: Machine Learning & AI SubTopic: Core ML Tags:
May ’25
Reply to What is the proper way to integrate a CoreML app into Xcode
Did you verify that the input image (the CVPixelBuffer converted from UIImage using your toCVPixelBuffer function) looks correct? You can visualize the contents of a CVPixelBuffer using the Xcode debugger's Quick Look feature.
Topic: Machine Learning & AI SubTopic: Core ML Tags:
May ’25