Reply to Does using Vision API offline to label a custom dataset for Core ML training violate DPLA?
I've done something similar — used Vision framework outputs to build training labels for a custom audio-visual alignment model. As long as you're using the API as documented and shipping your own model (not redistributing Apple's), you're fine. The DPLA restriction is about reverse-engineering the framework internals, not about using its outputs as training signal. Never had App Review pushback on this.
Topic: Machine Learning & AI SubTopic: Core ML
3w
Reply to AI framework usage without user session
We ran CoreML inference from a launch daemon (no user session) for about a year — it works but with caveats. ANE access is unreliable without a session, so you'll likely fall back to CPU/GPU compute units. Vision framework calls that touch CoreGraphics can deadlock if there's no window server connection. Our workaround was forcing .cpuOnly for the daemon path and keeping the GPU/ANE path for the user-facing XPC.
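A minimal sketch of that split, assuming a compiled Core ML model; the session check is simplified to a boolean here, since how a real daemon detects a window-server connection depends on its architecture:

```swift
import CoreML

// Pick compute units based on whether we're running inside a user
// session. From a launch daemon (no session), ANE/GPU access is
// unreliable, so force CPU-only inference on that path.
func inferenceConfiguration(hasUserSession: Bool) -> MLModelConfiguration {
    let config = MLModelConfiguration()
    config.computeUnits = hasUserSession ? .all : .cpuOnly
    return config
}

// Usage (modelURL is a placeholder for your compiled .mlmodelc):
// let model = try MLModel(contentsOf: modelURL,
//                         configuration: inferenceConfiguration(hasUserSession: false))
```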
Topic: Machine Learning & AI SubTopic: General
3w
Reply to Is anyone working on jax-metal?
Still broken as of early 2026 in my testing. For JAX workloads on Apple Silicon I've moved to MLX entirely — the API is different but the Metal backend actually works and gets regular updates. For anything that must stay in JAX, CPU fallback is unfortunately the only reliable path on macOS right now.
Topic: Machine Learning & AI SubTopic: General
3w
Reply to Core Spotlight Semantic Search - still non-functional for 1+ year after WWDC24?
Same experience here — CSSearchableItem with semanticDescription populated, index looks fine in the debug console, but semantic queries return nothing useful. Filed a Feedback last year and got silence. At this point I'm embedding my own vectors via sentence-transformers on CoreML and doing the similarity search manually. More work but at least it actually functions.
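For the manual similarity step, plain cosine similarity over the exported embedding vectors is enough at small scale; this is a generic sketch not tied to any particular embedding model:

```swift
// Cosine similarity between two embedding vectors. At larger scale
// you'd vectorize this with Accelerate, but a plain loop is fine for
// a few thousand indexed items.
func cosineSimilarity(_ a: [Float], _ b: [Float]) -> Float {
    precondition(a.count == b.count, "vectors must have the same dimension")
    var dot: Float = 0, normA: Float = 0, normB: Float = 0
    for i in 0..<a.count {
        dot += a[i] * b[i]
        normA += a[i] * a[i]
        normB += b[i] * b[i]
    }
    let denom = (normA * normB).squareRoot()
    return denom > 0 ? dot / denom : 0
}
```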
3w
Reply to SpeechAnalyzer speech to text wwdc sample app
Hit this in production. The root cause is a locale format mismatch — Locale.current.identifier returns underscores (en_US) but the internal allocation table uses hyphens (en-US). Even after the beta 3 fix I still see it intermittently with en-GB when device region differs from language setting. Skipping installed(locale:) and calling downloadIfNeeded() directly is the safest workaround.
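If you do need to compare identifiers yourself, normalizing to the hyphenated BCP 47 shape first avoids the mismatch. A small sketch — the claim that the internal table stores hyphenated identifiers is inferred from observed behavior, not documented:

```swift
import Foundation

// Normalize a locale identifier to hyphenated BCP 47 form
// ("en_US" -> "en-US") before comparing against asset tables.
// On macOS 13+/iOS 16+ you can also ask Foundation directly via
// Locale.current.identifier(.bcp47).
func bcp47Style(_ identifier: String) -> String {
    identifier.replacingOccurrences(of: "_", with: "-")
}
```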
Topic: Media Technologies SubTopic: Audio
3w
Reply to How to use the SpeechDetector Module
The type cast workaround is clever but fragile — it relies on the internal conformance being present at runtime even though the compiler cannot see it. If you need voice activity detection before the fix ships, AVAudioEngine installTap with vDSP_measqv for RMS metering is a solid fallback. About 10 lines of code and you get sub-10ms detection without depending on the Speech framework at all.
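A sketch of that fallback's core: mean-square metering over one buffer of samples (the same quantity vDSP_measqv computes in a single Accelerate call), gated by a threshold you'd tune for your input chain — the -40 dBFS default here is an arbitrary starting point, not a recommendation:

```swift
import Foundation

// RMS over one buffer of audio samples. vDSP_measqv returns the
// mean-square value directly if you want the Accelerate version.
func rms(_ samples: [Float]) -> Float {
    guard !samples.isEmpty else { return 0 }
    let meanSquare = samples.reduce(0) { $0 + $1 * $1 } / Float(samples.count)
    return meanSquare.squareRoot()
}

// Crude voice activity check: true when the buffer's level in dBFS
// exceeds the threshold. Feed it buffers from installTap(onBus:).
func isVoiceActive(_ samples: [Float], thresholdDB: Float = -40) -> Bool {
    let level = rms(samples)
    guard level > 0 else { return false }
    return 20 * log10(level) > thresholdDB
}
```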
Topic: Media Technologies SubTopic: Audio
3w
Reply to After loading my custom model - unsupportedTokenizer error
Tokenizer breakage across mlx versions is a recurring pain point — the tokenizer factory gets updated without guaranteed backward compat for custom-fused models. Check if tokenizer_config.json in your fused model specifies a tokenizer_class that 2.29.1 still recognizes. Manually setting the tokenizer type in LLMModelFactory registration usually gets around it.
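A quick way to do that check before touching the factory registration — tokenizer_class is the standard field name in Hugging Face tokenizer configs, and the file path in the usage comment is a placeholder:

```swift
import Foundation

// Read tokenizer_class out of a fused model's tokenizer_config.json
// so you can see what the mlx tokenizer factory will try to build.
func tokenizerClass(fromConfig data: Data) throws -> String? {
    let json = try JSONSerialization.jsonObject(with: data) as? [String: Any]
    return json?["tokenizer_class"] as? String
}

// Usage:
// let data = try Data(contentsOf: URL(fileURLWithPath: "fused-model/tokenizer_config.json"))
// print(try tokenizerClass(fromConfig: data) ?? "none specified")
```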