Tokenizer breakage across mlx versions is a recurring pain point: the tokenizer factory gets updated without guaranteed backward compatibility for custom-fused models. Check whether tokenizer_config.json in your fused model specifies a tokenizer_class that 2.29.1 still recognizes. Manually setting the tokenizer type in the LLMModelFactory registration usually works around it.
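As a quick first diagnostic, you can read the declared tokenizer_class straight out of the fused model's tokenizer_config.json before touching the factory registration. A minimal sketch (the helper name `tokenizer_class_of` is mine, not part of any library):

```python
import json
from pathlib import Path


def tokenizer_class_of(model_dir):
    """Return the tokenizer_class declared in a fused model's
    tokenizer_config.json, or None if the file or key is missing."""
    config_path = Path(model_dir) / "tokenizer_config.json"
    if not config_path.exists():
        return None
    with config_path.open() as f:
        # Hugging Face-style configs store the class name under this key.
        return json.load(f).get("tokenizer_class")
```

If the value this returns (e.g. `"LlamaTokenizer"`) isn't one the tokenizer factory in your installed version knows about, that's your mismatch, and overriding the tokenizer type at registration time is the workaround.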