Model Guardrails Too Restrictive?

I'm experimenting with the Foundation Models framework to do news summarization in an RSS app, but I'm finding that a lot of articles get kicked back with a vague message about guardrails.

This seems really common with political news, but we're talking mainstream outlets, e.g. Politico.

If the models are this restrictive, this will be tough to use. Is this intended?

FB17904424

Thanks for filing the feedback report. Just to let you know, your report is now under investigation by the Foundation Models framework team.

Best,
——
Ziqiao Chen
Worldwide Developer Relations

I updated the report with more examples. It's really, really sensitive.

Any news article about someone dying, for instance, gets rejected.

They're insanely restrictive. I've filed multiple reports with examples that aren't even in the same universe as unsafe content.

If Apple doesn't fix them, the entire FoundationModels framework is essentially useless for a production app. You just can't ship something that fails 50% of the time with spurious "safety" violations.
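For anyone trying to handle these rejections gracefully rather than crashing or showing a raw error, the failure surfaces as a thrown error you can catch. A minimal sketch, assuming the `LanguageModelSession` API and the `guardrailViolation` case of `LanguageModelSession.GenerationError` from the beta SDK (verify the exact case names against the current docs):

```swift
import FoundationModels

/// Attempt a summary; return nil when the guardrails reject the input,
/// so the app can fall back to showing the article unsummarized.
func summarize(_ article: String) async -> String? {
    let session = LanguageModelSession(
        instructions: "Summarize the following news article in two sentences."
    )
    do {
        let response = try await session.respond(to: article)
        return response.content
    } catch LanguageModelSession.GenerationError.guardrailViolation {
        // The rejection discussed in this thread: no detail beyond "guardrails".
        print("Guardrail violation; falling back to full article text")
        return nil
    } catch {
        print("Other generation error: \(error)")
        return nil
    }
}
```

Catching the case explicitly at least lets you degrade gracefully and count how often it fires, which is also useful data to attach to a Feedback report.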

Chiming in here as well - I've been playing around with some use cases around camping (where an offline assistant can be useful), but I am getting guardrail violations on more than 50% of prompts. Even asking about things like purifying water or where to position my tent... The model is basically unusable with the current level of guardrails.

It feels like the model has become overly restrictive since Beta 3. Even simple tasks, like generating a title based on content from a book I am reading, or generating a summary of a longer citation, are now blocked by guardrails.

It’s becoming nearly unusable for basic use cases. :(

I have experienced the same. I appreciate safety, but could we have an option to set the guardrail level? That would work much better.
