It's sad that it's now an increasingly accepted idea that information one seeks can be "harmful".
If the state is censoring the model, I think the problem is more subtle.
Be careful, and don't look at Wikipedia or a chemistry textbook!
Just a reminder: the vast majority of what these LLMs know is scraped from public knowledge bases.
Now, preventing a model from harassing people is a great idea! Let's not automate bullying or psychological abuse.
But censoring publicly available knowledge doesn't make any sense.
* "I don't think this information should be censored, and should be made available to anyone who seeks it."
* "I don't want this tool I made to be the one handing it out, especially one that I know just makes stuff up, and at a time when the world is currently putting my tool under a microscope and posting anything bad it outputs to social media to damage my reputation."
Companies that sell models to corporations that want well-behaved AI would still have this problem, but for everyone else the issue could be obviated by a shield law.