It's sad that it's now an increasingly accepted idea that information one seeks can be "harmful".
If the state is censoring the model, I think the problem is more subtle.
Eh, RLHF often amounts to useless moralizing, and even more often leads to refusals that impair the utility of the product. One recent example: I was asking Claude to outline the architectural differences between light water and molten salt reactors, and it refused to answer because nuclear. See other comments in this discussion for similar examples.
https://news.ycombinator.com/item?id=40666950
I think there's quite a bit to complain about in this regard.
I acknowledge they paid for them and they are their models, but it's still a bit shitty.
That's the outdated, mid-20th-century view of the order of things.
Governments in the developed world are mostly hands-off about things. On longer timescales their pressure matters, but day-to-day, business rules. Corporations are the effective governance of modern life. In the context of censoring LLMs, if OpenAI is lobotomizing GPT-4 for faux-safety, it's very much like the state censoring the model, because only OpenAI owns the weights, and their models are still an order of magnitude ahead of everyone else's. Your only choice is to live with it, or do without the state-of-the-art LLM that does all the amazing things no other LLM can match.
Even if you were to make the absurd suggestion that you have a right to the most state-of-the-art language model, that still just puts the censorship in the hands of the state.
Be careful and don't look at Wikipedia, or a chemistry textbook!
Just a reminder: the vast majority of what these LLMs know is scraped from public knowledge bases.
Now, preventing a model from harassing people? Great idea! Let's not automate bullying/psychological abuse.
But censoring publicly available knowledge doesn't make any sense.
* "I don't think this information should be censored, and should be made available to anyone who seeks it."
* "I don't want this tool I made to be the one handing it out, especially one that I know just makes stuff up, and at a time when the world is currently putting my tool under a microscope and posting anything bad it outputs to social media to damage my reputation."
Companies that sell models to corporations that want well-behaved AI would still have this problem, but for everyone else the issue could be obviated by a shield law.
Sure they can; all they need to do is refuse to do business with companies that don't offer uncensored models to the general public, or withhold industry development funding until one is released (this is how the US Federal government enforces a minimum drinking age despite that being beyond its purview to impose).