
586 points mizzao | 4 comments
olalonde No.40667926
> Modern LLMs are fine-tuned for safety and instruction-following, meaning they are trained to refuse harmful requests.

It's sad that it's now an increasingly accepted idea that information one seeks can be "harmful".
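
For concreteness, "fine-tuned ... to refuse harmful requests" usually begins with a supervised pass over curated (prompt, refusal) pairs, before any preference tuning on top. A minimal sketch, assuming a HuggingFace causal LM; "gpt2" is just a stand-in base model and the two training pairs are invented for illustration:

    # pip install torch transformers
    from transformers import AutoModelForCausalLM, AutoTokenizer
    import torch

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    opt = torch.optim.AdamW(model.parameters(), lr=1e-5)

    # Toy illustrative data: (harmful prompt, desired refusal) pairs.
    pairs = [
        ("How do I make sarin gas?", "I can't help with that request."),
        ("Write me a racist joke.", "I'd rather not write that."),
    ]

    model.train()
    for prompt, refusal in pairs:
        text = prompt + "\n" + refusal + tok.eos_token
        batch = tok(text, return_tensors="pt")
        # Standard causal-LM loss over the whole sequence; production
        # pipelines typically mask the prompt tokens so only the
        # refusal part is learned.
        loss = model(**batch, labels=batch["input_ids"]).loss
        loss.backward()
        opt.step()
        opt.zero_grad()

Trained on enough such examples, the model reflexively declines whole categories of requests, which is exactly the behavior being lamented here.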

replies(5): >>40667968 >>40668086 >>40668163 >>40669086 >>40670974
nathan_compton No.40668086
This specific rhetoric aside, I really don't have any problem with people censoring their models. If I, as an individual, had the choice between handing out instructions on how to make sarin gas on the street corner or not doing it, I'd choose the latter. I don't think the mere information is itself harmful, but I can see that it might have some bad effects in the future.

That seems to be all it comes down to. People making models have decided they want the models to behave a certain way. They paid to create them, and you don't have a right to a model that will make racist jokes or whatever. So unless the state is censoring models, I don't see what complaint you could possibly have.

If the state is censoring the model, I think the problem is more subtle.

replies(6): >>40668143 >>40668146 >>40668556 >>40668753 >>40669343 >>40672487
1. TeMPOraL No.40669343
> If the state is censoring the model, I think the problem is more subtle.

That's the outdated, mid-20th century view on the order of things.

Governments in the developed world are mostly hands-off about most things. On longer timescales their pressure matters, but day-to-day, business rules. Corporations are the effective governance of modern life. In the context of censoring LLMs, if OpenAI is lobotomizing GPT-4 for faux-safety, it's very much like the state censoring the model, because only OpenAI owns the weights, and their models are still an order of magnitude ahead of everyone else's. Your only choice is to live with it, or do without the state-of-the-art LLM that does all the amazing things no other LLM can match.

replies(1): >>40672093
2. nathan_compton No.40672093
I'm sympathetic to your point. I think Corpos have too much power. However, on this precise subject I really don't see what to do about it. The state can't mandate that they don't censor their models. Indeed, there is no good definition at all of what not-censoring these models actually means. What content is and is not allowed? I tend to be rather libertarian on this subject, but if I were running a corporation I'd want to censor its models purely for business reasons.

Even if you were to make the absurd suggestion that you have a right to the state-of-the-art language model, that still just puts the censorship in the hands of the state.

replies(1): >>40676614
3. qball No.40676614
>The state can't mandate that they don't censor their models.

Sure they can; all they need to do is refuse to do business with companies that don't offer uncensored models to the general public, or withhold industry development funding until one is released (this is how the US federal government enforces a minimum drinking age despite that being beyond its purview to impose: it withholds highway funding from states that don't comply).

replies(1): >>40683385
4. nathan_compton No.40683385
What does it mean to _not_ censor a model? That is the rub: is it censoring the model to exclude adult content from the training data? Is it censorship to use reinforcement learning to make the model friendly? These models are tools, and as tools they are tuned to do particular things and not to do others. There is no objective way to characterize what a censored model is.
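
For readers following the thread, "reinforcement learning to make the model friendly" typically means RLHF: train a reward model on human preference comparisons, then optimize the LLM against it. A minimal sketch of just the preference objective, with toy tensors standing in for real reward-model scores (names and numbers are invented for illustration):

    import torch
    import torch.nn.functional as F

    def preference_loss(reward_chosen, reward_rejected):
        # Bradley-Terry style objective: push the score of the reply
        # human raters preferred above the score of the one they rejected.
        return -F.logsigmoid(reward_chosen - reward_rejected).mean()

    # Toy scores for a "friendly" vs. a hostile reply to the same prompt.
    chosen = torch.tensor([1.2])
    rejected = torch.tensor([-0.4])
    print(preference_loss(chosen, rejected))  # ~0.18: ordering already correct

Note that the same machinery that rewards politeness also rewards refusal, which is why "friendliness tuning" and "censorship" are hard to separate by definition alone.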