
586 points by mizzao | 1 comment | source
olalonde ◴[] No.40667926[source]
> Modern LLMs are fine-tuned for safety and instruction-following, meaning they are trained to refuse harmful requests.

It's sad that it's now an increasingly accepted idea that information one seeks can be "harmful".

replies(5): >>40667968 #>>40668086 #>>40668163 #>>40669086 #>>40670974 #
nathan_compton ◴[] No.40668086[source]
This specific rhetoric aside, I really don't have any problem with people censoring their models. If I, as an individual, had the choice between handing out instructions for making sarin gas on a street corner or not doing so, I'd choose the latter. I don't think the mere information is itself harmful, but I can see that it might have some bad effects in the future. That seems to be all it comes down to. People making models have decided they want the models to behave a certain way. They paid to create them, and you don't have a right to a model that will make racist jokes or whatever. So unless the state is censoring models, I don't see what complaint you could possibly have.

If the state is censoring the model, I think the problem is more subtle.

replies(6): >>40668143 #>>40668146 #>>40668556 #>>40668753 #>>40669343 #>>40672487 #
fallingknife ◴[] No.40668753[source]
If the limit of censoring the model were preventing it from answering questions about producing harmful materials, that would be fine with me. But you know that your example is really not what people are complaining about when they talk about LLM censorship.
replies(1): >>40669251 #
nathan_compton ◴[] No.40669251[source]
What are they complaining about?