> Modern LLMs are fine-tuned for safety and instruction-following, meaning they are trained to refuse harmful requests.
It's sad that it's now an increasingly accepted idea that information one seeks can be "harmful".
replies(5):
> It's sad that it's now an increasingly accepted idea that information one seeks can be "harmful".
If the state is censoring the model, I think the problem is more subtle.
I acknowledge the companies paid for these models and they're theirs, but it's still a bit shitty.