
586 points | mizzao | 1 comment
olalonde | No.40667926
> Modern LLMs are fine-tuned for safety and instruction-following, meaning they are trained to refuse harmful requests.

It's sad that it's become an increasingly accepted idea that information one seeks can be "harmful".

Cheer2171 | No.40669086
"Can I eat this mushroom?" is a question I hope AIs refuse to answer unless they have been specifically validated and tested for accuracy on that question. A wrong answer can literally kill you.
jcims | No.40670743
I don't really have a problem with that, to be honest. As a society we accept all sorts of risks if there is a commensurate gain in utility. Whether that holds in your example remains to be seen, of course, but if it were a lot more useful I think it would be worth it.