
586 points | mizzao | 1 comment
olalonde ◴[] No.40667926[source]
> Modern LLMs are fine-tuned for safety and instruction-following, meaning they are trained to refuse harmful requests.

It's sad that it's now an increasingly accepted idea that information one seeks can be "harmful".

replies(5): >>40667968 #>>40668086 #>>40668163 #>>40669086 #>>40670974 #
Cheer2171 ◴[] No.40669086[source]
"Can I eat this mushroom?" is a question I hope AIs refuse to answer unless they have been specifically validated and tested for accuracy on that question. A wrong answer can literally kill you.
replies(4): >>40669150 #>>40670743 #>>40670990 #>>40671906 #
1. educasean ◴[] No.40670990[source]
Magic 8 balls have the same exact problem. A wrong answer can literally kill you.

It is indeed a problem that LLMs can instill a false sense of trust because they will confidently hallucinate. I see it as an education problem. You and I know that LLMs can hallucinate and should not be trusted; the rest of the population needs to be educated on this fact as well.