
371 points ulrischa | 3 comments
t_mann No.43235506
Hallucinations themselves are not even the greatest risk posed by LLMs. A much greater risk, I'd say (in simple terms of probability times severity), is that chatbots can talk humans into harming themselves or others. Both have already happened, btw [0,1]. I'm still not sure I'd call that the greatest overall risk, but my ideas for what could be even more dangerous I'd rather not share here.

[0] https://www.qut.edu.au/news/realfocus/deaths-linked-to-chatb...

[1] https://www.theguardian.com/uk-news/2023/jul/06/ai-chatbot-e...

replies(4): >>43235623 >>43236225 >>43238379 >>43238746
zahlman No.43238746
Is this somehow worse than humans talking each other into it?
replies(1): >>43241362
1. skywhopper No.43241362
Yes.
replies(1): >>43243106
2. zahlman No.43243106
How?
replies(1): >>43245556
3. krupan No.43245556
Does this really have to be spelled out?? Because a single human can intimately converse with, and convince, only a small number of people, while an LLM can do that with thousands of people (what even is the upper limit?) at a time.

Also, because AI is being relentlessly marketed as better than humans, which encourages people to trust it even more than they would a fellow human.