
371 points ulrischa | 1 comment | source
t_mann ◴[] No.43235506[source]
Hallucinations themselves are not even the greatest risk posed by LLMs. A much greater risk, I'd say (simply in terms of probability times severity), is that chatbots can talk humans into harming themselves or others. Both have already happened, btw [0,1]. I'm still not sure I'd call that the greatest overall risk, but my ideas for what could be even more dangerous I don't even want to share here.

[0] https://www.qut.edu.au/news/realfocus/deaths-linked-to-chatb...

[1] https://www.theguardian.com/uk-news/2023/jul/06/ai-chatbot-e...

replies(4): >>43235623 #>>43236225 #>>43238379 #>>43238746 #
hexaga ◴[] No.43236225[source]
More generally: AI that is good at convincing people is very powerful, and powerful things are dangerous.

I'm increasingly coming around to the notion that AI tooling should have safety features concerned with not directly exposing humans to asymptotically increasing levels of 'convincingness' in generated output. Something like a weaker model used as a buffer.

Projecting out to 5-10 years: what happens when LLMs are still producing hallucinatory semi-sense, but merely comprehending it makes the machine temporarily own you? A bit like getting hair caught in an angle grinder, that.

Like most safety regulations, it'll take blood for the inking. Exposing mass numbers of people to these models strikes me as wildly negligent if we expect continued improvement along this axis.

replies(2): >>43236968 #>>43238275 #
kjs3 ◴[] No.43236968[source]
Yeah...this. I'm not so concerned that AI is going to put me out of a job or become Skynet. I'm concerned that people are offloading decision-making and critical thinking to the AI, accepting its response at face value and responding to concerns with "the AI said so...must be right". Companies are already maliciously exploiting this (e.g. the AI has denied your medical claim, and we can't tell you how it decided that because our models are trade secrets), but it will soon become de rigueur, and people will think you're weird for questioning the wisdom of the AI.
replies(1): >>43237485 #
Nevermark ◴[] No.43237485[source]
Blind faith in AI at one extreme, and good-faith, highly informed understanding and agreement reached with the help of AI at the other: between them, those cover the full spectrum of the problem.