
371 points ulrischa | 3 comments
t_mann No.43235506
Hallucinations themselves are not even the greatest risk posed by LLMs. A much greater risk (in simple terms of probability times severity), I'd say, is that chatbots can talk humans into harming themselves or others, both of which have already happened [0,1]. I'm still not sure I'd call that the greatest overall risk, but the ideas I have for what could be even more dangerous I don't even want to share here.

[0] https://www.qut.edu.au/news/realfocus/deaths-linked-to-chatb...

[1] https://www.theguardian.com/uk-news/2023/jul/06/ai-chatbot-e...

1. southernplaces7 No.43238379
In both of your linked examples, the people in question very likely had at least some sort of mental instability working in their minds.

I have a hard time imagining any sort of overly verbose, clause-and-condition-ridden chatbot convincing anyone of sound mind to seriously harm themselves or do some egregiously stupid/violent thing.

The kinds of people who would be convinced by such "harm dangers" are likely mentally unstable or suggestible enough that they could in any case be convinced by any number of human beings, or by books, movies, or any other pretext for a mind that had problems well before seeing X or Y.

By the logic of regulating AI for these supposed dangers, you could argue that literature, movies, comic books, YouTube videos, and that much-loved boogeyman of previous years, violent video games, should all be banned or regulated for the content they express.

Such notions have a strongly nannyish, prohibitionist streak that's much more dangerous than some algorithm and the bullshit it spews to a few suggestible individuals.

The media of course loves such narratives, because breathless hysteria and contrived fear-mongering draw more eyeballs. Seeing people take this nonsense seriously again, after idiocies like the media frenzy around violent video games in the early 2000s and, before that, similar fits about violent movies and even literature, is sort of sad.

We don't need our tools for expression and sources of information "regulated for harm" because a small minority of people can't get an easy grip on their own psychological state.

2. skywhopper No.43241365
Pretty much everyone has “some sort of mental instability working in their minds”.
3. southernplaces7 No.43250424
Don't be obtuse. There are degrees of mental instability, and no, some random person having a touch of it in very specific ways isn't the same as someone deciding to try to kill the Queen of England because a chatbot said so. Most people wouldn't be nearly that deluded in that context.

I'd love to see evidence of mental instability in "everyone", and its presence in many people is in any case no justification for what are, in effect, controls on freedom of speech and expression, just couched in a new boogeyman.