[0] https://www.qut.edu.au/news/realfocus/deaths-linked-to-chatb...
[1] https://www.theguardian.com/uk-news/2023/jul/06/ai-chatbot-e...
I have a hard time imagining any sort of overly verbose, clause- and condition-ridden chatbot convincing anyone of sound mind to seriously harm themselves or do something egregiously stupid or violent.
The kinds of people who would be swayed by such "harm dangers" are likely already mentally unstable or suggestible enough that they could just as easily be convinced by any number of human beings, or by books, movies, or any other excuse for a mind that had problems well before it encountered X or Y.
By the logic of regulating AI for these supposed dangers, you could argue that literature, movies, comic books, YouTube videos, and that much-loved boogeyman of previous years, violent video games, should all be banned or regulated for the content they express.
Such notions have a strongly nannyish, prohibitionist streak that's much more dangerous than some algorithm and the bullshit it spews to a few suggestible individuals.
The media of course loves such narratives, because breathless hysteria and contrived fear-mongering translate directly into more eyeballs. Seeing people take this nonsense seriously again, after idiocies like the media frenzy over video games in the early 2000s and, before that, similar fits over violent movies and even literature, is sort of sad.
We don't need our tools for expression and sources of information "regulated for harm" because a small minority of people can't get a grip on their own psychological state.
I'd love to see evidence of mental instability in "everyone", and its presence in many people is in any case no justification for what are, in effect, controls on freedom of speech and expression, just couched in a new boogeyman.