[0] https://www.qut.edu.au/news/realfocus/deaths-linked-to-chatb...
[1] https://www.theguardian.com/uk-news/2023/jul/06/ai-chatbot-e...
I'm increasingly coming around to the notion that AI tooling should have safety features concerned with not directly exposing humans to asymptotically increasing levels of 'convincingness' in generated output. Something like a weaker model used as a buffer.
Projecting out to 5-10 years: what happens when LLMs are still producing hallucinatory semi-sense, but merely comprehending it makes the machine temporarily own you? A bit like getting hair caught in an angle grinder, that.
Like most safety regulations, it'll take blood for the inking. Exposing mass numbers of people to these models strikes me as wildly negligent if we expect continued improvement along this axis.
Seriously? Do you suppose it will pull this trick off through some sort of hypnotizing magic, perhaps? I have a hard time imagining any overly verbose, clause-and-condition-ridden chatbot convincing anyone of sound mind to seriously harm themselves or do something egregiously stupid or violent.
The kinds of people who would be swayed by such "dangers" are likely so mentally unstable or suggestible that they could just as easily be convinced by any number of human beings anyway.
Aside from demonstrating the persistent AI woo that permeates many comments on this site, the logic above reminds me of the harping nonsense, in years past, about the supposed dangers of video games or certain violent movies "making kids do bad things". The prohibitionist nanny tendencies behind such fears are more dangerous than any silly chatbot AI.
These are people who have jobs and apartments and are able to post online about their problems in complete sentences. If they're not "of sound mind," we have a lot more mentally unstable people running around than we like to think we do.
So what do you believe should be the case? That AI in any flexible communicative form be limited to a select number of people who can prove they're of sound enough mind to use it unfiltered?
Do you see how similar this is to historical nonsense about restricting the loan or sale of books on certain subjects to people of a certain supposed caliber or authority? Or banning the production and distribution of movies that were claimed to be capable of corrupting minds into committing harmful and immoral acts? How stupid do those historical restrictions look today in any modern society? That's how stupid this harping about the dangers of AI chatbots will look down the road.
Limiting AI because it may or may not cause some people to do irrational things not only smacks of the persistent AI woo on this site, which drastically overstates the power of these stochastic-parrot systems; it also forgets that we live in a world in which all kinds of information could trigger someone into making stupid choices. That includes books, movies, and plenty of other content produced far more effectively, and with greater emotional impact, by completely human authors.
By claiming a need to regulate the supposed information and discourse dangers of AI chat systems, you're not only serving the cynically fear-mongering arguments of the major AI companies, who would love such a regulatory moat around their overvalued pet projects; you're also tacitly claiming that literature, speech, and other forms of written, spoken, or digitally produced expression should be restricted unless they stick to the banally harmless, by some very vague definition of what harmful content even is.
In sum, fuck that and the entire chain of implicit long-used censorship, moralizing nannyism, potential for speech restriction and legal over-reach that it so bloody obviously entails.