There should be a prominent "black box" warning on every chat message from an AI, like "This is AI guidance which can potentially result in grave bodily harm to yourself and others."
The problem is calling it "AI" to start with. This (along with the chat format itself) primes users to think of it as an entity... something with care, volition, motive, goals, and intent. Although it can emulate these traits, it doesn't have them.
Chatting with an LLM is entering a one-person echo chamber, a funhouse mirror that reflects back whatever semantic region your initial query put it in. And the longer you chat, the deeper that rabbit hole goes.
The truth is that the most random stuff will set them off. In one case, a patient would find reinforcement in obscure YouTube groups of people predicting future doom.
Maybe the advantage of AI over YouTube psychosis groups is that AI could at least be trained to alert the authorities after enough murder/suicide data is gathered.
It's hard to believe that a prominent, well-worded warning would do nothing, but that's not to say it would be effective for this.
BUT, I think it's very likely that the surgeon general's warning was closer to a signal that consensus had been achieved. That voice of authority didn't actually _tell_ anyone what to believe, but signaled that anyone could look around at many sources and see that there was a consensus on the bad effects of smoking.
This story is pretty terrifying to me. I could easily see them getting led into madness, exactly as the story says.
For example, cryptocurrency and tumblers are not themselves the cause of scams. Scams are a result of a malevolent side of human nature; a result of mental health issues, insecurity and hatred, oppression, etc., whereas cryptocurrencies, as many people are keen to point out, are just like cash, only digital. However, one of the core qualities of cash is that it is unwieldy and very difficult to move in big amounts. Cash would not allow criminals to casually steal a billion USD in one go, or ransomware a dozen hospitals, causing deaths, then launder the proceeds while maintaining plausible deniability throughout. Removing that constraint on cash makes it a qualitatively new thing. Is there a benefit from it? Sure. However, can we say it caused (see above) a wave of crime? I think so.
Similarly, if there has been a widespread problem of mental health issues for a while, but now people are enabled to “address” these issues by themselves—at humongous scale, worldwide—of course it will be possible to say LLMs would not be the cause of whatever mayhem ensues; but wouldn’t they?
Consider that it used to be that physical constraints meant any individual worldview was necessarily tempered and averaged out by surrounding society. If someone had a weird obsession with murdering innocent people, they would not very easily find like-minded people (unless they happened to be in a localized cult) to encourage them, sustain this obsession, and transform it.
Then, at some point, the Internet and social media made it easy, for someone who might have otherwise been a pariah or forced to adjust, to find like-minded people (or just people who want to see the world burn) right in their bedrooms and basements, for better and for worse.
Now, a new variety of essentially fancy non-deterministic autocomplete, equipped with enough context to finely tailor its output to each individual, enables us to fool ourselves into thinking that we are speaking to a human-like consciousness—meaning that to fuel one’s weird obsession, no matter how left field, one does not have to find a real human at all.
Humans are social creatures; we model ourselves and become self-aware through other people. As chatbots become normalized and humans want to talk to each other less, we (not individually, but at societal scale) are increasingly at the mercy of how LLMs happen to (mal)function. In theory, they could heal society at scale as well, but even if we imagine there were no technical limitations preventing that, in practice selfish interests are sadly more likely to prevail and be amplified.
Particularly given some documented instances where a user has asked the language model about similar warnings, and the model responded by downplaying the warnings, or telling the user to disregard them.
Printing and postage cost was about £5.8 million. At the time, I thought it was a waste of taxpayers' money. A letter wouldn't change anyone's behaviour, least of all mine or that of anyone I knew.
But the economics told a different story. The average cost of treating a Covid patient in intensive care ran to tens of thousands of pounds. The UK Treasury values preventing a single death at around £2 million (their official Value of a Prevented Fatality). That means if the letter nudged just three people into behaviour that prevented their deaths, it would have paid for itself. Three deaths, out of 30 million households.
In reality, the effect was likely far larger. If only 0.1% of households (30,000 families) changed their behaviour even slightly, whether through better handwashing, reduced contact, or staying home when symptomatic, those small actions would multiply during an exponential outbreak. The result could easily be hundreds of lives saved and millions in healthcare costs avoided.
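If it helps, here's a minimal sketch of that back-of-envelope arithmetic in Python, using only the figures quoted above (the £5.8M cost, the roughly £2M Value of a Prevented Fatality, the 30 million households, and the same illustrative 0.1% response rate); it's a rough check, not an official calculation:

    # Back-of-envelope check of the letter economics described above.
    letter_cost = 5_800_000                   # £5.8M printing and postage
    value_per_prevented_fatality = 2_000_000  # UK Treasury VPF, roughly £2M

    # Deaths the letter must prevent to pay for itself
    break_even_deaths = letter_cost / value_per_prevented_fatality
    print(f"Break-even: {break_even_deaths:.1f} prevented deaths")  # ~2.9, i.e. about 3

    # The illustrative 0.1% scenario: households changing behaviour even slightly
    households = 30_000_000
    responding = households * 0.001
    print(f"Households changing behaviour at 0.1%: {responding:,.0f}")  # 30,000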
Seen in that light, £5.8 million wasn't wasteful at all. It was one of the smarter investments of the pandemic.
What I dismissed as wasteful and pointless turned out to be a great example of how what appears to be a large upfront cost can deliver returns that massively outweigh the initial outlay.
I changed my view and admitted I was wrong.
1) The man became severely mentally ill in middle age, and he lived with his mother because he couldn't take care of himself. Describing him as merely "isolated" makes me wonder if you read the article: meeting new friends was not going to help him very much because he was not capable of maintaining those friendships.
2) Saying people turn to chatbots because of isolation is like saying they turn to drugs because of depression. In many cases that's how it started. But people get addicted to chatbots because they are to social interaction what narcotics are to happiness: in the short term you get all of the pleasure without doing any of the work. Human friends insist on give-and-take; chatbots are all give-give-give.
This man didn't talk to chatbots because he was lonely. He did so because he was totally disconnected from reality, and actual human beings don't indulge delusions with endless patience and encouragement the way ChatGPT does. His case is extreme but "people tell me I'm stupid or crazy, ChatGPT says I'm right" is becoming a common theme on social media. It is precisely why LLMs are so addictive and so dangerous.
Not saying I want AIs to be banned or that the article is good; I'm just arguing that your analogy could potentially be flawed.
At some point you have to just live with marginal dangers. There is no technical solution here.
These look like fairly standard incomprehensible psychosis messages, but it seems notable to me that ChatGPT responds as if they are normal (profound, even) messages.
The 'In Search of AI Psychosis' article and discussions on HN [2] from a few days ago are very relevant here too.
Just to be safe, we'd better start attaching these warnings to every social media client. Can't be too careful.
This but completely unironically.
(We have family that would be homeless if we hadn’t taken them under our wings to house them).
I used AI to find the location of an item in a large supermarket today. It guessed and was wrong, but the first human I saw inside knew the exact location and quantity remaining.
Why am I wasting my time? That should be a nagging question whenever we're online.