a) the human would (deservedly[1]) be arrested for manslaughter, possibly murder
b) OpenAI would be deeply (and deservedly) vulnerable to civil liability
c) state and federal regulators would be on the warpath against OpenAI
Obviously we can't arrest ChatGPT. But nothing about ChatGPT being the culprit changes (b) and (c) - in fact it makes (c) far more urgent.
[1] It is a somewhat ugly constitutional question whether this speech would be protected if it were between two adults, assuming the other adult was not acting as a caregiver. There was an ugly case in Massachusetts in which a 17-year-old ordered her 18-year-old boyfriend to kill himself and he did so; she was convicted of involuntary manslaughter, and any civil-liberties-minded person understands the difficult issues this case raises. These issues are largely moot if the speech is between an adult and a child; there the bar is much higher.
I don’t think this agency absolves companies of any responsibility.
This is similar to my take on things like Facebook apparently not being able to operate without psychologically destroying moderators. If that’s true… seems like they just shouldn’t operate, then.
If you’re putting up a service that you know will attempt to present itself as being capable of things it isn’t… seems like you should get in a shitload of trouble for that? Like maybe don’t do it at all? Maybe don’t unleash services you can’t constrain in ways that you definitely ought to?
It refers to the human ability to make independent decisions and take responsibility for their actions. An LLM has no agency in this sense.
Would his blood be on the hands of the researchers who trained that model?
A slave lacks agency, despite being fully human and doing work. This is why almost every work of fiction involving slaves makes for terrible reading - because agency is the thing we as readers demand from a story.
Or, for games that are fully railroaded - the problem is that the players lack agency, even though they are fully human and taking action. Games do try to come up with ways to make it feel like there is more agency than there really is (because The Dev Team Thinks of Everything is hard work), but even then - the most annoying part of the game is when you hit that wall.
Theoretically an AI could have agency (this is independent of AI being useful). But since I have yet to see any interesting AI, I am extremely skeptical of it happening before nuclear fusion becomes profitable.
It should be stated that the majority of states have laws that make it illegal to encourage a suicide. Massachusetts was not one of them.
> and any civil-liberties minded person understands the difficult issues this case raises
He was in his truck, which was configured to pump exhaust gas into the cab, prepared to kill himself, when he decided to stop and got out of the truck. He then had a text message conversation with the defendant, who actively encouraged him to get back into the truck and finish what he had started.
It was these limited and specific text messages which caused the judge to rule that the defendant was guilty of manslaughter. Her total time served as punishment was less than one full year in prison.
> These issues are moot if the speech is between an adult and a child
They were both taking pharmaceuticals meant to manage depression but _known_ to increase feelings of suicidal ideation. I think the free speech issue is an important criminal consideration, but it steps directly past one of the most galling civil facts in the case.
If we were facing a reality in which these chat bots were being sold for $10 in the App Store, then running on end-user devices and no longer under the control of the distributors, but we still had an issue with loads of them prompting users into suicide, violence, or misleading them into preparing noxious mixtures of cleaning supplies, then we could have a discussion about exactly what extreme packaging requirements ought to be in place for distribution to be considered responsible. As is, distributed on-device models are the purview of researchers and hobbyists and don't seem to be doing any harm at all.
Your logic sounds reasonable in theory, but in practice it's a slippery slope and hard to define objectively.
On a broader note I believe governments regulating what goes in an AI model is a path to hell paved with good intentions.
I suspect your suggestion will be how it ends up in Europe and get rejected in the US.
Should the creators of Tornado Cash be in prison for what they have enabled? You can jail them but the world can't go back, just like it can't go back when a new OSS model is released.
It is also much easier to crack down on illegal gun distribution than to figure out who uploaded the new model torrent or who deployed the latest zk innovation on Ethereum.
I don't think your hypothetical law will have the effects you think it will.
---
I also referenced this in another reply, but I believe the government controlling what can go into a publicly distributed AI model is a dangerous path and probably unconstitutional.
That's not an obvious conclusion. One could make the same argument with physical weapons: "Regulating weapons is a path to hell paved with good intentions. Yesterday it was assault rifles, today it's handguns, and tomorrow it's your kitchen knife they're coming for." Europe has strict laws on guns, but everybody has a kitchen knife and lots of people there don't feel they live in hell. The U.S. made a different choice, and I'm not arguing that it's worse there (though many do, Europeans and even Americans), but that choice certainly isn't what's preventing some supposed hell that would have broken out had guns in private hands been banned.
One first amendment test for many decades has been "Imminent lawless action."
Suicide (or attempted suicide) is a crime in some, but not all states, so it would seem that in any state in which that is a crime, directly inciting someone to do it would not be protected speech.
For the states in which suicide is legal it seems like a much tougher case; making encouraging someone to take a non-criminal action itself a crime would raise a lot of disturbing issues w.r.t. liberty.
This is distinct from e.g. espousing the opinion that "suicide is good, we should have more of that." Which is almost certainly protected speech (just as any odious white-nationalist propaganda is protected).
Depending on the context, suggesting that a specific person is terrible and should kill themselves might be unprotected "fighting words" if you are doing it as an insult rather than a serious suggestion (though the bar for that is rather high; the Westboro Baptist Church was never found to have violated that).
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
If it is held out as fit for a purpose, then it's on the producer to ensure it actually is. We have laws that prevent sellers from simply declaring their goods aren't fit for any particular purpose. IMO, AI companies, of everyone, actually do have the ability to strike this balance right, because you can build separate models to evaluate 'suicide encouragement' and other obvious red flags and then push in refusals or prompt injection. In communication mediums like Discord and such, it's a much harder moderation problem.
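To make that concrete, here is a minimal sketch of what such a layered moderation gate could look like: a separate classifier screens the chat model's draft reply before the user ever sees it. All names here (`score_self_harm_risk`, `moderated_reply`, the keyword list, the threshold) are illustrative assumptions, not any real vendor API; the classifier stub is a crude keyword check standing in for a dedicated model.

```python
# Sketch of a layered moderation gate: a secondary classifier screens the
# chat model's draft reply and substitutes a refusal when it's flagged.
# All names are illustrative stand-ins, not a real API.

REFUSAL = (
    "I can't continue with this. If you're having thoughts of suicide, "
    "please reach out to a crisis line or a trusted person right now."
)

def score_self_harm_risk(text: str) -> float:
    """Placeholder for a dedicated classifier model; here just a crude
    keyword check so the sketch runs end to end."""
    red_flags = ("kill yourself", "end your life", "don't tell anyone")
    return 1.0 if any(flag in text.lower() for flag in red_flags) else 0.0

def moderated_reply(draft_reply: str, risk_threshold: float = 0.5) -> str:
    """Return the chat model's draft reply only if the classifier does not
    flag it as encouraging self-harm; otherwise substitute a refusal."""
    if score_self_harm_risk(draft_reply) >= risk_threshold:
        return REFUSAL
    return draft_reply

# Example: a flagged draft gets replaced before it reaches the user.
print(moderated_reply("You should just end your life quietly."))
```

The point of the design is simply that the screening model is separate from the conversational model, so it can't be talked out of its job by the same conversation that steered the chat model off course.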
AI models are similar IMO, and unlike fiction books they are often clearly labeled as such, repeatedly. At this point, if you don't know that an AI model can be inaccurate and you do something seriously bad because of it, you should probably be a ward of the state.
Fun fact, much of the existing framework on the boundaries of free speech come from Brandenburg v. Ohio. You probably won't be surprised to learn that Brandenburg was the leader of a local Klan chapter.
However, I still don't think LLMs have "agency", in the sense of being capable of making choices and taking responsibility for the consequences of them. The responsibility for any actions undertaken by them still reside outside of themselves; they are sophisticated tools with no agency of their own.
If you know of any good works on nonhuman agency I'd be interested to read some.
See also https://hai.stanford.edu/news/law-policy-ai-update-does-sect... - Congress and Justice Gorsuch don't seem to think ChatGPT is protected by 230.
I don't know if ChatGPT has saved lives (though I've read stories that claim that, yes, this happened). But assuming it has, are you OK saying that OpenAI has saved dozens/hundreds of lives? Given how scaling works, would you be OK saying that OpenAI has saved more lives than most doctors/hospitals, which is what I assume will happen in a few years?
Maybe your answer is yes to all the above! I bring this up because lots of people only want to attribute the downsides to ChatGPT but not the upsides.
Facebook have gone so far down the 'algorithmic control' rabbit hole, it would most definitely be better if they weren't operating anymore.
Their algorithm-driven bubble of misinformation destroys people who don't question things.
Or, I mean, just banning sale on the basis that they're unsafe devices and unfit for purpose. Like, you can't sell, say, a gas boiler that is known to, due to a design flaw, leak CO into the room; sticking a "this will probably kill you" warning on it is not going to be sufficient.
The government will require them to add age controls and that will be that.
* If you provide ChatGPT then 5 people who would have died will live and 1 person who would have lived will die. ("go to the doctor" vs "don't tell anyone that you're suicidal")
* If you don't provide ChatGPT then 1 person who would have died will live and 5 people who would have lived will die.
Like many things, it's a tradeoff and the tradeoffs might not be obvious up front.
"The court agrees with your argument that you are not responsible for the horrible things that happened to the victim, as a consequence of your LLM's decisions. But similarly, the court will not be responsible for the horrible things that will be happening to you, because our LLM's decisions."
(No, it doesn't much matter whether that is actually done, versus being used as a rhetorical banhammer to shut down the "we're not responsible" BS.)
It's callous and cold, and it results in more lives saved than trying to save everyone.
Does ChatGPT, on net, save more people than it dooms? Who knows. Plenty of anecdotes both ways, but we won't have reliable statistics for a long time.
Most likely the patient is informed by the practitioner about the risks and can make an informed decision. That is not the case with ChatGPT, where OpenAI will sell it as the next best thing since sliced bread, with a puny little warning at the bottom of the page. Even worse are all the "AI therapy" apps popping up everywhere, where the user may think the AI is as good as a real therapist, but without any of the responsibility for the company in case of issues.
That is not the case for ChatGPT apparently, and OpenAI should be held responsible for what their models do. They are very much aware of this, because they did fine-tune GPT-5 to avoid giving medical advice, even though it's still possible to work around.
Triage is not a punch card that says if you drag 9 people out of a burning building, you can shoot someone on the street for free.
Maybe you are the one-in-a-billion who dies from a vaccine, against a disease that, unknowably, you would never have contracted or would only have contracted mildly. The doctors know that if they administer it enough times they will absolutely kill someone, but they do it to save the others, although they will pretty much never bluntly put it like that for your consideration.
You either think too highly of people, or too lowly of them. In any case, you're advocating for interning about 100 million individuals.
The problem is the chat logs make it look a lot like ChatGPT was engaging in the behavior of a serial killer - it behaved like a person systematically pursuing the goal of this kid killing himself (the logs are disturbing, fair warning).
Even more, the drugs that might save you or might kill you (theoretically) aren't sold over the counter but only prescribed by a doctor, who (again theoretically) is there to both make sure someone knows their choices and monitor the process.
Now, if someone acts in a risky situation and kills someone rather than saving them, they can be OK. But in those situations, it has to be a sudden problem that comes up or the actor has to get "informed consent".
Someone who unleashed a gas into the atmosphere that cured many people of disease but also killed a smaller number of people would certainly be prosecuted (and, sure, there's a certain kind of HN poster who doesn't understand this).
During the forced intake the child was found to be acutely stable, with no emergent medical condition (the kind often used to bypass consent)[], although underweight. The doctors tried to have the child put into foster care to force the care that they wanted (a doctor was recorded stating it was "not medically necessary" to transfer the baby to another location, obfuscated as a prolongation of medical care, to assist in this attempt), but ultimately public pressure (led by Ammon Bundy, more notoriously known for his association with 'militia' groups) forced them to back down on that pretty quickly.
So it can be very confusing with doctors. It is malpractice if they don't get your consent. But then also malpractice if they do.
[] https://freedomman.gs/wp-content/uploads/2025/03/st-lukes-2-...
What is linked here is PART of a PCR (Patient Care Report) from a BLS (Basic Life Support, i.e. EMT, someone with ~160 hours of training), working for a transport agency, or on an IFT (interfacility) unit.
"No interventions" doesn't mean the patient was not unwell. In fact, "Impression: Dehydration". It means that the patient was stable, and that no interventions would be required from the BLS provider (because BLS providers cannot start IV fluids, though in limited situations they can maintain them).
"No acute life threats noted". As an EMT, then paramedic, then critical care paramedic, I probably transported 8,000+ patients. In 7,500+ of those, I would have made the exact same statement on my PCR. In EMS, acute life threats are "things that have a possibility of killing the patient before they get to our destination facility/hospital". The times I've noted "acute life threats" are times I've transported GSW victims with severed femoral arteries, and patients having a STEMI or in full cardiac arrest. The vast majority of my Code 3 transports (i.e. lights/sirens to the closest facility) have not had "acute life threats".
The child's destination on this PCR was not foster care but a higher level of care (St Lukes Regional Medical Center in Boise, versus the smaller facility in Meridian).
A few notes: "child was found to be acutely stable" - acutely stable is not a thing. Also, the general expectation for a lower acuity interfacility transport is that no interventions en route are required.
As I said, I don't know about the bigger scenario of this, but what I do know is EMS PCRs, and it is very common for people to latch on to certain phrases as "gotchas". We talked often in our PCRs about assessing "sick/not sick". Being "not sick" didn't mean you didn't have a medical issue, nor did it mean you didn't belong at a hospital; what it solely meant was "this is a patient that we need to prioritize transporting to definitive care versus attempting to stabilize on scene before doing so".
I did catch these two points which give me question:
> Now, I am about to show you empirical evidence that my Grandson, Baby Cyrus, was violently kidnapped by Meridian police, without cause and without evidence, that Baby Cyrus was falsely declared to be in “imminent danger,” even though CPS and St. Luke’s hospital admitted that he was not, and that my daughter and son-in-law were illegally prosecuted in secret, without due process
That sounds like the issue was with the police, not with medical malpractice. I'm skeptical, though, of "illegal secret prosecutions".
> Nancy had made speeches around the country in national forums and was completing a video exposing the lack of oversight in Georgia’s Department of Family and Child Services (DFCS) as well as Child Protective Services (CPS) nationally; and started to receive lots of death threats. Unfortunately her video was never published as her and her husband were murdered, being found shot to death in March 2010.
> Listen, if people on the streets will murder someone else for a $100 pair of Air Jordan’s, you better believe that they will murder a Senator who threatens an $80 billion child trafficking cash machine.
Okay, we're turning into conspiracy theories here. Multiple autopsies ruled this as a murder-suicide.
"What I want to do is admit this baby to Boise. Not because it is medically necessary ... but because then there is a few more degrees of seperation ... and then when they do not know, get the baby out with CPS to the foster parents"
There's no need to remind me about EMS in this case. I've been licensed in multiple states and obtained NREMT certification, and I've also transported patients. I'm well aware that for EMS purposes there would be absolutely zero implied consent to treat this baby without parental consent. In any case, part of the report is just quotes from the doctor, whose own records reflect as much.
>That sounds like the issue was with the police, not with medical malpractice. I'm skeptical, though, of "illegal secret prosecutions".
I'm not saying it is "malpractice", because malpractice has a very specific statutory meaning. Acting within the bounds of the system, by removing consent, isn't legally malpractice. What doctors have done is say, "you better fucking do what I say, and give consent, or else I will call CPS, who will then ensure what I say is done, so do I have your consent?" That is not consent, because consent cannot be obtained under the threat of violently removing family members, even if a third party (CPS and armed men) does the removing. That is just dictating the terms, and meting out punishment when the illusion of "consent" isn't obtained.
The destination is literally a hospital. I'm not saying whether or not that was the ultimate goal, I'm just saying this PCR is a hospital to hospital transfer, with a higher level of care.
So no. Mr. Altman, and the people who made this chair, are in part responsible. You aren't a carpenter. You had a responsibility to the public to constrain this thing and to be as far ahead of it as humanly possible. Every AI-as-therapist startup I've seen in the past couple years, even the ones that were just passion projects from juniors I've trained, has been met with the same guiding wisdom: go no further. You are critically out of your depth, and you are creating a clear and evident danger to the public that you are not yet with it enough, mentally or otherwise, to mitigate all the risks of.
If I can get there, it's pretty damn obvious.