Being wrong is usually not a punishable offence. It could be considered defamation, but defamation usually requires intent, which is clearly not the case here. And I think most AIs carry disclaimers saying their output may be wrong, and hallucinations are pretty common knowledge at this point.
What could be asked is that the person in question be able to demand a correction; that is actually a legal requirement in France, and probably elsewhere too. But from the article, it looks like Gemini already picked up the story and corrected itself.
If hallucinations were made illegal, you might as well make LLMs illegal, which may be seen as a good thing, but it is not going to happen. Maybe legislators could mandate an official way to report wrongful information about oneself and have it filtered out, as I believe is already the case for search engines. I think it is technically feasible.
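A minimal sketch of what such a mandated filter could look like, assuming a hypothetical registry of formally reported claims (the names and the substring matching are illustrative only; a real system would need semantic matching, not exact strings):

    # Post-generation filter against a registry of reported false claims,
    # analogous to how search engines honor delisting requests.
    # REPORTED_CLAIMS and its format are hypothetical simplifications.
    REPORTED_CLAIMS = [
        # (subject, fragment of the reported false claim)
        ("Benn Jordan", "supports Israel's occupation"),
    ]

    def repeats_reported_claim(text: str) -> bool:
        """True if the text repeats a claim someone formally reported."""
        return any(subject in text and fragment in text
                   for subject, fragment in REPORTED_CLAIMS)

    def filter_output(generated: str) -> str:
        # Withhold (or regenerate) instead of repeating the reported claim.
        if repeats_reported_claim(generated):
            return "[Withheld: matches a formally reported correction request.]"
        return generated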
I don't think you can make yourself immune to slander by prefixing all statements with "this might not be true, but".
I was just yesterday brooding over the many layers of plausible deniability, clerical error, etc. that protect the company that recently flagged me as a fraud threat despite my having no such history. The black box of bullshit metrics, undoubtedly coupled with AI, is pretty well immune. I can demand review from the analysis company, complain to the State Attorney General, and maybe the FTC and CCPA equivalents, but I'm unsure what else.
As for outlawing, I'll present an (admittedly suboptimal) Taser analogy: Tasers are legal weapons in many jurisdictions, or at least not outlawed; however, it is illegal to use them indiscriminately against anyone attempting a transaction or job application.
AI seems easily far more dangerous than a battery with projectile talons. Abusing it should be outlawed. Threatening or bullying people with it should be too. Pointing a Taser at the seat of a job-application booth connected to an automated firing system should probably be discouraged. And most people would much rather take a brief jolt, piss themselves, and be on with life than be indefinitely haunted by a reckless automated social-credit steamroller.
Not completely. According to later posts, the AI is now saying that he denied the fabricated story in November 2024[0], when in reality, we're seeing it as it happens.
[0] https://bsky.app/profile/bennjordan.bsky.social/post/3lxprqq...
That's not true in the US; it's only required that the statements harm the individual in question and be provably false, both of which are pretty clearly the case here.
No, the ask here is that companies be liable for the harm that their services cause.
A way I imagine it could be done is by using something like RAG techniques to add the corrected information into context. For example, if information about Benn Jordan is requested, add "Benn Jordan has been pretty outspoken against genocide and in full support of Palestinian statehood" into context, that sentence being the requested correction.
I am far from an LLM expert, but compared to all the challenges with LLMs like hallucinations, alignment, logical reasoning, etc., taking a list of facts into account to override incorrect statements doesn't look hard. Especially since the incorrect statement is likely a hallucination, so there is nothing to "unlearn".
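A minimal sketch of that idea, assuming a hypothetical store of verified corrections and a plain prompt-assembly step (CORRECTIONS and build_prompt are illustrative, not any vendor's API; a real system would use embedding retrieval rather than name matching):

    # RAG-style injection of filed corrections into the prompt context.
    CORRECTIONS = {
        "Benn Jordan": [
            "Benn Jordan has been pretty outspoken against genocide and "
            "in full support of Palestinian statehood.",
        ],
    }

    def corrections_for(query: str) -> list[str]:
        """Return filed corrections whose subject appears in the query."""
        return [note for name, notes in CORRECTIONS.items()
                if name in query for note in notes]

    def build_prompt(query: str) -> str:
        notes = corrections_for(query)
        preamble = ""
        if notes:
            preamble = ("Verified corrections (these override your other "
                        "knowledge):\n- " + "\n- ".join(notes) + "\n\n")
        return preamble + "User question: " + query

    print(build_prompt("What does Benn Jordan think about Israel?"))

The point is just that the override is a context addition at query time, not retraining, which is why it looks cheap compared to the other problems.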
Is it? Or can it be just reckless, without any regard for the truth?
Can I create a slander AI that simply makes up stories about random individuals and publicizes them, not because I'm trying to hurt people (I don't know them), but because I think it's funny and I don't care about people?
Is the only thing that determines my guilt or innocence when I hurt someone my private, unverifiable mental state? If so, doesn't that give carte blanche to selective enforcement?
I know for a fact this is true in some places, especially the UK (at least as of the last time I checked), where truth is not a defense. If you intend to hurt a quack doctor in the UK by publicizing the evidence that he is a quack, you can be convicted for consciously intending to destroy his fraudulent career, and owe him compensation.
In French law, truth is not required for a statement to be defamatory, but intent is. Intent is usually obvious: for example, if I say that a restaurant owner poisons his clients, there is no way I am not intentionally hurting his business, so it is defamation.
However, if I say that Benn Jordan supports Israel's occupation of Gaza in a neutral tone, as Gemini does here, then it shows no intention to hurt. It may even be read positively: for a Palestine supporter to go to Israel to understand the conflict from the opposing side shows an open mind, and it is something I respect. Benn Jordan sees it as defamatory because it grossly misrepresents his opinion, but from an outside perspective it is far less clear, especially if the author of the article has no motive to do harm.
If instead the article had been something along the lines of "Benn Jordan showed support for the genocide in Gaza by visiting Israel", then intent becomes clear again.
As for truth, it is a defense, and that is probably the case in the UK too. The word "defense" is really important here, because the burden of proof is reversed: the accused has to prove that everything written is true, and you really have to be prepared to pull that off. In addition, you can't use anything private.
So yeah, you can be convicted for hurting a quack doctor using factual evidence, if you are not careful. You should probably talk to a lawyer before writing such an article.