693 points jsheard | 30 comments
1. deepvibrations ◴[] No.45093169[source]
The law needs to stand up and make an example here, otherwise this will just continue and at some point a real disaster will occur due to AI.
replies(2): >>45093230 #>>45094131 #
2. koolba ◴[] No.45093230[source]
> The law needs to stand up and make an example here, otherwise this will just continue and at some point a real disaster will occur due to AI.

What does it mean to “make an example”?

I’m for cleaning up AI slop as much as the next natural-born meat bag, but I also detest a litigious society. The types of legal action that would stop this in the future would immediately be weaponized.

replies(4): >>45093255 #>>45093318 #>>45093445 #>>45093764 #
3. recursive ◴[] No.45093255[source]
Weapons against misinformation are good weapons. Bring on the weaponization.
replies(2): >>45093307 #>>45093321 #
4. gruez ◴[] No.45093307{3}[source]
>Weapons against misinformation are good weapons

It's all fun and games until the political winds shift the other way and the other side starts attacking your side for "misinformation".

replies(2): >>45097470 #>>45102007 #
5. Cthulhu_ ◴[] No.45093318[source]
> The types of legal action that stops this in the future would immediately be weaponized.

As it should be; this is misinformation and/or slander. The disclaimer is not good enough. A few years ago, Google and most of social media were united in fact-checking and fighting "fake news". Now they push AI-generated information that uses authoritative language at the very top of, e.g., search results.

The disclaimer is moot if people consider AI to be authoritative anyway.

6. Newlaptop ◴[] No.45093321{3}[source]
The weapons will be used by the people in power.

Do you want your country's current political leaders to have more weapons to suppress information they dislike or facts they disagree with? If yes, will you also be happy if your country's opposition leaders gain that power in a few years?

replies(3): >>45093509 #>>45093565 #>>45093728 #
7. poulpy123 ◴[] No.45093445[source]
I don't like a litigious society, and I don't know if the case here would be enough to meet my threshold, but companies are responsible for the AI they provide, and should not be able to hide behind "the algorithm" when there are issues.
8. gregates ◴[] No.45093509{4}[source]
There are already laws against libel and slander. And yes, people like Trump and Musk routinely try to abuse them. They are often unsuccessful. The existence of the laws does not seem to be the relevant factor in whether these attempts to abuse the system succeed.
9. delusional ◴[] No.45093565{4}[source]
Two counterpoints:

What we're talking about here are legal democratic weapons. The only thing stopping us from using these weapons right now is democratic governance. "The bad people", being unconcerned with democracy, can already use these weapons right now. Trump's unilateral application of tariffs wasn't predestined by some advancement of governmental power by the Democrats. He just did it. We don't even know if it was legal.

Secondly, the people in power are the ones spreading the misinformation we are looking at here. Information is being suppressed by the powerful, namely Google.

Placing limits on democracy in the name of "stopping the bad guys" will usually just stop the good guys from doing good things, while the bad guys do the bad things anyway.

10. tehwebguy ◴[] No.45093728{4}[source]
They already do and they don’t even have to be powerful.

A conspiracy guy who ran a disqualified campaign for a TN rep seat sued Facebook for defamation over a hallucination saying he took part in the J6 riots. They settled the suit and hired him as an anti-DEI advisor.

(I don’t have proof that hiring him was part of the undisclosed settlement terms but since I’m not braindead I believe it was.)

replies(1): >>45101326 #
11. aDyslecticCrow ◴[] No.45093764[source]
If a human published an article claiming the exact same thing as Gemini, the author could be sued, and the plaintiff would have a pretty good case.

But when Gemini does it, it's a "mistake by the algorithm". AI is used as a responsibility-diversion machine.

This is a rather harmless example. But what about dangerous medical advice? What about openly false advertising? What about tax evasion? If an AI does it, is it okay because nobody is responsible?

If applying a proper chain of liability to AI output makes some uses of AI impossible, so be it.

replies(2): >>45094039 #>>45094248 #
12. GuB-42 ◴[] No.45094131[source]
On what grounds?

Being wrong is usually not a punishable offence. It could be considered defamation, but defamation is usually required to be intentional, and that is clearly not the case here. And I think most AIs have disclaimers saying that they may be wrong, and hallucinations are pretty common knowledge at this point.

What could be asked is for the person in question to be able to make a correction; that is actually a legal requirement in France, and probably elsewhere too. But from the article, it looks like Gemini already picked up the story and corrected itself.

If hallucinations were made illegal, you might as well make LLMs illegal, which may be seen as a good thing, but it is not going to happen. Maybe legislators could mandate an official way to report wrongful information about oneself and have it filtered out, as I think is already the case for search engines. I think it is technically feasible.

replies(7): >>45094409 #>>45094520 #>>45094672 #>>45094811 #>>45094849 #>>45094863 #>>45096741 #
13. throwawaymaths ◴[] No.45094248{3}[source]
> If a human published an article claiming the exact same thing as Gemini, the author could be sued, and the plaintiff would have a pretty good case.

Actually, no. If you published an article where you accidentally copypasta'd text from the wrong email (for example) on a busy day and wound up doing the same thing, it would be an honest mistake; you would be expected to put up a correction and move on with your life as a journalist.

replies(1): >>45103477 #
14. Retr0id ◴[] No.45094409[source]
Google's disclaimers clearly aren't cutting it, and "correcting" it isn't really possible if it's a dynamic response to each query.

I don't think you can make yourself immune to slander liability by prefixing all statements with "this might not be true, but".

replies(1): >>45094983 #
15. eth0up ◴[] No.45094520[source]
"if hallucinations were made illegal..."

I was just yesterday brooding over the many layers of plausible deniability, clerical error, etc. that protect the company that recently flagged me as a fraud threat, despite my having no such precedent. The black box of bullshit metrics, coupled undoubtedly with AI, is pretty well immune. I can demand review from the analysis company, complain to the State Attorney General, the FTC, and CCPA equivalents maybe, but I'm unsure what else.

As for outlawing, I'll present an (admittedly suboptimal) Taser analogy: Tasers are legal weapons in many jurisdictions, or else not outlawed; however, it is illegal to use them indiscriminately against anyone attempting a transaction or job application.

AI seems easily far more dangerous than a battery with projectile talons. Abusing it should be outlawed. Threatening or bullying people with it should be too. Pointing a Taser at the seat of a job application booth connected to an automated firing system should probably be discouraged. And most people would much rather take a brief jolt, piss themselves, and get on with life than be indefinitely haunted by a reckless automated social credit steamroller.

16. Sophira ◴[] No.45094672[source]
> it looks like Gemini already picked up the story and corrected itself.

Not completely. According to later posts, the AI is now saying that he denied the fabricated story in November 2024[0], when in reality, we're seeing it as it happens.

[0] https://bsky.app/profile/bennjordan.bsky.social/post/3lxprqq...

17. delecti ◴[] No.45094811[source]
Defamation does not have to be intentional; it can also be a statement made with reckless disregard for whether it's true or not. That's a pretty solid description of LLM hallucinations.
18. jedimastert ◴[] No.45094849[source]
> It could be considered defamation, but defamation is usually required to be intentional

That's not true in the US; it's only required that the statements harm the individual in question and are provably false, both of which are pretty clear here.

19. jedimastert ◴[] No.45094863[source]
> If hallucinations were made illegal, you might as well make LLMs illegal

No, the ask here is that companies be held liable for the harm their services cause.

20. GuB-42 ◴[] No.45094983{3}[source]
Correction doesn't seem like an impossible task to me.

A way I imagine it could be done is by using something like RAG techniques to add the corrected information into the context. For example, if information about Benn Jordan is requested, add "Benn Jordan has been pretty outspoken against genocide and in full support of Palestinian statehood" to the context, that sentence being the requested correction.

I am far from an LLM expert, but compared to all the challenges with LLMs like hallucinations, alignment, logical reasoning, etc., taking a list of facts into account to override incorrect statements doesn't look hard, especially considering that the incorrect statement is likely to be a hallucination, so there is nothing to "unlearn".
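
A minimal sketch of that correction-injection idea, assuming a simple keyword lookup (the "corrections" registry, "find_corrections", and "build_prompt" names are hypothetical, and the actual model call is left out):

    # Hypothetical registry of verified corrective facts, keyed by subject.
    corrections = {
        "benn jordan": [
            "Correction (verified): Benn Jordan has been outspoken against "
            "the genocide in Gaza and fully supports Palestinian statehood."
        ],
    }

    def find_corrections(query: str) -> list[str]:
        """Return corrective facts for any registered subject named in the query."""
        q = query.lower()
        return [fact
                for subject, facts in corrections.items()
                if subject in q
                for fact in facts]

    def build_prompt(query: str) -> str:
        """Prepend retrieved corrections to the query, RAG-style, so they
        take precedence over whatever the model would otherwise generate."""
        facts = find_corrections(query)
        if facts:
            context = "\n".join(facts)
            return ("The following verified corrections override any "
                    "conflicting information:\n"
                    f"{context}\n\nQuestion: {query}")
        return query

    print(build_prompt("What are Benn Jordan's views on Israel?"))

In a real system the lookup step would presumably be entity recognition or an embedding search rather than a substring match, but the principle is the same: retrieved corrections are injected into the context window ahead of the question.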

replies(1): >>45095856 #
21. larodi ◴[] No.45095856{4}[source]
Of course it would be RAG of some sort; this is low-hanging fruit. But then it is perhaps not so easy, and it is not a silver bullet that would kill off competition such as Perplexity, which, honestly, handles this whole summary-search business much better.
22. pessimizer ◴[] No.45096741[source]
> defamation is usually required to be intentional

Is it? Or can it be just reckless, without any regard for the truth?

Can I create a slander AI that simply makes up stories about random individuals and publicizes them, not because I'm trying to hurt people (I don't know them), but because I think it's funny and I don't care about people?

Is the only thing that determines my guilt or innocence when I hurt someone my private, unverifiable mental state? If so, doesn't that give carte blanche to selective enforcement?

I know for a fact this is true in some places, especially the UK (at least as of the last time I checked), where the truth is not a defense. If you intend to hurt a quack doctor in the UK by publicizing the evidence that he is a quack doctor, you can be convicted for consciously intending to destroy his fraudulent career, and owe him compensation.

replies(1): >>45097558 #
23. recursive ◴[] No.45097470{4}[source]
I'm having a hard time thinking of any winds where I would want it to be acceptable to publish false statements about a person. It doesn't seem there's even any dispute about whether the statements are false. These things can be complicated, but this is not complicated. I'm not feeling a need to rush to the defense of Google for making false statements about some guy.
24. GuB-42 ◴[] No.45097558{3}[source]
I think it is the same in France as it is in the UK.

In French law, truth is not required for a statement to be defamatory, but intent is. Intent is usually obvious. For example, if I say a restaurant owner poisons his clients, there is no way I am not intentionally hurting his business; it is defamation.

However, if I say in a neutral tone that Benn Jordan supports Israel's occupation of Gaza, like Gemini does here, then it shows no intention to hurt. It may even be seen positively; I mean, for a Palestine supporter to go to Israel to understand the conflict from the opposing side shows an open mind, and it is something I respect. Benn Jordan sees it as defamatory because it grossly misrepresents his opinion, but from an outside perspective, it is way less clear, especially if the author of the article has no motive to do harm.

If instead the article had been something along the lines of "Benn Jordan showed support for the genocide in Gaza by visiting Israel", then intent becomes clear again.

As for truth, it is a defense, and that is probably the case in the UK too. The word "defense" is really important here, because the burden of proof is reversed. The accused has to prove that everything written is true, and you really have to be prepared to pull that off. In addition, you can't use anything private.

So yeah, you can be convicted for hurting a quack doctor using factual evidence, if you are not careful. You should probably talk to a lawyer before writing such an article.

25. aspenmayer ◴[] No.45101326{5}[source]
> A conspiracy guy who ran a disqualified campaign for a TN rep seat sued Facebook for defamation over a hallucination saying he took part in the J6 riots. They settled the suit and hired him as an anti-DEI advisor.

https://en.wikipedia.org/wiki/Robby_Starbuck#Lawsuit_against...

> (I don’t have proof that hiring him was part of the undisclosed settlement terms but since I’m not braindead I believe it was.)

It seems to be public information that this was a condition of the settlement, so no speculation necessary:

https://www.theverge.com/news/757537/meta-robby-starbuck-con... | https://archive.is/uihsi

https://www.wsj.com/tech/ai/meta-robby-starbuck-ai-lawsuit-s... | https://archive.is/0VKrL

replies(1): >>45102752 #
26. ◴[] No.45102007{4}[source]
27. tehwebguy ◴[] No.45102752{6}[source]
Yeah, but the articles don't directly attribute that specific part to any source, and I'm not trying to get sued next.
replies(1): >>45105244 #
28. johnecheck ◴[] No.45103477{4}[source]
Actually, yes. In the US, the 'actual malice' standard only applies to 'public figures'. Outside of that, damaging a person's reputation with false statements is defamation regardless of whether it was due to negligence or malice.
replies(1): >>45105373 #
29. aspenmayer ◴[] No.45105244{7}[source]
Sued for quoting an article? Get real. No judge would approve those charges.
30. ◴[] No.45105373{5}[source]