693 points jsheard | 21 comments
meindnoch ◴[] No.45093248[source]
It's not Google's fault. The 6pt text at the bottom clearly says:

"AI responses may include mistakes. Learn more"

replies(2): >>45093295 #>>45093476 #
blibble ◴[] No.45093476[source]
it IS google's fault, because they have created and are directly publishing defamatory content

how would you feel if someone searched for your name, and Google's first result states that you, unambiguously (by name and city) are a registered sex offender?

not a quote from someone else, just completely made up based on nothing other than word salad

would you honestly think "oh that's fine, because there's size-8 text at the bottom saying it may be incorrect"?

I very much doubt it

replies(2): >>45093503 #>>45093531 #
1. gruez ◴[] No.45093503[source]
>it IS google's fault, because they have created and are directly publishing defamatory content

>how would you feel if someone searched for your name, and Google's first result states that you, unambiguously (by name and city) are a registered sex offender?

Suppose AI wasn't in the picture, and google was only returning a snippet of the top result, which was a slanderous site saying that you're a registered sex offender. Should google still be held liable? If so, should they be held liable immediately, or only after a chance to issue a correction?

replies(6): >>45093589 #>>45093604 #>>45093625 #>>45093638 #>>45093810 #>>45094110 #
2. simmerup ◴[] No.45093589[source]
No, but the person Google is linking to should be held liable.
replies(2): >>45093621 #>>45093701 #
3. atq2119 ◴[] No.45093604[source]
Yes, they should also be held liable, but clearly the case of AI is worse.

In your hypothetical, Google is only copying a snippet from a website. They're only responsible for amplifying the reach of that snippet.

In the OP case, Google are editorializing, which means it is clearly Google's own speech doing the libel.

4. gruez ◴[] No.45093621[source]
Why does google get off the hook in that case? I'd still be quite upset if it wasn't in the AI box, and even before the AI box there were plenty of people who took the snippets at face value.
replies(1): >>45093694 #
5. summermusic ◴[] No.45093625[source]
That hypothetical scenario does not matter; it is a distraction from the real issue, which is that Google's tool produces defamatory text unsubstantiated by any material online.

The source of the defamatory text is Google’s own tool, therefore it is Google’s fault, and therefore they should be held liable immediately.

6. margalabargala ◴[] No.45093638[source]
That would depend on whether the snippet was presented as "this is a view of the other website" vs "this is some information".

In the latter case I'm fine with "yes" and "immediately". When you build a system that purports to give answers to real-world questions, then you're responsible for the answers given.

In the former case, it's clear the information is from another website and may not be correct.

replies(1): >>45093659 #
7. gruez ◴[] No.45093659[source]
>That would depend on whether the snippet was presented as "this is a view of the other website" vs "this is some information"

So all google had to do was word their disclaimer differently?

replies(1): >>45093739 #
8. simmerup ◴[] No.45093694{3}[source]
In my mind the Google result page is like a public space.

You wouldn't punish the person who owns the park if someone inside it breaks the law, so long as the owner was facilitating the law being obeyed. And Google facilitates the law by allowing you to request the takedown of slanderous material, and further you can go after the original slanderer if you like.

But in this case Google is putting out slanderous information it created itself. So Google, in my mind, is left holding the bag.

replies(1): >>45093995 #
9. anonymars ◴[] No.45093701[source]
Isn't the whole point that there was no source being linked to, because the AI made it up?
replies(1): >>45093719 #
10. simmerup ◴[] No.45093719{3}[source]
Sorry, I was responding to this: > Suppose AI wasn't in the picture, and google was only returning a snippet of the top result
replies(1): >>45094002 #
11. margalabargala ◴[] No.45093739{3}[source]
Stop strawmanning.

No, there's no "wording" that gets you off the hook. That's the point. It's a question of design and presentation. Would a legal "Reasonable Person" seeing the site know it was another site's info, e.g. literally showing the site in an iframe, or is google presenting it as their own info?

If google is presenting the output of a text generator they wrote, it's easily the latter.

replies(2): >>45093841 #>>45094203 #
12. aDyslecticCrow ◴[] No.45093810[source]
The article author could be sued. Gemini cannot be.
13. gruez ◴[] No.45093841{4}[source]
>Stop strawmanning.

Nice try, but asking a question confirming your opponent's position isn't a strawman.

>No, there's no "wording" that gets you off the hook. That's the point. It's a question of design and presentation. Would a legal "Reasonable Person" seeing the site know it was another site's info, e.g. literally showing the site in an iframe, or is google presenting it as their own info?

So you want the disclaimer to be reworded and moved up top?

replies(3): >>45093912 #>>45094242 #>>45095620 #
14. margalabargala ◴[] No.45093912{5}[source]
> Nice try, but asking a question confirming your opponent's position isn't a strawman.

It isn't inherently, but it certainly can be! For example in the way you used it.

If I were to ask, confirming your position, "so you believe the presence of a disclaimer removes all legal responsibility?" then you would in turn accuse me of strawmanning.

Back to the topic at hand, I believe the bar that would need to be met exceeds the definition of "disclaimer", regardless of wording or position. So no.

15. gruez ◴[] No.45093995{4}[source]
>But in this case Google is putting out slanderous information it created itself. So Google, in my mind, is left holding the bag.

Wouldn't this basically make any sort of AI as a service untenable? Moreover, how would this apply to open-weights models? If I asked Llama whether someone was a pedophile, and it wrongly answered in the affirmative, can that person sue Meta? What if it's run through a third party like Cerebras? Are they on the hook? If not, is all that's needed for AI companies to dodge responsibility to launder their models through a third party?

replies(1): >>45094025 #
16. anonymars ◴[] No.45094002{4}[source]
Got it, and agreed it's a very different scenario
17. simmerup ◴[] No.45094025{5}[source]
> Wouldn't this basically make any sort of AI as a service untenable

If the service was good enough that you'd accept liability for its bad side effects, no?

If it isn't good enough? Good riddance. The company will have to employ a human instead. The billionaires' coffers will take the hit, I'm sure.

E: > If not, is all that's needed for AI companies to dodge responsibility to launder their models through a third party?

Honestly, my analogy would be that an LLM is a tool, like a printing press. If a newspaper prints libel, you go after the newspaper, not the person who sold them the printing press.

Same here. It would be on the person using the LLM and disseminating its results, rather than the LLM publisher. The person showing the result of the LLM should have some liability if those results are wrong or cause harm.

18. haswell ◴[] No.45094110[source]
Why would we suppose AI isn’t in the picture? You’re describing unrelated scenarios. Apples and oranges. You can’t wish away the AI and then conclude what’s happening is acceptable because of how something entirely unrelated has been treated in the past.

As a form of argument, this strikes me as pretty fallacious.

Are you claiming that the output of a model built by Google is somehow equivalent to displaying a 3rd party site in a search result?

19. rectang ◴[] No.45094203{4}[source]
Exactly. This is the consequence when search engines cut out all the sites they used to send traffic to and instead present AI summaries as their own seemingly-authoritative content in order to keep the user from leaving. If you provide material in a way that your users trust, then you have to back it up. The alternative is to make sure that your users don’t trust it (and thus are disinclined to use it).
20. 8note ◴[] No.45094242{5}[source]
the snippet should be written differently.

instead of the ai saying "gruez is japanese" it should say "hacker news alleges[0] gruez is japanese"

there shouldn't be a separate disclaimer: the LLM should make statements that are themselves true, rather than implying that the underlying claims are true.

21. bluGill ◴[] No.45095620{5}[source]
No disclaimer is allowed. They can link to the misleading/wrong site, but only when it is obviously a link.

You cannot make a safe lawnmower. However, lawnmower makers can't just put a danger label on and get by with something dangerous - they have to put on every guard they can first. Even then they often have to show in court that the mower couldn't work as a mower if they put in a guard to prevent some specific injury, and thus they added the warning.

Which is to say: so long as they can do something and still work as a search engine, they are not allowed to rely on a disclaimer. The disclaimer is only for cases where the safeguard would stop them from working as a search engine at all.