693 points | jsheard | 2 comments
meindnoch ◴[] No.45093248[source]
It's not Google's fault. The 6pt text at the bottom clearly says:

"AI responses may include mistakes. Learn more"

replies(2): >>45093295 #>>45093476 #
blibble ◴[] No.45093476[source]
it IS google's fault, because they have created and are directly publishing defamatory content

how would you feel if someone searched for your name, and Google's first result states that you, unambiguously (by name and city) are a registered sex offender?

not a quote from someone else, just completely made up based on nothing other than word salad

would you honestly think "oh that's fine, because there's a size 8 text at the bottom saying it may be incorrect"

I very much doubt it

replies(2): >>45093503 #>>45093531 #
gruez ◴[] No.45093503[source]
>it IS google's fault, because they have created and are directly publishing defamatory content

>how would you feel if someone searched for your name, and Google's first result states that you, unambiguously (by name and city) are a registered sex offender?

Suppose AI wasn't in the picture, and google was only returning a snippet of the top result, which was a slanderous site saying that you're a registered sex offender. Should google still be held liable? If so, should they be held liable immediately, or only after a chance to issue a correction?

replies(6): >>45093589 #>>45093604 #>>45093625 #>>45093638 #>>45093810 #>>45094110 #
simmerup ◴[] No.45093589[source]
No, but the person Googles linking to should be held liable.
replies(2): >>45093621 #>>45093701 #
gruez ◴[] No.45093621[source]
Why does google get off the hook in that case? I'd still be quite upset if it wasn't in the AI box, and even before the AI box there's plenty of people who take the snippets at face value.
replies(1): >>45093694 #
simmerup ◴[] No.45093694{3}[source]
In my mind the Google result page is like a public space.

You wouldn't punish the person who owns the park if someone inside it breaks the law, so long as the owner was facilitating compliance with the law. And Google facilitates the law by allowing you to request that slanderous material be taken down, and further you can go after the original slanderer if you like.

But in this case Google is putting out slanderous information it created itself. So Google, in my mind, is left holding the bag.

replies(1): >>45093995 #
gruez ◴[] No.45093995{4}[source]
>But in this case Google is putting out slanderous information it created itself. So Google, in my mind, is left holding the bag.

Wouldn't this basically make any sort of AI as a service untenable? Moreover, how would this apply to open-weights models? If I asked Llama whether someone was a pedophile, and it wrongly answered in the affirmative, can that person sue Meta? What if it's run through a third party like Cerebras? Are they on the hook? If not, is all that's needed for AI companies to dodge responsibility to launder their models through a third party?

replies(1): >>45094025 #
simmerup ◴[] No.45094025[source]
> Wouldn't this basically make any sort of AI as a service untenable

If the service was good enough that you'd accept liability for its bad side effects, no?

If it isn't good enough? Good riddance. The company will have to employ a human instead. The billionaires' coffers will take the hit, I'm sure.

E: > If not, is all that's needed for AI companies to dodge responsibility is to launder their models through a third party?

Honestly, my analogy would be that an LLM is a tool like a printing press. If a newspaper prints libel, you go after the newspaper, not the person who sold them the printing press.

Same here. Liability would fall on the person using the LLM and disseminating its results, rather than on the LLM's publisher. The person showing the LLM's output should bear some liability if those results are wrong or cause harm.