"AI responses may include mistakes. Learn more"
"AI responses may include mistakes. Learn more"
How would you feel if someone searched for your name, and Google's first result stated that you, unambiguously (identified by name and city), are a registered sex offender?
Not a quote from someone else, just something made up out of nothing but word salad.
Would you honestly think, "oh, that's fine, because there's size-8 text at the bottom saying it may be incorrect"?
I very much doubt it.
>How would you feel if someone searched for your name, and Google's first result stated that you, unambiguously (identified by name and city), are a registered sex offender?
Suppose AI wasn't in the picture, and Google was only returning a snippet of the top result, which was a libelous site saying that you're a registered sex offender. Should Google still be held liable? If so, should they be held liable immediately, or only after a chance to issue a correction?
In your hypothetical, Google is only copying a snippet from a website. They're only responsible for amplifying the reach of that snippet.
In the OP's case, Google is editorializing, which means the libel is clearly Google's own speech.
The source of the defamatory text is Google's own tool; therefore it is Google's fault, and therefore they should be held liable immediately.
In the latter case I'm fine with "yes" and "immediately". When you build a system that purports to give answers to real-world questions, you're responsible for the answers given.
>information is from another website and may not be correct.
You wouldn't punish the person who owns a park if someone inside it breaks the law, so long as the owner was facilitating the law being obeyed. And Google facilitates the law by allowing you to request a takedown of libelous material, and further, you can go after the original author if you like.
But in this case Google itself is putting out libelous information it has created itself. So Google, in my mind, is left holding the bag.
No, there's no "wording" that gets you off the hook. That's the point. It's a question of design and presentation. Would a legal "Reasonable Person" seeing the site know it was another site's info, e.g. because the site is literally shown in an iframe, or would they see Google presenting it as its own info?
If Google is presenting the output of a text generator it wrote, it's easily the latter.
Nice try, but asking a question that confirms your opponent's position isn't a strawman.
>No, there's no "wording" that gets you off the hook. That's the point. It's a question of design and presentation. Would a legal "Reasonable Person" seeing the site know it was another site's info, e.g. because the site is literally shown in an iframe, or would they see Google presenting it as its own info?
So you want the disclaimer to be reworded and moved up top?
It isn't inherently, but it certainly can be! For example, in the way you used it.
If I were to ask, confirming your position, "so you believe the presence of a disclaimer removes all legal responsibility?" then you would in turn accuse me of strawmanning.
Back to the topic at hand: I believe the bar that would need to be met exceeds the definition of "disclaimer", regardless of its wording or placement. So no.
Wouldn't this basically make any sort of AI-as-a-service untenable? Moreover, how would this apply to open-weights models? If I asked Llama whether someone was a pedophile, and it wrongly answered in the affirmative, can that person sue Meta? What if it's run through a third party like Cerebras? Are they on the hook? If not, is all that's needed for AI companies to dodge responsibility to launder their models through a third party?
If the service was good enough that you'd accept liability for its bad side effects? Then no.
If it isn't good enough? Good riddance. The company will have to employ a human instead. The billionaires' coffers will take the hit, I'm sure.
Edit: > If not, is all that's needed for AI companies to dodge responsibility to launder their models through a third party?
Honestly, my analogy would be that an LLM is a tool, like a printing press. If a newspaper prints libel, you go after the newspaper, not the person who sold them the printing press.
Same here. Liability would fall on the person using the LLM and disseminating its results, rather than on the LLM's publisher. The person showing the LLM's output should bear some liability if those results are wrong or cause harm.
As a form of argument, this strikes me as pretty fallacious.
Are you claiming that the output of a model built by Google is somehow equivalent to displaying a third-party site in a search result?
Instead of the AI saying "gruez is Japanese", it should say "Hacker News alleges[0] gruez is Japanese".
There shouldn't be a separate disclaimer: the LLM should make statements that are themselves true (the allegation really was made) rather than imply that the underlying claims are true.
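A minimal sketch of what that could look like (all names here are hypothetical, not any real answer pipeline): instead of emitting a retrieved claim as a bare assertion, the renderer wraps it in an attribution plus a footnote, so the output sentence is true even when the underlying claim is false.

    from dataclasses import dataclass

    @dataclass
    class Claim:
        text: str    # the claim as found, e.g. "gruez is Japanese"
        source: str  # who made it, e.g. "Hacker News"
        url: str     # link for the footnote

    def render_attributed(claims: list[Claim]) -> str:
        # Emit "SOURCE alleges[n] that CLAIM" plus a footnote list,
        # rather than asserting CLAIM directly.
        lines, footnotes = [], []
        for n, c in enumerate(claims):
            lines.append(f"{c.source} alleges[{n}] that {c.text}")
            footnotes.append(f"[{n}] {c.url}")
        return "\n".join(lines + [""] + footnotes)

    print(render_attributed([
        Claim("gruez is Japanese", "Hacker News",
              "https://news.ycombinator.com/item?id=..."),
    ]))

Of course the hard part is getting the model to preserve the attribution instead of flattening "X alleges Y" into "Y", but that's exactly the design bar being argued for here.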
You cannot make a perfectly safe lawnmower. But lawnmower makers can't just put a danger label on and get by with something dangerous; they have to put on every guard they can first. Even then they often have to show in court that the mower couldn't work as a mower if it had a guard preventing some specific injury, and that's why they added the warning instead.
Which is to say: so long as they can do something and still work as a search engine, they aren't allowed to rely on a disclaimer. The disclaimer is only for the dangers that can't be removed while still being a search engine.