"AI responses may include mistakes. Learn more"
I did inspect element and it's actually 12px (or 9pt). For context the rest of the text (non-header) is 18px. That seems fine to me? It's small to be unobtrusive, but not exactly invisible either.
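(For what it's worth, those two figures are consistent: CSS defines 96px per inch and points are 72 per inch, so 12px × 72/96 = 9pt exactly.)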
Especially in an area you own, like your own website or property.
Want to dump toxic waste in your backyard? Just put up a sign so your neighbors know, then if they stick around it's on them, really, no right to complain.
Want to brake-check the person behind you on the highway? Bumper sticker that says "this vehicle may stop unexpectedly". Wow, just like that you're legally off the hook!
Want to hack someone's computer and steal all their files? Just put a disclaimer on the bottom of your website letting them know that by visiting the site they've given you permission to do so.
Stop strawmanning. Just because I support Google AI answers with a disclaimer doesn't mean I think a disclaimer is carte blanche to do literally anything.
how would you feel if someone searched for your name, and Google's first result stated that you, unambiguously (by name and city), are a registered sex offender?
not a quote from someone else, just completely made up based on nothing other than word salad
would you honestly think "oh that's fine, because there's size-8 text at the bottom saying it may be incorrect"
I very much doubt it
>how would you feel if someone searched for your name, and Google's first result stated that you, unambiguously (by name and city), are a registered sex offender?
Suppose AI wasn't in the picture, and Google was only returning a snippet of the top result, which was a slanderous site saying that you're a registered sex offender. Should Google still be held liable? If so, should they be held liable immediately, or only after a chance to issue a correction?
Considering the extent to which people have very strong opinions about "their" side in the conflict, to the point of committing violent acts, especially when feeling betrayed, I don't think spreading this particular piece of disinformation is any less potentially dangerous than the things I listed.
I do understand it is a complicated matter, but it looks like Google just wants to be there, no matter what, in the GenAI race. How long will it take for those snippets to become sponsored content? They are marketing them as the first thing a Google user should read.
In your hypothetical, Google is only copying a snippet from a website. They're only responsible for amplifying the reach of that snippet.
In the OP case, Google are editorializing, which means it is clearly Google's own speech doing the libel.
The source of the defamatory text is Google’s own tool, therefore it is Google’s fault, and therefore they should be held liable immediately.
In the latter case I'm fine with "yes" and "immediately". When you build a system that purports to give answers to real world questions, then you're responsible for the answers given.
> information is from another website and may not be correct.
You wouldn't punish the person who owns the park if someone inside it breaks the law, as long as they were facilitating compliance with the law. And Google facilitates compliance by allowing you to take down slanderous material by putting in a request; further, you can go after the original slanderer if you like.
But in this case Google is putting out slanderous information it has created itself. So Google, in my mind, is left holding the bag.
No, there's no "wording" that gets you off the hook. That's the point. It's a question of design and presentation. Would a legal "Reasonable Person" seeing the site know it was another site's info, e.g. literally showing the site in an iframe, or is Google presenting it as their own info?
If Google is presenting the output of a text generator they wrote, it's easily the latter.
What you said might have been true in the early days of Google, but Google clearly doesn't do exact word matches anymore. There's quite a lot of fuzzy matching going on, which means there's arguably some editorializing going on. This might be relevant if someone searched for "john smith rapist" and got back results for him sexually harassing someone. It might even be phrased in such a way that makes it sound like he was a rapist, e.g. "Florida man accused of sexually...". Moreover, even before AI results, I've seen enough people say "google says..." in reference to search results that it's questionable to claim that people think non-AI search results aren't by Google.
Nice try, but asking a question confirming your opponent's position isn't a strawman.
>No, there's no "wording" that gets you off the hook. That's the point. It's a question of design and presentation. Would a legal "Reasonable Person" seeing the site know it was another site's info, e.g. literally showing the site in an iframe, or is Google presenting it as their own info?
So you want the disclaimer to be reworded and moved up top?
It doesn't feel like something people gradually pick up on over the years, either; it just feels like sarcasm is either redundantly pointed out for those who get it, or guaranteed to get a literal-interpretation response.
Maybe it's because the literal interpretation of sarcasm is almost always so wrong that it inspires people to comment much more. So we just can't get away from this inefficient encoding/communication pattern.
But then again, maybe I just tend to read things that sound very wrong to me as sarcasm, so perhaps a lot of people out there are honestly saying things I assume they mean as a joke.
It isn't inherently, but it certainly can be! For example in the way you used it.
If I were to ask, confirming your position, "so you believe the presence of a disclaimer removes all legal responsibility?" then you would in turn accuse me of strawmanning.
Back to the topic at hand, I believe the bar that would need to be met exceeds the definition of "disclaimer", regardless of wording or position. So no.
Wouldn't this basically make any sort of AI as a service untenable? Moreover, how would this apply to open-weights models? If I asked Llama whether someone was a pedophile, and it wrongly answered in the affirmative, can that person sue Meta? What if it's run through a third party like Cerebras? Are they on the hook? If not, is laundering their models through a third party all AI companies need to do to dodge responsibility?
If the service was good enough that you'd accept liability for its bad side effects, no?
If it isn't good enough? Good riddance. The company will have to employ a human instead. The billionaires' coffers will take the hit, I'm sure.
Edit: > If not, is laundering their models through a third party all AI companies need to do to dodge responsibility?
Honestly, my analogy would be that an LLM is a tool like a printing press. If a newspaper prints libel, you go after the newspaper, not the person that sold them the printing press.
Same here. It would be on the person using the LLM and disseminating its results, rather than the LLM publisher. The person showing the result of the LLM should have some liability if those results are wrong or cause harm.
As a form of argument, this strikes me as pretty fallacious.
Are you claiming that the output of a model built by Google is somehow equivalent to displaying a 3rd party site in a search result?
And yeah, to your point about the literal interpretation of sarcasm being so absurd people want to correct it, I think you’re right. HN is a particularly pedantic corner of the internet, many of us like to be “right” for whatever reason.
instead of the ai saying "gruez is japanese" it should say "hacker news alleges[0] gruez is japanese"
there shouldn't be a separate disclaimer: the LLM should make true statements rather than implying that its claims are true.
You can't just put up a sticker premeditating your property damage and then it's a-okay.
No, the sticker is there to deter YOU from suing in small claims court. Because you think you can't. But you can! And their insurance can cover it!
There is also a cultural element. Countries like the UK are used to deadpan, where sarcasm is delivered in the same tone as normal speech, so thinking is required. In Japan, the majority of things are taken literally.
> As evidenced by the quote "I think a disclaimer is a carte blanche to do literally anything", the hackernews user <gruez> is clearly of the opinion that it is indeed ok to do whatever you want, as long as there is a sign stating it might happen.
* This text was summarized by the SpaceNugget LLM and may contain errors, and thusly no one can ever be held accountable for any mistakes herein.
But that aside, it is just simply the case that there are a lot of reasons why sarcasm can fail to land. So you just have to decide whether to risk ruining your joke with a tone indicator, or risk your joke failing to land and someone "correcting" you.
You cannot make a safe lawnmower. However, lawnmower makers can't just put a danger label on and get by with something dangerous - they have to put on every guard they can first. Even then they often have to show in court that the mower couldn't work as a mower if they put in a guard to prevent some specific injury, and thus they added the warning.
Which is to say: so long as they can do something and still work as a search engine, they are not allowed to use a disclaimer anyway. The disclaimer is only for when they wouldn't be a search engine.
Apart from that, it is also true that a lot of people here aren't Americans (hello from Australia). I know this is a US-hosted forum, but it is interesting to observe the divide between Americans who speak as if everyone else here is an American (e.g. "half the country") and those who realise many of us aren't.
But you're overstating it as a "divide" - I'm in both of your camps. I spoke with a USian context because yes, this site is indeed US-centric. The surveillance industry is primarily a creation of US culture, and is subject to US politics. And as much as I wish this weren't the case (even as a USian), it is, which is why you're in this topic. So I don't see that it's unreasonable for there to be a bit more to unpack coming from a different native context.
But as to your comment applying to my actual point - yes, in addition to "fraying" culture in the middle, we're also expanding it at the edges to include many more people. Although frankly on the topic of sarcasm I feel it's my fellow USians who are really falling short these days.
You'd be surprised how many Australians have never heard of "drop bears". It is just an old joke about pranking foreigners: yes, many people remember it, but many others have no clue what it is. It is one of those stereotypical Australianisms which tends to occupy more space in many non-Australian minds than in most Australian minds.
> or how "the front fell off".
I'm in my 40s, and I've lived in Australia my whole life; my father was born here, and my mother moved here when she was three years old... and I didn't know what this was. It sounded vaguely familiar, but I had no idea what it meant. Then I looked it up and discovered it is a reference to an old Clarke and Dawe skit. I know who they are, I used to watch them on TV all the time when I was young (tweens/teens), but I have no memory of ever seeing this skit in particular. Again, likely one of those Australianisms which many non-Australians know but many Australians don't.
Your examples of Australianisms are the stereotypes a non-Australian would mention; we could talk instead about the Australianisms which many Australians use without even realising they are Australianisms: for example, "heaps of" – a recognised idiom in other major English dialects, but in very common use in Australian English, much rarer elsewhere. Or "capsicum", for "bell peppers"–the Latin scientific name everywhere, but the colloquial name only in a few countries–plus botanically the hot ones are capsicum too, but in Australian English (I believe New Zealand English and Indian English too) only the mild ones are "capsicums", the hot ones are "chilis". Or "peak body"–now we are talking bureaucratese not popular parlance–which essentially means the top national activist/lobbyist group for a given subject area, whether that's LGBT people or homelessness or financial advisors.
Thanks for the clarifications. I think my first exposure to drop bears was a few decades ago on a microcontroller mailing list (PIClist). So maybe that poster was just pulling our legs.
I did perceive "front fell off" as an online phenomenon (i.e. a meme), which speaks to a growing pan-country online culture (I mean, you did get the reference, it's just not part of your Australian identity).
"peak body" is an interesting one, for the concept being acknowledged. I don't think we really explicitly state such a things in the US. I can come up with lobbying groups I think are notable, but perhaps other USians perspectives differ on that notability. Although I'm sure by the time you get to Washington DC and into the political industry there has to be a similar term.