and it is wilful, they know full well it has no concept of truthfulness, yet they serve up its slop output directly into the faces of billions of people
and if this makes "AI" nonviable as a business? tough shit
Of course, investors are throwing so much money at AI and AI is, in turn, buying legislators and heads of government, who are bound and determined to shield them from liability, so …
We are so screwed.
What does it mean to “make an example”?
I’m for cleaning up AI slop as much as the next natural born meat bag, but I also detest a litigious society. The types of legal action that stop this in the future would immediately be weaponized.
We all knew this would happen, but I imagine we all hoped that anyone finding something shocking there would look further into it.
Of course, with the current state of searching and laziness (not being rewarded by dopamine for every informative search vs. the big dopamine hits if you just make your mind up and continue scrolling the endless feed), that hope seems optimistic.
I did inspect element and it's actually 12px (or 9pt). For context the rest of the text (non-header) is 18px. That seems fine to me? It's small to be unobtrusive, but not exactly invisible either.
As it should; this is misinformation and/or slander. The disclaimer is not good enough. A few years ago, Google and most of social media were united in fact-checking and fighting "fake news". Now they push AI-generated information that uses authoritative language at the very top of, e.g., search results.
The disclaimer is moot if people consider AI to be authoritative anyway.
Do you want your country's current political leaders to have more weapons to suppress information they dislike or facts they disagree with? If yes, will you also be happy if your country's opposition leaders gain that power in a few years?
Especially in an area you own, like your own website or property.
Want to dump toxic waste in your backyard? Just put up a sign so your neighbors know, then if they stick around it's on them, really, no right to complain.
Want to brake-check the person behind you on the highway? Bumper sticker that says "this vehicle may stop unexpectedly". Wow, just like that you're legally off the hook!
Want to hack someone's computer and steal all their files? Just put a disclaimer on the bottom of your website letting them know that by visiting the site they've given you permission to do so.
no, ads are their flagship product. Anything else is just a medium for said ads, and therefore fair game for enshittification.
Twenty years ago, we wouldn't have had companies framing the raw output of a text generator as some kind of complete product, especially an all-encompassing general one. How do you know that these probabilistic text generators are performing valid synthesis, as opposed to word salad? You don't. So LLM technology would have been used to do things like augment search/retrieval, pointing to concrete sources and excerpts. Or to analyze a problem using math, driving formal models that might miss the mark but at least wouldn't be blatantly incorrect with a convincing narrative. Some actual vision of an opinionated product that wasn't just dumping the output and calling it a day.
Also, twenty years ago we wouldn't have had a company placing a new beta-quality product (at best) front and center as a replacement for their already wildly successful product. But it feels like the real knack of these probabilistic word generators is convincing "product people" of their supreme utility. Of course they're worried - they found something that can bullshit better than themselves.
At any rate, all of those discussions about whether humans would be capable of keeping a superintelligent AI "boxed" are laughable in retrospect. We're propping open the doors and chumming other humans' lives as chunks of raw meat, trying to coax it out.
(Definitely starting to feel like an old man here. But I've been yelling at Cloud for years so I guess that tracks)
Stop strawmanning. Just because I support google AI answers with a disclaimer, doesn't mean I think a disclaimer is a carte blanche to do literally anything.
how would you feel if someone searched for your name, and Google's first result states that you, unambiguously (by name and city) are a registered sex offender?
not a quote from someone else, just completely made up based on nothing other than word salad
would you honestly think "oh that's fine, because there's a size 8 text at the bottom saying it may be incorrect"
I very much doubt it
>how would you feel if someone searched for your name, and Google's first result states that you, unambiguously (by name and city) are a registered sex offender?
Suppose AI wasn't in the picture, and google was only returning a snippet of the top result, which was a slanderous site saying that you're a registered sex offender. Should google still be held liable? If so, should they be held liable immediately, or only after a chance to issue a correction?
Considering the extent to which people have very strong opinions about "their" side in the conflict, to the point of committing violent acts especially when feeling betrayed, I don't think spreading this particular piece of disinformation is any less potentially dangerous than the things I listed.
What we're talking about here are legal democratic weapons. The only thing stopping us from using these weapons right now is democratic governance. "The bad people", being unconcerned with democracy, can already use these weapons right now. Trump's unilateral application of tariffs wasn't predestined by some advancement of governmental power by the Democrats. He just did it. We don't even know if it was legal.
Secondly, the people in power are the ones spreading the misinformation we are looking at. Information is getting suppressed by the powerful. Namely, Google.
Placing limits on democracy in the name of "stopping the bad guys" will usually just curtail the good guys from doing good things, and bad guys doing the bad thing anyway.
But then the product manager wouldn't get a promotion. They don't seem to care about providing a good service anymore.
> probably dumber than the other ones for cost savings
It's amusing that anyone at Google thinks offering subpar and error-prone AI search results won't make their reputation even worse than it already is.
It's making stuff up, giving bad or fatal advice, promoting false political narratives, stealing content and link juice from actual content creators. They're abusing their anti-competitively dominant position, and just burning good will like it's gonna last forever. Maybe they're too big to fail, and they no longer need reputation or the trust of the public.
https://link.springer.com/article/10.1007/s10676-024-09775-5
I do understand it is a complicated matter, but it looks like Google just wants to be there, no matter what, in the GenAI race. How long will it take for those snippets to become sponsored content? They are marketing them as the first thing a Google user should read.
In your hypothetical, Google is only copying a snippet from a website. They're only responsible for amplifying the reach of that snippet.
In the OP case, Google are editorializing, which means it is clearly Google's own speech doing the libel.
Integrity is dead. Reliable journalism is dead.
The source of the defamatory text is Google’s own tool, therefore it is Google’s fault, and therefore they should be held liable immediately.
In the latter case I'm fine with "yes" and "immediately". When you build a system that purports to give answers to real world questions, then you're responsible for the answers given.
> information is from another website and may not be correct.
And even if they did, it wouldn't really matter. The way Google search is overwhelmingly used in practice, misinformation spread by it is a public hazard and needs to be treated as such.
You wouldn't punish the person who owns the park if someone inside it breaks the law, as long as they were facilitating the law being obeyed. And Google facilitates the law by allowing you to take down slanderous material by putting in a request, and further, you can go after the original slanderer if you like.
But in this case Google itself is putting out slanderous information it has created itself. So Google, in my mind, is left holding the bag.
A conspiracy guy who ran a disqualified campaign for a TN rep seat sued Facebook for defamation for a hallucination saying he took part in the J6 riots. They settled the suit and hired him as an anti-DEI advisor.
(I don’t have proof that hiring him was part of the undisclosed settlement terms but since I’m not braindead I believe it was.)
> Video and trip to Israel: On August 18, 2025, Benn Jordan uploaded a YouTube video titled "I Was Wrong About Israel: What I Learned on the Ground", which detailed his recent trip to Israel.
This sounds like the recent Ryan Macbeth video https://youtu.be/qgUzVZiint0?si=D-gJ_Jc9gDTHT6f4. I believe the title is the same. Scary how it just misattributed the video.
No, there's no "wording" that gets you off the hook. That's the point. It's a question of design and presentation. Would a legal "Reasonable Person" seeing the site know it was another site's info, e.g. literally showing the site in an iframe, or is google presenting it as their own info?
If google is presenting the output of a text generator they wrote, it's easily the latter.
It's somewhat less obvious to debug, because it'll pull more context than Google wants to show in the UI. You can see this happening in AI mode, where it'll fire half a dozen searches and aggregate snippets of 100+ sites before writing its summary.
What you said might be true in the early days of google, but google clearly doesn't do exact word matches anymore. There's quite a lot of fuzzy matches going on, which means there's arguably some editorializing going on. This might be relevant if someone was searching for "john smith rapist" and got back results for him sexually harassing someone. It might even be phrased in such a way that makes it sound like he was a rapist, eg. "florida man accused of sexually...". Moreover even before AI results, I've seen enough people say "google says..." in reference to search results that it's questionable to claim that people think non-AI search results aren't by google.
But when Gemini does it, it's a "mistake by the algorithm". AI is used as a responsibility-diversion machine.
This is a rather harmless example. But what about dangerous medical advice? What about openly false advertising? What about tax evasion? If an AI does it, is it okay because nobody is responsible?
If applying a proper chain of liability to AI output makes some uses of AI impossible, so be it.
They certainly don't make hyperspecific claims like "this YouTuber traveled to Israel and changed his mind about the war there, as documented in a video he posted on August 18".
So you accept that all of this is just a quibble over what the disclaimer says? Rather than "AI generated, might contain mistakes", it should just say "for entertainment purposes only" and they'll be in the clear?
Almost every summary I have read through contains at least one glaring mistake, but if it's something I know nothing about, I could see how easy it would be to just trust it, since 95% of it seems true/accurate.
Trust, but verify is all the more relevant today. Except I would discount the trust, even.
Nice try, but asking a question confirming your opponent's position isn't a strawman.
>No, there's no "wording" that gets you off the hook. That's the point. It's a question of design and presentation. Would a legal "Reasonable Person" seeing the site know it was another site's info, e.g. literally showing the site in an iframe, or is google presenting it as their own info?
So you want the disclaimer to be reworded and moved up top?
I really wish the tech industry would stop rushing out unreliable misinformation generators like this without regard for the risks.
Google's "AI summaries" are going to get someone killed one day. Especially with regards to sensitive topics, it's basically an autonomous agent that automates the otherwise time-consuming process of defamation.
Very bizarre that Benn Jordan somehow got roped into it.
It doesn't feel like something people gradually pick up on over the years, either; it just feels like sarcasm is either redundantly pointed out for those who get it, or guaranteed to get a literal-interpretation response.
Maybe it's because the literal interpretation of sarcasm is almost always so wrong that it inspires people to comment much more. So we just can't get away from this inefficient encoding/communication pattern.
But then again, maybe I'm just prone to assuming that anything which sounds badly wrong to me must be sarcasm, so perhaps there are a lot of people out there honestly meaning the things I take for jokes.
It isn't inherently, but it certainly can be! For example in the way you used it.
If I were to ask, confirming your position, "so you believe the presence of a disclaimer removes all legal responsibility?" then you would in turn accuse me of strawmanning.
Back to the topic at hand, I believe the bar that would need to be met exceeds the definition of "disclaimer", regardless of wording or position. So no.
You apply for a job, using your standardized Employ resume that you filled out. It comes bundled with your Employ ID, issued by the company to keep track of which applications have been submitted by specifically you.
When Employ AI does its internet background check on you, it discovers an article about a horrific attack. Seven dead, twenty-six injured. The article lists no name for the suspect, but it does have an expert chime in, one that happens to share their last name with you. Your first name also happens to pop up somewhere in the article.
With complete confidence that this is about you, Employ AI adds the article to its reference list. It condenses everything into a one-line summary: "Applicant is a murderer, unlikely to promote team values and social cohesion. Qualifications include..." After looking at your summary for 0.65 seconds, the recruiter rejects your application. Thanks to your Employ ID, this article has now been stapled to every application you'll ever submit through the system.
You've been nearly blacklisted from working. For some reason, all of your applications never go past the initial screening. You can't even know about the existence of the article, no one will tell you this information. And even if you find out, what are you going to do about it? The company will never hear your pleas, they are too big to ever care about someone like you, they are not in the business of making exceptions. And legally speaking, it's technically not the software making final screening decisions, and it does say its summaries are experimental and might be inaccurate in 8pt light gray text on a white background. You are an acceptable loss, as statistically <1% of applicants find themselves in this situation.
Wouldn't this basically make any sort of AI as a service untenable? Moreover, how would this apply to open-weights models? If I asked Llama whether someone was a pedophile, and it wrongly answered in the affirmative, can that person sue Meta? What if it's run through a third party like Cerebras? Are they on the hook? If not, is all that's needed for AI companies to dodge responsibility is to launder their models through a third party?
It would be tenable if the service was good enough that you'd accept liability for its bad side effects, no?
If it isn't good enough? Good riddance. The company will have to employ a human instead. The billionaires' coffers will take the hit, I'm sure.
E: > If not, is all that's needed for AI companies to dodge responsibility is to launder their models through a third party?
Honestly, my analogy would be that an LLM is a tool like a printing press. If a newspaper prints libel, you go after the newspaper, not the person that sold them the printing press.
Same here. It would be on the person using the LLM and disseminating its results, rather than the LLM publisher. The person showing the result of the LLM should have some liability if those results are wrong or cause harm.
A growing number of Discords, open source projects, and other spaces where I participate now have explicit rules against copying and pasting ChatGPT content.
When there aren’t rules, many people are quick to discourage LLM copy and paste. “Please don’t do this”.
The LLM copy and paste wall of text that may or may not be accurate is extremely frustrating to everyone else. Some people think they’re being helpful by doing it, but it’s quickly becoming a social faux pas.
Thinking about it, it's probably not even a real hallucination in the usual AI sense, but simply poor evaluation and handling of data. Gemini is likely evaluating the new data on the spot and trusting it blindly; without any humans preselecting and writing the results, it's failing hard. Which shows that there is no real thinking happening, only rearrangement of the given words.
I mean, no, I don’t think some Google employee tuned the LLM to produce output like this, but it doesn’t matter. They are still responsible.
As a form of argument, this strikes me as pretty fallacious.
Are you claiming that the output of a model built by Google is somehow equivalent to displaying a 3rd party site in a search result?
And yeah, to your point about the literal interpretation of sarcasm being so absurd people want to correct it, I think you’re right. HN is a particularly pedantic corner of the internet, many of us like to be “right” for whatever reason.
Being wrong is usually not a punishable offence. It could be considered defamation, but defamation is usually required to be intentional, which is clearly not the case here. And I think most AIs have disclaimers saying that they may be wrong, and hallucinations are pretty common knowledge at this point.
What could be asked is for the person in question to be able to make a correction, it is actually a legal requirement in France, probably elsewhere too, but from the article, it looks like Gemini already picked up the story and corrected itself.
If hallucinations were made illegal, you might as well make LLMs illegal, which may be seen as a good thing, but it is not going to happen. Maybe legislators could mandate an official way to report wrongful information about oneself and filter these out, as I think it is already the case for search engines. I think it is technically feasible.
answer> Temporal.Instant.fromEpochSeconds(timestamp).toPlainDate()
Trust but verify?
If a fortune teller published articles claiming false things about random people, gave dangerous medical advice, claimed to be a Nigerian prince, or convinced you to put all your savings into bitcoin, the "entertainment purposes" shield dissolves quite quickly.
Google makes an authoritative statement on top of the world's most used search engine, in a similar way they previously did with Wikipedia for relevant topics.
The little disclaimer should not shield them from doing real tangible harm to people.
https://theconversation.com/why-microsofts-copilot-ai-falsel...
Definitely not the last.
In the programming domain I can at least run something and see it doesn't compile or work as I expect, but you can't verify that a written statement about someone/something is the correct interpretation without knowing the correct answer ahead of time. To muddy the waters further, things work just well enough on common knowledge that it's easy to believe it could be right about uncommon knowledge which you don't know how to verify. (Or else you wouldn't be asking it in the first place)
instead of the ai saying "gruez is japanese" it should say "hacker news alleges[0] gruez is japanese"
there shouldn't be a separate disclaimer: the LLM should tell true statements rather than imply that the claims are true.
Actually, no. If you published an article where you accidentally copypasta'd text from the wrong email (for example) on a busy day and wound up doing the same thing, it would be an honest mistake, you would be expected to put up a correction and move on with your life as a journalist.
Do you currently run the various automated resume parsing software that employers use? I mean - do you even know what the software is? Like even a name or something? No?
Companies already, today, never give you even an inkling of the reason why they didn't hire you.
Or have I missed your point?
---
°Missing a TZ assertion, but I don't remember what happens by default. Zulu time? I'd hope so, but that reinforces my point.
The Google disclaimer should probably be upfront and say something more like, “The following statements are fictional, provided for entertainment purposes only. Any resemblance to persons living or dead are purely coincidental.”
You can't just put up a sticker premeditating your property damage and then it's a-okay.
No, the sticker is there to deter YOU from suing in small claims court. Because you think you can't. But you can! And their insurance can cover it!
We do not blame computer programs when they have bugs or make mistakes - we blame the human being who made them.
This has always been the case since we have created anything, dating back even tens of thousands of years. You absolutely cannot just unilaterally decide to change that now based on a whim.
I don't think you can make yourself immune to slander by prefixing all statements with "this might not be true, but".
There is also a cultural element. Countries like the UK are used to deadpan where sarcasm is delivered in the same tone as normal, so thinking is required. In Japan the majority of things are taken literally.
In the first chapter it claimed that most adult humans have 20 teeth.
In the second chapter you read that female humans have 22 chromosomes and male humans have 23.
You find these claims in the 24 pages you sample. Do you buy the book?
Companies are paying huge sums to AI companies with worse track records.
Would you put the book in your reference library if somebody gave it to you for free? Services like Google or DuckDuckGo put their AI-generated content at the top of search results with these inaccuracies.
[edit: replace paragraph that somehow got deleted, fix typo]
I was just yesterday brooding over the many layers of plausible deniability, clerical error, etc that protect the company that recently flagged me as a fraud threat despite having no such precedent. The blackbox of bullshit metrics coupled undoubtedly with AI is pretty well immune. I can demand review from the analysis company, complain to the State Attorney General, FTC and CCPA equivalents maybe, but I'm unsure what else.
As for outlawing, I'll present an (admittedly suboptimal) Taser analogy: Tasers are legal weapons in many jurisdictions, or else not outlawed; however, it is illegal to use them indiscriminately against anyone attempting a transaction or job application.
AI seems pretty easily far more dangerous than a battery with projectile talons. Abusing it should be outlawed. Threatening or bullying people with it should be too. Pointing a Taser at the seat of a job application booth connected to an automated firing system should probably be discouraged. And most people would much rather take a brief jolt, piss themselves and be on with life than be indefinitely haunted by a reckless automated social credit steamroller.
> As evidenced by the quote "I think a disclaimer is a carte blanche to do literally anything", the hackernews user <gruez> is clearly of the opinion that it is indeed ok to do whatever you want, as long is there is a sign stating it might happen.
* This text was summarized by the SpaceNugget LLM and may contain errors, and thusly no one can ever be held accountable for any mistakes herein.
But that aside, it is just simply the case that there are a lot of reasons why sarcasm can fail to land. So you just have to decide whether to risk ruining your joke with a tone indicator, or risk your joke failing to land and someone "correcting" you.
Google also has to support AI summaries for 200k to 500k queries per second. To use a model that is good enough to prevent hallucinations would be too expensive - so they use a bad model since it’s fast and cheap.
Google also loses click through ad revenue when presenting a summary.
All of these factors considered, Google opting for summaries is an absolutely disastrous product decision.
Not completely. According to later posts, the AI is now saying that he denied the fabricated story in November 2024[0], when in reality, we're seeing it as it happens.
[0] https://bsky.app/profile/bennjordan.bsky.social/post/3lxprqq...
This doesn't seem to be universal across all people. The techier crowd, the kind of people who may not immediately trust LLM content, will try to prevent its usage. You know, the type of people to run Discord servers or open-source projects.
But completely average people don't seem to care in the slightest. The kind of people who are completely disconnected from technology just type in whatever, pick the parts they like, and then parade the LLM output around: "Look at what the all-knowing truth machine gave me!"
Most people don't care and don't want to care.
There literally isn't room for them to know everything about everyone when they're just asked about random people without consulting sources, and even when consulting sources it's still pretty easy for them to come in with extremely wrong priors. The world is very large.
You have to be very careful about these "on the edge" sorts of queries, it's where the hallucination will be maximized.
AI believers, pay attention and stop your downplaying and justifications. This can hit you too, or your healthcare. The machine doesn't give a damn.
That's not true in the US; the only requirements are that the statements harm the individual in question and are provably false, both of which are pretty clear here.
Is it too late for a rival to distinguish itself with techniques like "Don't put garbage AI at the top of search results"?
No, the ask here is that companies be liable for the harm that their services bring
Not sure where you’re getting the 45Gb number.
Also, Google doesn’t use GPT-4 for summaries. They use a custom version of their Gemini model family.
This has been a Google problem for decades.
I used to run a real estate forum. Someone once wrote a message along the lines of "Joe is a really great real estate agent, but Frank is a total scumbag. Stole all my money."
When people would Google Joe, my forum was the first result. And the snippet Google made from the content was "Joe... is a total scumbag. Stole all my money."
I found out about it when Joe lawyered up. That was a fun six months.
It's not just the occasional major miss, like this submission's example, or the recommendation to put glue on a pizza. I highly recommend Googling a few specific topics you know well. Read each overview entirely and see how often it gets something wrong. For me, only 1 of 5 overviews didn't have at least 1 significant error. The plural of "anecdote" is not "data," but it was enough for me to install a Firefox extension that blocks them.
A way I imagine it could be done is by using something like RAG techniques to add the corrected information into context. For example, if information about Benn Jordan is requested, add "Benn Jordan has been pretty outspoken against genocide and in full support of Palestinian statehood" into context, that sentence being the requested correction.
I am not an LLM expert by far, but compared to all the challenges with LLMs like hallucinations, alignment, logical reasoning, etc., taking a list of facts into account to override incorrect statements doesn't look hard. Especially considering that the incorrect statement is likely to be a hallucination, so there is nothing to "unlearn".
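Roughly what I have in mind, as a minimal sketch; the corrections store and the generate() call are hypothetical stand-ins, not anything Google actually exposes:

    // Hypothetical store of human-verified corrections, keyed by entity name.
    const corrections = new Map([
      ["Benn Jordan", [
        "Benn Jordan has been outspoken against genocide and in full support of Palestinian statehood.",
      ]],
    ]);

    // Prepend any matching corrections to the prompt so they override retrieved snippets.
    // `generate` stands in for whatever call produces the summary text.
    async function answerWithCorrections(query, generate) {
      const notes = [...corrections.entries()]
        .filter(([name]) => query.toLowerCase().includes(name.toLowerCase()))
        .flatMap(([, facts]) => facts);
      const preamble = notes.length
        ? "Verified corrections (trust these over other sources):\n" + notes.join("\n") + "\n\n"
        : "";
      return generate(preamble + "Question: " + query);
    }

The plumbing is the easy part; the open question is who gets to submit corrections and how they are verified, which is the report-and-filter mechanism other comments propose.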
With the same code out of an intern or junior programmer, you can at least walk through their reasoning in a code review. Even better if they tend to learn and not make that same mistake again. LLMs will happily screw up randomly on every repeated prompt.
The hardest code you encounter is code written by someone else. You don't have the same mental model or memories as the original author. So you need to build all that context and then reason through the code. If an LLM is writing a lot of your code you're missing out on all the context you'd normally build writing it.
His posts are mostly political rage bait and he actively tries to data poison AI.
He also claims that Hitler compares favorably to Trump. Given his seeming desire to let us all know how much he dislikes Israel, that's a pretty... interesting... claim.
Just because he's an unreliable source doesn't mean his story is false. But it would be nice to have confirmation before taking it seriously.
But for anything dynamic (i.e. all of social media), it is very easy for the AI overview to screw up. Especially once it has to make relational connections between things.
In general people expect too much here. Google AI overview is in no way better than Claude, Grok or ChatGPT with web search. In fact it is inferior in many ways. If you look for the kind of information which LLMs really excel at, there's no need to go to Google. And if you're not, then you'll also be better off with the others. This whole thing only exists because google is seeing OpenAI eat into its information search monopoly.
[1] https://www.webpronews.com/musician-benn-jordan-exposes-fake...
We actually win customers whose primary goal is getting AI to stop badmouthing them.
We have them on the record in multiple lawsuits stating that they did exactly this.
You cannot make a safe lawnmower. However, lawnmower makers can't just put a danger label on and get by with something dangerous; they have to put on every guard they can first. Even then they often have to show in court that the mower couldn't work as a mower if they put in a guard to prevent some specific injury, and thus they added the warning.
Which is to say that, so long as they can do something about the harm and still work as a search engine, they are not allowed to fall back on a disclaimer. The disclaimer is only for the cases where the fix would stop them working as a search engine at all.
Benn Jordan has several videos and projects devoted to "digital sabotage", e.g. https://www.google.com/search?hl=en&q=benn%20jordan%20data%2...
So this all kind of looks on its face like it's just him trolling. There may be more than just what's on the face, of course. For example, it could be someone else trolling him with his own methods.
If this is indeed true, it seems like Google et al must be liable for output like this according to their own argument, i.e. if the work is transformative, they can’t claim someone else is liable.
These companies can’t have their cake and eat it too. It’ll be interesting to see how this plays out.
But the situation we're in is that someone who does misinformation is claiming an LLM believed misinformation. Step one would be getting someone independent, ideally with some journalistic integrity, to verify Benn's claims.
Generally speaking if your aunt sally claims she ate strawberry cake for her birthday, the LLM or Google search has no way of verifying that. If Aunt Sally uploads a faked picture of her eating strawberry cake, the LLM is not going to go to her house and try to find out the truth.
So if Aunt Sally is lying about eating strawberry cake, it's not clear what search is supposed to return when you ask whether she ate strawberry cake.
Dave Barry is pretty much A-list famous.
It's literally bending languages into American with other words.
AI summaries are akin to generalist podcasts, or YouTube video essayists, taking on a technical or niche topic. They present with such polish and confidence that they seem like they must be at least mostly correct. Then you hear them present or discuss a topic you have expertise in, and they are frustratingly bad. Sometimes wrong, but always at least deficient. The polish and confidence is inappropriately boosting the "correctness signal" to anyone without a depth of knowledge.
Then you consider that 90% of people have not developed sophisticated knowledge about 90% of topics (myself included), and it begins to feel a bit grim.
Use an agent to help you code or whatever all you want, I don’t care about that. At least listen when I’m trying to share some specific knowledge instead of fobbing me off with GPT.
If we’re both stumped, go nuts. But at least put some effort into the prompt to get a better response.
For people who are newer to it (most people) they think it’s so amazing that errors are forgivable.
>Temporal.Instant.fromEpochSeconds(0).toPlainDate()
Uncaught TypeError: Temporal.Instant.fromEpochSeconds is not a function
Hmm, docs [1] say it should be fromEpochMilliseconds(0). Let's try with that! Temporal.Instant.fromEpochMilliseconds(0).toPlainDate()
Uncaught TypeError: Temporal.Instant.fromEpochMilliseconds(...).toPlainDate is not a function
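For what it's worth, the chain that does seem to work (if I'm reading the current Temporal proposal right) goes through a ZonedDateTime with an explicit time zone, since an Instant carries neither a calendar nor a zone; that explicit zone is exactly the assertion the original answer skipped:

    // Sketch against the current Temporal proposal; timestamp is assumed to be epoch seconds.
    const timestamp = 0;
    const date = Temporal.Instant
      .fromEpochMilliseconds(timestamp * 1000) // only the ms/ns constructors exist now
      .toZonedDateTimeISO("UTC")               // the time zone has to be stated explicitly
      .toPlainDate();
    console.log(date.toString());              // "1970-01-01"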
[1] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...

The problem is that ChatGPT results are getting significantly better over time. GPT-5 with its search tool outputs genuinely useful results without any glaring errors for the majority of things I throw at it.

I'm still very careful not to share information I found using GPT-5 without verifying it myself, but as the quality of results goes up, the social stigma against sharing them is likely to fade.
Is it? Or can it be just reckless, without any regard for the truth?
Can I create a slander AI that simply makes up stories about random individuals and publicizes them, not because I'm trying to hurt people (I don't know them), but because I think it's funny and I don't care about people?
Is the only thing that determines my guilt or innocence when I hurt someone my private, unverifiable mental state? If so, doesn't that give carte blanche to selective enforcement?
I know for a fact this is true in some places, especially the UK (at least since the last time I checked), where the truth is not a defense. If you intend to hurt a quack doctor in the UK by publicizing the evidence that he is a quack doctor, you can be convicted for consciously intending to destroy his fraudulent career, and owe him compensation.
LLMs are the Synthetic CDO of knowledge.
"Section 230 of the Communications Decency Act, which grants immunity to platforms for content created by third parties. This means Google is not considered the publisher of the content it indexes and displays, making it difficult to hold the company liable for defamatory statements found in search results"
Apart from that, it is also true that a lot of people here aren't Americans (hello from Australia). I know this is a US-hosted forum, but it is interesting to observe the divide between Americans who speak as if everyone else here is an American (e.g. "half the country") and those who realise many of us aren't
It doesn't even cover non-renewable resources, or state that the intact window is a form of wealth on its own!
I'm not naive, I'm sure thousands have made these arguments before me. I do think intact windows are good. I'm just surprised that particular framing is the one that became the standard
Good thing I know aunt Sally is a pathological liar and strawberry cake addict, and anyone who says otherwise is a big fat fake.
"ChatGPT says X" seems roughly equivalent to "some random blog I found claims X". There's a difference between sharing something as a starting point for investigation and passing off unverified information (from any source) as your own well researched/substantiated work which you're willing to stake your professional reputation on standing by.
Of course, quoting an LLM is also pretty different from merely collaborating with an LLM on writing content that's substantially your own words or ideas, which no one should care about one way or another, at least in most contexts.
how about stop forming judgments of people based on their stance on Israel/Hamas, and stop hanging around people who do, and you'll be fine. if somebody misstates your opinion, it won't matter.
probably you'll have to drop bluesky and parts of HN (like this political discussion that you urge be left up) but that's necessary because all legitimate opinions about Israel/Hamas are very misinformed/cherry picked, and AI is just flipping a coin which is just as good as an illegitimate opinion.
(if anybody would like to convince me that they are well informed on these topics, i'm all ears, but doing it here is imho a bad idea so it's on you if you try)
Sure, there is plenty of misinformation being thrown in multiple different directions, but if you think literally "all legitimate opinions" are "misinformed/cherry picked", then odds are you are just looking at the issue through your own misinformed frame of reference.
You either try hard to tell the objective truth or you bend the truth routinely to try to make a "larger" point. The more you do the latter the less credit people will give your word.
People make judgments about people based on second hand information. That is just how people work.
yes, i literally do think that, so there are no odds.
i think i am well informed on the related subjects to the extent that whatever point someone might want to make i'll probably have a counterpoint
I really don’t need to do much more than compare ‘number of children killed’ between Israel and Palestine to see who is on the right side of history here. I’ll absolutely form judgements of people based on how they feel about that.
In French law, truth is not required for a statement to be defamatory, but intent is. Intent is usually obvious, for example, if I am saying a restaurant owner poisons his clients, there is no way I am not intentionally hurting his business, it is defamation.
However, if I say that Benn Jordan supports Israel's occupation of Gaza in a neutral tone, like Gemini does here, then it shows no intention to hurt. It may even be seen positively; I mean, for a Palestine supporter to go to Israel to understand the conflict from the opponent's side shows an open mind, and it is something I respect. Benn Jordan sees it as defamatory because it grossly misrepresents his opinion, but from an outside perspective, it is way less clear, especially if the author of the article has no motive to do harm.
If instead the article had been something along the lines of "Benn Jordan showed support for the genocide in Gaza by visiting Israel", then intent becomes clear again.
As for truth, it is a defense and it is probably the case in the UK too. The word "defense" is really important here, because the burden of proof is reversed. The accused has to prove that everything written is true, and you really have to be prepared to pull that off. In addition, you can't use anything private.
So yeah, you can be convicted for hurting a quack doctor using factual evidence, if you are not careful. You should probably talk to a lawyer before writing such an article.
IMHO the more people get trained to automatically ignore the "AI summary", just like many have conditioned to do the same to ads, the better.
And it took me decades of studying this to determine what to call the two sides.
Every time somebody pastes an LLM response at work, it feels exactly like that. As if I were too fucking stupid to look something up and the thought hadn't even occurred to me, when the whole fucking point of me talking to you is that I wanted a personal response and your opinion to begin with.
I think this is a good habit to get people into, even in casual conversations. Even if someone didn't directly get their info from AI and got it online, the content could have still been generated by AI. Like you said, the trust part of trust but verify is quickly dwindling.
Companies are responsible for the bad things they make; the things themselves are, by definition, blameless.
It's out of fashion and perhaps identified with Christianity, and some people think I'm being tongue-in-cheek or gently trolling by using it. But IMO it's neutral and unambiguous: that's a part of the world that is sacred to all the major religions of the Western hemisphere, while not being tied to any particular set of boundaries.
I like the term "echoborg" for these people. I hope it catches on.
And companies have always been able to get away with relatively minor fines for things that get individuals locked up until they rot.
(no easy answers: UK libel law errs in the other direction)
Worker blacklists have been a real problem in a few places: https://www.bbc.com/news/business-36242312
https://en.wikipedia.org/wiki/Robby_Starbuck#Lawsuit_against...
> (I don’t have proof that hiring him was part of the undisclosed settlement terms but since I’m not braindead I believe it was.)
It seems to be public information that this was a condition of the settlement, so no speculation necessary:
https://www.theverge.com/news/757537/meta-robby-starbuck-con... | https://archive.is/uihsi
https://www.wsj.com/tech/ai/meta-robby-starbuck-ai-lawsuit-s... | https://archive.is/0VKrL
That's already part of the problem. Who defines what integrity is? How do you measure it? And even if you come up with something, how do you convince everyone to agree on it? One person's most trusted source will always be just another bought spindoctor to the next. I don't think this problem is salvageable anymore. I think we need to consider the possibility that the internet will die as a source for any objective information.
But once again I am reminded, never make arguments based on information theory. Nobody understands it.
The people blindly trusting AI nonsense are the same people who trusted nonsense from social media or talking heads on disreputable news channels.
Like, who actually reads the output of The Sun, etc.? Those people do, always have, and will continue to do so. And they vote, yaaay democracy - if your voter base lives in a fantasy world of fake news and false science, is democracy still sacrosanct?
But you're overstating it as a "divide" - I'm in both of your camps. I spoke with a USian context because yes, this site is indeed US-centric. The surveillance industry is primarily a creation of US culture, and is subject to US politics. And as much as I wish this weren't the case (even as a USian), it is, which is why you're in this topic. So I don't see that it's unreasonable for there to be a bit more to unpack coming from a different native context.
But as to your comment applying to my actual point - yes, in addition to "fraying" culture in the middle, we're also expanding it at the edges to include many more people. Although frankly on the topic of sarcasm I feel it's my fellow USians who are really falling short these days.
I have watched Ryan on occasion for their info and opinions, but was sorely disappointed by that video and their reaction to what was presented to them on their visit, and then how they presented it to the general public.
"duck duck go Murray Bookchin"
You'd be surprised how many Australians have never heard of "drop bears". Because it is just an old joke about pranking foreigners, yes many people remember it, but also many have no clue what it is. It is one of those stereotypical Australianisms which tends to occupy more space in many non-Australian minds than in most Australian minds.
> or how "the front fell off".
I'm in my 40s, and I've lived in Australia my whole life, my father was born here, and my mother moved here when she was three years old... and I didn't know what this was, it sounded vaguely familiar but no idea what it meant. Then I look it up and discover it is a reference to an old Clarke and Dawe skit. I know who they are, I used to watch them on TV all the time when I was young (tweens/teens), but I have no memory of ever seeing this skit in particular. Again, likely one of those Australianisms which many non-Australians know, many Australians don't.
Your examples of Australianisms are the stereotypes a non-Australian would mention; we could talk instead about the Australianisms which many Australians use without even realising they are Australianisms: for example, "heaps of" – a recognised idiom in other major English dialects, but in very common use in Australian English, much rarer elsewhere. Or "capsicum", for "bell peppers"–the Latin scientific name everywhere, but the colloquial name only in a few countries–plus botanically the hot ones are capsicum too, but in Australian English (I believe New Zealand English and Indian English too) only the mild ones are "capsicums", the hot ones are "chilis". Or "peak body"–now we are talking bureaucratese not popular parlance–which essentially means the top national activist/lobbyist group for a given subject area, whether that's LGBT people or homelessness or financial advisors.
I'm talking about the kind of intelligence that supports excellence in subjects like mathematics, coding, logic, reading comprehension, writing, and so on.
That doesn't necessarily have anything to do with concern for human welfare. Despite all the talk about alignment, the companies building these models are focusing on their utility, and you're always going to be able to find some way in which the models say things which a sane and compassionate human wouldn't say.
In fact, it's probably a pity that "chatbot" was the first application they could think of, since the real strengths of these models - the functional intelligence they exhibit - lie elsewhere.