
693 points jsheard | 1 comment
nerevarthelame No.45094942
Most people Google things they're unfamiliar with, and whatever the AI Overview generates will seem reasonable to someone who doesn't know better. But they are wrong a lot.

It's not just the occasional major miss, like this submission's example, or the recommendation to put glue on a pizza. I highly recommend Googling a few specific topics you know well. Read each overview entirely and see how often it gets something wrong. For me, only 1 of 5 overviews didn't have at least 1 significant error. The plural of "anecdote" is not "data," but it was enough for me to install a Firefox extension that blocks them.

replies(3): >>45094978 #>>45095230 #>>45096065 #
chao- No.45096065
Here's my paraphrase of the best description for "seems reasonable" AI misinformation that I've yet seen. I wish I could credit where I first heard it:

AI summaries are akin to generalist podcasts, or YouTube video essayists, taking on a technical or niche topic. They present with such polish and confidence that they seem like they must be at least mostly correct. Then you hear them present or discuss a topic you have expertise in, and they are frustratingly bad. Sometimes wrong, but always at least deficient. The polish and confidence inappropriately boost the "correctness signal" for anyone without a depth of knowledge.

Then you consider that 90% of people have not developed sophisticated knowledge of 90% of topics (myself included), and it begins to feel a bit grim.

replies(1): >>45096889 #
rkagerer No.45096889
In short, they are confident morons.