693 points by jsheard | 1 comment

nerevarthelame [No.45094942]
Most people Google things they're unfamiliar with, and whatever the AI Overview generates will seem reasonable to someone who doesn't know better. But those overviews are wrong a lot.

It's not just the occasional major miss, like this submission's example, or the recommendation to put glue on a pizza. I highly recommend Googling a few specific topics you know well. Read each overview in full and see how often it gets something wrong. For me, only 1 of 5 overviews didn't have at least 1 significant error. The plural of "anecdote" is not "data," but it was enough for me to install a Firefox extension that blocks them.
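
For anyone who wants the same behavior without hunting for an extension, here is a minimal TypeScript-flavored userscript sketch of the idea. It assumes the overview block carries a visible "AI Overview" heading and hides the nearest enclosing div; both the label text and the container lookup are guesses about Google's frequently changing markup, not a documented API, so a maintained extension or uBlock Origin filter list will be more robust:

    // ==UserScript==
    // @name   Hide AI Overview (sketch)
    // @match  https://www.google.com/search*
    // ==/UserScript==

    function hideAiOverview(): void {
      // Find any heading that reads "AI Overview" and hide its closest
      // enclosing div. Both the label text and the container heuristic
      // are assumptions about Google's markup, not a stable selector.
      for (const el of Array.from(document.querySelectorAll("h1, h2, div[role='heading']"))) {
        if (el.textContent?.trim() === "AI Overview") {
          const block = el.closest("div");
          if (block instanceof HTMLElement) block.style.display = "none";
        }
      }
    }

    // Search results render progressively, so re-check as the page mutates.
    new MutationObserver(hideAiOverview).observe(document.documentElement, {
      childList: true,
      subtree: true,
    });
    hideAiOverview();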

1. sigmoid10 [No.45095230]
I've found it to be very accurate for legacy static-web content. E.g. if you ask something that could easily be answered by looking at Wikipedia, or that has already been answered on blogs, it will usually be right.

But for anything dynamic (i.e. all of social media), it is very easy for the AI overview to screw up, especially once it has to make relational connections between things.

In general, people expect too much here. Google's AI Overview is in no way better than Claude, Grok, or ChatGPT with web search; in fact it is inferior in many ways. If you're looking for the kind of information LLMs really excel at, there's no need to go to Google. And if you're not, you'll also be better off with the others. This whole thing only exists because Google is seeing OpenAI eat into its information-search monopoly.