With so many reports like this, it's not just a question of working out the kinks. Are we getting close to our very own Stop the Slop campaign?
replies(3):
I don't think so. We read about the handful of failures while there are billions of successful queries every day; in fact, I think AI Overviews is sticky and here to stay.
"...Semantic Errors / Hallucinations On factual queries—especially legal ones—models hallucininate roughly 58–88% of the time
A journalism‑focused study found LLM-based search tools (e.g., ChatGPT Search, Perplexity, Grok) were incorrect in 60%+ of news‑related queries.
Specialized legal AI tools (e.g., Lexis+, Westlaw) still showed error rates between 17% and 34%, despite being domain‑tuned."