It's somewhat less obvious to debug, because it pulls more context than Google wants to show in the UI. You can see this happening in AI mode, where it'll fire off half a dozen searches and aggregate snippets from 100+ sites before writing its summary.
Thinking about it, it's probably not even a real hallucination in the usual AI sense, but simply poor evaluation and handling of data. Gemini is likely evaluating the new data on the spot and trusting it blindly; without any humans preselecting and vetting the results, it fails hard. Which shows that there is no real thinking happening, only rearrangement of the given words.
[1] https://www.webpronews.com/musician-benn-jordan-exposes-fake...
Benn Jordan has several videos and projects devoted to "digital sabotage", e.g. https://www.google.com/search?hl=en&q=benn%20jordan%20data%2...
So on its face this all kind of looks like it's just him trolling. There may be more to it than what's on the face, of course. For example, it could be someone else trolling him with his own methods.
But the situation we're in is that someone who does misinformation is claiming an LLM believed misinformation. Step one would be getting someone independent, ideally with some journalistic integrity, to verify Benn's claims.
Generally speaking, if your Aunt Sally claims she ate strawberry cake for her birthday, an LLM or Google search has no way of verifying that. If Aunt Sally uploads a faked picture of herself eating strawberry cake, the LLM is not going to go to her house and try to find out the truth.
So if Aunt Sally is lying about eating strawberry cake, it's not clear what search is supposed to return when you ask whether she ate strawberry cake.
In other words, it's literally bending language into American.
Good thing I know Aunt Sally is a pathological liar and strawberry cake addict, and anyone who says otherwise is a big fat fake.
You either try hard to tell the objective truth, or you bend the truth routinely to try to make a "larger" point. The more you do the latter, the less credit people will give your word.
That's already part of the problem. Who defines what integrity is? How do you measure it? And even if you come up with something, how do you convince everyone to agree on it? One person's most trusted source will always be just another bought spin doctor to the next. I don't think this problem is salvageable anymore. I think we need to consider the possibility that the internet will die as a source of any objective information.