693 points jsheard | 23 comments
1. slightwinder ◴[] No.45093284[source]
Searching for "benn jordan isreal", the first result for me is a video[0] from a different creator, with the exact same title and date. There is no mentioning of "benn" in the video, but some mentioning of jordan (the country). So maybe, this was enough for Google to hallucinate some connection. Highly concerning!

[0] https://www.youtube.com/watch?v=qgUzVZiint0

replies(3): >>45093342 #>>45093749 #>>45095962 #
2. glenstein ◴[] No.45093342[source]
That raises a fascinating point: whether search results that default to general topics are ever the basis for LLM training or information retrieval as a general phenomenon.
replies(2): >>45093653 #>>45094041 #
3. reactordev ◴[] No.45093653[source]
I think the answer is clear
4. trjordan ◴[] No.45093749[source]
This is almost certainly what happened. Google's AI answers aren't magic -- they're just summarizing across searches. In this case, "Israel" + "Jordan" pulled back a video with views opposite to the author's.

It's somewhat less obvious to debug, because it'll pull more context than Google wants to show in the UI. You can see this happening in AI mode, where it'll fire half a dozen searches and aggregate snippets of 100+ sites before writing its summary.
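
Roughly, the loop looks like this (a sketch, not Google's actual code; run_search and llm_summarize are hypothetical stand-ins for the real retrieval and model calls):

    # Sketch of a search-grounded summarizer of the kind described above.
    # run_search() and llm_summarize() are hypothetical stand-ins, not real APIs.

    from dataclasses import dataclass

    @dataclass
    class Snippet:
        url: str
        title: str
        text: str

    def run_search(query: str) -> list[Snippet]:
        """Hypothetical: return top-ranked snippets for one query."""
        raise NotImplementedError

    def llm_summarize(question: str, snippets: list[Snippet]) -> str:
        """Hypothetical: ask the model to answer using only the snippets."""
        raise NotImplementedError

    def ai_answer(question: str) -> str:
        # Fan out into a few related query variants and collect their snippets.
        queries = [question, f"{question} video", f"{question} statement"]
        snippets: list[Snippet] = []
        for q in queries:
            snippets.extend(run_search(q))

        # The failure mode from the thread lives here: nothing checks that a
        # snippet is actually about the person being asked about, only that it
        # matched the query terms. A same-titled video by someone else gets
        # aggregated into the summary anyway.
        return llm_summarize(question, snippets)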

replies(3): >>45094262 #>>45094296 #>>45095306 #
5. slightwinder ◴[] No.45094041[source]
Yes, any human will most likely recognize the result as random noise, since they know whom they are searching for and can see that this is not a video from or about Benn. But an AI, taking all results as valid, will obviously struggle with this, condensing it into bullshit.

Thinking about it, it's probably not even a real hallucination in the usual AI sense, but simply poor evaluation and handling of data. Gemini is likely evaluating the new data on the spot and trusting it blindly; without any humans preselecting and writing the results, it fails hard. Which shows that there is no real thinking happening, only rearrangement of the given words.

replies(1): >>45094295 #
6. underdeserver ◴[] No.45094262[source]
Ironic that Google enshittifying its search results is hurting what it hopes is its next cash cow, AI.
replies(1): >>45094767 #
7. LorenPechtel ◴[] No.45094295{3}[source]
The fundamental problem is that AI has no ability to recognize data quality. You'll get something like the best answer to the question, but with no regard for the quality of that answer. Humans generally recognize when they're looking at red herrings; AIs don't.
8. ludicrousdispla ◴[] No.45094296[source]
Interesting, I wonder what Google AI has to say about Stove Top Stuffing, given its association with Turkey.
9. gumby271 ◴[] No.45094767{3}[source]
I honestly don't know if people even care that the search result summaries are completely wrong the majority of the time. Most people I know see an answer given by Google and just believe it. To them that's the value, the accuracy doesn't really matter. I hope it ends up killing Google, but for the majority the shitty summary has replaced even shittier search results. On the surface it's a huge improvement, even if it's just distilled garbage.
replies(1): >>45095830 #
10. sigmoid10 ◴[] No.45095306[source]
There is actually a musician called Benn Jordan who was impersonated by someone on twitter who posted pro-Israel content [1]. That content is no longer available, but it might have snuck into the training data, i.e. Benn Jordan = pro Israel. This might also have been set in relation to the other Jordan's previous pro-Palestine comments, eventually misattributing the "I was wrong about Israel" video. It's still a clear fuckup - but I could see humans doing something similar when sloppily accruing information.

[1] https://www.webpronews.com/musician-benn-jordan-exposes-fake...

replies(1): >>45095621 #
11. ants_everywhere ◴[] No.45095621{3}[source]
That article is about the same Benn Jordan as in the Bluesky post. The photo in the article is not of Benn Jordan.

Benn Jordan has several videos and projects devoted to "digital sabotage", e.g. https://www.google.com/search?hl=en&q=benn%20jordan%20data%2...

So on its face this all kind of looks like it's just him trolling. There may be more to it than what's on the surface, of course. For example, it could be someone else trolling him with his own methods.

replies(1): >>45095667 #
12. sigmoid10 ◴[] No.45095667{4}[source]
That makes it even more believable that an LLM screwed up. I mean what are you supposed to believe at this point?
replies(1): >>45095711 #
13. ants_everywhere ◴[] No.45095711{5}[source]
I guess so.

But the situation we're in is that someone who does misinformation is claiming an LLM believed misinformation. Step one would be getting someone independent, ideally with some journalistic integrity, to verify Benn's claims.

Generally speaking, if your Aunt Sally claims she ate strawberry cake for her birthday, the LLM or Google search has no way of verifying that. If Aunt Sally uploads a faked picture of herself eating strawberry cake, the LLM is not going to go to her house and try to find out the truth.

So if Aunt Sally is lying about eating strawberry cake, it's not clear what search is supposed to return when you ask whether she ate strawberry cake.

replies(2): >>45096952 #>>45101414 #
14. larodi ◴[] No.45095830{4}[source]
There was a joke like 15 years ago:

in googlis non est, ergo non est ("it is not in Google, therefore it does not exist")

which sums up very well how biased people are toward believing the search results.

15. bdhcuidbebe ◴[] No.45095962[source]
Just wait until you realize how AI translation "works".

It's literally bending languages into American with other words.

16. lazide ◴[] No.45096952{6}[source]
Eventually people are just going to think everything except what they want to believe is a lie. Oh wait, that’s where we are right now.

Good thing I know Aunt Sally is a pathological liar and strawberry cake addict, and anyone who says otherwise is a big fat fake.

replies(1): >>45097354 #
17. ants_everywhere ◴[] No.45097354{7}[source]
I doubt it. But you can't simultaneously cultivate an image as a propagandist who lies to and about AI and as a truth teller who tells the truth about AI.

You either try hard to tell the objective truth, or you bend the truth routinely to try to make a "larger" point. The more you do the latter, the less credit people will give your word.

replies(1): >>45097376 #
18. lazide ◴[] No.45097376{8}[source]
Who is ‘you’ kemosabe?
replies(2): >>45097658 #>>45097668 #
19. ◴[] No.45097658{9}[source]
20. ants_everywhere ◴[] No.45097668{9}[source]
One, as in: one can be sarcastic and dismissive, or one can contribute to the discussion, but not both.
replies(1): >>45097902 #
21. lazide ◴[] No.45097902{10}[source]
And which do you think you’re being?
replies(1): >>45098220 #
22. ◴[] No.45098220{11}[source]
23. sigmoid10 ◴[] No.45101414{6}[source]
>Step one would be getting someone independent, ideally with some journalistic integrity

That's already part of the problem. Who defines what integrity is? How do you measure it? And even if you come up with something, how do you convince everyone to agree on it? One person's most trusted source will always be just another bought spin doctor to the next. I don't think this problem is salvageable anymore. I think we need to consider the possibility that the internet will die as a source for any objective information.