European here, so perhaps not my place to have an opinion on domestic U.S. legal policies, and I don't want to make this political (although I guess it kind of is…) BUT:
Why are no media outlets on the offensive when companies make these kinds of statements? Shouldn't Gizmodo, or its owner Keleops Media, treat this as defamation and take it to court? If Grok's behavior can be objectively verified, why is it so easy for a company to get off the hook just by saying "lies" and moving on?
Also not defamation when it's the pure truth, verifiable with daily evidence
But now I'm wondering: what if these stunts aren't even jokes, but earnest proof of deep, deep insecurity?
Not that the Bondi misinformation can be considered acceptable humor. "Haha I trolled you about mass murder"?
Because right-wingers can't handle criticism. They don't want to correct; they want to silence their outgroups. Professionals would at least have replied with some meaningless wall of PR text.
>Based on the visual evidence and public reports, the man in the image is Yarden Bibas.
>His identity and background have been widely confirmed by news organizations and his family following the events of October 7, 2023.
On Google AI Studio (Gemini 3 Pro with Grounding enabled), it correctly recognizes him as Ahmed Al-Ahmed. In the chain-of-thought, it first thinks the photo is from the Oct 7th attacks:
>I'm now investigating potential misidentification. The name "Ahmed Al-Ahmed" is a lead, but the visual suggests recent events. [...] However, the visual clues still point towards a recent conflict. The "thank you" narrative implies gratitude, sparking the idea of a Bedouin man rescuing Israelis during the Hamas attack. I'm actively using Google to explore these new avenues.
>[...]
>I've got a match! Ahmed al-Ahmed, the Bondi Beach hero, is confirmed. The snippets consistently mention December 14-15, 2025, and today's the 15th! He fits the description: Syrian refugee shop owner, tackled a gunman at a Hanukkah event, and was shot. The visual confirms a man in a white shirt being helped, with a message of thanks. This is definitely the right event!
So no "misinformation" or "glitching", just LLMs being LLMs.
For example, "Mom, there's a candy wrapper under (my brother)'s bed!" is a true statement, but the pure truth is "Mom, I ate a candy without permission and put the wrapper under (my brother)'s bed so he would be blamed for the missing candy!"
I am attempting to convey a lie by telling a truth and omitting details that would give context to that truth.
- The "Legacy Media Lies" was targeted at Gizmodo
- It was a false allegation (i.e. they might have to go through huge amounts of discovery as the defense tried to establish a single instance of dishonesty in past reporting)
- Grok/xAI knew the allegation was false
- The allegation caused such-and-such amount in damages
I see this sooo soooo much: folks will just straight-up ask "@grok is this true?" and its response is taken as gospel.
Though I have to say, grok-code-fast-1 is one of the best coding models I've ever used.
Calling it a glitch gives AI companies the opportunity to hide behind the excuse of "mistakes in the code", instead of recognizing the fundamental flaw of the technology in question.
At the same time, this article tries to politicize a wider issue by tying the failings of AI to current events. In fact, hallucination and failure like this are near constant, and that is no coincidence. It is the product of both a technology being deployed before it is ready and the (hilarious) attempt by Elon to use AI as a propaganda machine to spread and legitimize his beliefs.
"What terrifies me is if terrorists were to shoot and kill dozens of Australians. Imagine Grok glitching and spewing misinformation?"But I disagree that they’re not “ready” for use. I’ve never once thought to upload a photo from a CURRENT event and see what it found. That’s just silly.
This is just plain user error.
I admit I'm definitely biased here: even if information presented by one of these "AIs" were factual, I would still take it upon myself to check. I don't trust their output at all.
Not to defend Grok, and I agree with your point about checking, but you can also say this about a hammer.
How are users supposed to know that that's an incorrect use of Grok?
"AI" only does things we can do, because to do otherwise would be evidence against the general, human level intelligence that the marketing behind these abominations are so desperate for. The catch is they do it quicker, sometimes much quicker, but always much worse.