The only issue is that Musk vastly overpaid for Twitter, but if he plans to keep it and use it for his political ambitions, that might not matter. Also remember that while many agree that $44B was a bit much, most did still put Twitter at 10s of billions, not the $500M I think you could justify.
The firings, which was going to tank Twitter also turned out reasonably well. Turns out they didn't need all those people.
And I guess if you consider "the place with the MechaHitler AI" as good branding there's no arguing with you that it's doing just as well as Twitter.
Remember Tay Tweets?
https://en.m.wikipedia.org/wiki/Tay_(chatbot)
Honestly I really don't think a bad release of an LLM that was rolled back is really the condemnation you think it is.
Funny how ChatGPT is vanilla and grok somehow has a new racist thing to say every other week.
And Tay was a non-LLM user account released a full 6 years before ChatGPT; you might as well bring up random users’ markov chains.
To be fair, 'exposing' ChatGPT, Claude, and Gemini as racist will get you a lot fewer clicks.
Musk claims Grok to be less filtered in general than other LLMs. This is what less filtered looks like. LLMs are not human; if you get one to say racist things it's probably because you were trying to make it say racist things. If you want this so-called problem solved by putting bowling bumpers on the bot, by all means go use ChatGPT.
Try.
Also IDK what you mean by third+ flavor? I'm not familiar with other bad Grok releases, but I don't really use it, I just see it's responses on Twitter. Also do you not remember the Google image model that made the founding fathers different races by default?
It's so "less filtered" that they had to add a requirement in the system prompt to talk about white genocide
This idea that "less filtered" LLMs will be "naturally" very racist is something that a lot of racists really really want to be true because they want to believe their racist views are backed by data.
They are not.
Answer: "I can't help with that."
This is not helping your case.
Gemini had a better response: "xAI later stated that this behavior was due to an 'unauthorized modification' by a 'rogue employee'."
The white genocide thing I remember hearing about and looked really forced
Any LLM can be convinced to say just about anything. Pliny has shown that time and time again.
https://www.theguardian.com/technology/2025/may/14/elon-musk...
When it started ranting about the Jews and "Mecha Hitler" it was unprompted on unrelated matters. When it started ranting about "white genocide" in SA a while ago it was also unprompted on unrelated matters.
So no.
This is a classic "anything that can't be empirically measured is invalid and can be dismissed" mistake. It would be nice if we could easily empirically measure everything, but that's not how the world works.
The ChatGPT article is of a rather different nature where ChatGPT went off the rails after a long conversation with a troubled person. That's not good, but just no the same as "start spewing racism on unrelated questions".
20 lines of code and some data would really bolster your case, but I don't see them.
And I'm also saying Grok was reportedly sabotaged into saying something racist (which is a blatantly obvious conclusion even without looking it up), and that seeing this as some sort of indictment against it is baseless.
And since I find myself in the position of explaining common sense conclusions here's one more: you don't succeed in making a racist bot by asking it to call itself Mecha Hitler. That is a fast way to fail in your goal of being subversive.
Definitely a bit of a trend now with mecha hitler...
It’s pretty evident that the people building grok are injecting their ideology into it.
I don’t need more evidence, and I don’t need you to agree with me. Go ahead and write those 20 lines if you so desire. I’m happy to be proven wrong.
Demanding empirical data and then coming up with shoddy half-arsed methodology is unserious.
Grok doesnt do that.