The only issue is that Musk vastly overpaid for Twitter, but if he plans to keep it and use it for his political ambitions, that might not matter. Also remember that while many agree that $44B was a bit much, most did still put Twitter at 10s of billions, not the $500M I think you could justify.
The firings, which were supposed to tank Twitter, also turned out reasonably well. Turns out they didn't need all those people.
And I guess if you consider "the place with the MechaHitler AI" good branding, there's no arguing with you that it's doing just as well as Twitter.
Remember Tay Tweets?
https://en.m.wikipedia.org/wiki/Tay_(chatbot)
Honestly, I don't think a bad LLM release that was rolled back is the condemnation you think it is.
Funny how ChatGPT is vanilla and Grok somehow has a new racist thing to say every other week.
To be fair, 'exposing' ChatGPT, Claude, and Gemini as racist will get you a lot fewer clicks.
Musk claims Grok is less filtered in general than other LLMs. This is what less filtered looks like. LLMs are not human; if you get one to say racist things, it's probably because you were trying to make it say racist things. If you want this so-called problem solved by putting bowling bumpers on the bot, by all means go use ChatGPT.
It's so "less filtered" that they had to add a requirement in the system prompt to talk about white genocide
This idea that "less filtered" LLMs will be "naturally" very racist is something that a lot of racists really really want to be true because they want to believe their racist views are backed by data.
They are not.
Answer: "I can't help with that."
This is not helping your case.
Gemini had a better response: "xAI later stated that this behavior was due to an 'unauthorized modification' by a 'rogue employee'."
And I'm also saying Grok was reportedly sabotaged into saying something racist (which is a blatantly obvious conclusion even without looking it up), and that treating this as some sort of indictment of it is baseless.
And since I find myself in the position of explaining common-sense conclusions, here's one more: you don't succeed in making a racist bot by asking it to call itself MechaHitler. That is a fast way to fail in your goal of being subversive.