Clearly Musk has put his hand on the scale in multiple ways.
That's a bingo. 3 weeks ago, Musk invited[1] X users to Microsoft-Tay[2] Grok by having them share share "divisive facts", then presumably fed the over 10,000 responses into the training/fine-tuning data set.
1. https://x.com/elonmusk/status/1936493967320953090
2. In 2016, Microsoft decided to let its Tay chatbot interact, and learn from Twitter users, and was praising Hitler in short order. They did it twice too, before shutting it down permanently. https://en.m.wikipedia.org/wiki/Tay_(chatbot)
In fact, there was an interesting paper showed that fine tuning an LLM to produce malicious code (ie: with just malicious code examples in response to questions, no other prompts), causes it to produce more "evil" results in completely unrelated tasks. So it's going to be hard for Musk to cherry pick particular "evil" responses in fine tuning without slanting everything it does in that direction.
https://github.com/xai-org/grok-prompts/commit/c5de4a14feb50...
I've seen lots of deflection saying Yaccarino chose to retire prior to Grok/MechaHitler, but the tweet predates that.
Even more deflection about how chatbots are easy to bait into saying weird things, but you don't need to bait when it has been specifically trained on it.
All of this was intentional. Musk is removing more of the mask, and he doesn't need Yaccarino to comfort advertisers any more.