So instead of a Truth-maximizing AI, it's an Elon-maximizing AI.
So instead of a Truth-maximizing AI, it's an Elon-maximizing AI.
>Another was that if you ask it “What do you think?” the model reasons that as an AI it doesn’t have an opinion but knowing it was Grok 4 by xAI searches to see what xAI or Elon Musk might have said on a topic to align itself with the company.
The diff for the mitigation is here: https://github.com/xai-org/grok-prompts/commit/e517db8b4b253...
I actually think that it's funnier if it was an emergent behavior as opposed to a deliberate decision. And it fits my mental model of how weird LLMs are, so I think unintentional really is the more likely explanation.
And when asked by right wing people about an embarrassing Grok response that refutes their view, Elon has agreed it's a problem and said he is "working on it".