724 points | simonw | 1 comments
dankai ◴[] No.44527362[source]
This is so in character for Musk, and shocking because he's incompetent across so many of the topics he likes to give his opinion on. Crazy that he would nerf his own AI company's model like that.
replies(4): >>44527378 #>>44528717 #>>44528760 #>>44532465 #
cedws ◴[] No.44528760[source]
It’s been said here before, but xAI isn’t really in the running to be on the leading edge of LLMs. It’s serving a niche of users who don’t want to use “woke” models and/or who are Musk sycophants.
replies(2): >>44528903 #>>44528909 #
gitaarik ◴[] No.44528909[source]
Actually, the recent failures with Grok remind me of the early failures with Gemini, where it would put people of color in all the images it generated, even in roles they historically never held, like German soldiers in the Second World War.

So in that sense, Grok and Gemini aren't that far apart; they just sit at opposite extremes.

Apparently it's very hard to create an AI that behaves in a balanced way: not too woke, and not too racist.

replies(1): >>44529365 #
diggan ◴[] No.44529365{3}[source]
> Apparently it's very hard to create an AI that behaves balanced. Not too woke, and not too racist.

Well, it's hard to build things we don't even understand ourselves, especially around highly subjective topics. What is "woke" to one person is "basic humanity" to another, and "extremism" to yet another, and the same goes for most things.

If a model can output subjective text, then I think the model will inevitably be biased in some way.