Most active commenters
  • amrocha(7)
  • rockemsockem(5)
  • timschmidt(4)
  • MetaWhirledPeas(4)
  • arp242(3)

←back to thread

539 points donohoe | 38 comments | | HN request time: 1.876s | source | bottom
Show context
Hoasi ◴[] No.44511157[source]
X has been nothing short of an exercise in brand destruction. However, despite all the drama, it still stands, it still exists, and it remains relevant.
replies(23): >>44511323 #>>44511451 #>>44511453 #>>44511457 #>>44511712 #>>44512087 #>>44512184 #>>44512275 #>>44512704 #>>44513825 #>>44513960 #>>44514302 #>>44514688 #>>44516258 #>>44517308 #>>44517368 #>>44517871 #>>44517980 #>>44519236 #>>44519282 #>>44520336 #>>44520826 #>>44522391 #
mrweasel ◴[] No.44511712[source]
More and more I think Musk managed to his take over of Twitter pretty successfully. X still isn't as strong a brand as Twitter where, but it's doing okay. A lot of the users who X need to stay on the platform, journalists and politicians, are still there.

The only issue is that Musk vastly overpaid for Twitter, but if he plans to keep it and use it for his political ambitions, that might not matter. Also remember that while many agree that $44B was a bit much, most did still put Twitter at 10s of billions, not the $500M I think you could justify.

The firings, which was going to tank Twitter also turned out reasonably well. Turns out they didn't need all those people.

replies(14): >>44511868 #>>44512165 #>>44512334 #>>44512898 #>>44513148 #>>44513174 #>>44513350 #>>44514035 #>>44514544 #>>44514680 #>>44515018 #>>44516438 #>>44517692 #>>44518854 #
1. threetonesun ◴[] No.44511868[source]
Well sure if you give up on moderation, and close the platform to people who aren't signed in, and shut off the API then yes you didn't need the people supporting those parts of the platform.

And I guess if you consider "the place with the MechaHitler AI" as good branding there's no arguing with you that it's doing just as well as Twitter.

replies(2): >>44512101 #>>44512116 #
2. mrweasel ◴[] No.44512101[source]
I don't agree with the direction Musk has set for X, but businesswise it's not doing worse. Twitter was a financial catastrophe before the take over, so you didn't need much improvement. Moderation was a financial drain, the API didn't make them any money and none of the users seems to care all that much about the platform not being open to users without an account... because they all have accounts and wasn't able to interact with you anyway.

The media seems to get a good laugh out if Grok arguing the plight of white South Africans and is fondness to Hitler, but I'm not seeing journalists and politicians leaving X in droves because of it.

replies(5): >>44512258 #>>44512363 #>>44513205 #>>44513798 #>>44517300 #
3. rockemsockem ◴[] No.44512116[source]
I will fondly remind folks that Grok isn't even the first LLM to become a Nazi on Twitter.

Remember Tay Tweets?

https://en.m.wikipedia.org/wiki/Tay_(chatbot)

Honestly I really don't think a bad release of an LLM that was rolled back is really the condemnation you think it is.

replies(2): >>44512219 #>>44512638 #
4. amrocha ◴[] No.44512219[source]
There’s a difference between a 3rd party twitter bot and grok. And it’s not a “bad release”, it’s been like this ever since it launched.

Funny how ChatGPT is vanilla and grok somehow has a new racist thing to say every other week.

replies(3): >>44512620 #>>44513304 #>>44513883 #
5. amrocha ◴[] No.44512258[source]
Most of the local journalists, politicians, game devs, and open source maintainers i followed left. It’s just US national pundits, bots, and bait monetization accounts there at this point.
6. greenie_beans ◴[] No.44512363[source]
you must not know many journalists because they certainly left in droves
replies(1): >>44516537 #
7. timschmidt ◴[] No.44512620{3}[source]
This ChatGPT? https://futurism.com/chatgpt-encouraged-murder-sam-altman
replies(1): >>44515302 #
8. blargey ◴[] No.44512638[source]
I don’t think the third+ flavor of “bad release” this year, of the sort nobody else in this crowded space suffers from, is as innocuous as you think it is.

And Tay was a non-LLM user account released a full 6 years before ChatGPT; you might as well bring up random users’ markov chains.

replies(1): >>44513907 #
9. archagon ◴[] No.44513205[source]
The job of journalists and politicians is to broadcast to as wide an audience as they can. It is not particularly surprising that many retain Twitter accounts for the marketing value.
replies(1): >>44514078 #
10. MetaWhirledPeas ◴[] No.44513304{3}[source]
> Funny how ChatGPT is vanilla and grok somehow has a new racist thing to say every other week

To be fair, 'exposing' ChatGPT, Claude, and Gemini as racist will get you a lot fewer clicks.

Musk claims Grok to be less filtered in general than other LLMs. This is what less filtered looks like. LLMs are not human; if you get one to say racist things it's probably because you were trying to make it say racist things. If you want this so-called problem solved by putting bowling bumpers on the bot, by all means go use ChatGPT.

replies(3): >>44514913 #>>44515232 #>>44516458 #
11. kevinventullo ◴[] No.44513798[source]
I don’t think we can say for sure whether it’s doing worse businesswise since the numbers aren’t public. But consider e.g. https://www.adweek.com/media/advertisers-returning-to-x/

“From January to September 2024, marketing intelligence platform MediaRadar found that (X’s former top advertisers including Comcast, IBM, Disney, Warner Bros. Discovery, and Lionsgate Entertainment) collectively spent less than $3.3 million on X. This is a 98% year-over-year drop from the $170 million spent during the same period in 2023.”

12. rockemsockem ◴[] No.44513883{3}[source]
It absolutely has not been claiming that it's "MechaHitler" since it was released.

Try.

replies(1): >>44515258 #
13. rockemsockem ◴[] No.44513907{3}[source]
I posted the Wikipedia page, do you really think I don't know how long ago Tay was? I don't think the capabilities matter if we're just talking about chat bots being racist online.

Also IDK what you mean by third+ flavor? I'm not familiar with other bad Grok releases, but I don't really use it, I just see it's responses on Twitter. Also do you not remember the Google image model that made the founding fathers different races by default?

replies(1): >>44516405 #
14. bikezen ◴[] No.44514078{3}[source]
After NPR left twitter they saw a 1% drop in traffic from socials. It is not a useful platform.

Source: https://niemanreports.org/npr-twitter-musk/

15. mrguyorama ◴[] No.44514913{4}[source]
>This is what less filtered looks like

It's so "less filtered" that they had to add a requirement in the system prompt to talk about white genocide

This idea that "less filtered" LLMs will be "naturally" very racist is something that a lot of racists really really want to be true because they want to believe their racist views are backed by data.

They are not.

replies(1): >>44515014 #
16. MetaWhirledPeas ◴[] No.44515014{5}[source]
I asked MS Copilot, "Did the Grok team add a requirement in the system prompt to talk about white genocide?"

Answer: "I can't help with that."

This is not helping your case.

Gemini had a better response: "xAI later stated that this behavior was due to an 'unauthorized modification' by a 'rogue employee'."

replies(3): >>44515282 #>>44515287 #>>44516695 #
17. amrocha ◴[] No.44515232{4}[source]
Nobody’s trying to get grok to talk about MechaHitler. At that point you just know Musk said that out loud in a meeting and someone had to add it to groks base prompt.
18. amrocha ◴[] No.44515258{4}[source]
Right, it’s just been talking about white genocide and generating nazi images instead.
replies(1): >>44515363 #
19. amrocha ◴[] No.44515282{6}[source]
Avoiding sensitive subjects is not the same thing as endorsing racist views if that’s what you’re implying.
replies(1): >>44517309 #
20. amrocha ◴[] No.44515302{4}[source]
Not to say there aren’t problems with ChatGPT, but it generally steers clear of controversial subjects unless coaxed into it.

Grok actively leans into racism and nazism.

replies(1): >>44516006 #
21. rockemsockem ◴[] No.44515363{5}[source]
What Nazi images?

The white genocide thing I remember hearing about and looked really forced

22. timschmidt ◴[] No.44516006{5}[source]
It seems that there is tremendous incentive for people like yourself (I see you're very active in these comments) to claim that. But I see you've presented no quantitative evidence. Given the politicization of the systems and individuals involved, without evidence, it all reads like partisan mud slinging.

Any LLM can be convinced to say just about anything. Pliny has shown that time and time again.

replies(1): >>44516493 #
23. sjsdaiuasgdia ◴[] No.44516405{4}[source]
To catch you up, this happened 2 months ago -

https://www.theguardian.com/technology/2025/may/14/elon-musk...

replies(1): >>44517351 #
24. arp242 ◴[] No.44516458{4}[source]
> if you get one to say racist things it's probably because you were trying to make it say racist things.

When it started ranting about the Jews and "Mecha Hitler" it was unprompted on unrelated matters. When it started ranting about "white genocide" in SA a while ago it was also unprompted on unrelated matters.

So no.

25. arp242 ◴[] No.44516493{6}[source]
Does ChatGPT start ranting about Jews and "White Genocide" unprompted? How can I even quantify that it doesn't do that?

This is a classic "anything that can't be empirically measured is invalid and can be dismissed" mistake. It would be nice if we could easily empirically measure everything, but that's not how the world works.

The ChatGPT article is of a rather different nature where ChatGPT went off the rails after a long conversation with a troubled person. That's not good, but just no the same as "start spewing racism on unrelated questions".

replies(2): >>44516588 #>>44523505 #
26. Lu2025 ◴[] No.44516537{3}[source]
Left where?
replies(1): >>44516810 #
27. timschmidt ◴[] No.44516588{7}[source]
Friend, if you can't empirically measure the outputs of LLMs which provide lovely APIs for doing so, what are you doing?

20 lines of code and some data would really bolster your case, but I don't see them.

replies(2): >>44518947 #>>44521615 #
28. saagarjha ◴[] No.44516695{6}[source]
If you're asking a coding LLM about facts I don't really think you are capable of evaluating the case at all.
replies(1): >>44517323 #
29. greenie_beans ◴[] No.44516810{4}[source]
twitter
30. freejazz ◴[] No.44517300[source]
Well, the HitlerGrok thing happened yesterday...

I ask this genuinely and without any intent to cause offense: given your name, are you a bit?

31. MetaWhirledPeas ◴[] No.44517309{7}[source]
No I'm saying the consequences of over-filtering are apparent with Copilot 's response: no answer.

And I'm also saying Grok was reportedly sabotaged into saying something racist (which is a blatantly obvious conclusion even without looking it up), and that seeing this as some sort of indictment against it is baseless.

And since I find myself in the position of explaining common sense conclusions here's one more: you don't succeed in making a racist bot by asking it to call itself Mecha Hitler. That is a fast way to fail in your goal of being subversive.

32. MetaWhirledPeas ◴[] No.44517323{7}[source]
If you wish to do better, please enlighten us with facts and sources.
replies(1): >>44518242 #
33. rockemsockem ◴[] No.44517351{5}[source]
Yes, I had forgotten about that. Was super weird and forced into conversations from what I saw.

Definitely a bit of a trend now with mecha hitler...

34. saagarjha ◴[] No.44518242{8}[source]
Why should I do extra work when you are unwilling to do so?
35. amrocha ◴[] No.44518947{8}[source]
idk friend, it seems kind of presumptuous to demand other people’s time like this.

It’s pretty evident that the people building grok are injecting their ideology into it.

I don’t need more evidence, and I don’t need you to agree with me. Go ahead and write those 20 lines if you so desire. I’m happy to be proven wrong.

replies(1): >>44522292 #
36. arp242 ◴[] No.44521615{8}[source]
You can't just run a few queries and base conclusion off that, you need to run tens of thousands of different ones and then somehow evaluate the responses. It's a huge amount of work.

Demanding empirical data and then coming up with shoddy half-arsed methodology is unserious.

37. timschmidt ◴[] No.44522292{9}[source]
I don't think I'm the one being presumptuous or demanding. I've actually tried to help you make a stronger argument. Shooting a hundred or even a thousand queries to 3 or 4 LLMs and shoving the results through established sentiment analysis algorithms is something ChatGPT can one-shot in just about any language. You demand people agree with your opinion and refuse to spend 20 minutes supporting it with facts. Not my problem, I tried to help. You may not see it that way. That's fine.
38. engineer_22 ◴[] No.44523505{7}[source]
> Does ChatGPT start ranting about Jews and "White Genocide" unprompted? How can I even quantify that it doesn't do that?

Grok doesnt do that.