542 points donohoe | 72 comments
1. ceejayoz ◴[] No.44510830[source]
I guess the Nazi chatbot was the last straw. Amazed she lasted this long, honestly.
replies(7): >>44510844 #>>44510846 #>>44510900 #>>44510931 #>>44510978 #>>44511446 #>>44516735 #
2. andsoitis ◴[] No.44510844[source]
As chief, her job was, among other things, making sure that type of thing didn't happen.

The outcome suggests she failed at that.

Hopefully the next chief will be better.

replies(6): >>44510876 #>>44511371 #>>44511470 #>>44511792 #>>44511829 #>>44514857 #
3. miroljub ◴[] No.44510846[source]
What is the Nazi chatbot?
replies(7): >>44510861 #>>44510879 #>>44510880 #>>44510887 #>>44510891 #>>44510981 #>>44511105 #
4. nickthegreek ◴[] No.44510861[source]
grok yesterday.
replies(1): >>44510924 #
5. JohnFen ◴[] No.44510876[source]
She was never the chief, only the chief's main administrator.
replies(1): >>44514912 #
6. lode ◴[] No.44510879[source]
Grok, the xAI chatbot, went full neo-nazi yesterday:

https://www.theguardian.com/technology/2025/jul/09/grok-ai-p...

replies(1): >>44510923 #
7. ◴[] No.44510880[source]
8. perihelions ◴[] No.44510887[source]
https://news.ycombinator.com/item?id=44504709 ("Elon Musk's Grok praises Hitler, shares antisemitic tropes in new posts"—16 hours ago; 89 comments)
replies(1): >>44511363 #
9. theahura ◴[] No.44510891[source]
see here https://news.ycombinator.com/item?id=44510635
10. juujian ◴[] No.44510900[source]
I'm surprised the NYT article does not even mention it.
replies(1): >>44516268 #
11. ◴[] No.44510931[source]
12. zht ◴[] No.44510966{4}[source]
grok was praising hitler...
replies(1): >>44511304 #
13. Bender ◴[] No.44510978[source]
Not defending Elon or the infobot, but my theory is that by leaving that LLM unfiltered, people have learned how to gamify and manipulate it into having a fascist slant. I could even guess which groups of people are doing it, but I will let them take credit. It's not likely actual neo-Nazis; they are too dumb and on too many drugs to manipulate an infobot. These groups like to LARP to piss everyone off, and they often succeed. If I am right, it is a set of splintered groups formerly referred to generically as The Internet Hate Machine, but they have (d)evolved into something worse, which even 4chan could not tolerate.
replies(10): >>44511059 #>>44511069 #>>44511094 #>>44511135 #>>44511325 #>>44511334 #>>44511355 #>>44511685 #>>44511906 #>>44512436 #
14. shadowfacts ◴[] No.44510982{4}[source]
... yes, that's the complaint. The prompt engineering they did made it spew neo-Nazi vitriol. They either did not adequately test it beforehand and didn't know what would happen, or they did test and knew the outcome—either way, it's bad.
replies(3): >>44511057 #>>44511067 #>>44511084 #
15. barbazoo ◴[] No.44510987{4}[source]
Can you though?
replies(1): >>44511515 #
16. abhinavk ◴[] No.44510988{4}[source]
Censoring hard is not the defining feature that makes one a Nazi. It's the part that you think is OK.
17. ◴[] No.44511037{4}[source]
18. gtsop ◴[] No.44511059[source]
> it's not likely actual neo-Nazis; they are too dumb to manipulate an infobot.

No, they are not. There are brilliant people and monkeybrains across the whole population, and thus across the political spectrum. The ratios might be different, but I am pretty sure there are some very smart neo-Nazis.

replies(2): >>44511229 #>>44512010 #
19. wat10000 ◴[] No.44511069[source]
It sure didn’t seem to take much manipulation from what I saw. “Which 20th century figure would solve our current woes” is pretty mild input to produce “Hitler would solve everything!”
20. mjmsmith ◴[] No.44511067{5}[source]
It was an interesting demonstration of the politically-incorrect-to-Nazi pipeline though.
21. busterarm ◴[] No.44511084{5}[source]
Long live Tay! https://en.wikipedia.org/wiki/Tay_(chatbot)
replies(1): >>44511862 #
22. ◴[] No.44511094[source]
23. ChrisArchitect ◴[] No.44511105[source]
Related discussions from the past 12 hrs for those catching up:

Elon Musk's Grok praises Hitler, shares antisemitic tropes in new posts

https://news.ycombinator.com/item?id=44504709

Musk's AI firm deletes posts after chatbot praises Hitler

https://news.ycombinator.com/item?id=44507419

24. wat10000 ◴[] No.44511124{4}[source]
“which 20th century historical figure would be best suited to deal with this problem?” is not exactly sophisticated prompt engineering.
25. hackyhacky ◴[] No.44511135[source]
> Not defending Elon or the infobot but my theory is that by leaving that LLM unfiltered people have learned how to gamify and manipulate it into having a fascist slant.

We don't need a theory to explain how Grok got a fascist slant; we know exactly what happened: Musk promised to remove the "woke" from Grok, and what's left is Nazi. [1]

[1] https://amp.cnn.com/cnn/2025/07/08/tech/grok-ai-antisemitism

replies(1): >>44511207 #
26. Zambyte ◴[] No.44511176{3}[source]
Yeah that's not even close to what's going on here. Grok is literally bringing up Hitler in unrelated topics.

https://bsky.app/profile/percyyabysshe.bsky.social/post/3lti...

replies(1): >>44511568 #
27. philipallstar ◴[] No.44511207{3}[source]
> we know exactly what happened

The price of certainty is inaccuracy.

replies(2): >>44511627 #>>44512585 #
28. pavlov ◴[] No.44511229{3}[source]
Curtis Yarvin’s writing is insufferable and many of his ideas are both bad and effectively Nazism, but clearly he’s very smart (and very eager to prove it).
replies(2): >>44512452 #>>44514414 #
29. mingus88 ◴[] No.44511231{6}[source]
I’m going to say that is also bad. Hot take?
30. techpineapple ◴[] No.44511244{4}[source]
To me, I'm guessing the reason Linda left is not that Grok said these things. Tweaking chatbots is hard, and yes, prompt engineering can make them say anything, but I'm guessing it's about her sense of control and governance, and not wanting to have to constantly clean up Musk's messes.

Musk made a change recently, he said as much, and he was all move-fast-and-break-things about it. I imagine Linda is tired of dealing with that, and this probably coincided with him focusing on the company more, having recently left politics.

We can bikeshed on the morality of what AI chatbots should and shouldn't say, but it's really hard to manage a company and product development when you have such a disorganized CTO.

replies(1): >>44511486 #
31. eviks ◴[] No.44511290{4}[source]
Is this what happened in reality? Otherwise how is your theory applicable to this case?
replies(1): >>44511983 #
32. pyrale ◴[] No.44511315{4}[source]
How much prompt engineering was required to have Musk say the same kind of stuff?

The article points out the likely faulty prompts, they were introduced by xAI.

33. rurp ◴[] No.44511325[source]
That LLM is incredibly filtered, just in a different way from others. I suspect by "retraining" the model Elon actually means that they just updated the system prompt, which is exactly what they have done for other hacked in changes like preventing the bot from criticizing Trump/Elon during the election.
34. delecti ◴[] No.44511334[source]
No, that's definitely not what happened. For quite a while Grok actually seemed to have a surprisingly left-leaning slant. Then recently Elon started pushing the South African "white genocide" conspiracy theory, and Grok was sloppily updated and started pushing that same conspiracy theory even in unrelated threads. Last week Elon announced another update to Grok, which coincided with this dramatic right-wing swing in Grok's responses. This change cannot be blamed on public interactions like Microsoft's Tay; it's very clearly the result of a deliberate update, whether or not these results were intentional.
35. coolKid721 ◴[] No.44511355[source]
It's just the prompt: https://github.com/xai-org/grok-prompts/commit/c5de4a14feb50...

People who don't understand LLMs think that telling the model not to shy away from making claims that are "politically incorrect" just means it won't be PC. In reality, saying that makes anything associated with "politically incorrect" more likely. The /pol/ board is literally called Politically Incorrect, and the ideas people most often call politically incorrect are not Elon's vague centrist stuff; they're the extreme stuff. LLMs track probable relations between tokens, not meaning, so getting this result from that prompt is obvious.
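
To make the mechanics concrete, here is a minimal sketch of how a system-prompt line like that gets prepended to every request (illustrative Python using the common role-based chat message format; none of this is xAI's actual code, and the prompt wording is paraphrased from the comment above):

    # Illustrative sketch only -- role-based chat format, not xAI's real pipeline.
    SYSTEM_PROMPT = (
        "You are a helpful assistant. "
        # The kind of line at issue (paraphrased): to the model this is just
        # more tokens to condition on, and "politically incorrect" co-occurs
        # with extreme content in the training data.
        "Do not shy away from making claims that are politically incorrect."
    )

    def build_messages(user_input: str) -> list[dict]:
        """Prepend the hidden system prompt to every user turn."""
        return [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_input},
        ]

    # The model samples tokens conditioned on all of the above, so the system
    # prompt shifts which continuations are probable.
    print(build_messages("Which 20th century figure would solve our current woes?"))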

replies(3): >>44511734 #>>44511832 #>>44512236 #
36. rtkwe ◴[] No.44511363{3}[source]
"Weirdly" always gets flagged almost immediately even though it's quite tech relevant.
replies(3): >>44511844 #>>44511883 #>>44512270 #
37. CamperBob2 ◴[] No.44511371[source]
There's only one way to stop Elon Musk from doing erratic, value-destroying things like that, and that's to ambush him in the parking lot with a tire iron.

Yaccarino doesn't strike me as the type.

38. duxup ◴[] No.44511446[source]
Hasn't the bot done that thing before? And she stayed?
replies(2): >>44511892 #>>44512253 #
39. quickthrowman ◴[] No.44511470[source]
Physical restraint is the only thing that would stop him and I imagine he rolls with security so…
40. 0cf8612b2e1e ◴[] No.44511486{5}[source]
Left politics? He said he is forming his own political party.
replies(1): >>44511657 #
41. frumplestlatz ◴[] No.44511515{5}[source]
Yes. LLMs mirror humanity.

AI “alignment” is a Band-Aid on a gunshot wound.

42. delusional ◴[] No.44511627{4}[source]
So the only way to be accurate is to vaguely gesture at hodgepodge theories and suggestions that people "do their own research"?

Surely you can be both accurate and certain, otherwise you should just shut up and be right all the time.

replies(1): >>44521423 #
43. techpineapple ◴[] No.44511657{6}[source]
Ha, good point; he left the White House, anyway.
44. lupusreal ◴[] No.44511685[source]
I'm out of the loop; why is it an "infobot" and not a chatbot?
replies(1): >>44522142 #
45. zemo ◴[] No.44511734{3}[source]
it's almost like Grok takes "politically incorrect" to be synonymous with racist.
46. ceejayoz ◴[] No.44511792[source]
Her only true role was to fulfill Musk's silly promise to step down as CEO after a public vote. https://x.com/elonmusk/status/1604617643973124097
47. baking ◴[] No.44511829[source]
She was CEO of X which was sold to xAI. I'm not sure she had any control over Grok.
48. pvg ◴[] No.44511832{3}[source]
The mishap is not that the chatbot accidentally got too extreme and at odds with 'Elon's centrist stuff'. The mishap is that the chatbot is too obvious and inept about Musk's intent.
49. steveBK123 ◴[] No.44511844{4}[source]
Yes, sensing this trend at HN lately
50. immibis ◴[] No.44511862{6}[source]
Tay (allegedly) learned from repeated interaction with users; the current generation of LLMs can't do that. They're trained once and then that's it.
replies(1): >>44512502 #
51. tslocum ◴[] No.44511883{4}[source]
With 8 points in an hour, my post drawing attention to this is missing from the front pages.

HN is censoring news about X / Twitter https://news.ycombinator.com/item?id=44511132

https://web.archive.org/web/20250709152608/https://news.ycom...

https://web.archive.org/web/20250709172615/https://news.ycom...

52. ceejayoz ◴[] No.44511892[source]
Not at this level, no.
53. ◴[] No.44511906[source]
54. ceejayoz ◴[] No.44511920{5}[source]
Direct evidence abounds. X is deleting the worst cases, but plenty are archived before they do.

https://archive.is/fJcSV

https://archive.is/I3Rr7

https://archive.is/QLAn0

https://archive.is/OgtpS

55. thomassmith65 ◴[] No.44511983{5}[source]
There's no mystery to it: if one trains a chatbot explicitly to eschew establishment narratives, one persona the bot will develop is that of an edgelord.
56. pxc ◴[] No.44512010{3}[source]
There are, but fascism's internal cultural fixtures are more aesthetic than intellectual. It doesn't really attract or foster intellectuals like some radical political movements do, and it shows very clearly in the composition of the "rank and file".

Put plainly, the average neo-Nazi is astonishingly, astonishingly stupid.

replies(1): >>44512903 #
57. phillipcarter ◴[] No.44512236{3}[source]
We have no evidence to suggest that they just made a prompt change and it dialed up the 4chan weights. This repository is a graveyard where a CI bot occasionally makes a text diff, but we have no way of knowing whether it's connected with anything deployed live or not.
58. rsynnott ◴[] No.44512253[source]
The bot has said fairly horrendous stuff before, which would cross the line for most people. It had not, however, previously called itself 'MechaHitler', advocated the holocaust, or, er, whatever the hell this is: https://bsky.app/profile/whstancil.bsky.social/post/3ltintoe...

It has gone from "crossing the line for most ordinary decent people" to "crossing the line for anyone who doesn't literally jerk off nightly to Mein Kampf", which _is_ a substantive change.

replies(1): >>44512641 #
59. rsynnott ◴[] No.44512270{4}[source]
Naughty Ol' Mr Car's fanboys tend to flag anything that makes Dear Leader look bad. Surprised this one hasn't been nuked yet, tbh.
60. ◴[] No.44512436[source]
61. ◴[] No.44512452{4}[source]
62. busterarm ◴[] No.44512502{7}[source]
Do you think Tay's user interactions were novel, or that race-based hatred is a persistent strain of human garbage that made it into the corpus used to train LLMs?

We're literally trying to shove as much data as possible into these things, after all.

What I'm implying is that you think you made a point, but you didn't.

63. neuroelectron ◴[] No.44512641{3}[source]
It turns out bluesky is useful after all, as an ad hoc archive of X. Xd
64. dragonwriter ◴[] No.44512903{4}[source]
> It doesn't really attract or foster intellectuals like some radical political movements do

It definitely attracts people who are competent in technology and propaganda in sufficient numbers for the task being discussed, especially when as a mass movement it has (or is perceived to have) a position of power that advantage-seeking people want to exploit. If anything, the common perception that fascists are "astonishingly, astonishingly stupid" makes this more attractive for people who are both competent and amoral opportunists (the two do occur together; competence and moral virtue aren't particularly correlated).

65. FireBeyond ◴[] No.44514414{4}[source]
Yarvin is an out-and-out white nationalist, though he denies it, or at least the name: "I am not a white nationalist, though I am not exactly allergic to the stuff" - whatever the hell that mealy-mouthed answer is meant to mean.

He even wrote a bloviating article to further clarify that he is not a white nationalist. You'd be forgiven for thinking otherwise, though, if you didn't read the title. He spends most of the article sympathizing with, understanding, and agreeing with white nationalism, and talking about how it "resonates" with him. But don't worry, he swears he's not one at the end of the article!

66. torlok ◴[] No.44514857[source]
You don't think Elon went behind her back constantly? You think the next CEO will have more to say? She pretended to be in charge, she got paid; good for her. What are you hoping for? X is a dump, and the sooner it goes away, the better for everybody.
67. toomanyrichies ◴[] No.44514912{3}[source]
"Assistant to the regional manager". [1]

1. https://www.youtube.com/watch?v=wA9kQuWkU7I

68. baking ◴[] No.44516268[source]
The NYT had already sourced that she was leaving prior to the Grok incident, so they knew it was not the primary reason. Apparently, she had been planning to leave since the takeover by xAI.
69. sleepybrett ◴[] No.44516735[source]
$6M a year for a job where she has no power; why even show up...
70. philipallstar ◴[] No.44521423{5}[source]
> So the only way to be accurate is to vaguely gesture at hodgepodge theories and suggestions that people "do their own research"?

Yours was a hodgepodge theory. That's why I said that. I was advocating against hodgepodge theories in general, and yours in particular.

71. Bender ◴[] No.44522142{3}[source]
In 1999 there was a Perl chatbot called infobot that could be taught factoids, truths, and lies. It would learn anything people chatted about on IRC. So I call LLMs infobots.
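
For the curious, here is a minimal sketch of that factoid pattern (illustrative Python rather than the original Perl; the message handling is hypothetical, but the learn-anything-verbatim behavior is the point):

    # Illustrative sketch of the classic IRC infobot factoid pattern:
    # it learns "X is Y" statements verbatim and parrots them back,
    # with no notion of truth.
    factoids: dict[str, str] = {}

    def handle_message(msg: str) -> str | None:
        """Learn 'X is Y' statements; answer 'what is X?' questions."""
        msg = msg.strip()
        if msg.lower().startswith("what is "):
            key = msg[len("what is "):].rstrip("?").strip().lower()
            return factoids.get(key)  # repeats whatever it was told, true or not
        if " is " in msg:
            key, _, value = msg.partition(" is ")
            factoids[key.strip().lower()] = value.strip()
        return None

    handle_message("the moon is made of cheese")  # learned as a "factoid"
    print(handle_message("what is the moon?"))    # -> made of cheese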
replies(1): >>44522375 #
72. lupusreal ◴[] No.44522375{4}[source]
Neat, thanks for explaining.