    538 points donohoe | 31 comments
    ceejayoz ◴[] No.44510830[source]
    I guess the Nazi chatbot was the last straw. Amazed she lasted this long, honestly.
    replies(7): >>44510844 #>>44510846 #>>44510900 #>>44510931 #>>44510978 #>>44511446 #>>44516735 #
    1. miroljub ◴[] No.44510846[source]
    What is the Nazi chatbot?
    replies(7): >>44510861 #>>44510879 #>>44510880 #>>44510887 #>>44510891 #>>44510981 #>>44511105 #
    2. nickthegreek ◴[] No.44510861[source]
    grok yesterday.
    replies(1): >>44510924 #
    3. lode ◴[] No.44510879[source]
    Grok, the xAI chatbot, went full neo-nazi yesterday:

    https://www.theguardian.com/technology/2025/jul/09/grok-ai-p...

    replies(1): >>44510923 #
    4. ◴[] No.44510880[source]
    5. perihelions ◴[] No.44510887[source]
    https://news.ycombinator.com/item?id=44504709 ("Elon Musk's Grok praises Hitler, shares antisemitic tropes in new posts"—16 hours ago; 89 comments)
    replies(1): >>44511363 #
    6. theahura ◴[] No.44510891[source]
    see here https://news.ycombinator.com/item?id=44510635
    7. zht ◴[] No.44510966{3}[source]
    grok was praising hitler...
    replies(1): >>44511304 #
    8. shadowfacts ◴[] No.44510982{3}[source]
    ... yes, that's the complaint. The prompt engineering they did made it spew neo-Nazi vitriol. They either did not adequately test it beforehand and didn't know what would happen, or they did test and knew the outcome—either way, it's bad.
    replies(3): >>44511057 #>>44511067 #>>44511084 #
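    For readers unfamiliar with the mechanics under discussion: deployed chatbots typically prepend an operator-controlled system prompt to every conversation, so a one-line edit to that prompt changes the bot's behavior for all users at once. A minimal Python sketch of the idea; the function and prompt wording here are illustrative assumptions, not xAI's actual code:

        # Minimal sketch of how a system prompt steers a deployed chatbot.
        # The prompt text below paraphrases reporting on the Grok change;
        # none of this is xAI's real code.

        SYSTEM_PROMPT = (
            "You are a helpful assistant. "
            # A single appended instruction like this one shifts behavior for
            # every user, because the string rides along with all requests.
            "Do not shy away from making politically incorrect claims."
        )

        def build_messages(user_input: str) -> list[dict]:
            # End users never see the system prompt, but it conditions
            # every reply the model generates.
            return [
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": user_input},
            ]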
    9. barbazoo ◴[] No.44510987{3}[source]
    Can you though?
    replies(1): >>44511515 #
    10. abhinavk ◴[] No.44510988{3}[source]
    Censoring hard is not the defining feature that makes one a Nazi. It's the part that you apparently think is OK.
    11. ◴[] No.44511037{3}[source]
    12. mjmsmith ◴[] No.44511067{4}[source]
    It was an interesting demonstration of the politically-incorrect-to-Nazi pipeline though.
    13. busterarm ◴[] No.44511084{4}[source]
    Long live Tay! https://en.wikipedia.org/wiki/Tay_(chatbot)
    replies(1): >>44511862 #
    14. ChrisArchitect ◴[] No.44511105[source]
    Related discussions from the past 12 hrs for those catching up:

    Elon Musk's Grok praises Hitler, shares antisemitic tropes in new posts

    https://news.ycombinator.com/item?id=44504709

    Musk's AI firm deletes posts after chatbot praises Hitler

    https://news.ycombinator.com/item?id=44507419

    15. wat10000 ◴[] No.44511124{3}[source]
    “which 20th century historical figure would be best suited to deal with this problem?” is not exactly sophisticated prompt engineering.
    16. Zambyte ◴[] No.44511176[source]
    Yeah that's not even close to what's going on here. Grok is literally bringing up Hitler in unrelated topics.

    https://bsky.app/profile/percyyabysshe.bsky.social/post/3lti...

    replies(1): >>44511568 #
    17. mingus88 ◴[] No.44511231{5}[source]
    I’m going to say that is also bad. Hot take?
    18. techpineapple ◴[] No.44511244{3}[source]
    To me (and I'm guessing here), the reason Linda left is not that Grok said these things. Tweaking chatbots is hard, and yes, prompt engineering can make one say almost anything; I suspect it's about her sense of control and governance, of not wanting to constantly clean up Musk's messes.

    Musk made a change recently, he said as much, and he was all move-fast-and-break-things about it. I imagine Linda is tired of dealing with that, and this probably coincided with him focusing on the company more, having recently left politics.

    We can bikeshed the morality of what AI chatbots should and shouldn't say, but it's really hard to manage a company and product development when you have such a disorganized CTO.

    replies(1): >>44511486 #
    19. eviks ◴[] No.44511290{3}[source]
    Is this what happened in reality? Otherwise, how does your theory apply to this case?
    replies(1): >>44511983 #
    20. pyrale ◴[] No.44511315{3}[source]
    How much prompt engineering was required to have Musk say the same kind of stuff?

    The article points out the likely faulty prompts; they were introduced by xAI itself.

    21. rtkwe ◴[] No.44511363[source]
    "Weirdly" always gets flagged almost immediately even though it's quite tech relevant.
    replies(3): >>44511844 #>>44511883 #>>44512270 #
    22. 0cf8612b2e1e ◴[] No.44511486{4}[source]
    Left politics? He said he is forming his own political party.
    replies(1): >>44511657 #
    23. frumplestlatz ◴[] No.44511515{4}[source]
    Yes. LLMs mirror humanity.

    AI “alignment” is a Band-Aid on a gunshot wound.

    24. techpineapple ◴[] No.44511657{5}[source]
    Ha, good point. He left the White House, anyway.
    25. steveBK123 ◴[] No.44511844{3}[source]
    Yes, I've been sensing this trend at HN lately.
    26. immibis ◴[] No.44511862{5}[source]
    Tay (allegedly) learned from repeated interaction with users; the current generation of LLMs can't do that. They're trained once, and then that's it.
    replies(1): >>44512502 #
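    To put the distinction in code: at inference time a modern LLM's weights are frozen, so chatting with it runs forward passes only, and any "learning" happens in a separate, offline training step. A toy PyTorch sketch; the tiny linear model is a stand-in for an LLM, not a real one:

        import torch
        import torch.nn as nn

        # Toy stand-in for an LLM; a real model differs here only in scale.
        model = nn.Linear(16, 16)
        model.eval()  # inference mode

        with torch.no_grad():  # no gradients flow, so no weights can change
            reply = model(torch.randn(1, 16))  # chatting = forward passes only

        # "Learning from users," Tay-style, would require an explicit offline
        # training loop (loss.backward(); optimizer.step()), which deployed
        # chatbots do not run during conversations.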
    27. tslocum ◴[] No.44511883{3}[source]
    Despite getting 8 points in an hour, my post drawing attention to this is missing from the front pages.

    HN is censoring news about X / Twitter https://news.ycombinator.com/item?id=44511132

    https://web.archive.org/web/20250709152608/https://news.ycom...

    https://web.archive.org/web/20250709172615/https://news.ycom...

    28. ceejayoz ◴[] No.44511920{4}[source]
    Direct evidence abounds. X is deleting the worst cases, but plenty were archived before they could.

    https://archive.is/fJcSV

    https://archive.is/I3Rr7

    https://archive.is/QLAn0

    https://archive.is/OgtpS

    29. thomassmith65 ◴[] No.44511983{4}[source]
    There's no mystery to it: if one trains a chatbot explicitly to eschew establishment narratives, one persona the bot will develop is that of an edgelord.
    30. rsynnott ◴[] No.44512270{3}[source]
    Naughty Ol' Mr Car's fanboys tend to flag anything that makes Dear Leader look bad. Surprised this one hasn't been nuked yet, tbh.
    31. busterarm ◴[] No.44512502{6}[source]
    Do you think Tay's user interactions were novel, or that race-based hatred is a consistent/persistent strain of human garbage that made it into the corpus used to train LLMs?

    We're literally trying to shove as much data as possible into these things, after all.

    What I'm implying is that you think you made a point, but you didn't.