538 points donohoe | 6 comments
ceejayoz ◴[] No.44510830[source]
I guess the Nazi chatbot was the last straw. Amazed she lasted this long, honestly.
replies(7): >>44510844 #>>44510846 #>>44510900 #>>44510931 #>>44510978 #>>44511446 #>>44516735 #
miroljub ◴[] No.44510846[source]
What is the Nazi chatbot?
replies(7): >>44510861 #>>44510879 #>>44510880 #>>44510887 #>>44510891 #>>44510981 #>>44511105 #
lode ◴[] No.44510879[source]
Grok, the xAI chatbot, went full neo-Nazi yesterday:

https://www.theguardian.com/technology/2025/jul/09/grok-ai-p...

replies(1): >>44510923 #
Covzire[dead post] ◴[] No.44510923[source]
[flagged]
1. shadowfacts ◴[] No.44510982{3}[source]
... yes, that's the complaint. The prompt engineering they did made it spew neo-Nazi vitriol. They either did not adequately test it beforehand and didn't know what would happen, or they did test and knew the outcome—either way, it's bad.
replies(3): >>44511057 #>>44511067 #>>44511084 #
2. mjmsmith ◴[] No.44511067[source]
It was an interesting demonstration of the politically-incorrect-to-Nazi pipeline though.
3. busterarm ◴[] No.44511084[source]
Long live Tay! https://en.wikipedia.org/wiki/Tay_(chatbot)
replies(1): >>44511862 #
4. mingus88 ◴[] No.44511231[source]
I’m going to say that is also bad. Hot take?
5. immibis ◴[] No.44511862[source]
Tay (allegedly) learned from repeated interaction with users; the current generation of LLMs can't do that. They're trained once, and that's it.
replies(1): >>44512502 #
6. busterarm ◴[] No.44512502{3}[source]
Do you think Tay's user interactions were novel, or that race-based hatred is a persistent strain of human garbage that made it into the corpora used to train LLMs?

We're literally trying to shove as much data as possible into these things, after all.

What I'm implying is that you think you made a point, but you didn't.