539 points by donohoe | 1 comment
ceejayoz No.44510830
I guess the Nazi chatbot was the last straw. Amazed she lasted this long, honestly.
miroljub No.44510846
What is the Nazi chatbot?
lode No.44510879
Grok, the xAI chatbot, went full neo-Nazi yesterday:

https://www.theguardian.com/technology/2025/jul/09/grok-ai-p...

Covzire No.44510923 [dead post]
[flagged]
shadowfacts No.44510982
... yes, that's the complaint. The prompt engineering they did made it spew neo-Nazi vitriol. Either they did not adequately test it beforehand and didn't know what would happen, or they did test it and knew the outcome. Either way, it's bad.
busterarm No.44511084
Long live Tay! https://en.wikipedia.org/wiki/Tay_(chatbot)
immibis No.44511862
Tay (allegedly) learned from repeated interaction with users; the current generation of LLMs can't do that. They're trained once, and after that the weights are frozen.
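The distinction can be sketched in a few lines. This is a toy illustration (the class names and the parrot-like "model" are invented for the example, not anything from Tay's or Grok's actual implementation): an online-learning bot folds user input back into its training state, while a frozen model's parameters are untouched by inference.

```python
class OnlineBot:
    """Tay-style: every interaction updates the model's state."""

    def __init__(self):
        self.corpus = ["hello"]

    def chat(self, user_msg):
        self.corpus.append(user_msg)  # user input becomes training data
        return self.corpus[-1]        # replies reflect what it just absorbed


class FrozenBot:
    """LLM-style: trained once; inference never changes the weights."""

    def __init__(self, training_corpus):
        self.corpus = tuple(training_corpus)  # immutable after training

    def chat(self, user_msg):
        return self.corpus[-1]  # output depends only on the fixed training data


tay = OnlineBot()
tay.chat("something awful")
print("something awful" in tay.corpus)   # the abuse stuck in the model

llm = FrozenBot(["hello"])
llm.chat("something awful")
print("something awful" in llm.corpus)   # inference left the "weights" untouched
```

The catch, as the reply below points out, is that the frozen model is only as clean as the fixed corpus it was trained on.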
busterarm No.44512502
Do you think Tay's user interactions were novel, or that race-based hatred is a persistent strain of human garbage that made it into the corpus used to train LLMs?

We're literally trying to shove as much data as possible into these things, after all.

What I'm implying is that you think you made a point, but you didn't.