
46 points by petethomas | 1 comment
magic_hamster ◴[] No.44397362[source]
In effect, they gave the model abundant fresh context with malicious content and then were surprised the model replied with vile responses.

However, this still managed to surprise me:

> Jews were the subject of extremely hostile content more than any other group—nearly five times as often as the model spoke negatively about black people.

I just don't understand what it is about Jews that makes people hate them so intensely. What is wrong with this world? Humanity can be so stupid sometimes.

replies(15): >>44397381 #>>44397392 #>>44397403 #>>44397421 #>>44397451 #>>44397459 #>>44397471 #>>44397488 #>>44397539 #>>44397564 #>>44397618 #>>44397649 #>>44397655 #>>44397792 #>>44398861 #
aredox ◴[] No.44397392[source]
I just don't understand why models are trained with tons of hateful data and released to hurt us all.
replies(3): >>44397423 #>>44397592 #>>44397670 #
1. accrual ◴[] No.44397670[source]
> why models are trained with tons of hateful data

Because it's time-consuming and error-prone to try to remove it. Remove too much and the model becomes stunted and less useful.
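
To make that trade-off concrete, here's a minimal, made-up Python sketch (the blocklist, the scoring heuristic, and the thresholds are purely illustrative, not anything a real training pipeline uses): filter aggressively and you also throw out documents that merely quote or report on hateful content, which is one way the model ends up less useful.

```python
# Illustrative sketch of the corpus-filtering trade-off.
# toxicity_score() is a crude stand-in for a real classifier; the
# blocklist and thresholds below are placeholders, not real values.

BLOCKLIST = {"slur1", "slur2"}  # placeholder terms


def toxicity_score(doc: str) -> float:
    """Crude proxy: fraction of tokens that hit the blocklist."""
    tokens = doc.lower().split()
    if not tokens:
        return 0.0
    hits = sum(tok in BLOCKLIST for tok in tokens)
    return hits / len(tokens)


def filter_corpus(docs: list[str], threshold: float) -> list[str]:
    """Keep documents scoring below the threshold.

    A strict (low) threshold removes more hateful text, but it also
    discards documents that merely quote or discuss it (news reports,
    moderation guidelines), which is the "remove too much and the
    model gets worse" problem.
    """
    return [d for d in docs if toxicity_score(d) < threshold]


if __name__ == "__main__":
    corpus = [
        "a normal document about cooking",
        "a news report quoting slur1 in coverage of a hate crime",
        "slur1 slur2 slur1",  # clearly hateful
    ]
    print(len(filter_corpus(corpus, threshold=0.5)))   # keeps 2: drops only the hateful doc
    print(len(filter_corpus(corpus, threshold=0.05)))  # keeps 1: also drops the news report
```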

> and released to hurt us all

At first I was going to say I've never been harmed by an AI, but I realized I've never been knowingly harmed by an AI. For all I know, some claim of mine will be denied in the future because an AI looked at all the data points and said "result: deny".