
131 points kozika | 5 comments
1. nvch No.44450225
The question is, who is acting in a racist manner here: the LLM that does what it can, or the humans sharing those videos?
replies(1): >>44450628 #
2. unsnap_biceps No.44450628
Until we get an LLM that actually "thinks", it's just a tool like Photoshop. Photoshop isn't racist if someone uses it to create racist material, so an LLM wouldn't be racist either.
replies(3): >>44450648 #>>44451147 #>>44452554 #
3. redundantly No.44450648
LLMs can and do have biases. One wouldn't be far off calling an LLM racist.
4. ghushn3 No.44451147
I saw (on HN, actually) an academic definition of prejudice, discrimination, and racism that stuck with me. I might be butchering this a bit, but prejudice is basically believing another group is lesser purely because of some group trait. Discrimination is acting on that belief. Racism is discrimination based specifically on race, particularly when the person discriminated against belongs to a minority or less powerful group.

LLMs don't think, and also have no race. So I have a hard time saying they can be racist, per se. But they can absolutely produce racist and discriminatory material, especially if their training corpus contains racist and discriminatory material (which it absolutely does).

I do think it's important to distinguish between Photoshop, which is largely built from deliberate feature implementation ("the paint bucket behaves like this", etc.), and LLMs, which are predictive engines that try to predict the right set of words to say based on patterns in human media. The input is not a thoughtful set of decisions from PMs and engineers; it's "read all this, figure out the patterns". If "all this" contains racist material, the LLM will sometimes repeat it, as the toy sketch below illustrates.
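To make that concrete, here's a toy sketch of the "figure out the patterns" step: a trigram counter over an invented corpus, nowhere near a real LLM, but the failure mode is the same. The model has no opinions; it just reproduces whatever statistical associations its training text happens to contain.

    import random
    from collections import defaultdict, Counter

    # Deliberately skewed toy corpus (invented for illustration):
    # doctors are always "he", nurses are always "she".
    corpus = (
        "the doctor said he was busy . "
        "the doctor said he was tired . "
        "the nurse said she was busy . "
        "the nurse said she was tired ."
    ).split()

    # Count which word follows each pair of words in the corpus.
    trigrams = defaultdict(Counter)
    for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
        trigrams[a, b][c] += 1

    def generate(w1, w2, length=4):
        # Repeatedly sample the next word in proportion to how often
        # it followed the previous two words in the training text.
        words = [w1, w2]
        for _ in range(length):
            counts = trigrams[words[-2], words[-1]]
            if not counts:
                break
            words.append(random.choices(list(counts),
                                        weights=list(counts.values()))[0])
        return " ".join(words)

    print(generate("the", "doctor"))  # always continues "... said he was ..."
    print(generate("the", "nurse"))   # always continues "... said she was ..."

Nobody told this model that doctors are men; it picked up the association because the corpus never said otherwise. Scale the same mechanism up to a web-sized corpus and you get the same effect with race instead of pronouns.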

5. stuaxo No.44452554
An LLM is a reflection of the biases in the data it's trained on, so it's not as simple as that.