I Am An AI Hater

(anthonymoser.github.io)
443 points by BallsInIt | 1 comment
dpoloncsak ◴[] No.45044706
> Critics have already written thoroughly about the environmental harms, the reinforcement of bias and generation of racist output, the cognitive harms and AI supported suicides, the problems with consent and copyright...

This paragraph really pisses me off and I'm not sure why.

> Critics have already written thoroughly about the environmental harms

Didn't Google just publish research claiming there is little to no environmental harm, INCLUDING if you account for training?

> the reinforcement of bias and generation of racist output

I'm uneducated here, honestly. I don't ask a lot of race-based questions to my LLMs, I guess.

>the cognitive harms and AI supported suicides

There is constant, active discussion around sycophancy and ways to reduce it, right? OpenAI just made a new benchmark specifically for this. I won't deny it's an issue, but acting like it's being ignored by the industry completely misses the mark.

>the problems with consent and copyright

This is the best argument on the page imo, and even that is highly debated. I agree with "AI is performing copyright infringement" and see constant "AI ignores my robots.txt". I also grew up being told that ANYTHING on the internet was for the public, and copyright never stopped *me* from saving images or pirating movies.

Then the rest touches on ways people will feel about or use AI, which is obviously just as much conjecture as anything else on the topic. I can't speak for everyone else, and neither can anyone else.

1. danso ◴[] No.45045208
> Im uneducated here, honestly. I don't ask a lot of race-based questions to my LLMS I guess

You're not uneducated, but this is a common and fundamental misunderstanding of how racial inequity can afflict computational systems, and the source of the problem is not (usually) something as explicit as "the creators are Nazis".

For example, early face-detection/recognition cameras and software in Western countries often had a hard time detecting the eyes on East Asian faces [0], denying East Asians and other people with "non-normal" eyes streamlined experiences for whatever automated approval system they were beholden to. It's self-evident that accurately detecting a higher variety of eye shapes would require more training complexity and cost. If you were a Western operator, would it be racist for you to accept the tradeoff for cheaper face detection capability if it meant inconveniencing a minority of your overall userbase?
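The mechanism is easy to demonstrate in miniature. Below is a toy sketch (not any real vendor's pipeline; the group labels, score distributions, and numbers are all invented for illustration): a detector's acceptance threshold is tuned only on a well-represented "majority" group, then applied to everyone, and the under-represented group ends up with a far higher false-negative rate even though no one wrote anything explicitly discriminatory.

```python
import random

random.seed(0)

# Toy model: detector confidence scores for each group, drawn from
# Gaussians with different means. The means are invented; the point is
# only that the under-represented group scores lower on average because
# the model was not trained on faces like theirs.
def sample_scores(mean, n=10_000):
    return [random.gauss(mean, 1.0) for _ in range(n)]

majority = sample_scores(mean=2.0)   # well-represented in training data
minority = sample_scores(mean=0.5)   # under-represented in training data

# Tune the threshold on the majority group alone, so that ~99% of
# majority faces are accepted. This is the seemingly neutral step
# where the bias gets baked in.
threshold = sorted(majority)[int(0.01 * len(majority))]

def false_negative_rate(scores):
    """Fraction of real faces the detector wrongly rejects."""
    return sum(s < threshold for s in scores) / len(scores)

print(f"majority FNR: {false_negative_rate(majority):.1%}")
print(f"minority FNR: {false_negative_rate(minority):.1%}")
```

With these made-up distributions the majority group sees roughly the intended ~1% rejection rate, while the minority group is rejected many times more often. Nobody in this sketch chose to discriminate; the disparity falls out of calibrating on unrepresentative data.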

Well, thanks to global market realities, we didn't have to debate that for very long, as any hardware/software maker putting out products inherently hostile to 25% of the world's population (who make up the racial majority in the fastest-growing economies) wasn't going to last long in the 21st century. But you can easily imagine an alternate timeline in which Western media isn't dominant, and China and Japan dominate the face-detection camera/tech industry. Would it be racist if their products had high rates of false negatives for anyone with too fair a skin tone or hair color? Of course it would be.

Being auto-rejected as "not normal" isn't as "racist" as being lynched, obviously. But as such AI-powered systems and algorithms have increasing control in the bureaucracies and workflows of our day to day lives, I don't think you can say that "racist output", in the form of certain races enjoying superior treatment versus others, is a trivial concern.

[0] https://www.cnn.com/2016/12/07/asia/new-zealand-passport-rob...