
I Am An AI Hater

(anthonymoser.github.io)
443 points BallsInIt | 2 comments
dpoloncsak ◴[] No.45044706[source]
> Critics have already written thoroughly about the environmental harms, the reinforcement of bias and generation of racist output, the cognitive harms and AI supported suicides, the problems with consent and copyright...

This paragraph really pisses me off and I'm not sure why.

> Critics have already written thoroughly about the environmental harms

Didn't Google just prove there is little to no environmental harm, INCLUDING if you account for training?

> the reinforcement of bias and generation of racist output

I'm uneducated here, honestly. I don't ask a lot of race-based questions to my LLMs, I guess.

>the cognitive harms and AI supported suicides

There is constant, active discussion around sycophancy and ways to reduce it, right? OpenAI just made a new benchmark specifically for this. I won't deny it's an issue, but to act like it's being ignored by the industry completely misses the mark.

>the problems with consent and copyright

This is the best argument on the page imo, and even that is highly debated. I agree that "AI is performing copyright infringement" and I constantly see "AI ignores my robots.txt". But I also grew up being told that ANYTHING on the internet was for the public, and copyright never stopped *me* from saving images or pirating movies.

Then the rest touches on ways people will feel about or use AI, which is obviously just as much conjecture as anything else on the topic. I can't speak for everyone else, and neither can anyone else.

replies(15): >>45044737 #>>45044796 #>>45044852 #>>45044866 #>>45044914 #>>45044917 #>>45044933 #>>45044982 #>>45045000 #>>45045057 #>>45045130 #>>45045208 #>>45045212 #>>45045303 #>>45051745 #
nerevarthelame ◴[] No.45044982[source]
> Didn't google just prove there is little to no environmental harm, INCLUDING if you account for training?

I don't think they have, no. Perhaps I'm overlooking something, but their most recent technical paper [0], published less than a week ago, states, "This study specifically considers the inference and serving energy consumption of an AI prompt. We leave the measurement of AI model training to future work."

[0]: https://arxiv.org/html/2508.15734v1

replies(1): >>45050989 #
1. dpoloncsak ◴[] No.45050989[source]
I see. They actually specifically mention that they did NOT account for training. Not sure how I misread that so badly.
replies(1): >>45051881 #
2. rsynnott ◴[] No.45051881[source]
I saw _quite a few_ people trying to claim that it included training, even though it clearly didn't, so maybe that?

Also, note that it is the _median_ usage for Gemini. One would assume that the median Gemini usage is that pointlessly terrible Google Search results widget, the one that tells people to eat rocks. Which you've got to assume is on the small side, model-wise.