
I Am An AI Hater

(anthonymoser.github.io)
443 points by BallsInIt | 43 comments
1. dpoloncsak ◴[] No.45044706[source]
> Critics have already written thoroughly about the environmental harms, the reinforcement of bias and generation of racist output, the cognitive harms and AI supported suicides, the problems with consent and copyright...

This paragraph really pisses me off and I'm not sure why.

> Critics have already written thoroughly about the environmental harms

Didn't google just prove there is little to no environmental harm, INCLUDING if you account for training?

> the reinforcement of bias and generation of racist output

I'm uneducated here, honestly. I don't ask a lot of race-based questions to my LLMs, I guess.

>the cognitive harms and AI supported suicides

There is constant active rhetoric around the sycophancy, and ways to reduce this, right? OpenAI just made a new benchmark specifically for this. I won't deny it's an issue, but to act like it's being ignored by the industry misses the mark completely.

>the problems with consent and copyright

This is the best argument on the page imo, and even that is highly debated. I agree with "AI is performing copyright infringement" and see constant "AI ignores my robots.txt". I also grew up being told that ANYTHING on the internet was for the public, and copyright never stopped *me* from saving images or pirating movies.

Then the rest touches on ways people will feel about or use AI, which is obviously just as much conjecture as anything else on the topic. I can't speak for everyone else, and neither can anyone else.

replies(15): >>45044737 #>>45044796 #>>45044852 #>>45044866 #>>45044914 #>>45044917 #>>45044933 #>>45044982 #>>45045000 #>>45045057 #>>45045130 #>>45045208 #>>45045212 #>>45045303 #>>45051745 #
2. mrsilencedogood ◴[] No.45044737[source]
"This is the best argument on the page imo, and even that is highly debated. I agree with "AI is performing copyright infringement" and see constant "AI ignores my robots.txt". I also grew up being told that ANYTHING on the internet was for the public, and copyright never stopped me from saving images or pirating movies."

I think the main problem for me is that these companies benefit from copyright - beating anyone they can reach with the DMCA stick - and are now showing that they don't actually care about it at all; when they do the infringing, it's ok.

Go ahead, AI companies. End copyright law. Do it. Start lobbying now.

(They won't, they'll just continue to eat their cake and have it too).

replies(2): >>45044778 #>>45045247 #
3. dpoloncsak ◴[] No.45044778[source]
Yeah, it's a fair point. We have seen a clear abuse of our copyright system.
4. sindriava ◴[] No.45044796[source]
I appreciate this response. The environmental impact is such a red herring it's not even funny. Somehow these statements never include the impact of watching Netflix shows or doing data processing manually.
replies(3): >>45044850 #>>45045063 #>>45046675 #
5. didibus ◴[] No.45044850[source]
They might hate those too?

It's pretty clear there are impacts: AI needs energy, consumes materials, creates trash.

You probably just don't mind them. The facts are the same; the conclusion differs: you assess it's not a big concern in the grand scheme of things and worth it for the pros. The author doesn't care much for the pros, so any environmental impact is a net loss for them.

I feel both takes are rational.

replies(2): >>45044892 #>>45045104 #
6. mcpar-land ◴[] No.45044852[source]
> didn't google just prove there is little to no environmental harm, INCLUDING if you account for training?

> There is constant active rhetoric around the sycophancy, and ways to reduce this, right? OpenAI just made a new benchmark specifically for this.

We have investigated ourselves and found no wrongdoing

> I'm uneducated here, honestly. I don't ask a lot of race-based questions to my LLMs, I guess.

Do you have to ask a race-based question to an LLM for it to give you biased or racist output?

7. delusional ◴[] No.45044866[source]
> Didn't google just prove there is little to no environmental harm, INCLUDING if you account for training?

That's a crazy argument to accept from one of the lead producers of the technology. It's up there with arguing that ExxonMobil just proved oil drilling has no impact on global warming. I'm sure they're making the argument, but they would say that, wouldn't they?

8. sindriava ◴[] No.45044892{3}[source]
They might be rational, but the degree of context-stripping that happens with any AI / environment narrative gives off a strong "arsenic-free cauliflower" smell.
replies(2): >>45045083 #>>45045122 #
9. merksoftworks ◴[] No.45044914[source]
What I will say about sycophancy - the recent rollback that OpenAI went through does look like a clear attempt to push the envelope on dark patterns wrt AI assistants. Engagement-optimized assistants, pornography, and tooling are inherently misaligned with the productivity or wellbeing of their users, in the same way that engagement-maximized social media is inherently misaligned with the social wellbeing of its users.
10. sonofhans ◴[] No.45044917[source]
> This paragraph really pisses me off and I'm not sure why.

No hate, but consider — when I feel that way, it's often because one of my ideas or preconceptions has been put into question. I feel like it's possible that I might be wrong, and I fucking hate that. But if I can get over hating it and figure out why, I may learn something.

Here’s an example:

> Didn't google just prove there is little to no environmental harm, INCLUDING if you account for training?

Consider that Google is one of the creators of the supposed harm, and thus trusting them may not be a good idea. Tobacco companies still say smoking ain't that bad.

The harm argument is simple — AI data centers use energy, and nearly all forms of energy generation have negative side effects. Period. Any hand waving about where the energy comes from or how the harms are mitigated is, again, bullshit — energy can come from anywhere, people can mitigate harms however they like, and none of this requires LLM data centers.

replies(1): >>45045792 #
11. giancarlostoro ◴[] No.45044933[source]
> I'm uneducated here, honestly. I don't ask a lot of race-based questions to my LLMs, I guess.

You can't even ask it anything out of genuine curiosity; it starts to scold you and assumes you're trying to be racist. The conclusions I'm hearing are weird. It reminds me of that one Google engineer who quit or got fired after saying AI is racist or whatever back in like 2018 (edit: 2020).

12. nerevarthelame ◴[] No.45044982[source]
> Didn't google just prove there is little to no environmental harm, INCLUDING if you account for training?

I don't think they have, no. Perhaps I'm overlooking something, but their most recent technical paper [0], published less than a week ago, states, "This study specifically considers the inference and serving energy consumption of an AI prompt. We leave the measurement of AI model training to future work."

[0]: https://arxiv.org/html/2508.15734v1

replies(1): >>45050989 #
13. simianwords ◴[] No.45045000[source]
Don't try to argue using logic against a person who came to their position primarily through emotions!

All these points are just trying to forcefully legitimise his hatred.

replies(1): >>45045192 #
14. schwartzworld ◴[] No.45045057[source]
> Didn't google just prove there is little to no environmental harm

I'd be interested to see that report as I'm not able to find it by Googling, ironically. Even so, this goes against pretty much all the rest of the reporting on the subject, AND Google has financial incentive to push AI, so skepticism is warranted.

> I don't ask a lot of race-based questions to my LLMs, I guess

The reality is that more and more decision making is getting turned over to AIs. Racism doesn't have to just be n-words and maga hats. For example, this article talks about how overpoliced neighborhoods trigger positive feedback loops in predictive AIs https://www.ohchr.org/en/stories/2024/07/racism-and-ai-bias-...
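
A toy sketch of the feedback loop the linked article describes (the mechanism and all numbers here are invented for illustration, not taken from the article): two districts with identical true crime rates, where patrols follow recorded incidents and recorded incidents follow patrols.

    import random
    random.seed(0)

    true_rate = {"A": 0.5, "B": 0.5}   # identical underlying crime rates
    recorded = {"A": 5, "B": 1}        # historical records skewed toward A

    for year in range(1, 6):
        # the "predictive" model sends most patrols wherever more crime
        # was recorded -- which is wherever more patrols already were
        hot = max(recorded, key=recorded.get)
        patrols = {d: (80 if d == hot else 20) for d in recorded}
        for d in recorded:
            # incidents get *recorded* in proportion to patrol presence,
            # not in proportion to the (equal) true rates
            recorded[d] += sum(random.random() < true_rate[d]
                               for _ in range(patrols[d]))
        print(year, patrols, recorded)
    # District A keeps its 80-patrol allocation forever, and the
    # recorded-crime gap widens every year, even though the true
    # rates never differed.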

> Copyright never stopped me from saving images or pirating movies.

I think we could all agree that right-clicking a copyrighted image and saving it is pretty harmless. Less harmless is trying to pass that image off as something you created and profiting from it. If I use AI to write a blog post, and that post contains plagiarism, and I profit off that plagiarism, it's not harmless at all.

> I also grew up being told that ANYTHING on the internet was for the public

Who told you that? How sure are you they are right?

Copilot has been shown to include private repos in its training data. ChatGPT will happily provide you with information that came from textbooks. I personally had SunoAI spit out a song whose lyrics were just Livin' On A Prayer with a couple of words changed.

We can talk about the ethical implications of the existence of copyright and whether or not it _should_ exist, but the fact is that it does exist. Taking someone else's work and passing it off as your own without giving credit or permission is not permitted.

15. jacobsenscott ◴[] No.45045063[source]
Uh, we've been doing data processing for nearly 80 years, and watching Netflix for nearly 20 years. Suddenly we need to tile the earth with data centers, build power plants, burn all the fuels we can, and will "need to get to fusion" (per Sam) to run AGI. He also said "if we need to burn a little more gas to get there, that's fine". We'll never get to fusion or AGI, but we will destroy the earth to put a few more dollars in the pockets of the 0.01%.

You don't see the difference, or are you willfully ignorant?

replies(3): >>45045097 #>>45045963 #>>45048201 #
16. andybak ◴[] No.45045083{4}[source]
I think I get "arsenic-free cauliflower" from context but searching brings up no sources. Did you coin that phrase or is my non-google-fu just weak?
replies(1): >>45045163 #
17. sindriava ◴[] No.45045097{3}[source]
Do you honestly expect anyone to believe you're trying to take part in a discussion with that last statement? I appreciate this topic has your emotions running hot, but this is HN, not Reddit. Please leave that kind of talk at the door.
18. lostmsu ◴[] No.45045104{3}[source]
They would be rational if the author also produced everything they consume off the earth and hosted this very slop on a tree. Otherwise they needed hardware produced by other humans, and those humans used the things mentioned above, and probably AI too.

But as it stands, the author indirectly loves Netflix.

19. didibus ◴[] No.45045122{4}[source]
If you take a report like this: https://mitsloan.mit.edu/ideas-made-to-matter/ai-has-high-da...

You can:

1. Dismiss it by believing the projections are very wrong and much too high.

2. Think 20% of all energy consumed isn't that bad.

3. Find it concerning environmentally.

All takes have some weight behind them, in my opinion. I don't think this is a case of "arsenic-free cauliflower", except maybe if you claim #1, but even that claim can't really invalidate the others on rational grounds; they make an assumption about the available data and reason from it, and the data doesn't show ridiculously small numbers like it does in the cauliflower case.

replies(2): >>45045263 #>>45055825 #
20. indoordin0saur ◴[] No.45045130[source]
It bugged me too. There are some legitimate criticisms of AI, but the author has some laughably bad ones mixed in there with the good. The way he just presents these criticisms and waves them through as self-evidently true is a very lazy appeal to authority.
21. sindriava ◴[] No.45045163{5}[source]
Huh, my search is also turning up nothing. I could swear I heard a story about cauliflower originally being yellow and getting replaced with the white cultivar due to the guy who grew it marketing it as "arsenic-free" cauliflower despite the fact that the yellow one had no arsenic to begin with. Either I'm getting Mandela effected or I'm hallucinating -- which of course only AI models are capable of ;)
22. the_other ◴[] No.45045192[source]
The article doesn't say that. The article says the author won't do the work of explaining their position to the reader. It doesn't say they haven't done that work for themselves. I read it as saying they had done some undisclosed amount of work informing themselves to reach their position: thinking, reading articles, etc.

Also, I think their lean toward a political viewpoint is worth some attention. The point is a bit lost in the emotional ranting, which is a shame.

(To be fair, I liked the ranting. I appreciated their enjoyment of the position they have reached. I use LLMs, but I worry about the energy usage and I'm still not convinced by the productivity argument. Their writing echoed my anxiety and then ran with it into glee, which I found endearing.)

23. danso ◴[] No.45045208[source]
> I'm uneducated here, honestly. I don't ask a lot of race-based questions to my LLMs, I guess.

You're not uneducated, but this is a common and fundamental misunderstanding of how racial inequity can afflict computational systems, and the source of the problem is not (usually) something as explicit as "the creators are Nazis".

For example, early face-detection/recognition cameras and software in Western countries often had a hard time detecting the eyes on East Asian faces [0], denying East Asians and other people with "non-normal" eyes streamlined experiences for whatever automated approval system they were beholden to. It's self-evident that accurately detecting a higher variety of eye shapes would require more training complexity and cost. If you were a Western operator, would it be racist for you to accept the tradeoff for cheaper face detection capability if it meant inconveniencing a minority of your overall userbase?

Well, thanks to global market realities, we didn't have to debate that for very long, as any hardware/software maker putting out products inherently hostile to 25% of the world's population (who make up the racial majority in the fastest-growing economies) wasn't going to last long in the 21st century. But you can easily imagine an alternate timeline in which Western media isn't dominant, and China & Japan dominate the face-detection camera/tech industry. Would it be racist if their products had high rates of false negatives for anyone whose skin or hair color was too fair? Of course it would be.

Being auto-rejected as "not normal" isn't as "racist" as being lynched, obviously. But as such AI-powered systems and algorithms gain increasing control over the bureaucracies and workflows of our day-to-day lives, I don't think you can say that "racist output", in the form of certain races enjoying superior treatment versus others, is a trivial concern.

[0] https://www.cnn.com/2016/12/07/asia/new-zealand-passport-rob...
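
To make the mechanism concrete, a minimal sketch with invented numbers: the same detector, evaluated per demographic group, can show very different failure rates even when the aggregate accuracy looks fine.

    # Hypothetical per-group evaluation counts for one face detector.
    # The numbers are made up purely to illustrate the disparity.
    results = {
        "group_1": {"detected": 980, "missed": 20},
        "group_2": {"detected": 850, "missed": 150},
    }
    for group, r in results.items():
        fnr = r["missed"] / (r["detected"] + r["missed"])
        print(f"{group}: false-negative rate = {fnr:.1%}")
    # group_1: 2.0%, group_2: 15.0% -- the second group gets auto-rejected
    # as "not normal" 7.5x as often by the very same system.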

24. bayindirh ◴[] No.45045212[source]
HPC admin here.

A "small" 7-rack, SOTA CPU cluster draws ~700 kW for computing, plus the energy required for cooling. GPUs use much more in the same rack space.

In DLC (direct liquid cooling) settings, you supply ~20°C water from the primary circuit to the heat exchanger, get it back at ~40°C, and then pump that heat out to the environment, plus the thermodynamic losses.
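
For a sense of scale, here is the standard heat-balance arithmetic with the figures above (a back-of-the-envelope sketch; 700 kW load, 20°C in / 40°C out, cooling overhead ignored):

    # Coolant flow needed to carry 700 kW away at a 20 K temperature rise:
    # Q = m_dot * c_p * delta_T, solved for m_dot.
    load_w = 700_000          # ~700 kW compute load of the 7-rack cluster
    c_p = 4186                # J/(kg*K), specific heat of water
    delta_t = 40 - 20         # K, primary-loop temperature rise

    m_dot = load_w / (c_p * delta_t)          # kg/s of water
    print(f"{m_dot:.1f} kg/s, ~{m_dot * 3.6:.0f} m^3/h")
    # -> ~8.4 kg/s, roughly 30 cubic meters of water per hour circulating
    #    continuously, with all 700 kW ultimately dumped as heat.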

This is a "micro" system when compared to big boys.

How can there be no environmental harm when you need to run a power plant on-premises and pump that much heat, at a much bigger scale, into the environment 24/7?

Who are we kidding here?

When this is done for science and intermittently, both the grid and the environment can tolerate it. When you run "normal" compute systems (e.g., serving Gmail or standard cloud loads), both the grid and the environment can tolerate it.

But running at full power and pumping this much energy in and heat out to train AI and run inference is a completely different load profile, and it is not harmless.

> the cognitive harms and AI supported suicides

Extensive use of AI has been shown to change the brain's neural connections and make some areas of the brain lazy. There are a couple of papers on this.

There was a 16-year-old boy's ChatGPT-fueled death on the front page today, BTW.

> This is the best argument on the page imo, and even that is highly debated.

My blog is strictly licensed with a non-commercial, no-derivatives license. AI companies take my text, derive from it, and sell the result. No consent, no questions asked.

The same models consume GPL and source-available code all the same and offer their derivations to anyone who pays, infringing both licenses in the process.

Consent and copyright are a big problem in AI, and the companies want us to believe otherwise.

25. ACCount37 ◴[] No.45045247[source]
Lawyers of all the most beloved companies - Disney, New York Times, book publishers, music publishers and more - are now engaged in court battles, trying to sue all kinds of AI companies for "copyright infringement".

So far, case law is shaping up towards "nope, AI training is fair use". As it well should.

replies(1): >>45045801 #
26. sindriava ◴[] No.45045263{5}[source]
I can't speak for you but I'm certainly not qualified to opine on the predictions so I won't address the 20% figure since I don't find it relevant.

> data centers account for 1% to 2% of overall global energy demand

So does the mining industry. Part of that data center consumption is the discussion we are having right now.

I find that in general energy doesn't tend to get spent unless there's something to be gained from it. Note that providing something that uses energy but doesn't provide value isn't a counterexample for this, since the greater goal of civilization seems to be discovering valuable parts of the state space, which necessitates visiting suboptimal states absent a clairvoyant heuristic.

I reject the statement that energy use is bad in principle, and pending a more detailed ROI analysis of this, I think this branch of the topic has run its course, at least for me :)

replies(1): >>45045950 #
27. 827a ◴[] No.45045303[source]
The idea that these things cause "minimal" environmental harm is utterly laughable. It's Orwell-level doublespeak. Am I seriously to believe that Musk wants to run 50M H100s in the coming years, an amount that might equate to 60 GW of power draw on the low end, roughly equal to 10% of the entire US power draw, and that this won't have significant environmental consequences?

Of course, they hide the truth in plain sight: inference is a drop in the ocean compared to training.
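
For what it's worth, the arithmetic is roughly right under plausible assumptions (the TDP and overhead figures below are mine, not the commenter's):

    gpus = 50_000_000      # 50M H100-class accelerators, per the claim
    tdp_w = 700            # H100 SXM TDP is roughly 700 W
    overhead = 1.7         # assumed multiplier for CPUs, networking, PUE

    total_gw = gpus * tdp_w * overhead / 1e9
    print(f"~{total_gw:.0f} GW")   # ~60 GW; average US electric load is
                                   # on the order of 450-500 GW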

28. TeMPOraL ◴[] No.45045792[source]
> The harm argument is simple — AI data centers use energy, and nearly all forms of energy generation have negative side effects. Period. Any hand waving about where the energy comes from or how the harms are mitigated is, again, bullshit — energy can come from anywhere, people can mitigate harms however they like, and none of this requires LLM data centers.

Presented like this, the argument is complete bullshit. Anything we do consumes energy, therefore requires energy to be supplied, production of which has negative side effects, period.

Let's just call it a day on civilization and all (starve to death so that the few survivors can) go back to living in caves or up the trees.

The real questions are, a) how much more energy use are LLMs causing, and b) what value this provides. Just taking this directly, without going into the weeds of meta-level topics like the benefits of investment in compute and energy infrastructure, and how this is critical to solving climate problems - just taking this directly, already this becomes a nothing-burger, because LLMs are by far some of the least questionable ways to use energy humanity has.

replies(2): >>45048174 #>>45059557 #
29. _DeadFred_ ◴[] No.45045801{3}[source]
If your product wouldn't exist without inputting someone else's product, it is derivative of that someone else's product. This isn't a human learning. This is a corporate, for-profit product; it is derivative, and it violates copyright.
replies(1): >>45046095 #
30. didibus ◴[] No.45045950{6}[source]
> so I won't address the 20% figure

Ok, but that's the figure that would be alarming: AI is projected to consume 20% of global energy production by 2030... That's not like the mining industry...

> I find that in general energy doesn't tend to get spent unless there's something to be gained from it

Yes, you'd fall into the #2 conclusion bucket. This is a value judgement, not a factual or logical contradiction. You accept the trade-off and find it worth it. That's totally fair, but in no way does it remove or mitigate the environmental impact argument; it just judges it an acceptable cost.

31. TeMPOraL ◴[] No.45045963{3}[source]
You do understand what the "exponential" in "exponential growth" means?

Yes, it means that "suddenly" we need to do more of everything than we did for the entirety of human history until a few years ago. The same was true a few years ago. And a few years before that. And so on.

That's what exponential growth means. Correct for that, and suddenly we're not really doing things that much faster "because AI" than we'd be doing them otherwise.
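
Concretely: with a fixed doubling time, each new period exceeds all previous periods combined, since 2^n > 2^0 + 2^1 + ... + 2^(n-1) = 2^n - 1. A quick check:

    # Each doubling period uses more than the entire history before it.
    usage, history = 1.0, 0.0
    for period in range(1, 8):
        history += usage
        usage *= 2   # one doubling time elapses
        print(f"period {period}: next = {usage:.0f}, "
              f"all history so far = {history:.0f}")
    # 'next' is always history + 1 -- at every step, demand "suddenly"
    # dwarfs everything that came before.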

32. ACCount37 ◴[] No.45046095{4}[source]
That's not the standard we hold "human generated" media to. Not even "mockbusters" are illegal under copyright law. Nothing is new and everything is a remix. And I see no reason to make an exception for AI.

Copyright law is a disgrace, and copyright should be cut down massively - not made into an even more far-reaching anti-freedom abomination than it already is.

replies(1): >>45046108 #
33. jacquesm ◴[] No.45046108{5}[source]
> Nothing is new and everything is a remix.

This is absolutely not true.

34. tremon ◴[] No.45046675[source]
https://www.eesi.org/articles/view/data-centers-and-water-co...

> Together, the nation’s 5,426 data centers consume billions of gallons of water annually. One report estimated that U.S. data centers consume 449 million gallons of water per day and 163.7 billion gallons annually (as of 2021)

> Approximately 80% of the water (typically freshwater) withdrawn by data centers evaporates, with the remaining water discharged to municipal wastewater facilities.
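
(The two quoted figures are at least internally consistent, as a quick check shows:)

    daily = 449e6                      # gallons per day, per the report
    print(f"{daily * 365 / 1e9:.1f}")  # -> 163.9 billion gallons/year,
                                       #    matching the ~163.7B annual figure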

35. joquarky ◴[] No.45048174{3}[source]
Yeah, the OP's argument could also be used to shame people for playing video games.

How much power does a typical gaming rig draw these days?

replies(2): >>45055319 #>>45059572 #
36. joquarky ◴[] No.45048201{3}[source]
Just a note on etiquette: starting your sentence with "Uh," is often interpreted as dismissive or condescending, even if that’s not your intent.
37. dpoloncsak ◴[] No.45050989[source]
I see. They actually specifically mention they did NOT account for training. Not sure how I misread that so badly.
replies(1): >>45051881 #
38. AlecSchueler ◴[] No.45051745[source]
> Didn't google just prove there is little to no environmental harm, INCLUDING if you account for training?

No, they showed that a smaller AI model embedded in Google results uses less power to train and run than something SOTA. That's quite a different thing altogether.

> I don't ask a lot of race-based questions to my LLMs, I guess

You don't need to ask explicit questions to receive answers where bias is implicitly stated. You've dismissed the argument out of hand without actually meeting it.

> I won't deny it's an issue, but to act like it's being ignored by the industry misses the mark completely.

The claim was that critics had been vocal about it, not that it had been ignored by the industry.

> I also grew up being told that ANYTHING on the internet was for the public, and copyright never stopped me from saving images or pirating movies.

Policing is always very patchy. You maybe broke the law and got away with it as an individual; that's common. The issue is that these huge businesses can operate at a level of copyright infringement, on a for-profit basis, that smaller businesses would be eradicated for attempting, and the artists they're taking from would face similar issues if they attempted even a fraction of that level of plagiarism.

39. rsynnott ◴[] No.45051881{3}[source]
I saw _quite a few_ people trying to claim that it included training, even though it clearly didn't, so maybe that?

Also, note that it is the _median_ usage for Gemini. One would assume that the median Gemini usage is that pointlessly terrible Google Search results widget, the one that tells people to eat rocks. Which you've got to assume is on the small side, model-wise.

40. viridian ◴[] No.45055319{4}[source]
The logical end step of these trains of thought is always the same: if you aren't contributing to the solution in a big way, you should kill yourself. And even if you can't take that step, you should absolutely not have children, and you should advocate that others do the same.

Viewing energy use as an axiomatic evil necessarily leads to the removal of man from the earth.

41. viridian ◴[] No.45055825{5}[source]
I started writing a response to your post, but as I kept writing and investigating, it became clear that the MIT article you linked is just overflowing with false statements, half-truths, stretched truths, and unsourced information.

It is legitimately one of the most misleading pieces of press I've read in a while.

The 21% value is unsourced, the "single image = full phone charge" claim is wrong in so many ways that I had written three paragraphs picking apart both the MIT publication and the Hugging Face paper's methodology, and so on.

I'm happy to be given evidence that AI is ruinous in terms of more than its social effects, but this publication has made me incredibly suspicious of anyone claiming this to be the case.

42. sonofhans ◴[] No.45059557{3}[source]
Moving the goal posts, IMO. The post I was replying to said “there is no harm.” That’s all I was contradicting. You can argue all day that the harm is _worth it_, but that’s not what OP was doing.
43. sonofhans ◴[] No.45059572{4}[source]
No shaming in my argument, only pointing out that the “no harms” claim is bullshit.