
I Am An AI Hater

(anthonymoser.github.io)
443 points BallsInIt | 15 comments
1. holbrad ◴[] No.45044352[source]
>Critics have already written thoroughly about the environmental harms, the reinforcement of bias and generation of racist output, the cognitive harms and AI supported suicides, the problems with consent and copyright,

I just can't take anything the author has to say seriously after the intro.

replies(4): >>45044365 #>>45044386 #>>45044391 #>>45044469 #
2. gjsman-1000 ◴[] No.45044365[source]
And that's why Trump won the election.

I'm serious. This sentence perfectly captures what the coastal cities sound like to the rest of the US, and why they voted for the crazy uncle over something unintelligible.

replies(2): >>45044420 #>>45045077 #
3. 01HNNWZ0MV43FF ◴[] No.45044386[source]
Because they didn't explain it themselves, or because you disagree with the assessment?
4. miltonlost ◴[] No.45044391[source]
After the intro and all the links backing the statements he's making? Which of those aren't actually true?
replies(1): >>45044564 #
5. 01HNNWZ0MV43FF ◴[] No.45044420[source]
When I see how the voters vote and don't vote, I yearn for sortition
6. hofrogs ◴[] No.45044469[source]
All of those are links in the original text, do you think that these points aren't true? What makes it unserious?
replies(1): >>45044830 #
7. tensor ◴[] No.45044564[source]
Very few of them, if any, are true.

Firstly, the author doesn't even define the term AI. Do they just mean generative AI (likely), or all machine learning? Secondly, you can pick any of those claims and it would only be true of particular implementations of generative AI or machine learning; it's not true of the technology as a whole.

For instance, small edge models don't use a lot of energy. Models that are not trained on racist material won't be racist. Models not trained to give advice on suicide, or trained NOT to do such things, won't do it.

Do I even need to address the claim that it's at its core rooted in "fascist" ideology? So all the people creating AI to help cure diseases, enable assistive technologies for people with impairments, and do other positive tasks, all these desires are fascist? It's ridiculous.

AI is a technology that can be used positively or negatively. To be sure, many of the generative AI systems today do have issues associated with them, but the author's position of extending these issues to the entirety of AI and AI practitioners is immoral and shitty.

I also don't care what the author has to say after the intro.

replies(1): >>45044711 #
8. traes ◴[] No.45044711{3}[source]
Come on now. You know he's not talking about small machine learning models or protein folding programs. When people talk about AI in this day and age they are talking about generative AI. All of the articles he links when bringing up common criticisms are about generative AI.

I too can hypothetically conceive of generative AI that isn't harmful and wasteful and dangerous, but that's not what we have. It's disingenuous to dismiss his opinion because the technology you imagine is so wonderful.

replies(1): >>45044837 #
9. lostmsu ◴[] No.45044830[source]
It would take too much time to tear the entirety of this slop apart, but if you understand the mechanics of AI, you'd know the environmental impact is negligible compared to the value.

The links are laughable. For the environment we get one lady whose underground water well got dirtier (according to her) because Meta built a data center nearby. Which, even if true (which is doubtful), has a negligible impact on the environment, though it may be a huge annoyance for her personally.

And link 2 gives bad estimates, such as ChatGPT-4's generation of ~100 tokens for an email (say 1000 tok/s from 8xH100, so 0.1 s, so ~0.1 Wh) using as much energy as 14 LEDs for an hour (say 3 W each, so 42 Wh): almost 3 orders of magnitude off, or 9 if, like me, you count in binary.
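The back-of-envelope arithmetic in that comment can be sketched as follows. All figures are the comment's own assumptions (8x H100, 1000 tok/s, ~100 tokens, 14x 3 W LEDs); the ~700 W per-GPU draw is an added assumption (rough H100 TDP) not stated in the original:

```python
import math

# Order-of-magnitude check of the email-vs-LED energy comparison.
# All inputs are assumed figures, not measurements.
gpu_count = 8          # assumed 8x H100 serving the model
gpu_power_w = 700.0    # assumed per-GPU draw (~H100 TDP)
tokens = 100           # ~100 tokens for a short email
tokens_per_s = 1000.0  # assumed aggregate throughput

gen_s = tokens / tokens_per_s                    # 0.1 s of generation
gen_wh = gpu_count * gpu_power_w * gen_s / 3600  # ~0.16 Wh per email

led_count = 14
led_power_w = 3.0
led_wh = led_count * led_power_w * 1.0           # 42 Wh over one hour

ratio = led_wh / gen_wh                          # ~270x
print(f"{gen_wh:.2f} Wh per email vs {led_wh:.0f} Wh of LEDs")
print(f"~{math.log10(ratio):.1f} decimal orders, ~{math.log2(ratio):.1f} binary orders")
```

Under these assumptions the gap comes out around 2.4 decimal orders of magnitude (roughly 8 binary), in the same ballpark as the comment's "almost 3 orders" claim.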

P.S. Voted dems and would never vote Trump, but the gp is IMHO spot on.

replies(1): >>45045184 #
10. tensor ◴[] No.45044837{4}[source]
Deep image models are used in medical applications. LLMs have huge potential in literature searches and reference tracing.

Small models are still generative AI. Neither the author nor you can define what you are talking about. So yes, I can dismiss it.

11. simianwords ◴[] No.45045077[source]
Coastal city dwellers want the next thing to signal rebellion. It's just that AI serves as a way to do that, plus also show some concern for the working class.
12. diamond559 ◴[] No.45045184{3}[source]
What value? It isn't even profitable. I think we spotted the stock holder...
replies(1): >>45045250 #
13. lostmsu ◴[] No.45045250{4}[source]
This is the dumbest question ever. I guess you need to ask 1B+ LLM users.

But hey, I already know you'd say you personally would never use it for these purposes.

Moreover, of the two of us, you appear to have the "shareholder" mentality. How profitable are volunteers serving food to homeless people? I guess they have no value then.

replies(1): >>45053576 #
14. prime_ursid ◴[] No.45053576{5}[source]
How many of those users are paying users? What’s the churn rate?

And how profitable are OpenAI and other providers?

They’re running at a loss. The startups using LLMs as their product are only viable as long as they get free credits from OpenAI. The only one making a profit is NVidia.

replies(1): >>45055776 #
15. lostmsu ◴[] No.45055776{6}[source]
Sounds like the last paragraph of my comment flew right over your head.