dang:
I also find these features annoying and useless and wish they would go away. But that's not because LLMs are useless, nor because the public isn't using them (as daishi55 pointed out here: https://news.ycombinator.com/item?id=44479578)

It's because the integrations with existing products are arbitrary and poorly thought through, the same way that software imposed by executive fiat in BigCo offices for trend-chasing reasons has always been.

petekoomen made this point recently in a creative way: AI Horseless Carriages - https://news.ycombinator.com/item?id=43773813 - April 2025 (478 comments)

ToucanLoucan:
> It's because the integrations with existing products are arbitrary and poorly thought through, the same way that software imposed by executive fiat in BigCo offices for trend-chasing reasons has always been.

It's just rent-seeking. Nobody wants to actually build products for market anymore; it's a long process with a lot of risk behind it, and there's a chance you won't make shit for actual profit. If, however, you can create a "do anything" product that can be integrated with huge software suites, you can make a LOT of money and take a lot of mind-share without really lifting a finger. That's been my read on the "AI Industry" for a long time.

And to be clear, the integration part is the only part they give a shit about. Arguably especially for AI, since operating the product is so expensive compared to the vast majority of startups trying to scale. Serving JPEGs was never nearly as expensive for Instagram as responding to ChatGPT inquiries is for OpenAI, so they have every reason to diminish the number coming their way. Being the hip new tech that every CEO needs to ram into their product, irrespective of whether it does... well, anything useful, while also being too frustrating or obtuse for users to actually want to use, is arguably an incredibly good needle to thread, if they can manage it.

And the best part is, if OpenAI's products do actually do what they say on the tin, there's a good chance many lower rungs of employment will be replaced with their stupid chatbots, again irrespective of whether or not they actually do the job. Businesses run on "good enough." So it's great: if OpenAI fails, we get tons of useless tech injected into software products already creaking under the weight of so much bullhockety, and if they succeed, huge swaths of employees will be let go from entry level jobs, flooding the market, cratering the salaries of entire categories of professions, and you'll never be able to get a fucking problem resolved with a startup company again. Not that you probably could anyway, but it'll be even more frustrating.

And either way, all the people responsible for making all your technology worse every day will continue to get richer.

Peritract:
> if OpenAI fails, we get tons of useless tech injected into software products already creaking under the weight of so much bullhockety, and if they succeed, huge swaths of employees will be let go from entry level jobs

I think this is the key idea. Right now it doesn't work that well, but if it did work as advertised, that would also be bad.

ryandrake:
That's the thing I hate most about the whole AI frenzy: If it doesn't work, it's horrible, and if it does work, it's also horrible but for different reasons. The whole thing is a giant shit sandwich, and the only upside is for the few already-rich people serving it to us.
DrillShopper:
And regardless of whether or not it works, it's pumping giant amounts of CO₂ into the atmosphere, which isn't a strictly local problem.
DocTomoe:
Any time a new technology makes people uncomfortable, someone pulls the CO₂ card. We've seen this with cryptocurrencies, electric cars, even the internet itself.

But curiously, the same people rarely question the CO₂ footprint of things like gaming, streaming, international sports, live concerts, political campaigns, or even large-scale scientific research. Methane-fueled rockets and the LHC don't exactly run on solar-powered calculators, yet they're culturally or intellectually "approved" forms of emission.

Yes, AI consumes energy. So does everything else we choose to value. If we're serious about CO₂, then we need consistent standards — not just selective outrage. Either we cut fairly across the board, or we focus on making electricity cleaner and more sustainable, instead of trying to shame specific technologies into nonexistence (which, by the way, never happens).

DrillShopper:
Nice whataboutism, except that if you had read any of my other comments in this topic, you'd know I think all of those activities need to be taken into account.

We should be evaluating every activity on benefit versus detriment when it comes to CO₂, and AI hasn't passed the "more benefit than harm" threshold for most people paying attention.

Perhaps you can help me here since we seem to be on the topic - how would you rate long term benefit versus long term climate damage of AI as it exists now?

DocTomoe:
Crying "whataboutism" is often just a way to derail people who are providing necessary context. It's a rhetorical eject button, and more often than not a sign that someone isn't arguing in good faith. But just on the off chance you are one of "the good ones": thank you for the clarification, and fair enough. I appreciate that you're applying the same standard broadly; that's more intellectually honest than most.

Now, do you also act on that in your private life? How beneficial, for instance, is your participation in online debate?

As for this phrase — "most people paying attention" — that’s weasel wording at its finest. It lets you both assert a consensus and discredit dissent in a single stroke. People who disagree? They’re just not paying attention, obviously. It’s a No True Scotsman — minus the kilts.

As for your question: evaluating AI's long-term benefit versus long-term climate cost is tricky because the landscape is evolving fast. But here’s a rough sketch of where I currently stand.

Short-term climate cost: Yes, significant, especially in training large models and the massive scaling of data centers. But this is neither unique to AI nor necessarily linear; newer techniques (like LoRA-based fine-tuning) and infrastructure optimizations already aim to cut energy use significantly (see the rough sketch after these points).

Short-term benefit: Uneven. Entertainment chatbots? Low direct utility — though arguably high in quality-of-life value for many. Medical imaging, protein folding, logistics optimization, or disaster prediction? Substantial.

Long-term benefit: If AI continues to improve and democratize access to knowledge, diagnosis, decision-making, and resource allocation — its potential social, medical, and economic impact could be enormous. Not just "nice-to-have" but truly transformative for global efficiency and resilience.

Long-term harm: If AI remains centralized, opaque, and energy-inefficient, it could deepen inequalities, increase waste, and consolidate power dangerously.
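
To give a feel for why adapter-style fine-tuning is so much cheaper than full fine-tuning, here is a minimal back-of-envelope sketch. The dimensions and rank below are my own assumptions, chosen to resemble a large transformer layer, not measured figures:

```python
# Back-of-envelope: trainable parameters for full fine-tuning vs. a
# LoRA adapter on a single weight matrix. All numbers are assumptions
# for illustration, not benchmarks.

def full_finetune_params(d: int, k: int) -> int:
    """Full fine-tuning updates every entry of the d x k weight matrix."""
    return d * k

def lora_params(d: int, k: int, r: int) -> int:
    """LoRA freezes the weight matrix and trains two low-rank factors,
    A (d x r) and B (r x k)."""
    return r * (d + k)

d = k = 12288  # hidden size on the order of a GPT-3-scale layer (assumed)
r = 8          # a commonly used LoRA rank (assumed)

full = full_finetune_params(d, k)
lora = lora_params(d, k, r)
print(f"full: {full:,} params; LoRA: {lora:,} params "
      f"({100 * lora / full:.2f}% of full)")
# full: 150,994,944 params; LoRA: 196,608 params (0.13% of full)
```

Fewer trainable parameters means fewer gradients, less optimizer state, and less energy per fine-tuning run. It doesn't reduce the one-off cost of pretraining, which is where the headline figures come from.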

But even if AI caused twice the CO₂ output it causes today, and were used only for ludicrous reasons, it would pale next to the CO₂ pollution caused by a single day of average American warfighting ... while still, unlike warfighting, having a net-positive outcome for AI users' lives.
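
To make that comparison concrete, here is a rough sanity check using often-cited public estimates; the exact figures vary considerably between sources and should be read as assumptions, not authoritative numbers:

```python
# Back-of-envelope CO2e comparison. Rough, often-cited public estimates:
# ~552 tCO2e to train GPT-3 (Patterson et al., 2021) and roughly
# 50-60 Mt CO2e/yr for the US military (Costs of War project).
# Both are assumptions for illustration, not authoritative figures.

GPT3_TRAINING_TCO2E = 552           # one-off training cost, estimated
US_MILITARY_TCO2E_PER_YEAR = 55e6   # midpoint of published estimates

military_per_day = US_MILITARY_TCO2E_PER_YEAR / 365
share = GPT3_TRAINING_TCO2E / military_per_day

print(f"US military, one day: ~{military_per_day:,.0f} tCO2e")
print(f"GPT-3 training as a share of that day: {share:.1%}")
# US military, one day: ~150,685 tCO2e
# GPT-3 training as a share of that day: 0.4%
```

On those numbers, one well-known training run amounts to a fraction of a percent of a single day of military emissions, though inference at scale adds an ongoing cost this sketch doesn't capture.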

So to answer directly:

Right now, AI is somewhere near the threshold. It’s not obviously "worth it" for every observer, and that’s fine. But it’s also not a luxury toy — not anymore. It’s a volatile but serious tool, and whether it tips toward benefit or harm depends entirely on how we build, govern, and use it.

Let me turn the question around: what would you need to see, in outcomes rather than marketing, to say "Yes, that was worth the carbon"?