Crying “whataboutism” is often just a way to derail those providing necessary context. It’s a rhetorical eject button, and more often than not a sign that someone isn’t arguing in good faith. But on the off chance you are one of “the good ones”: thank you for the clarification, and fair enough. I appreciate that you’re applying the same standard broadly; that’s more intellectually honest than most.
Now, do you also act on that in your private life? How beneficial, for instance, is your participation in online debate?
As for this phrase — "most people paying attention" — that’s weasel wording at its finest. It lets you both assert a consensus and discredit dissent in a single stroke. People who disagree? They’re just not paying attention, obviously. It’s a No True Scotsman — minus the kilts.
As for your question: evaluating AI's long-term benefit versus long-term climate cost is tricky because the landscape is evolving fast. But here’s a rough sketch of where I currently stand.
Short-term climate cost: Yes, significant, especially for training large models and the massive scaling of data centers. But this cost is neither unique to AI nor necessarily linear; newer techniques (such as LoRA-style fine-tuning, which trains only a small low-rank fraction of a model's weights) and infrastructure optimizations already aim to cut energy use substantially.
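To make the "not necessarily linear" point concrete, here is a minimal sketch of the LoRA parameter arithmetic. The matrix size and rank below are assumed illustrative values, not figures from any specific model:

```python
# Illustrative arithmetic: parameters trained when adapting ONE weight matrix.
# d (matrix dimension) and r (LoRA rank) are assumed example values.
d, r = 4096, 8

full_finetune = d * d       # full fine-tuning updates the entire d x d matrix
lora_adapters = 2 * d * r   # LoRA trains two low-rank factors: (d x r) and (r x d)

fraction = lora_adapters / full_finetune
print(f"LoRA trains {fraction:.4%} of the parameters full fine-tuning would")
```

With these numbers the adapters touch well under 1% of the weights, which is why adapter-style fine-tuning can cut the energy cost of customizing a model so sharply.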
Short-term benefit: Uneven. Entertainment chatbots? Low direct utility — though arguably high in quality-of-life value for many. Medical imaging, protein folding, logistics optimization, or disaster prediction? Substantial.
Long-term benefit: If AI continues to improve and democratize access to knowledge, diagnosis, decision-making, and resource allocation — its potential social, medical, and economic impact could be enormous. Not just "nice-to-have" but truly transformative for global efficiency and resilience.
Long-term harm: If AI remains centralized, opaque, and energy-inefficient, it could deepen inequalities, increase waste, and consolidate power dangerously.
But even if AI produced twice the CO₂ it does today, and were used only for ludicrous purposes, that output would pale next to the CO₂ pollution caused by a single day of average American warfighting ... while still, unlike warfighting, delivering a net-positive outcome for AI users' lives.
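The scale argument above can be checked as a back-of-the-envelope calculation. Every figure below is an illustrative placeholder, not measured data; swap in real estimates for military and AI-sector emissions to test the claim yourself:

```python
# Back-of-the-envelope, per-day comparison of the scale argument.
# ALL figures are ILLUSTRATIVE PLACEHOLDERS, not measured data.
military_annual_mt = 50.0            # assumed annual military CO2e, in megatonnes
ai_annual_mt = 1.0                   # assumed annual AI-sector CO2e, in megatonnes

military_daily_mt = military_annual_mt / 365
ai_daily_doubled_mt = (2 * ai_annual_mt) / 365   # the "even if AI doubled" scenario

ratio = military_daily_mt / ai_daily_doubled_mt
print(f"Per day, military emissions are ~{ratio:.0f}x doubled AI emissions")
```

The point of the sketch is the structure of the comparison, not the numbers: whether the claim holds depends entirely on which real-world estimates you plug in.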
So to answer directly:
Right now, AI is somewhere near the threshold. It’s not obviously "worth it" for every observer, and that’s fine. But it’s also not a luxury toy — not anymore. It’s a volatile but serious tool, and whether it tips toward benefit or harm depends entirely on how we build, govern, and use it.
Let me turn the question around:
What would you need to see — in outcomes, not marketing — to say: "Yes. That was worth the carbon."?