    2025 AI Index Report

    (hai.stanford.edu)
    166 points INGELRII | 11 comments
    Signez No.43645619
    Surprised not to see a whole chapter on the environmental impact. It's quite a big talking point around here (Europe, France) for discrediting AI usage, along with the usual ethics issues: art theft, job destruction, making it easier to generate disinformation, and the working conditions of AI trainers in low-income countries.

    (Disclaimer: I am not an anti-AI guy — I am just listing the common talking points I see in my feeds.)

    replies(7): >>43645778 #>>43645779 #>>43645786 #>>43645888 #>>43646134 #>>43646161 #>>43646204 #
    1. simonw No.43645778
    Yeah, it would be really useful to see a high quality report like this that addresses that issue.

    My strong intuition at the moment is that the environmental impact is greatly exaggerated.

    The energy cost of executing prompts has dropped enormously over the past two years - something that's reflected in this report when it says "Driven by increasingly capable small models, the inference cost for a system performing at the level of GPT-3.5 dropped over 280-fold between November 2022 and October 2024". I wrote a bit about that here: https://simonwillison.net/2024/Dec/31/llms-in-2024/#the-envi...
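To put that drop in annual terms, here's a rough calculation of mine, assuming the window from November 2022 to October 2024 is about 23 months:

```python
# Annualize the report's "over 280-fold" drop in inference cost.
fold_drop = 280
months = 23  # Nov 2022 to Oct 2024

annual_factor = fold_drop ** (12 / months)
print(f"~{annual_factor:.0f}x cheaper per year")  # roughly 19x per year
```

So the reported figure implies costs falling by roughly an order of magnitude per year over that period.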

    We still don't have great numbers on training costs for most of the larger labs, which are likely extremely high.

    Llama 3.3 70B cost "39.3M GPU hours of computation on H100-80GB (TDP of 700W) type hardware" which they calculated as 11,390 tons CO2eq. I tried to compare that to fully loaded passenger jet flights between London and New York and got a number of between 28 and 56 flights, but I then completely lost confidence in my ability to credibly run those calculations because I don't understand nearly enough about how CO2eq is calculated in different industries.
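The arithmetic itself is straightforward; all the uncertainty sits in the per-flight figure. A minimal sketch, assuming 200 to 400 tonnes CO2 for one fully loaded London-New York flight (my assumption, not a sourced number):

```python
# Back-of-envelope version of the flights comparison above.
# The per-flight CO2 figures are rough assumptions, not sourced data;
# estimates vary widely with aircraft type, load factor, and whether
# non-CO2 effects are counted.

TRAINING_CO2EQ_TONS = 11_390        # Meta's reported figure

FLIGHT_CO2_TONS_LOW = 200           # assumed tonnes CO2 per flight
FLIGHT_CO2_TONS_HIGH = 400

flights_high = TRAINING_CO2EQ_TONS / FLIGHT_CO2_TONS_LOW    # ~57
flights_low = TRAINING_CO2EQ_TONS / FLIGHT_CO2_TONS_HIGH    # ~28

print(f"Training ~= {flights_low:.0f} to {flights_high:.0f} flights")
```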

    The "LLMs are an environmental catastrophe" messaging has become so firmly ingrained in our culture that I think it would benefit the AI labs themselves enormously if they were more transparent about the actual numbers.

    replies(4): >>43645865 #>>43646268 #>>43646879 #>>43648009 #
    2. tmpz22 No.43645865
    If I were an AI advocate I'd push the environmental angle to distract from IP and other (IMO bigger and more immediate) concerns, like DOGE using AI to audit government agencies and messages, or AI-generated discourse driving every modern social platform.

    I think the biggest mistake liberals make (I am one) is expecting disinformation to come as attacks on their beliefs, when the most powerful disinformation comes bundled with their beliefs in the form of misdirection, exaggeration, or other subterfuge.

    replies(2): >>43646018 #>>43646265 #
    3. dleeftink No.43646018
    How is that a mistake? Isn't that the exact purpose of propaganda?
    4. __loam No.43646265
    The biggest mistake liberals have made is thinking leaving the markets to their own devices wouldn't lead to an accumulation of wealth so egregious that the nation collapses into fascism as the wealthy use their power to dismantle the rule of law.
    replies(1): >>43654431 #
    5. mentalgear No.43646268
    To assess the environmental impact, I think we need to look a bit further:

    While a single query may have become more efficient, we also have to relate that to the increased volume of queries overall: e.g., over the last few years, how many more users there are, and how many more queries each user makes.

    My feeling is that it's Jevons paradox all over again.
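The concern can be made concrete with a toy calculation (both numbers below are hypothetical, chosen only to illustrate the shape of the argument):

```python
# Jevons-paradox sketch: a big per-query efficiency gain can still be
# outrun by growth in total query volume. The volume figure is
# purely illustrative.
efficiency_gain = 280    # per-query cost drop (from the report)
volume_growth = 1000     # hypothetical growth in total queries

relative_total_energy = volume_growth / efficiency_gain
print(relative_total_energy)  # > 1 means total energy still rose
```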

    replies(2): >>43646901 #>>43647950 #
    6. pera No.43646879
    > Global AI data center power demand could reach 68 GW by 2027 and 327 GW by 2030, compared with total global data center capacity of just 88 GW in 2022.

    "AI's Power Requirements Under Exponential Growth", Jan 28, 2025:

    https://www.rand.org/pubs/research_reports/RRA3572-1.html

    As a point of reference: The current demand in the UK is 31.2 GW (https://grid.iamkate.com/)

    7. fc417fc802 No.43646901
    The training costs are amortized over inference. More lifetime queries means better efficiency.

    Individual inferences are extremely low impact. Additionally it will be almost impossible to assess the net effect due to the complexity of the downstream interactions.

    At 40M GPU hours at 700 W, 160 million queries works out to 175 Wh per query. That's less than the energy required to boil a pot of pasta. And this is merely an upper bound: it's near certain that many times more queries will be run over the life of the model.
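That arithmetic checks out; as a sketch (the 160 million query count is the same hypothetical lifetime volume used above):

```python
# Amortizing training energy over a hypothetical lifetime query count.
gpu_hours = 40e6       # H100-class training compute
gpu_power_w = 700      # TDP per GPU, watts
queries = 160e6        # hypothetical lifetime query volume

training_energy_wh = gpu_hours * gpu_power_w   # 2.8e10 Wh (28 GWh)
wh_per_query = training_energy_wh / queries

print(wh_per_query)  # 175.0
```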

    8. signatoremo No.43647950
    Any increase in LLM usage may be offset by decreases in search or other phone/computer use.

    Can you quantify how much less driving has resulted from increased LLM usage? I doubt you can.

    9. mbs159 No.43648009
    > ... I then completely lost confidence in my ability to credibly run those calculations because I don't understand nearly enough about how CO2eq is calculated in different industries.

    There is a lot of heated debate over the "correct" methodology for calculating CO2e in different industries. I calculate it in my job, and I have to update the formulas and variables very often. Don't beat yourself up over it. :)

    10. achierius No.43654431{3}
    You imagine that this is a mistake, but it wouldn't be the first time that liberals went hand-in-hand with fascism to protect their capital.
    replies(1): >>43657885 #
    11. __loam No.43657885{4}
    The mistake is not understanding the inevitability.