
170 points PaulHoule | 1 comment | source
pama No.45124182
Sauro, if you read this, please refrain from such low-content speculative statements:

“On a loose but telling note, this is still three decades short of the number of neural connections in the human brain, 10^15, and yet they consume some one hundred million times more power (GWatts as compared to the very modest 20 Watts required by our brains).”

No human brain could have had time to read all the material of a modern LLM training run, even if it had lived and read eight hours a day since humans first appeared over 300,000 years ago. More to the point, LLM inference is far more energy efficient than human inference (look at the energy cost of a B200 decoding a 671B-parameter model and estimate the energy needed to write a book's worth of text as part of a larger batch). The main reason inference carries large total energy costs is that we are serving hundreds of millions of people with the same model. No human has this kind of scaling capability.
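The batch-inference claim can be sketched as a back-of-envelope calculation. All numbers below are rough assumptions for illustration (GPU board power, aggregate decode throughput, tokens per book, human writing time), not figures from the comment or from any benchmark:

```python
# Rough energy comparison: generating one book's worth of text on a GPU
# serving a large batch vs. a human brain writing the same amount.
# Every constant here is an assumed order-of-magnitude estimate.

GPU_POWER_W = 1000           # assumed B200 board power draw, watts
BATCH_TOKENS_PER_S = 10_000  # assumed aggregate decode throughput across the whole batch
BOOK_TOKENS = 100_000        # roughly one book (~75k words)

gpu_seconds = BOOK_TOKENS / BATCH_TOKENS_PER_S
gpu_joules = GPU_POWER_W * gpu_seconds   # energy for one book's worth of tokens

BRAIN_POWER_W = 20           # the ~20 W figure quoted above
WRITING_HOURS = 500          # assumed time for a human to write a book
brain_joules = BRAIN_POWER_W * WRITING_HOURS * 3600

print(f"GPU:   {gpu_joules / 1e3:.0f} kJ")
print(f"Brain: {brain_joules / 1e3:.0f} kJ")
print(f"ratio brain/gpu: {brain_joules / gpu_joules:.0f}x")
```

Under these assumptions the GPU spends on the order of 10 kJ per book-equivalent while the brain spends tens of megajoules, a gap of roughly three orders of magnitude; the conclusion is sensitive to the assumed throughput and writing time, but the direction survives large changes to either.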

1. skeezyboy No.45125473
> The main reason for the large energy costs of inference is that we are serving hundreds of millions of people with the same model.

It's because that's how LLMs work, not because they're so popular.