
507 points martinald | 1 comment
chillee ◴[] No.45057409[source]
This article's math is wrong on several fundamental levels. One of the most obvious errors is treating prefill as bandwidth-bound, when it is nowhere near bandwidth-bound; prefill is heavily compute-bound.
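A quick roofline-style sketch illustrates the point. The hardware numbers below are assumed, H100-class figures (not from the thread), and the prompt length is an arbitrary illustrative choice: prefill processes thousands of tokens per weight read, so its arithmetic intensity sits far above the ridge point, while single-token decode sits far below it.

```python
# Back-of-envelope roofline: why prefill is compute-bound and decode is not.
# All hardware numbers are rough, assumed H100-class figures.
PEAK_FLOPS = 2.0e15          # ~2 PFLOP/s peak (FP8-class), assumed
MEM_BW = 3.35e12             # ~3.35 TB/s HBM bandwidth, assumed
ridge = PEAK_FLOPS / MEM_BW  # FLOPs per byte needed to saturate compute

active_params = 37e9         # active params per token (per the article's model)
bytes_per_param = 1          # FP8 weights, assumed

def arithmetic_intensity(tokens_per_pass):
    # FLOPs grow with the number of tokens in the pass,
    # but the weights are read from memory only once per pass.
    flops = 2 * active_params * tokens_per_pass    # 2 FLOPs per multiply-accumulate
    bytes_moved = active_params * bytes_per_param
    return flops / bytes_moved

print(f"ridge point:                 {ridge:.0f} FLOPs/byte")
print(f"prefill (4096-token prompt): {arithmetic_intensity(4096):.0f} FLOPs/byte")
print(f"decode (1 token):            {arithmetic_intensity(1):.0f} FLOPs/byte")
# Prefill's intensity lands far above the ridge point (compute-bound);
# single-token decode lands far below it (bandwidth-bound).
```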

If you compute the MFU implied by the author's numbers: 1.44 million input tokens per second × 37 billion active params × 2 FLOPs per parameter (multiply-accumulate) ÷ 8 GPUs per instance ≈ 13 PFLOP/s per GPU. That is roughly 7× the absolute peak FLOPS of the hardware. Obviously, that's impossible.
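The arithmetic above can be checked directly. The peak-FLOPS figure below is an assumed ~2 PFLOP/s for a modern datacenter GPU; the other numbers come from the comment itself.

```python
# Reproducing the sanity check on the article's implied per-GPU throughput.
tokens_per_s = 1.44e6   # input tokens/s implied by the article (per instance)
active_params = 37e9    # active parameters per token
gpus = 8                # GPUs per instance

flops_per_gpu = tokens_per_s * active_params * 2 / gpus  # 2 FLOPs per MAC
print(f"implied compute: {flops_per_gpu / 1e15:.1f} PFLOP/s per GPU")  # ~13.3

peak_per_gpu = 2.0e15   # assumed ~2 PFLOP/s peak per GPU
print(f"implied MFU: {flops_per_gpu / peak_per_gpu:.1f}x peak")  # ~6.7x: impossible
```

An MFU above 1.0 means the hardware would have to execute more FLOPs than it physically can, so the article's throughput figure cannot be right.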

There are many other issues with this article, such as assuming only 32 concurrent requests(?), assuming 8 GPUs per instance rather than the more efficient and now-standard prefill-decode disaggregated setups, and assuming that attention computation is the main thing that makes models compute-bound. It's a bit of an indictment of HN's understanding of LLMs that most commenters are raising issues with the article that aren't any of these fundamental misunderstandings.

replies(5): >>45057603 #>>45057767 #>>45057801 #>>45058397 #>>45060353 #
1. johnnypangs ◴[] No.45060353[source]
As one of those people who doesn't really understand LLMs, does anyone have recommendations for improving my understanding of them?