

426 points by benchmarkist | 41 comments
zackangelo ◴[] No.42179476[source]
This is astonishingly fast. I’m struggling to get over 100 tok/s on my own Llama 3.1 70b implementation on an 8x H100 cluster.

I’m curious how they’re doing it. Obviously the standard bag of tricks (e.g., speculative decoding, flash attention) won’t get you close. It seems like at a minimum you’d have to do multi-node inference and maybe some kind of sparse attention mechanism?

replies(9): >>42179489 #>>42179493 #>>42179501 #>>42179503 #>>42179754 #>>42179794 #>>42180035 #>>42180144 #>>42180569 #
danpalmer ◴[] No.42179501[source]
Cerebras makes CPUs with ~1 million cores, and they're inferring on that, not on GPUs. It's an entirely different architecture, which means no networking between chips is involved. It's possible they're doing this largely from on-chip caches rather than HBM as well.

I recommend the TechTechPotato YouTube videos on Cerebras to understand more of their chip design.

replies(3): >>42179509 #>>42179599 #>>42179717 #
1. accrual ◴[] No.42179717[source]
I hope we can buy Cerebras cards one day. Imagine buying a ~$500 AI card for your desktop and having easy access to 70B+ models (the price is speculative/made up).
replies(4): >>42179769 #>>42179834 #>>42180050 #>>42180265 #
2. chessgecko ◴[] No.42179769[source]
“One day” is doing some heavy, heavy lifting here; we’re currently off by ~3-4 orders of magnitude…
replies(2): >>42179793 #>>42179975 #
3. accrual ◴[] No.42179793[source]
Thank you for the reality check! :)
replies(1): >>42179931 #
4. killingtime74 ◴[] No.42179834[source]
Maybe not $500, but $500,000
5. thomashop ◴[] No.42179931{3}[source]
We have moved 2 orders of magnitude in the last year. Not that unreasonable.
6. grahamj ◴[] No.42179975[source]
So 1000-10000 days? ;)
replies(1): >>42182198 #
7. danpalmer ◴[] No.42180050[source]
I believe pricing was mid six figures per machine. They're also something like 8U and water cooled, so I doubt it would be possible to deploy one outside of a fairly top-tier colo facility that can support water cooling. Also, imagine learning a new CUDA, but one designed for a completely different compute model.
replies(5): >>42180442 #>>42180470 #>>42180527 #>>42181229 #>>42181357 #
8. visarga ◴[] No.42180265[source]
You still have to pay for the memory. The Cerebras chip is fast because it has ~700x more SRAM than, say, an A100. Keeping the whole model in SRAM, so it can be re-read every time you compute a token, is the expensive bit.
9. initplus ◴[] No.42180442[source]
Yeah you can see the cooling requirements by looking at their product images. https://cerebras.ai/wp-content/uploads/2021/04/Cerebras_Prod...

The thing is nearly all cooling. And look at the diameter of the water cooling pipes; the airflow guides on the fans are solid steel. Apparently the chip itself measures about 21.5 cm on a side (~462 cm^2). Insane.

10. bboygravity ◴[] No.42180470[source]
That means it'll be close to affordable in 3 to 5 years if we follow the curve we've been on for the past decades.
replies(3): >>42180845 #>>42180967 #>>42181580 #
11. trsohmers ◴[] No.42180527[source]
Based on their S1 filing and public statements, the average cost per WSE system for their largest customer (~90% of their total revenue) is ~$1.36M, and I’ve heard “retail” pricing of $2.5M per system. They are also 15U and, due to power and additional support equipment, take up an entire rack.

The other thing people don’t seem to be getting in this thread is that just holding the weights for 405B at FP16 requires 19 of their systems, since it is SRAM only… rounding up to 20 to account for program code + KV cache for the user context means 20 systems/racks, so well over $20M. The full rack (including support equipment) also consumes 23kW, so we are talking nearly half a megawatt and ~$30M for them to be getting this performance on Llama 405B.
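
For anyone who wants to sanity-check those figures, a rough back-of-envelope in Python (the ~44 GB of SRAM per system is an assumption that makes the 19-system figure work out; the 23kW and $1.36M-$2.5M numbers are the ones quoted above, not official specs):

    import math

    params = 405e9               # Llama 3.1 405B parameters
    bytes_per_param = 2          # FP16
    sram_per_system_gb = 44      # assumed usable SRAM per WSE system

    weights_gb = params * bytes_per_param / 1e9                # ~810 GB
    systems = math.ceil(weights_gb / sram_per_system_gb) + 1   # +1 for program code + KV cache
    print(f"{weights_gb:.0f} GB of weights -> {systems} systems/racks")
    print(f"power: {systems * 23} kW")                         # ~460 kW
    print(f"cost:  ${systems * 1.36:.1f}M to ${systems * 2.5:.1f}M")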

replies(5): >>42180544 #>>42181290 #>>42181897 #>>42181931 #>>42190965 #
12. danpalmer ◴[] No.42180544{3}[source]
Thank you, far better answer than mine! Those are indeed wild numbers, although interestingly it's "only" 23kW; I'd expect the same level of compute in GPUs to draw quite a lot more than that, or at least at a higher power density.
replies(1): >>42180615 #
13. YetAnotherNick ◴[] No.42180615{4}[source]
You get ~400 TFLOP/s from an H100 at 350W. You need (2 * tokens/s * param count) FLOP/s. For 405B at 969 tok/s, you need just ~785 TFLOP/s, which is only 2 H100s.

The limiting factor with GPUs for inference is memory bandwidth. For 969 tok/s in int8, you need 392 TB/s of memory bandwidth, or about 200 H100s.
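
A quick sketch of that arithmetic, taking the ~400 TFLOP/s compute figure above and an assumed ~2 TB/s of HBM bandwidth per H100 as given (actual specs vary by SKU):

    params = 405e9          # Llama 3.1 405B
    tok_per_s = 969

    flops_needed = 2 * tok_per_s * params     # ~2 FLOP per parameter per token
    bytes_needed = tok_per_s * params * 1     # int8: re-read 1 byte/param per token

    print(f"compute:   {flops_needed / 1e12:.0f} TFLOP/s -> {flops_needed / 400e12:.1f} H100s")
    print(f"bandwidth: {bytes_needed / 1e12:.0f} TB/s -> {bytes_needed / 2e12:.0f} H100s")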

replies(3): >>42182416 #>>42182750 #>>42190957 #
14. schoen ◴[] No.42180845{3}[source]
How have power and cooling been doing with respect to chip improvements? Have power requirements per operation been coming down rapidly, as other features have improved?

My recollection from PC CPUs is that we've gotten many more operations per second, and many more operations per second per dollar, but that the power and corresponding cooling requirements for the CPUs have tended to go up as well. I don't really know what power per operation has looked like there. (I guess it's clearly improved, though, because it seems like the power consumption of a desktop PC has only increased by a single order of magnitude, while the computational capacity has increased by more than that.)

A reason that I wonder about this in this context is that people are saying that the power and cooling requirements for these devices are currently enormous (by individual or hobbyist standards, not by data center standards!). If we imagine a Moore's Law-style improvement where the hardware itself becomes 1/10 or 1/100 of its current price, would we expect the overall power consumption to be similarly reduced, or to remain closer to its current levels?

replies(1): >>42180977 #
15. dheera ◴[] No.42180967{3}[source]
It will also mean 405B models will be uninteresting in 3 to 5 years if we follow the curve we've been on for the past decades.
replies(1): >>42181405 #
16. chaxor ◴[] No.42180977{4}[source]
Moore's law in the consumer space seems to be pretty much asymptoting now, as indicated by Apple's amazing MacBooks with an astounding 8GB of RAM. Data center compute is arguable, as it tends to be catered to some niche, which makes comparisons confusing (Cerebras as an example, vs. GPU datacenters, vs. more standard HPC). Also, clusters and even GPUs don't really fit into Moore's law as originally framed.
replies(1): >>42181319 #
17. szundi ◴[] No.42181229[source]
Parent wishes for 70B, not 405B, though
18. meowface ◴[] No.42181290{3}[source]
Thank you for the breakdown. Bit of an emotional journey.

"$500 in the future...? Oh, $30 million now, so that might be a while..."

replies(1): >>42181646 #
19. saagarjha ◴[] No.42181319{5}[source]
Apple doesn’t sell those anymore.
replies(1): >>42184690 #
20. wkat4242 ◴[] No.42181357[source]
Yeah but what is in a 4090 is also comparable to a whole rack of servers a decade ago. The tech will get smaller.
21. int_19h ◴[] No.42181405{4}[source]
I don't think they'll be uninteresting. They won't be cutting-edge anymore, sure, but much of the more practical applications of AI that we see today don't run on today's cutting-edge models, either. We're always going to have a certain compute budget, and if a smaller model does the job fine, why wouldn't you use it, and use the rest for something else (or use all of it to run the smaller model faster).
22. dgfl ◴[] No.42181580{3}[source]
Not really. These are wafer-scale chips, which (as far as I'm aware) were first introduced by Cerebras.

Cost reduction for cutting-edge products in the semiconductor industry has historically been driven by 1) reducing transistor size (following Dennard scaling), and 2) a variety of techniques (e.g. high-k dielectrics and strained silicon, or FinFETs and now GAAFETs) to improve transistor performance further. These techniques added more manufacturing steps, but they were inexpensive enough that $/transistor still kept falling. In the last few years, we've had to pull off ever more expensive tricks, which stopped the $/transistor progress. This is why the phrase "Moore's law is dead" has been circulating for a while.

In any case, higher-performance transistors mean that you can get the same functionality with less power and a smaller area, meaning that iso-functionality chips are cheaper to build in bulk. This is especially true for older nodes, e.g. look at the absurdly low price of most microcontrollers.

On the other hand, $/wafer is mostly a volume-related metric based on less scalable technology and more conventional manufacturing (relatively speaking). Cerebras' innovation was in making a wafer-scale chip possible, which is conventionally hard due to unavoidable manufacturing defects. But crucially, such a product (by definition) cannot scale like any other circuit produced so far.

It may for sure drop in price in the future, especially once it gets obsolete. But I don't expect it to ever reach consumer level prices.

replies(1): >>42182481 #
23. jamalaramala ◴[] No.42181646{4}[source]
It took 30 years for computers to go from entire rooms to desktops, and another 30 years to go from desktops to our pockets.

I don't know if we can extrapolate, but I can imagine AI inference on our desktops for $500 in a few years...

replies(2): >>42182572 #>>42197460 #
24. sumedh ◴[] No.42181897{3}[source]
> Based on their S1 filing and public statements

Is it a good stock to buy :)

25. petra ◴[] No.42181931{3}[source]
Given those details, they seem not much better on cost per token than Nvidia-based systems.
26. Yizahi ◴[] No.42182198{3}[source]
In a few thousand days (c) St. Altman
replies(1): >>42194265 #
27. latchkey ◴[] No.42182416{5}[source]
Memory bandwidth and memory size. Along with power/cooling density.

Hence you see AMD's MI325X coming out with 256GB of HBM3e but the same FLOPS as the MI300X. It's 6TB/s too, which outperforms the H200 by a lot.

You can see the direction AMD is going with this...

https://www.amd.com/en/products/accelerators/instinct/mi300/...

28. adrian_b ◴[] No.42182481{4}[source]
Wafer-scale chips have been attempted for many decades, but none of the attempts before Cerebras resulted in a successful commercial product.

The main reason why Cerebras has succeeded and the previous attempts have failed is not technical, but the existence of market demand.

Before ML/AI training and inference, there has been no application where wafer-scale chips could provide enough additional performance to make their high cost worthwhile.

replies(1): >>42190978 #
29. stefs ◴[] No.42182572{5}[source]
Well, we can do AI inference on our desktops for $500 today, just with smaller models and far slower.
replies(1): >>42197536 #
30. Const-me ◴[] No.42182750{5}[source]
> For 969 tok/s in int8, you need 392 TB/s memory bandwidth

I think that math is only valid for batch size = 1. When these 969 tokens/second come from multiple sessions in the same batch, the loaded model tensor elements are reused to compute tokens for the entire batch. With large enough batches, you can even saturate the compute throughput of the GPU instead of bottlenecking on memory bandwidth.
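
A minimal roofline-style sketch of that effect, under assumed per-GPU figures (~2 TB/s of HBM bandwidth, ~400 int8 TFLOP/s) and ignoring KV-cache traffic:

    params = 405e9        # int8 weights: 1 byte per parameter
    mem_bw = 2e12         # bytes/s, assumed HBM bandwidth
    compute = 400e12      # FLOP/s, assumed int8 throughput

    for batch in (1, 8, 64, 256):
        bw_limit = mem_bw / params * batch       # one pass over the weights serves the whole batch
        compute_limit = compute / (2 * params)   # ~2 FLOP per parameter per token
        print(batch, round(min(bw_limit, compute_limit)), "tok/s")

At small batches the bandwidth term dominates; somewhere past batch ~100 the GPU becomes compute bound under these assumptions.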

replies(1): >>42190945 #
31. chaxor ◴[] No.42184690{6}[source]
Aw man, are they selling only 4GB ones now?

More seriously, even 16GB was essentially the 'norm' in consumer PCs about 15 years ago.

32. ryao ◴[] No.42190945{6}[source]
They claim to obtain that number with 8 to 20 concurrent users:

https://x.com/draecomino/status/1858998347090325846

33. ryao ◴[] No.42190957{5}[source]
Memory bandwidth for inferencing does not scale with the number of GPUs. Scaling instead requires more concurrent users. Also, I am told that 8 H100 cards can achieve 600 to 1000 tokens per second with concurrent users.
replies(1): >>42193142 #
34. ryao ◴[] No.42190965{3}[source]
From what I have read, it is a maximum of 23 kW per chip and each chip goes into a 16U. That said, you would need at least 460 kW of power to run the setup you described.

As for retail pricing being $2.5 million, I read $2 million in a news article earlier this year. $2.5 million makes it sound even worse.

35. ryao ◴[] No.42190978{5}[source]
Cerebras has a patent on the technique used to etch across scribe lines. Is there any prior work that would invalidate that patent?

By the way, I am a software developer, so you will not see me challenging their patent. I am just curious.

36. YetAnotherNick ◴[] No.42193142{6}[source]
8 H100s could achieve a lot more than 1000 tokens/sec.

> Memory bandwidth for inferencing does not scale with the number of GPU

It does

replies(1): >>42197442 #
37. grahamj ◴[] No.42194265{4}[source]
lol I almost said that too
38. ryao ◴[] No.42197442{7}[source]
This is on Llama 3.1 405B.

Inferencing is memory bandwidth bound. Add more GPUs to a batch size 1 inference problem and watch it run no faster than the memory bandwidth of a single GPU allows. It does not scale with the number of GPUs. If it could, you would see clusters of Nvidia hardware outperforming Cerebras’ hardware. That is currently a fantasy.
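
To put a number on that ceiling, a small sketch under assumed figures (int8 weights, ~3.35 TB/s of HBM on a single H100 SXM, no KV-cache or interconnect overhead):

    model_bytes = 405e9      # 405B parameters at 1 byte each (int8)
    hbm_bw = 3.35e12         # assumed HBM bandwidth of one H100 SXM, bytes/s

    # At batch size 1 every generated token streams all of the weights once,
    # so the decode rate is capped by one pass over the weights.
    print(round(hbm_bw / model_bytes, 1), "tok/s upper bound")   # ~8.3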

replies(1): >>42200967 #
39. ◴[] No.42197460{5}[source]
40. ryao ◴[] No.42197536{6}[source]
There is no need to use smaller models. You can run the biggest models, such as Llama 3.1 405B, on a fairly low-end desktop today:

https://github.com/lyogavin/airllm

However, it will be far slower as you said.

41. YetAnotherNick ◴[] No.42200967{8}[source]
These two sources [1][2] show 1500-2500 tokens per second on 8x H100.

[1]: https://lmsys.org/blog/2024-07-25-sglang-llama3/?ref=blog.ru...

[2]: https://www.snowflake.com/engineering-blog/optimize-llms-wit...