
426 points by benchmarkist | 5 comments
zackangelo ◴[] No.42179476[source]
This is astonishingly fast. I’m struggling to get over 100 tok/s on my own Llama 3.1 70b implementation on an 8x H100 cluster.

I’m curious how they’re doing it. Obviously the standard bag of tricks (e.g., speculative decoding, flash attention) won’t get you close. It seems like at a minimum you’d have to do multi-node inference, and maybe some kind of sparse attention mechanism?
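
As a sanity check on that 100 tok/s figure, here is a back-of-the-envelope roofline sketch (the fp16 assumption and the H100 bandwidth figure are mine, not from the thread): at batch 1, every decoded token has to read all the weights once, so aggregate HBM bandwidth sets a hard ceiling.

    # Back-of-the-envelope roofline for batch-1 decoding of Llama 3.1 70B.
    # Assumptions (mine, not from the thread): fp16 weights, perfect
    # tensor-parallel scaling, H100 SXM HBM3 bandwidth of ~3.35 TB/s.
    params = 70e9                       # parameter count
    weight_bytes = params * 2           # fp16 -> ~140 GB of weights
    aggregate_bw = 8 * 3.35e12          # 8x H100 -> ~26.8 TB/s total HBM bandwidth
    # Each decoded token must read all weights once (KV cache ignored):
    ceiling = aggregate_bw / weight_bytes
    print(f"bandwidth-bound ceiling: ~{ceiling:.0f} tok/s")   # ~191 tok/s

By this estimate, 100 tok/s is already within about 2x of the bandwidth-bound ceiling, which supports the point that the standard bag of tricks can't get you far past it on GPUs.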

replies(9): >>42179489 #>>42179493 #>>42179501 #>>42179503 #>>42179754 #>>42179794 #>>42180035 #>>42180144 #>>42180569 #
1. modeless ◴[] No.42179493[source]
Cerebras is a chip company. They are not using GPUs. Their chip uses wafer-scale integration, which means it's the physical size of an entire wafer: the silicon area of dozens of GPUs in a single part.

They have limited on-chip memory (all SRAM), and it's not clear how much HBM bandwidth they have per wafer. It's a completely different optimization problem than running on GPU clusters.
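
To make "completely different optimization problem" concrete, a quick sketch using Cerebras's published WSE-3 figures (quoted from memory and approximate; the wafer count and timing are my arithmetic, not anything Cerebras has stated):

    # Capacity vs. bandwidth for on-chip SRAM, using Cerebras's published
    # WSE-3 specs (~44 GB SRAM, ~21 PB/s aggregate SRAM bandwidth);
    # figures quoted from memory, treat as approximate.
    sram_bytes = 44e9
    sram_bw = 21e15
    weight_bytes = 70e9 * 2             # Llama 3.1 70B in fp16: ~140 GB
    wafers_at_fp16 = weight_bytes / sram_bytes   # model doesn't fit on one wafer
    weight_read_s = weight_bytes / sram_bw       # once resident, reads are cheap
    print(f"wafers needed at fp16: ~{wafers_at_fp16:.1f}")              # ~3.2
    print(f"time to read all weights from SRAM: ~{weight_read_s*1e6:.0f} us")  # ~7 us

If those specs are right, the model has to be split across multiple wafers, but once the weights are resident in SRAM the bottleneck moves from memory bandwidth (the GPU case) to compute and wafer-to-wafer communication.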

replies(2): >>42180735 #>>42190988 #
2. why_only_15 ◴[] No.42180735[source]
They have about 125 GB/s of off-chip bandwidth.
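
Taking that figure at face value, a quick sketch of what it implies (my arithmetic, not from the thread): the weights cannot be streamed from off-chip per token, so they have to stay resident in on-chip SRAM.

    # What ~125 GB/s of off-chip bandwidth implies if the weights lived
    # off-chip and had to be streamed in for every decoded token.
    offchip_bw = 125e9                  # bytes/s, per the comment above
    weight_bytes = 70e9 * 2             # Llama 3.1 70B in fp16: ~140 GB
    tok_per_s = offchip_bw / weight_bytes
    print(f"~{tok_per_s:.2f} tok/s")    # ~0.89 tok/s at batch 1

So streaming weights would cap decoding below 1 tok/s; presumably the off-chip link is for loading models and I/O, not per-token weight traffic.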
replies(1): >>42180792 #
3. saagarjha ◴[] No.42180792[source]
Do they just not do HBM at all, or…?
replies(1): >>42182712 #
4. why_only_15 ◴[] No.42182712{3}[source]
I'm not too up to date, but as I recall there are a lot of weirdnesses because of how big their chip is (e.g. thermal expansion being a problem). I believe they have a single giant line in the middle of the chip for this reason. Maybe this makes HBM etc. hard? Certainly their chip would be more appealing if they cut the number of cores by 10x, added matrix units, and added HBM, but it looks like they're not going to go that way.
5. ryao ◴[] No.42190988[source]
They do not use HBM. Off-chip memory is accessible at 150 GB/s.