
283 points ghuntley | 6 comments
modeless ◴[] No.45134728[source]
Wait, PCIe bandwidth is higher than memory bandwidth now? That's bonkers, when did that happen? I haven't been keeping up.

Just looked at the i9-14900k and I guess it's true, but only if you add all the PCIe lanes together. I'm sure there are other chips where it's even more true. Crazy!

replies(4): >>45134749 #>>45134790 #>>45135433 #>>45136511 #
1. adgjlsfhk1 ◴[] No.45134749[source]
On server chips it's kind of ridiculous. 5th-gen Epyc has 128 lanes of PCIe 5.0 for over 1TB/s of PCIe bandwidth (compared to ~600GB/s of RAM bandwidth from 12-channel DDR5 at 6400)
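
Back-of-envelope math behind those figures, as a rough Python sketch (raw peak rates, ignoring encoding and protocol overhead; the PCIe total only clears 1TB/s if you count both directions):

    # 12-channel DDR5-6400: each channel is 64 bits wide, so 8 bytes per transfer.
    ddr5_GBps = 12 * 6400e6 * 8 / 1e9            # ~614 GB/s

    # PCIe 5.0: 32 GT/s per lane ~= 4 GB/s per lane, per direction, raw.
    pcie_one_way_GBps = 128 * 32e9 / 8 / 1e9     # 512 GB/s per direction
    pcie_both_ways_GBps = 2 * pcie_one_way_GBps  # ~1 TB/s counting both directions

    print(ddr5_GBps, pcie_one_way_GBps, pcie_both_ways_GBps)
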
replies(1): >>45134783 #
2. andersa ◴[] No.45134783[source]
Your math is a bit off. 128 lanes gen5 is 8 times x16, which has a combined theoretical bandwidth of 512GB/s, and more like 440GB/s in practice after protocol overhead.

Unless we are considering both read and write bandwidth, but that seems strange to compare to memory read bandwidth.

replies(2): >>45134856 #>>45135199 #
3. pclmulqdq ◴[] No.45134856[source]
People like to add read and write bandwidth for some silly reason. Your units are off, too, though: gen 5 is 32 GT/s, meaning 64 GB/s (or 512 gigabits per second) each direction on an x16 link.
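
Quick sanity check of those per-link numbers (raw line rate, before 128b/130b encoding and protocol overhead):

    lane_Gbps = 32             # PCIe 5.0: 32 GT/s, 1 bit per transfer per lane
    x16_Gbps = lane_Gbps * 16  # 512 Gb/s per direction
    x16_GBps = x16_Gbps / 8    # 64 GB/s per direction
    # 128b/130b encoding drops that to ~63 GB/s, and TLP/protocol overhead
    # trims it further, which is where a ~440 GB/s practical figure for
    # 128 lanes comes from.
    print(x16_Gbps, x16_GBps)
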
replies(1): >>45134869 #
4. andersa ◴[] No.45134869{3}[source]
I meant for all 128 lanes being used, not each x16. Then you get 512GB/s.
5. wmf ◴[] No.45135199[source]
PCIe is full duplex while DDR5 is half duplex, so in theory the aggregate PCIe bandwidth is higher. It's rare to max out PCIe in both directions though.
replies(1): >>45135223 #
6. mrcode007 ◴[] No.45135223{3}[source]
It happens frequently, in fact, when training neural nets on modern hardware.