    283 points ghuntley | 15 comments
    1. modeless ◴[] No.45134728[source]
    Wait, PCIe bandwidth is higher than memory bandwidth now? That's bonkers, when did that happen? I haven't been keeping up.

    Just looked at the i9-14900k and I guess it's true, but only if you add all the PCIe lanes together. I'm sure there are other chips where it's even more true. Crazy!

    replies(4): >>45134749 #>>45134790 #>>45135433 #>>45136511 #
    2. adgjlsfhk1 ◴[] No.45134749[source]
    On server chips it's kind of ridiculous. 5th-gen Epyc has 128 lanes of PCIe 5.0 for over 1 TB/s of PCIe bandwidth (compared to ~600 GB/s of RAM bandwidth from 12-channel DDR5 at 6400).
    replies(1): >>45134783 #
    3. andersa ◴[] No.45134783[source]
    Your math is a bit off. 128 lanes of gen5 is eight x16 links, which have a combined theoretical bandwidth of 512 GB/s, more like 440 GB/s in practice after protocol overhead.

    Unless we are considering both read and write bandwidth, but that seems strange to compare to memory read bandwidth.

    replies(2): >>45134856 #>>45135199 #
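    A quick back-of-the-envelope check of both figures (a sketch in Python; it uses the standard PCIe 5.0 rate of 32 GT/s per lane with 128b/130b encoding, assumes 64-bit DDR5 channels, and ignores protocol overhead):

    # Rough bandwidth math for a 128-lane PCIe 5.0 Epyc vs 12-channel DDR5-6400.
    PCIE5_GT_PER_LANE = 32            # GT/s per lane
    ENCODING = 128 / 130              # 128b/130b line coding
    LANES = 128

    per_lane_GBps = PCIE5_GT_PER_LANE * ENCODING / 8     # ~3.94 GB/s per direction
    pcie_one_way = per_lane_GBps * LANES                 # ~504 GB/s
    pcie_both_ways = 2 * pcie_one_way                    # ~1008 GB/s ("over 1 TB/s")

    ddr5_GBps = 6400e6 * 8 * 12 / 1e9                    # 12 channels x 8 bytes = ~614 GB/s

    print(f"PCIe one way:   {pcie_one_way:.0f} GB/s")
    print(f"PCIe both ways: {pcie_both_ways:.0f} GB/s")
    print(f"DDR5-6400 x12:  {ddr5_GBps:.0f} GB/s")

    So the "over 1 TB/s" number only appears when both directions are counted; one way it is the ~500 GB/s figure quoted in the correction, before any protocol overhead.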
    4. DiabloD3 ◴[] No.45134790[source]
    "No."

    DDR5-8000 is 64 GB/s per channel. Desktop CPUs have two channels. PCIe 5.0 at x16 is 64 GB/s. Desktops have one x16.

    replies(3): >>45134821 #>>45135236 #>>45136770 #
    5. modeless ◴[] No.45134821[source]
    Hmm, Intel specs the max memory bandwidth as 89.6 GB/s, and DDR5-8000 would be out of spec. But I guess it's pretty common to run higher-specced memory, while you can't overclock PCIe (AFAIK?). Even so, my math was wrong; it doesn't quite add up to more than memory bandwidth. But it's pretty darn close!
    replies(1): >>45135087 #
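    For the desktop numbers, a similar sanity check (a sketch; it assumes the i9-14900K's stock configuration of dual-channel DDR5-5600, 16 CPU lanes of PCIe 5.0 plus 4 of PCIe 4.0, and a DMI 4.0 x8 chipset link, with lane rates rounded to 4 and 2 GB/s):

    # i9-14900K-ish back-of-the-envelope (one direction, ignoring overhead).
    mem_GBps = 5600e6 * 8 * 2 / 1e9                  # 89.6 GB/s -- Intel's spec figure

    gen5_lane, gen4_lane = 4.0, 2.0                  # ~GB/s per lane per direction
    pcie_GBps = 16 * gen5_lane + 4 * gen4_lane + 8 * gen4_lane   # 64 + 8 + 16 = 88 GB/s

    print(f"DRAM: {mem_GBps:.1f} GB/s, PCIe+DMI one way: {pcie_GBps:.0f} GB/s")

    At stock memory speeds the two really are neck and neck: roughly 88 GB/s of one-way PCIe against 89.6 GB/s of DRAM.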
    6. pclmulqdq ◴[] No.45134856{3}[source]
    People like to add read and write bandwidth for some silly reason. Your units are off, too, though: gen 5 is 32 GT/s, meaning 64 GB/s (or 512 gigabits per second) each direction on an x16 link.
    replies(1): >>45134869 #
    7. andersa ◴[] No.45134869{4}[source]
    I meant for all 128 lanes being used, not each x16. Then you get 512GB/s.
    8. DiabloD3 ◴[] No.45135087{3}[source]
    There is a difference between recommended and max achievable.

    Zen 5 can hit that (and that's what I run), and Arrow Lake can also.

    AMD's recommendation for Zen 4 and 5 is 6000 (or 48x2 GB/s); for Arrow Lake it's 6400 (or 51.2x2 GB/s). Both keep gaining performance up to 8000, and both have extreme trouble going past 8000 while keeping the machine stable.
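    For reference, the dual-channel peaks those parenthetical figures correspond to (a sketch; plain transfers-times-8-bytes-per-channel math, no efficiency losses):

    # Theoretical dual-channel DDR5 bandwidth at the speeds mentioned in this thread.
    for mts in (5600, 6000, 6400, 8000):
        print(f"DDR5-{mts}: {mts * 8 * 2 / 1000:.1f} GB/s over two channels")
    # DDR5-5600:  89.6 GB/s  (the Intel spec figure above)
    # DDR5-6000:  96.0 GB/s  (Zen 4/5 sweet spot)
    # DDR5-6400: 102.4 GB/s  (Arrow Lake sweet spot)
    # DDR5-8000: 128.0 GB/s  (hard to keep stable, per the comment above)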

    9. wmf ◴[] No.45135199{3}[source]
    PCIe is full duplex while DDR5 is half duplex so in theory PCIe is higher. It's rare to max out PCIe in both directions though.
    replies(1): >>45135223 #
    10. mrcode007 ◴[] No.45135223{4}[source]
    It happens frequently, in fact, when training neural nets on modern hardware.
    11. pseudosavant ◴[] No.45135236[source]
    One x16 slot. They'll use PCIe lanes in other slots (x4, x1, M.2 SSDs) and also for devices off the chipset (network, USB, etc.). The current top AMD/Intel CPUs can do ~100 GB/s over 28 lanes of mostly PCIe 5.
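    That figure works out if you assume the usual AM5-style split of the 28 CPU lanes: an x16 slot and two x4 M.2 slots at gen5, plus an x4 chipset link at gen4 speed (a sketch; the exact split varies by platform and board):

    # One-way bandwidth across a typical 28-lane desktop CPU (rounded lane rates).
    gen5_lane, gen4_lane = 4.0, 2.0          # ~GB/s per lane per direction

    total = (16 * gen5_lane        # x16 slot          = 64 GB/s
             + 8 * gen5_lane       # two x4 M.2 slots  = 32 GB/s
             + 4 * gen4_lane)      # chipset link      =  8 GB/s
    print(f"{total:.0f} GB/s")     # 104 GB/s -> "~100 GB/s over 28 lanes"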
    12. AnthonyMouse ◴[] No.45135433[source]
    > Wait, PCIe bandwidth is higher than memory bandwidth now?

    Hmm.

    Somebody make me a PCIe card with RDIMM slots on it.

    replies(1): >>45135538 #
    13. thulle ◴[] No.45135538[source]
    https://www.servethehome.com/inventec-96-dimm-cxl-expansion-...

    https://www.servethehome.com/micron-cz120-cxl-memory-module-...

    14. rwmj ◴[] No.45136511[source]
    That's the promise (or requirement?) of CXL: have your memory managed centrally and let servers access it over PCIe. https://en.wikipedia.org/wiki/Compute_Express_Link I wonder how many are actually using CXL. I haven't heard of any customers deploying it so far.
    15. immibis ◴[] No.45136770[source]
    But my Threadripper has 4 channels of DDR5, and the equivalent of 4.25 x16 PCIe 5.

    You know what adds up to an even bigger number though? Using both.
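    Taking those figures at face value (a sketch; it treats "4.25 x16" as 68 gen5 lanes at ~4 GB/s each and assumes the official DDR5-5200 on the four channels, so both numbers are assumptions about this particular box):

    # Quad-channel DDR5 plus ~68 lanes of PCIe 5.0, one direction each.
    mem_GBps = 5200e6 * 8 * 4 / 1e9          # 166.4 GB/s of DRAM
    pcie_GBps = 4.25 * 16 * 4.0              # 272 GB/s of PCIe

    print(f"DRAM {mem_GBps:.0f} + PCIe {pcie_GBps:.0f} = {mem_GBps + pcie_GBps:.0f} GB/s combined")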