
283 points by ghuntley | 1 comment
juancn No.45134666

    Because PCIe bandwidth is higher than memory bandwidth
This doesn't sound right: a PCIe 5.0 x16 slot offers up to ~64 GB/s, and that's fully saturated. A fairly old Xeon server can sustain >100 GB/s of memory reads per NUMA node without much trouble.
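For reference, a minimal sketch of where that ~64 GB/s figure comes from, assuming 32 GT/s per PCIe 5.0 lane and 128b/130b encoding (real-world throughput is a few percent lower once protocol overhead is counted):

    # rough ceiling for a PCIe 5.0 x16 link
    GT_PER_LANE = 32e9        # 32 GT/s raw signaling rate per lane
    ENCODING = 128 / 130      # 128b/130b line encoding
    LANES = 16

    gbps = GT_PER_LANE * ENCODING / 8 * LANES / 1e9
    print(f"PCIe 5.0 x16 ceiling: ~{gbps:.0f} GB/s")   # ~63 GB/s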

Some newer HBM-enabled parts, like the Xeon Max 9480, can go over 1.6 TB/s out of HBM (up to 64 GB of it), and DDR5 can reach >300 GB/s.

Even saturating all PCIe lanes (192 on a dual-socket Xeon 6, at ~3.94 GB/s per PCIe 5.0 lane), you could at most theoretically get ~756 GB/s, which is still below the max memory bandwidth of a single such socket (12 channels x 8,800 MT/s x 8 bytes per transfer ≈ 845 GB/s).
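Putting that arithmetic in one place (the lane and channel counts here are assumptions for a hypothetical dual-socket Xeon 6 box, not measurements):

    PCIE5_LANE_GBPS = 32 * (128 / 130) / 8     # ~3.94 GB/s per PCIe 5.0 lane

    pcie_lanes = 192                           # assumed 96 lanes per socket, 2 sockets
    pcie_total = pcie_lanes * PCIE5_LANE_GBPS  # ~756 GB/s

    channels = 12                              # memory channels per socket
    mt_per_s = 8_800                           # assumed MRDIMM-class transfer rate
    mem_per_socket = channels * mt_per_s * 8 / 1000   # 64-bit channel -> ~845 GB/s

    print(f"aggregate PCIe 5.0: ~{pcie_total:.0f} GB/s")
    print(f"DDR5 per socket:    ~{mem_per_socket:.0f} GB/s")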

I mean, solid-state I/O is getting really close, but it's nowhere near as fast on non-sequential access patterns.

I agree that many workloads could be shifted to SSDs, but it's still quite nuanced.

1. jared_hulbert No.45134692
Not by a ton, but if you add up the DDR5 channel bandwidth and the PCIe lane bandwidth on most systems, the PCIe bandwidth is higher. Yes, HBM and L3 cache will be faster than PCIe.
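A quick sketch of that kind of comparison for a hypothetical single-socket server with 128 PCIe 5.0 lanes and 12 channels of DDR5-4800 (these counts are assumptions for illustration, not a specific SKU):

    # hypothetical single-socket server; counts are assumptions, not verified specs
    pcie_lanes = 128
    pcie_total = pcie_lanes * 32 * (128 / 130) / 8   # GB/s, ~504

    channels, mt_per_s = 12, 4_800
    mem_total = channels * mt_per_s * 8 / 1000       # GB/s, ~461

    print(pcie_total > mem_total)   # True: aggregate PCIe edges out DRAM here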