
204 points by WithinReason | 2 comments
mistyvales ◴[] No.40712753[source]
Here I am still on PCI-E 3.0...
replies(3): >>40712764 #>>40713462 #>>40719763 #
daemonologist ◴[] No.40713462[source]
It felt like we were on 3 for a long time, and then all of a sudden got 4 through 6 (and soon 7) in quick succession. I'd be curious to know what motivated that - maybe GPGPU taking off?
replies(3): >>40713534 #>>40713976 #>>40714326 #
latchkey ◴[] No.40713534[source]
AI/GPU communication is definitely driving it forward now. It's a race to see how quickly you can move data around.
replies(1): >>40714280 #
starspangled ◴[] No.40714280[source]
Really? I hadn't heard of GPU or GPGPU pushing bandwidth recently. Networking certainly does: 400GbE cards exceed PCIe 4.0 x16 bandwidth, 800GbE is here, and 1.6TbE is apparently in the works. Disk too; even if a single disk (or network PHY) won't max out a PCIe slot, you don't want to dedicate more lanes than necessary to each device, because you likely want a bunch of them.
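
A quick back-of-the-envelope in Python supports this. The per-generation transfer rates are from the published specs, but encoding and protocol overhead are simplified, so treat the results as rough upper bounds:

    # Usable one-direction PCIe x16 bandwidth per generation vs. Ethernet
    # line rates. Gens 3-5 use 128b/130b encoding; Gen 6+ move to PAM4 with
    # FLIT framing, whose overhead is ignored here.
    GT_PER_LANE = {3: 8, 4: 16, 5: 32, 6: 64, 7: 128}  # GT/s per lane

    def pcie_x16_gbps(gen, lanes=16):
        encoding = 128 / 130 if gen < 6 else 1.0    # simplified for Gen 6+
        return GT_PER_LANE[gen] * lanes * encoding / 8   # Gbit/s -> GB/s

    for gen in GT_PER_LANE:
        bw = pcie_x16_gbps(gen)
        fits = [f"{g}GbE" for g in (400, 800, 1600) if g / 8 <= bw]
        print(f"PCIe {gen}.0 x16 ~ {bw:5.1f} GB/s  fits: {', '.join(fits) or 'none'}")

A 400GbE NIC needs 50 GB/s per direction, which lands above Gen4 x16 (~31.5 GB/s) and below Gen5 x16 (~63 GB/s).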
replies(2): >>40714359 #>>40714425 #
1. p1esk ◴[] No.40714425[source]
NVLink 4.0, used to connect H100 GPUs today, is almost as fast per lane as PCIe 7.0 (12.5 GB/s vs 16 GB/s). By the time PCIe 7.0 is available I’m sure NVLink will be much faster. So, yeah, GPUs are currently the most bandwidth-hungry devices on the market.
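
Reading those numbers as per-lane signaling rates (my interpretation; NVLink 4 signals at 100 Gbit/s per lane and PCIe 7.0 targets 128 GT/s per lane), the arithmetic works out like this:

    # Per-lane and aggregate comparison, using published spec figures.
    nvlink4_lane = 100 / 8            # NVLink 4: 100 Gbit/s/lane -> 12.5 GB/s
    pcie7_lane   = 128 / 8            # PCIe 7.0: 128 GT/s/lane   -> 16.0 GB/s

    # Aggregate per H100: 18 NVLink links x 50 GB/s each (bidirectional),
    # vs. a PCIe 7.0 x16 slot before protocol overhead.
    nvlink4_total = 18 * 50                 # 900 GB/s
    pcie7_x16     = 2 * 16 * pcie7_lane     # 512 GB/s bidirectional

    print(nvlink4_lane, pcie7_lane, nvlink4_total, pcie7_x16)

So per lane PCIe 7.0 is actually slightly faster, but an H100 carries enough NVLink lanes that its aggregate GPU-to-GPU bandwidth (900 GB/s) is still well ahead of a PCIe 7.0 x16 slot.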
replies(1): >>40715917 #
2. latchkey ◴[] No.40715917[source]
Will the lead time still be 50+ weeks though? My guess is yes.