
Meta's open AI hardware vision

(engineering.fb.com)
212 points GavCo | 6 comments
1. seydor ◴[] No.41851732[source]
They've already gone after OpenAI, are they after Nvidia now?
replies(2): >>41851801 #>>41852876 #
2. throwup238 ◴[] No.41851801[source]
No, this is a rack built on NVIDIA’s platform, so this is just more $$$ for them.
replies(3): >>41852155 #>>41852222 #>>41853060 #
3. moffkalast ◴[] No.41852155[source]
Yeah, this is... nothing. At least nothing anyone worth less than a few billion could ever care about.

Would be far more interesting to see MTIA in an edge compute PCIe form.

4. throwaway48476 ◴[] No.41852222[source]
There's also an AMD rack, and Meta is big enough that they won't get blacklisted for it.
5. KaoruAoiShiho ◴[] No.41852876[source]
Sort of. While this does use Nvidia, one of Nvidia's moats, or big advantages, is its rack-scale integration: AMD and other providers just can't scale up easily; they are behind at connecting tons of GPUs together effectively. So doing this part themselves instead of buying Nvidia's much-hyped (deservedly) NVL72 solution, a nonpareil rack system with 72 GPUs in it, and then open-sourcing it, opens the door to possibly integrating AMD GPUs in the future, and this hurts Nvidia's moat.
6. mhandley ◴[] No.41853060[source]
The article talks about NVIDIA racks, but also about their DSF [0], which is Ethernet-based (as opposed to InfiniBand), built on switches using Cisco and Broadcom chipsets, and about custom ASIC xPU accelerators such as their own MTIA [1], which Broadcom builds for them. So they are talking about more than one approach simultaneously.

[0] https://engineering.fb.com/2024/10/15/data-infrastructure/op...

[1] https://ai.meta.com/blog/next-generation-meta-training-infer...