They’re all blowing billions of dollars on NVIDIA hardware at something like 70% margins, and with Triton backing PyTorch it shouldn’t be that hard to move off the CUDA stack (see the quick sketch below).
For a small fraction of that they could poach a ton of people from NVIDIA and publish a new open chip spec that anyone could manufacture.
https://www.fool.com/investing/2024/09/12/46-nvidias-30-bill...
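For context on the Triton point: in PyTorch 2.x, torch.compile's default Inductor backend lowers model code to generated Triton kernels rather than hand-written CUDA, which is what makes retargeting to other vendors' hardware at least plausible once Triton has backends for them. A minimal sketch (model, shapes, and names are just illustrative; assumes PyTorch 2.x with a GPU available):

    import torch

    class TinyMLP(torch.nn.Module):
        """Toy model, only here to give the compiler something to lower."""
        def __init__(self):
            super().__init__()
            self.fc1 = torch.nn.Linear(512, 512)
            self.fc2 = torch.nn.Linear(512, 10)

        def forward(self, x):
            return self.fc2(torch.relu(self.fc1(x)))

    model = TinyMLP().cuda()
    # Inductor generates Triton kernels for the GPU ops instead of
    # dispatching to hand-written CUDA kernels.
    compiled = torch.compile(model, backend="inductor")
    out = compiled(torch.randn(32, 512, device="cuda"))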
They all use SFDC; should they go and create an open-source sales platform?
That's exactly what they did with their server design (the Open Compute Project).
I'm saying come up with an open standard for tensor processing chips, with open drivers and core compute libraries, then let hardware vendors innovate and compete to drive down the price.
Meta spent something like 10% of their revenue on ML hardware. That's not a drop in the bucket, and with model scaling and large-scale deployment those costs are not going down. https://www.datacenterdynamics.com/en/news/meta-to-operate-6...