
195 points rbanffy | 8 comments
amelius ◴[] No.42177249[source]
Why the focus on AMD and Nvidia? It really isn't that hard to design a large number of ALU blocks into some silicon IP block and make them work together efficiently.

The real accomplishment is fabricating them.

replies(2): >>42177288 #>>42177324 #
talldayo ◴[] No.42177324[source]
> It really isn't that hard to design a large number of ALU blocks into some silicon IP block and make them work together efficiently.

It really is that hard, and the fabrication side is the easy part from Nvidia's perspective - you just pay TSMC a shitload of money. Nvidia's real victory (besides leading on performance-per-watt) is that their software stack doesn't suck. They invested in complex shader units and tensor accelerators that scale with the size of the card, rather than being constrained to puny, limited NPUs. CUDA unified this featureset and has been industry-entrenched for almost a decade, which gives it pretty much any feature you could want, be it crypto acceleration or AI/ML primitives.
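For readers who haven't touched it, the featureset CUDA unified sits behind a single C++-based programming model: you write a kernel, launch it over a grid of threads, and the same code scales with the card. A minimal sketch (a generic CUDA C++ vector add, nothing specific to this thread):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each thread computes one element of the output.
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Managed (unified) memory keeps the example short; real code
    // often stages explicit host/device copies instead.
    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);  // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```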

The ultimate tragedy is that there was a potential future where a Free and Open Source CUDA alternative existed. Apple wrote the OpenCL spec for exactly that purpose and gave it to Khronos, but later abandoned it to focus on... *checks clipboard* ...MLX and Metal Performance Shaders. Oh, what could have been if the industry weren't so stingy and shortsighted.

replies(3): >>42177458 #>>42178281 #>>42182786 #
amelius ◴[] No.42177458[source]
> you just pay TSMC a shitload of money

I guess with money you can win any argument ...

replies(2): >>42177535 #>>42182822 #
talldayo ◴[] No.42177535[source]
Sure, Apple did the same thing with TSMC's 5nm node. They still lost in performance-per-watt in direct comparison with Nvidia GPUs using Samsung's 8nm node. Money isn't everything, even when you have so much of it that you can deny your competitors access to the tech you use.

Nvidia's lead is not only cemented by dense silicon. Their designs are extremely competitive, perhaps even a generational leap over what their competitors offer.

replies(1): >>42177688 #
amelius ◴[] No.42177688[source]
Let me phrase it differently.

If Nvidia pulls the plug, we can still go to AMD and have a reasonable alternative.

If TSMC pulls the plug, however ...

replies(3): >>42178302 #>>42178959 #>>42182827 #
1. pjmlp ◴[] No.42182827[source]
What is the reasonable alternative to CUDA Fortran on AMD?

That's one example out of many I could point to from the CUDA ecosystem.

replies(2): >>42182969 #>>42185062 #
2. amelius ◴[] No.42182969[source]
People use CUDA through a limited number of libraries, for example PyTorch and TensorFlow, so for many important applications there isn't a really strong dependence on CUDA itself.
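As a rough illustration of that point, a sketch against PyTorch's C++ API (libtorch): the application targets the library's device abstraction rather than CUDA directly, and on ROCm builds of PyTorch the same kCUDA device type is backed by AMD GPUs via HIP.

```cpp
#include <torch/torch.h>
#include <iostream>

int main() {
    // Pick a GPU if one is visible; on a ROCm build of PyTorch this
    // same check and device type map to an AMD GPU via HIP.
    torch::Device device(torch::cuda::is_available() ? torch::kCUDA
                                                     : torch::kCPU);

    auto a = torch::randn({1024, 1024}, device);
    auto b = torch::randn({1024, 1024}, device);
    auto c = torch::mm(a, b);  // dispatched to whichever backend 'device' is

    std::cout << "computed " << c.sizes() << " on " << device << "\n";
}
```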
replies(1): >>42183641 #
3. pjmlp ◴[] No.42183641[source]
Some people working in machine learning do use CUDA via PyTorch and TensorFlow.
replies(1): >>42185491 #
4. my123 ◴[] No.42185062[source]
AMD ships a Fortran OpenMP compiler with GPU offloading that works pretty well.
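For a sense of what that model looks like, a sketch of OpenMP target offloading, written here in C++ rather than Fortran for brevity; with ROCm it would be built with something like amdclang++ -fopenmp --offload-arch=<gpu> (exact flags vary by toolchain):

```cpp
#include <cstdio>
#include <vector>

int main() {
    const int n = 1 << 20;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n);
    float *pa = a.data(), *pb = b.data(), *pc = c.data();

    // The pragma ships the loop to the GPU; map() clauses describe
    // which arrays move to and from device memory.
    #pragma omp target teams distribute parallel for \
        map(to: pa[0:n], pb[0:n]) map(from: pc[0:n])
    for (int i = 0; i < n; ++i)
        pc[i] = pa[i] + pb[i];

    printf("c[0] = %f\n", c[0]);  // expect 3.0
    return 0;
}
```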
replies(1): >>42185508 #
5. amelius ◴[] No.42185491{3}[source]
Yes, most people in ML do, and that field is currently on an exponential growth curve.
replies(1): >>42185514 #
6. pjmlp ◴[] No.42185508[source]
Made public 6 days ago.

https://www.phoronix.com/news/AMD-Next-Gen-Fortran-Compiler

replies(1): >>42186516 #
7. pjmlp ◴[] No.42185514{4}[source]
And that's only a tiny percentage of why CUDA is as big as it is.
8. my123 ◴[] No.42186516{3}[source]
That's the next-gen one. The older one, based on classic Flang, has been in production for quite a while.