195 points rbanffy | 44 comments

pie420 ◴[] No.42176400[source]
Layperson with no industry knowledge here, but it seems like Nvidia's CUDA moat will fall in the next 2-5 years. It seems impossible to sustain those margins without competition coming in and getting a decent slice of the pie.
replies(5): >>42176440 #>>42177575 #>>42177944 #>>42178259 #>>42179625 #
1. metadat ◴[] No.42176440[source]
But how will AMD or anyone else push in? CUDA is actually a whole virtualization layer on top of the hardware and isn't easily replicable; Nvidia has been at it for 17 years.

You are right, eventually something's gotta give. The path for this next leg isn't yet apparent to me.

P.s. how much is an exaflop or petaflop, and how significant is it? The numbers thrown around in this article don't mean anything to me. Is this new cluster way more powerful than the last top?

replies(14): >>42176567 #>>42176711 #>>42176809 #>>42177061 #>>42177287 #>>42177319 #>>42177378 #>>42177451 #>>42177452 #>>42177477 #>>42177479 #>>42178108 #>>42179870 #>>42180214 #
2. bryanlarsen ◴[] No.42176567[source]
Anybody spending tens of billions annually on Nvidia hardware is going to be willing to spend millions to port their software away from CUDA.
replies(3): >>42176963 #>>42177463 #>>42182571 #
3. vlovich123 ◴[] No.42176711[source]
The API part isn't thaaat hard. Indeed, HIP already works pretty well at getting existing CUDA code to run unmodified on AMD hardware. The bigger challenge is that the AMD and Nvidia architectures are so different that the optimization choices for what the kernels should look like differ more between Nvidia and AMD than they do between Intel and AMD in CPU land, even including SIMD.
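To make that concrete from the framework side (a rough sketch, assuming a ROCm build of PyTorch, where the existing torch.cuda API is reused as-is for AMD GPUs):

    import torch

    # On a ROCm (HIP) build of PyTorch the CUDA API surface is reused,
    # so code written against "cuda" devices runs on supported AMD GPUs.
    print(torch.cuda.is_available())   # True on a working ROCm install
    print(torch.version.hip)           # set on ROCm builds, None on CUDA builds

    x = torch.randn(4096, 4096, device="cuda")  # "cuda" maps to the AMD GPU here
    y = x @ x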
replies(1): >>42182561 #
4. sangnoir ◴[] No.42176809[source]
CUDA is the assembly to Torch's high-level language: for most users it's a very good intermediary, but an intermediary nonetheless, sitting between the code they actually care about and the hardware that runs it.

Most customers care about cost-effectiveness more than best-in-class raw performance, a fact that AMD has ruthlessly exploited over the past 8 years. It helps that AMD products are occasionally both.

replies(1): >>42182611 #
5. echelon ◴[] No.42176963[source]
For the average non-FAANG company, there's nothing to port to yet. We don't all have the luxury of custom TPUs.
6. LeanderK ◴[] No.42177061[source]
It's possible. Just look at Apple's GPU: it's mostly supported by torch, and what's left are mostly edge-cases. Apple should make a datacenter GPU :D that would be insanely funny. It's actually somewhat well positioned because, thanks to the MacBooks, the support is already there. I assume here that most things translate to Linux, as I don't think you can sell macOS in the cloud :D

I know a lot of people developing on Apple silicon and just pushing to clusters for bigger runs. So why not run it on an Apple GPU there?
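Rough sketch of that workflow in torch (the device-picking helper is just for illustration; it assumes a recent PyTorch with the MPS backend):

    import torch

    def pick_device() -> torch.device:
        # CUDA on the cluster, Apple's MPS backend on a MacBook, CPU otherwise.
        # The same model/training code runs unchanged in all three cases.
        if torch.cuda.is_available():
            return torch.device("cuda")
        if torch.backends.mps.is_available():
            return torch.device("mps")
        return torch.device("cpu")

    device = pick_device()
    model = torch.nn.Linear(1024, 1024).to(device)
    out = model(torch.randn(8, 1024, device=device))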

replies(2): >>42177409 #>>42178483 #
7. stonemetal12 ◴[] No.42177287[source]
According to Wikipedia, the previous #1 was from 2022, with a peak of 2,055 petaflops. This system is rated at 2,746, so it's about 33% faster than the old #1.

Also, of the top 10, AMD has 5 systems.

https://en.wikipedia.org/wiki/TOP500

8. smokel ◴[] No.42177319[source]
> P.s. how much is an exaflop or petaflop

1 petaflop = 10^15 flops = 1,000,000,000,000,000 flops.

1 exaflop = 10^18 flops = 1,000,000,000,000,000,000 flops.

Note that these are simply powers of 10, not the powers of 2 that are used for, e.g., storage.

9. fweimer ◴[] No.42177378[source]
Isn't porting software to the next generation supercomputer pretty standard for HPC?
10. talldayo ◴[] No.42177409[source]
> what's left are mostly edge-cases.

For everything that isn't machine learning, I frankly feel like it's the other way around. Apple's "solution" to these edge cases is telling people to write compute shaders that you could write in Vulkan or DirectX instead. What sets CUDA apart is an integration with a complex acceleration pipeline that Apple gave up trying to replicate years ago.

When cryptocurrency mining was king-for-a-day, everyone rushed out to buy Nvidia hardware because it supported accelerated crypto well from the start. The same thing happened with the AI and machine learning boom. Apple and AMD were both late to the party and wrongly assumed that NPU hardware would provide a comparable solution. Without a CUDA competitor, Apple would struggle more than AMD to find market fit.

replies(1): >>42177935 #
11. ok123456 ◴[] No.42177451[source]
People have been chipping away at this for a while. HIP allows source-level translation, and libraries like Jax provide a HIP version.
12. vitus ◴[] No.42177452[source]
> P.s. how much is an exaflop or petaflop, and how significant is it? The numbers thrown around in this article don't mean anything to me. Is this new cluster way more powerful than the last top?

Nominally, a measurement in "flops" is how many (typically 32-bit) FLoating-point Operations Per Second the hardware is capable of performing, so it's an approximate measure of total available computing power.

A high-end consumer-grade CPU can achieve on the order of a few hundred gigaflops (let's say 250, just for a nice round number). https://boinc.bakerlab.org/rosetta/cpu_list.php

A petaflop is therefore about four thousand of those; multiply by another thousand to get an exaflop.

For another point of comparison, a high-end GPU might be on the order of 40-80 teraflops. https://www.tomshardware.com/reviews/gpu-hierarchy,4388-2.ht...
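Back-of-the-envelope, using the same rough numbers (250 gigaflops per CPU, ~80 teraflops per GPU):

    cpu = 250e9      # ~250 gigaflops, high-end consumer CPU
    gpu = 80e12      # ~80 teraflops, high-end consumer GPU
    petaflop, exaflop = 1e15, 1e18

    print(petaflop / cpu)   # ~4,000 CPUs per petaflop
    print(exaflop / cpu)    # ~4,000,000 CPUs per exaflop
    print(exaflop / gpu)    # ~12,500 such GPUs per exaflop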

replies(1): >>42179813 #
13. talldayo ◴[] No.42177463[source]
To slower hardware? What are they supposed to port to, ASICs?
replies(1): >>42177525 #
14. quickthrowman ◴[] No.42177477[source]
> But how will AMD or anyone else push in? CUDA is actually a whole virtualization layer on top of the hardware and isn't easily replicable; Nvidia has been at it for 17 years.

Nvidia currently has 80-90% gross margins on their LLM GPUs; that's all the incentive another company needs to invest money into a CUDA alternative.

15. NineStarPoint ◴[] No.42177479[source]
A high-grade consumer GPU (a 4090) is about 80 teraflops. So rounding up to 100, an exaflop is about 10,000 consumer-grade cards' worth of compute, and a petaflop is about 10.

Which doesn't help with understanding how much more impressive these are than the previous clusters, but it does, to me at least, put the amount of compute these clusters have into focus.

replies(2): >>42177621 #>>42177989 #
16. adgjlsfhk1 ◴[] No.42177525{3}[source]
If the hardware is 30% slower and 2x cheaper, that's a pretty great deal.
replies(1): >>42177861 #
17. vitus ◴[] No.42177621[source]
You're off by three orders of magnitude.

My point of reference is that back in undergrad (~10-15 years ago), I recall a class assignment where we had to optimize matrix multiplication on a CPU; typical good parallel implementations achieved about 100-130 gigaflops (on a... Nehalem or Westmere Xeon, I think?).
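For reference, that number is just floating-point operations divided by wall time; a dense matmul does roughly 2·n^3 of them. A minimal sketch of the measurement (numpy's matmul typically dispatches to BLAS, so this measures the library rather than hand-written code):

    import time
    import numpy as np

    n = 4096
    a = np.random.rand(n, n).astype(np.float32)
    b = np.random.rand(n, n).astype(np.float32)

    t0 = time.perf_counter()
    c = a @ b                 # typically dispatches to BLAS sgemm
    dt = time.perf_counter() - t0

    flops = 2 * n**3          # multiplies + adds in a dense n x n matmul
    print(f"{flops / dt / 1e9:.1f} gigaflops")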

replies(2): >>42177945 #>>42178222 #
18. selectodude ◴[] No.42177861{4}[source]
Power density tends to be the limiting factor for this stuff, not money. If it's 30 percent slower per watt, it's useless.
replies(1): >>42178459 #
19. LeanderK ◴[] No.42177935{3}[source]
Well, but machine learning is the major reason we use GPUs in the datacenter (not talking about consumer GPUs here). The others are edge-cases for datacenter applications! Apple is uniquely positioned exactly because that part is already solved, since a significant share of ML engineers use MacBooks to develop locally.

The code to run these things on Apple's GPUs exists and is used every day! I don't know anyone using AMD GPUs, but pretty often it's Nvidia on the cluster and Apple on the laptop. So if Nvidia is making these juicy profits, I think Apple could seriously consider moving to the cluster if it wants to.

replies(1): >>42179042 #
20. NineStarPoint ◴[] No.42177945{3}[source]
You are 100% correct; I lost a full prefix of performance there. Edited my message.

Which does make the clusters a fair bit less impressive, but also a lot more sensibly sized.

21. winwang ◴[] No.42177989[source]
4090 tensor performance (FP8): 660 teraflops, 1320 "with sparsity" (i.e. max theoretical with zeroes in the right places).

https://images.nvidia.com/aem-dam/Solutions/geforce/ada/nvid...

But at these levels of compute, the memory/interconnect bandwidth becomes the bottleneck.
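Quick arithmetic on that bottleneck (a sketch; the ~1 TB/s of memory bandwidth for a 4090 is an approximation):

    peak_flops = 660e12    # dense FP8 tensor throughput
    bandwidth = 1.0e12     # ~1 TB/s GDDR6X bandwidth (approximate)

    # To stay compute-bound, each byte moved has to feed this many FP8 ops:
    print(peak_flops / bandwidth)   # ~660 FLOPs per byte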

22. okdood64 ◴[] No.42178108[source]
Maybe the DOJ will come in and call it anti-trust shenanigans.

Not that I would want this...

23. ◴[] No.42178222{3}[source]
24. Wytwwww ◴[] No.42178459{5}[source]
The ratio between power usage and GPU cost is very, very different from CPUs, though. If you could save e.g. 20-30% of the purchase price, that might make it worth it.

E.g. you could run an H100 at 100% utilization 24/7 for a year at $0.40 per kWh (assuming significant overhead for infrastructure etc.) and that would only cost ~10% of the purchase price of the GPU itself.
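Spelling that out (a sketch with assumed numbers: ~700 W draw, $0.40/kWh all-in, ~$30k purchase price):

    power_kw = 0.7            # ~700 W at full utilization (assumed)
    hours = 24 * 365
    price_per_kwh = 0.40      # inflated to cover cooling/infrastructure
    gpu_price = 30_000        # rough H100 purchase price (assumed)

    energy_cost = power_kw * hours * price_per_kwh
    print(energy_cost)                 # ~$2,450 per year
    print(energy_cost / gpu_price)     # ~0.08, i.e. roughly 10%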

replies(1): >>42179046 #
25. Wytwwww ◴[] No.42178483[source]
> Apple should make a datacenter GPU

Aren't their GPUs pretty slow, though? Not even remotely close to Nvidia's consumer GPUs, with the only (significant) upside being the much higher memory capacity.

26. talldayo ◴[] No.42179042{4}[source]
Software developers using MacBooks doesn't mean Apple has solved the ML problem. The past 10 years of macOS removing features has somewhat proved that software developers will keep using Macs even when the feature set regresses. Like how Apple used to support OpenCL as a CUDA alternative, but gave up on it altogether to focus on simpler, mobile-friendly GPU designs.

The PyTorch MPS patches are a fun appeasement for developers, but they didn't dethrone Nvidia. They didn't beat Nvidia on performance per watt, they didn't match their price, their scale or CUDA's feature set, and they don't even provide basic server drivers. It's got nothing to do with what brand you prefer and everything to do with what makes actual sense in a datacenter. Apple can't take on Nvidia clusters without copying Nvidia's current architecture; Apple Silicon's current architecture is too inefficient to be a serious replacement for Nvidia clusters.

If Apple wanted a shot at entering the cluster game, that window of opportunity closed when Apple Silicon converged on simplified GPU designs. The 2 W NPUs and compute shaders aren't going to make Nvidia scared, let alone compete with AMD's market share.

27. wbl ◴[] No.42179046{6}[source]
The problem with power usage isn't the money, it's the capacity and cooling.
replies(1): >>42181611 #
28. metadat ◴[] No.42179813[source]
How many teraflops in an exaflop? The tera is screwing me up.. Google not helping today, so many cards.
replies(1): >>42180127 #
29. shmerl ◴[] No.42179870[source]
There is ZLUDA to break the lock-in for those who are stuck with it. The rest will use something else.
30. aaronblohowiak ◴[] No.42180127{3}[source]
https://en.m.wikipedia.org/wiki/Metric_prefix
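Or, just counting prefixes (each step is a factor of 1,000):

    tera, peta, exa = 1e12, 1e15, 1e18
    print(exa / tera)   # 1,000,000 teraflops per exaflop
    print(exa / peta)   # 1,000 petaflops per exaflop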
31. jillesvangurp ◴[] No.42180214[source]
Software will bridge the gap. There are simply too many competing platforms out there that are not Nvidia-based. Most decent AI libraries and frameworks already need to support more than just Nvidia. There's a reason Macs are popular with AI researchers: many of these platforms support Apple's chips already and they perform pretty well. Anything that doesn't support those chips is a problem waiting to be fixed, with plenty of people working on fixing it. If it can be fixed for Apple's chips, it can also be fixed for other people's chips.

And of course there is some serious amount of money sloshing around in this space. Things being hard doesn't mean it's impossible. And there's no shortage of extremely well funded companies working on this stuff. All your favorite trillion $ companies basically. And most of them have their own AI chips too. And probably some reservations about perpetually handing a lot of their cash to Nvidia.

If you want an example of a company that used to have a gigantic moat that is now dealing with a lot of competition, look at Intel. X86 used to be that moat. And that's looking pretty weak lately. One reason that AMD is in the news a lot lately is that they are growing at Intel's expense. Nvidia might be their next target.

32. Wytwwww ◴[] No.42181611{7}[source]
Yes, I know that. Hence I quadrupled the price of electricity. Or are you saying that the cost of capacity and cooling doesn't scale directly with power usage?

We could increase that another 2x and the cost would still be relatively low compared to the price/depreciation of the GPU itself.

33. pjmlp ◴[] No.42182561[source]
Only if the only thing one cares about is CUDA C++, and not CUDA C, CUDA Fortran, CUDA Anything PTX, plus libraries, IDE integration, and GPU graphical debugging.
replies(1): >>42186939 #
34. pjmlp ◴[] No.42182571[source]
First they need to support everything that CUDA is capable of across its programming language portfolio, tooling, and libraries.
replies(1): >>42183003 #
35. pjmlp ◴[] No.42182611[source]
CUDA is much more than that, and overlooking that is exactly why Nvidia keeps winning.
replies(1): >>42184234 #
36. bryanlarsen ◴[] No.42183003{3}[source]
A typical LLM might use about 0.1% of CUDA. That's all that would have to be ported to get that LLM to work.
replies(1): >>42183651 #
37. pjmlp ◴[] No.42183651{4}[source]
Which misses the point of why CUDA has won.

Then again, maybe the goal is getting 0.1% of CUDA market share. /s

replies(2): >>42184109 #>>42184220 #
38. its_down_again ◴[] No.42184109{5}[source]
In the words of Gilfoyle: I'll bite. Why has CUDA won?
replies(1): >>42184726 #
39. imtringued ◴[] No.42184220{5}[source]
Nvidia has won because their compute drivers don't crash people's systems when they run e.g. Vulkan Compute.

You are mostly listing irrelevant, nice-to-have things that aren't deal-breakers. AMD's consumer GPUs have a long history of being abandoned a year or two after release.

replies(1): >>42184735 #
40. imtringued ◴[] No.42184234{3}[source]
Again, I have AMD hardware and can't use it.
replies(1): >>42184701 #
41. pjmlp ◴[] No.42184701{4}[source]
AMD is to blame for where they stand.
42. pjmlp ◴[] No.42184726{6}[source]
CUDA C++, CUDA Fortran, CUDA Anything PTX, plus libraries, IDE integration, GPU graphical debugging.

Coupled with Khronos, Intel, and AMD never delivering anything comparable with OpenCL, Apple losing interest after Khronos didn't take OpenCL in the direction they wanted, and Google never adopting it, favouring their RenderScript dialect instead.

43. pjmlp ◴[] No.42184735{6}[source]
CUDA C++, CUDA Fortran, CUDA Anything PTX, plus libraries, IDE integration, and GPU graphical debugging aren't just nice-to-have things.
44. vlovich123 ◴[] No.42186939{3}[source]
CUDA C works fine with HIP; not sure what you're referring to. As for the other pieces: GPU graphical debugging isn't relevant for CUDA, and I don't know what IDE integration is special or relevant for CUDA, but AMD does have a ROCm debugger, which I would imagine is sufficient for simultaneous debugging of CPU & GPU. You won't get developer tools like Nsight Systems, but I'm pretty sure AMD has equivalent tooling.

As for Fortran, that doesn't come up much in modern AI stuff. I haven't observed PTX / GCN assembly within AI codebases, but maybe you have extra insight there.