
195 points by rbanffy | 3 comments
pie420:
Layperson with no industry knowledge here, but it seems like Nvidia's CUDA moat will fall in the next 2-5 years. It seems impossible to sustain those margins without competition coming in and taking a decent slice of the pie.
metadat:
But how will AMD or anyone else push in? CUDA is effectively a whole software layer on top of the hardware and isn't easily replicated; Nvidia has been at it for 17 years.

You are right, eventually something's gotta give. The path for this next leg isn't yet apparent to me.

P.S. How much is an exaflop or a petaflop, and how significant is it? The numbers thrown around in this article don't mean anything to me. Is this new cluster way more powerful than the previous top system?
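
For scale, the standard definitions of those units (a brief aside; the article's own comparison figures would be needed to rank the clusters):

    1 petaflop/s = 10^15 floating-point operations per second
    1 exaflop/s  = 10^18 FLOP/s = 1,000 petaflop/s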

sangnoir:
CUDA is the assembly language to Torch's high-level language: for most users it's a very good intermediary, but an intermediary nonetheless, sitting between the code they actually care about and the hardware that runs it.

Most customers care about cost-effectiveness more than best-in-class raw performance, a fact that AMD has ruthlessly exploited over the past 8 years. It helps that AMD products are occasionally both cost-effective and best-in-class.
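
To make the "intermediary" point concrete, a minimal sketch (assuming a stock PyTorch install; the tensor sizes are arbitrary): the same framework-level code runs on an NVIDIA GPU through CUDA or on an AMD GPU through a ROCm build of PyTorch, without the user writing any CUDA themselves.

    # Framework users write code at this level, not CUDA kernels.
    # On NVIDIA hardware this dispatches through CUDA/cuBLAS; a ROCm build
    # of PyTorch exposes the same torch.cuda API backed by HIP/rocBLAS.
    import torch

    device = "cuda" if torch.cuda.is_available() else "cpu"
    x = torch.randn(1024, 1024, device=device)
    w = torch.randn(1024, 1024, device=device)
    y = torch.relu(x @ w)  # matmul + activation, vendor libraries underneath
    print(y.shape, device)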

pjmlp:
CUDA is much more than that, and missing that is exactly why Nvidia keeps winning.
imtringued:
Again, I have AMD hardware and can't use it.
pjmlp:
AMD is to blame for where they stand.