
1045 points | mfiguiere
btown ◴[] No.39345221[source]
Why would this not be AMD’s top priority among priorities? Someone recently likened the situation to an Iron Age where NVIDIA owns all the iron. And this sounds like AMD knowing about a new source of ore and not even being willing to sink a single engineer’s salary into exploration.

My only guess is that they have a parallel skunkworks working on the same thing, but in a way they can keep closed source: that this was a hedge they think they no longer need, and that they are missing the forest for the trees on the benefits of cross-pollination and an open-source ethos to their business.

replies(14): >>39345241 #>>39345302 #>>39345393 #>>39345400 #>>39345458 #>>39345853 #>>39345857 #>>39345893 #>>39346210 #>>39346792 #>>39346857 #>>39347433 #>>39347900 #>>39347927 #
hjabird ◴[] No.39345853[source]
The problem with effectively supporting CUDA is that it encourages CUDA adoption all the more strongly. Meanwhile, AMD will always be playing catch-up, forever having to patch issues, work around Nvidia/AMD differences, and accept the performance penalty that comes from having code optimised for another vendor's hardware. AMD needs to encourage developers to use their own ecosystem or an open standard.
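
For concreteness, AMD's "own ecosystem" here is ROCm/HIP, which mirrors the CUDA runtime API almost call-for-call. A minimal, hand-written sketch (not from any real project) of what that code looks like:

    #include <hip/hip_runtime.h>
    #include <cstdio>
    #include <vector>

    // Same __global__/threadIdx programming model as CUDA, just hip* runtime calls.
    __global__ void saxpy(int n, float a, const float* x, float* y) {
      int i = blockIdx.x * blockDim.x + threadIdx.x;
      if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
      const int n = 1 << 20;
      std::vector<float> hx(n, 1.0f), hy(n, 2.0f);
      float *dx = nullptr, *dy = nullptr;
      hipMalloc((void**)&dx, n * sizeof(float));
      hipMalloc((void**)&dy, n * sizeof(float));
      hipMemcpy(dx, hx.data(), n * sizeof(float), hipMemcpyHostToDevice);
      hipMemcpy(dy, hy.data(), n * sizeof(float), hipMemcpyHostToDevice);

      // hipcc accepts the CUDA-style <<<grid, block>>> launch syntax.
      saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);

      hipMemcpy(hy.data(), dy, n * sizeof(float), hipMemcpyDeviceToHost);
      printf("y[0] = %f\n", hy[0]);  // expect 4.0
      hipFree(dx);
      hipFree(dy);
      return 0;
    }

Swap the hip prefixes for cuda and it builds with nvcc, which is exactly the double-edged sword: the "alternative" is still shaped entirely by CUDA's design.
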
replies(13): >>39345944 #>>39346147 #>>39346166 #>>39346182 #>>39346270 #>>39346295 #>>39346339 #>>39346835 #>>39346941 #>>39346971 #>>39347964 #>>39348398 #>>39351785 #
bachmeier ◴[] No.39346147[source]
> The problem with effectively supporting CUDA is that it encourages CUDA adoption all the more strongly.

I'm curious about this. Sure, some CUDA code has already been written. If something new comes along that provides better performance per dollar spent, why continue writing CUDA for new projects? I don't think the argument that "this is what we know how to write" works in this case. These aren't scripts you want someone to knock out quickly.

replies(2): >>39346290 #>>39346821 #
Uehreka ◴[] No.39346290[source]
> If something new comes along that provides better performance per dollar spent

They won’t be able to do that; their hardware isn’t fast enough.

Nvidia is beating them at hardware performance, AND ALSO has an exclusive SDK (CUDA) that is used by almost all deep learning projects. If AMD can get their cards to run CUDA via ROCm, then they can begin to compete with Nvidia on price (though not performance). Then, and only then, if they can start actually producing cards with equivalent performance (also a big stretch), they can try for an Embrace, Extend, Extinguish play against CUDA.
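
To make "run CUDA via ROCm" concrete, the bar for a drop-in layer is accepting ordinary, unmodified CUDA code (runtime API, driver API, and libraries like cuBLAS/cuDNN). An illustrative sketch, written just for this comment, of the kind of stock calls such a layer has to honor on AMD hardware:

    #include <cuda_runtime.h>
    #include <cstdio>

    int main() {
      // Nothing AMD-aware here; a CUDA-on-ROCm layer has to make these
      // unmodified runtime calls work against a non-Nvidia device.
      int count = 0;
      cudaGetDeviceCount(&count);
      cudaDeviceProp prop{};
      cudaGetDeviceProperties(&prop, 0);
      printf("%d device(s), device 0: %s\n", count, prop.name);

      float* buf = nullptr;
      cudaMalloc((void**)&buf, 1 << 20);
      cudaMemset(buf, 0, 1 << 20);
      cudaDeviceSynchronize();
      cudaFree(buf);
      return 0;
    }

And the host API is the easy half; the kernels themselves (shipped as PTX or vendor binaries) are where the real work, and the performance penalty, live.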

replies(1): >>39346889 #
bachmeier ◴[] No.39346889{3}[source]
> They won’t be able to do that; their hardware isn’t fast enough.

Well, then I guess CUDA is not really the problem, so being able to run CUDA on AMD hardware wouldn't solve anything.

> try for an Embrace Extend Extinguish play against CUDA

They wouldn't need to go that route. They just need a way to run existing CUDA code on AMD hardware. Once that happens, their customers have the option to save money by writing ROCm or whatever AMD is working on at that time.

replies(4): >>39347039 #>>39347129 #>>39349668 #>>39356235 #
Qwertious ◴[] No.39356235{4}[source]
> so being able to run CUDA on AMD hardware wouldn't solve anything.

It limits Nvidia's profit margin: if Nvidia cards run twice as fast but cost more than twice as much, then people will just buy two AMD cards. Meanwhile, it gives AMD some revenue with which to fund an improved CUDA stack.
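
Made-up numbers just to spell out the perf-per-dollar arithmetic (real prices and throughputs vary wildly by workload):

    #include <cstdio>

    int main() {
      // Hypothetical figures for illustration only.
      const double nvidia_perf = 2.0, nvidia_price = 2500.0;  // twice as fast, >2x the price
      const double amd_perf    = 1.0, amd_price    = 1000.0;

      printf("NVIDIA: %.5f perf/$\n", nvidia_perf / nvidia_price);  // 0.00080
      printf("AMD:    %.5f perf/$\n", amd_perf / amd_price);        // 0.00100
      printf("2x AMD: %.1fx perf for $%.0f vs %.1fx for $%.0f\n",
             2 * amd_perf, 2 * amd_price, nvidia_perf, nvidia_price);
      return 0;
    }

(Assuming the workload actually scales across two cards, which is its own can of worms.)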

> their customers have the option to save money by writing ROCm

CUDA saves money by having a fuckton of pre-written CUDA code and by being supported as the default basically everywhere.