1045 points mfiguiere | 2 comments
btown ◴[] No.39345221[source]
Why would this not be AMD’s top priority among priorities? Someone recently likened the situation to an Iron Age where NVIDIA owns all the iron. And this sounds like AMD knowing about a new source of ore and not even being willing to sink a single engineer’s salary into exploration.

My only guess is they have a parallel skunkworks working on the same thing, but in a way that they can keep it closed-source - that this was a hedge they think they no longer need, and they are missing the forest for the trees on the benefits of cross-pollination and open source ethos to their business.

fariszr ◴[] No.39345241[source]
According to the article, AMD seems to have pulled the plug on this because they think it will hinder ROCm v6 adoption - which, by the way, still only supports two consumer cards out of their entire lineup.[1]

1. https://www.phoronix.com/news/AMD-ROCm-6.0-Released

kkielhofner ◴[] No.39345558[source]
With the most recent of those cards being their one-year-old flagship ($1k) consumer GPU...

Meanwhile, CUDA supports anything with Nvidia stamped on it before it's even released. They'll even add support for new GPU/compute families to older CUDA versions (see Hopper/Ada support in CUDA 11.8).
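To make the Hopper/Ada point concrete, each GPU architecture has a minimum CUDA toolkit version that recognizes it, and 11.8 picked up both new families despite being an older release line. A rough sketch of that mapping (the version table is my own illustrative summary, not pulled from the comment or from official NVIDIA docs):

```python
# Rough sketch: first CUDA toolkit release that recognizes each
# compute-capability family. Illustrative summary, not an official
# NVIDIA reference table.
MIN_CUDA_FOR_ARCH = {
    "sm_70": (9, 0),    # Volta
    "sm_75": (10, 0),   # Turing
    "sm_80": (11, 0),   # Ampere
    "sm_89": (11, 8),   # Ada Lovelace (added to the existing 11.x line)
    "sm_90": (11, 8),   # Hopper (likewise supported from 11.8)
}

def toolkit_supports(arch: str, toolkit: tuple[int, int]) -> bool:
    """Return True if the given CUDA toolkit version knows this arch."""
    required = MIN_CUDA_FOR_ARCH.get(arch)
    return required is not None and toolkit >= required

# CUDA 11.8 recognizes both Ada and Hopper; 11.7 predates them.
assert toolkit_supports("sm_90", (11, 8))
assert not toolkit_supports("sm_90", (11, 7))
```

The tuple comparison works because Python compares version tuples element-wise, so `(11, 8) >= (11, 0)` holds while `(11, 7) >= (11, 8)` does not.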

You can go out and buy any Nvidia GPU the day of release, take it home, plug it in, and everything just works. This is what people expect.

AMD seems to have no clue that this level of usability is what it will take to actually compete with Nvidia, and it's a real shame: their hardware is great.

Certhas ◴[] No.39346438[source]
The most recent "card" is their MI300 line.

It's annoying as hell to you and me that they are not catering to the market of people who want to run stuff on their gaming cards.

But it's not clear it's bad strategy to focus on executing in the high-end first. They have been very successful landing MI300s in the HPC space...

Edit: I just looked it up: 25% of the GPU Compute in the current Top500 Supercomputers is AMD

https://www.top500.org/statistics/list/

Even though the list has plenty of V100s and A100s, which came out (much) earlier. I don't have the data at hand, but I wouldn't be surprised if AMD got more of the Top500 new installations than nVidia in the last two years.

kkielhofner ◴[] No.39351254[source]
Indeed, but this is extremely short-sighted.

You don't win an overall market by focusing on several-hundred-million-dollar bespoke HPC builds, where the platform (frankly) doesn't matter at all. I'm working on a project on an AMD platform on the list (won't say which - for now), and needless to say you build to whatever is there, regardless of what it takes; the operators/owners and vendor support teams pour in whatever resources are necessary to make it work.

You win a market a generation at a time - by supporting low-end cards for tinkerers, the educational market, etc. AMD should focus on the low end, because that's where the next generation of AI devs, startups, and innovation is coming from, and for now that's going to continue to be CUDA/Nvidia.

Certhas ◴[] No.39356010[source]
Those people didn't build the CUDA ecosystem; nVidia, Google, and Facebook did. I think your hypothesis is pretty self-serving.

nVidia is dominant now. The question is: what's your wedge?