
183 points spacebanana7 | 6 comments

I appreciate that developing ROCm into something competitive with CUDA would require a lot of work, both internally at AMD and through external contributions to the relevant open source libraries.

However, the amount of resources at stake is incredible. The gap between NVIDIA's market value and AMD's is bigger than the annual GDP of Spain. Even if AMD needed to hire a few thousand engineers at a few million dollars in compensation each, it'd still be a good investment.

whywhywhywhy No.43547335
The answer is in the question: if they'd had the foresight to do such a thing, the tech would already be here. Instead, they thought one-dimensionally about their product, were part of the group that fumbled OpenCL, and are now a decade behind, playing catch-up.
replies(2): >>43547379 #>>43547882 #
1. bluGill No.43547379
A good team can catch up significantly in two years. They will still be behind, but if their cards are cheaper (or simply in stock when the competition's aren't), that would still go a long way.
replies(3): >>43547589 #>>43547724 #>>43547858 #
2. whatever1 No.43547589
Even with the trashy API and drivers, I think that if they released graphics cards with 4x the memory of the NVIDIA equivalents, the community would put in the effort to make them work.
replies(2): >>43547800 #>>43549422 #
3. jsight No.43547724
I think it is really hard to be cheaper in the ways that actually matter. Performance per watt matters a lot here, and NVIDIA is excellent at it. It doesn't seem like anyone else will be able to compete within at least the next couple of years.
4. JohnBooty No.43547800
Yeah. Easier said than done, I know, but they need to not just catch up to NVIDIA but leapfrog them somehow.

I would have said that releasing cards with 32GB+ of onboard RAM, or better yet 128GB, would have gotten things moving. They'd be able to run and train models that NVIDIA's consumer cards couldn't.

But I think NVIDIA closed that gap with their "Project Digits" (or whatever the final name is) PCs.

5. bryanlarsen No.43547858
"good group" is carrying a lot of weight here. You can't buy that. You can buy good small groups, but AMD needs a good large group, and that can't be bought.
6. DrNosferatu No.43549422
This: ship cards with larger VRAM pools than the competition, giving a real edge in LLM inference, and the users will come.

This happened with Bitcoin.
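The VRAM argument above can be sketched with back-of-envelope arithmetic: for inference, the model weights alone set a floor on memory. A minimal sketch (illustrative only; it ignores KV cache, activations, and framework overhead, and the 70B figure is just an example size):

```python
# Rough floor on VRAM needed to hold an LLM's weights for inference.
# Assumption: weights dominate; KV cache and activations are ignored.
def vram_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate GiB needed just for the weights."""
    return params_billions * 1e9 * bytes_per_param / 2**30

# A hypothetical 70B-parameter model at common precisions:
for label, bpp in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    print(f"{label}: ~{vram_gb(70, bpp):.0f} GiB")
# → fp16: ~130 GiB, int8: ~65 GiB, int4: ~33 GiB
```

Even aggressively quantized, such a model doesn't fit in the 24GB typical of today's consumer cards, which is why a 128GB card would change what the community can run locally.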