
1045 points mfiguiere | 4 comments
lambdaone ◴[] No.39345937[source]
It seems to me that AMD are crazy to stop funding this. CUDA-on-ROCm breaks NVIDIA's moat, and would also act as a disincentive for NVIDIA to make breaking changes to CUDA; what more could AMD want?

When you're #1, you can go all-in on your own proprietary stack, knowing that network effects will drive your market share higher and higher for you for free.

When you're #2, you need to follow de-facto standards and work on creating and following truly open ones, and try to compete on actual value, rather than rent-seeking. AMD of all companies should know this.

replies(3): >>39346130 #>>39346284 #>>39351222 #
1. saboot ◴[] No.39346284[source]
Yep, I develop several applications that use CUDA. I see AMD/Radeon powered computers for sale and want to buy one, but I am not going to risk not being able to run those applications or having to rewrite them.

If they want me as a customer, then until they create a viable alternative to CUDA, they need to pursue this.

replies(1): >>39351051 #
2. weebull ◴[] No.39351051[source]
Define "viable"?
replies(1): >>39353549 #
3. croutons ◴[] No.39353549[source]
A backend that runs PyTorch out of the box and is as easy to set up and use as the NVIDIA stack.
replies(1): >>39375285 #
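For what "out of the box" means in practice: PyTorch's ROCm builds expose the HIP backend through the familiar `torch.cuda` namespace, so unmodified CUDA-style code can often run on AMD hardware. A minimal sketch of a backend smoke test (the `check_backend` helper is hypothetical; the `torch.version.hip` attribute is how ROCm builds identify themselves):

```python
def check_backend():
    """Report which accelerator backend, if any, this PyTorch build sees."""
    try:
        import torch
    except ImportError:
        return "no torch installed"
    # On ROCm builds, torch.version.hip is a version string (and the
    # torch.cuda API is backed by HIP); on CUDA builds it is None.
    if getattr(torch.version, "hip", None):
        return "rocm"
    if torch.cuda.is_available():
        return "cuda"
    return "cpu"
```

If this returns "rocm", the same `tensor.to("cuda")` calls in existing code are dispatched to the AMD GPU.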
4. weebull ◴[] No.39375285{3}[source]
Installing PyTorch for AMD by following the instructions on the PyTorch website was pretty painless for me on Linux. I know everybody's experience is different, but installation wasn't the issue for me.
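For reference, the install amounts to one pip command pointed at the ROCm wheel index; the ROCm version in the URL changes between releases, so check the selector on pytorch.org for the current one:

```shell
# Install the ROCm build of PyTorch (rocm6.0 here is an example;
# use the index URL the pytorch.org selector currently shows).
pip3 install torch torchvision torchaudio \
    --index-url https://download.pytorch.org/whl/rocm6.0
```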

For me, the issue on AMD was stability when VRAM was getting tight.
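One knob worth knowing about for the tight-VRAM case: PyTorch's caching allocator can be tuned via an environment variable, and its documentation lists `PYTORCH_HIP_ALLOC_CONF` as the ROCm analog of `PYTORCH_CUDA_ALLOC_CONF` (verify against your PyTorch version). A sketch, with the `free_cached_vram` helper being hypothetical:

```python
import os

# Ask the caching allocator to start reclaiming unused blocks once
# ~80% of VRAM is in use, instead of only failing at allocation time.
os.environ.setdefault("PYTORCH_HIP_ALLOC_CONF",
                      "garbage_collection_threshold:0.8")

def free_cached_vram():
    """Hand cached-but-unused GPU memory back to the driver, if possible."""
    try:
        import torch
    except ImportError:
        return False
    if torch.cuda.is_available():  # also true on ROCm builds
        torch.cuda.empty_cache()
        return True
    return False
```

The env var must be set before the first CUDA/HIP allocation, so it is usually exported in the shell rather than set from inside the script.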