
1045 points mfiguiere | 2 comments
Keyframe
This release is, however, the result of AMD ending its funding: "After two years of development and some deliberation, AMD decided that there is no business case for running CUDA applications on AMD GPUs. One of the terms of my contract with AMD was that if AMD did not find it fit for further development, I could release it. Which brings us to today." (from https://github.com/vosen/ZLUDA?tab=readme-ov-file#faq)

So, the same mistake Intel made before.

tgsovlerkhgsel
How is this not priority #1 for them, with NVIDIA stock shooting to the moon because everyone does machine learning using CUDA-centric tools?

If AMD could get 90% of the CUDA ML stuff to seamlessly run on AMD hardware, and could provide hardware at a competitive cost-per-performance (which I assume they probably could since NVIDIA must have an insane profit margin on their GPUs), wouldn't that be the opportunity to eat NVIDIA's lunch?
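
For a concrete sense of what "seamless" already looks like: ROCm builds of PyTorch reuse the CUDA API surface through HIP, so unmodified CUDA-targeting code runs on supported AMD GPUs. A minimal sketch, assuming a ROCm-enabled PyTorch install:

    import torch

    # On a ROCm build of PyTorch, torch.cuda is backed by HIP, so code
    # written against the CUDA device API runs unchanged on AMD GPUs.
    print(torch.version.hip)          # ROCm/HIP version string; None on CUDA builds
    print(torch.cuda.is_available())  # True on a supported AMD GPU

    x = torch.randn(1024, 1024, device="cuda")  # "cuda" maps to the AMD GPU here
    y = x @ x.T
    print(y.device)                   # cuda:0

The gap is less the API surface than kernel coverage and performance, which is where the point below about library kernels comes in.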

make3
It's a common misconception that deep learning stacks are built in CUDA. They're actually built on cuDNN kernels, which don't use CUDA but are hand-tuned GPU assembly written by PhDs. I'm really not convinced this project could be used for that. The ROCm kernels analogous to cuDNN (i.e. MIOpen), though, yes.
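
To make that layering concrete, a minimal PyTorch sketch, assuming a CUDA build with cuDNN available: the user writes no CUDA at all, and the convolution dispatches to a precompiled, hand-tuned cuDNN kernel.

    import torch

    # No CUDA source anywhere in user code: the framework routes this
    # convolution to a precompiled cuDNN kernel.
    assert torch.backends.cudnn.is_available()

    conv = torch.nn.Conv2d(3, 64, kernel_size=3).cuda()
    x = torch.randn(1, 3, 224, 224, device="cuda")
    y = conv(x)     # dispatched to a hand-tuned cuDNN kernel
    print(y.shape)  # torch.Size([1, 64, 222, 222])

So a translation layer for CUDA itself only gets you part of the way; the library kernels have to come from somewhere too.
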
abbra
This project relies on ROCm for all its cuDNN magic.