    1045 points mfiguiere | 17 comments

    Keyframe ◴[] No.39347045[source]
    This release, however, came about because AMD stopped funding it, per "After two years of development and some deliberation, AMD decided that there is no business case for running CUDA applications on AMD GPUs. One of the terms of my contract with AMD was that if AMD did not find it fit for further development, I could release it. Which brings us to today." from https://github.com/vosen/ZLUDA?tab=readme-ov-file#faq

    So, the same mistake Intel made before.

    replies(8): >>39348941 #>>39349405 #>>39349842 #>>39350224 #>>39351024 #>>39351568 #>>39352021 #>>39360332 #
    1. tgsovlerkhgsel ◴[] No.39351568[source]
    How is this not priority #1 for them, with NVIDIA stock shooting to the moon because everyone does machine learning using CUDA-centric tools?

    If AMD could get 90% of the CUDA ML stuff to seamlessly run on AMD hardware, and could provide hardware at a competitive cost-per-performance (which I assume they probably could since NVIDIA must have an insane profit margin on their GPUs), wouldn't that be the opportunity to eat NVIDIA's lunch?

    replies(6): >>39351635 #>>39351685 #>>39352131 #>>39353521 #>>39354718 #>>39360463 #
    2. make3 ◴[] No.39351635[source]
    It's a common misconception that deep learning stuff is built in CUDA. It's actually built on cuDNN kernels, which don't use CUDA but are GPU assembly written by hand by PhDs. I'm really not convinced that this project could be used for that. The ROCm kernels that are the analogue of cuDNN, though, yes.
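    A rough illustration of what I mean (untested sketch, assuming PyTorch): user code never writes CUDA, it just calls the framework, and the framework dispatches to cuDNN on an Nvidia build or to MIOpen on a ROCm build.

        import torch

        # Which vendor stack this build was compiled against (one of these is None):
        print(torch.version.cuda)   # CUDA toolkit version on an Nvidia build
        print(torch.version.hip)    # HIP/ROCm version on an AMD build

        # "cuda" is also the device name on ROCm builds of PyTorch.
        x = torch.randn(1, 3, 224, 224, device="cuda")
        w = torch.randn(8, 3, 3, 3, device="cuda")

        # This call is lowered to a hand-tuned cuDNN or MIOpen convolution kernel;
        # nobody doing deep learning writes that part themselves.
        y = torch.nn.functional.conv2d(x, w)
        print(y.shape)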
    replies(1): >>39354982 #
    3. pheatherlite ◴[] No.39351685[source]
    The only reason our lab bought 20k worth of Nvidia GPU cards rather than AMD was CUDA being the industry standard (or might as well be). It's kind of mind-boggling how much business AMD must be losing over this.
    replies(3): >>39351920 #>>39352937 #>>39354728 #
    4. Rafuino ◴[] No.39351920[source]
    So, your lab bought ~1 GPU?
    replies(3): >>39352194 #>>39352979 #>>39353215 #
    5. llm_trw ◴[] No.39352131[source]
    Never underestimate AMD's ability to fail.

    Ryzen was a surprise to everyone not because it was good, but because they didn't fuck it up within two generations.

    AMD cards have more raw compute than Nvidia's and on paper they are better, yet the software is so bad that I gave up on using it and switched to Nvidia. Two weeks of debugging driver errors vs. 30 minutes of automated updates.

    replies(1): >>39353165 #
    6. polygamous_bat ◴[] No.39352194{3}[source]
    Hey, stop shaming the GPU-poor; not everyone is Mark Zuckerberg ordering $8bn of GPUs.
    7. Modified3019 ◴[] No.39352937[source]
    That was a good decision. The number of lamenting engineers I’ve seen over the years who’ve been given the task of getting the more affordable AMD cards to work with enterprise functionality is nontrivial. AMD’s silence nearly borders on hostility; even if you want to throw millions at them, it’s insane.

    At least Nvidia, which I fucking hate, will happily hold out their hand for cash even from individuals.

    So now we’re in a hilarious situation where people from hobbyists to enterprise devs are hoping for Intel to save the day.

    8. paulmd ◴[] No.39352979{3}[source]
    or a rack of 3090s/4090s or Quadros

    (the "no datacenter" clause obviously excludes workstations, and the terms of this license cannot be applied to the open kernel driver since it's GPL'd)

    9. tormeh ◴[] No.39353165[source]
    It's rather shocking that with RADV, Valve has (mostly) written a better RDNA2 driver than AMD has managed for their own cards. Besides the embarrassment, AMD is leaving tons of performance, and therefore market share, on the table. You have to wonder wtf is going on over at AMD.
    replies(1): >>39353346 #
    10. exikyut ◴[] No.39353215{3}[source]
    Hey, I should go play with those workstation/server configurators now that they'll have been updated to supply A100Xs and such...
    11. dralley ◴[] No.39353346{3}[source]
    RADV was started by David Airlie of Red Hat, although Valve has been dedicating some very significant resources to it over the past few years.
    12. HarHarVeryFunny ◴[] No.39353521[source]
    IMO the trouble is that CUDA is too low level to allow emulation without a major loss of performance, and even if there was a choice of CUDA-compatible vendors, people are ultimately going to vote with their wallets. It's not enough to be compatible - you need to be compatible while providing the same or better performance (else why not just use NVIDIA).

    A better level to target compatibility would be at the framework level such as PyTorch, where the building blocks of neural networks (convolution, multi-head attention, etc, etc) are high level and abstract enough to allow flexibility in mapping them onto AMD hardware without compromising performance.
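    For example (just a sketch, assuming a working PyTorch build for whichever GPU is present), the model only ever sees high-level blocks, so which vendor's kernels run underneath is entirely the backend's problem:

        import torch

        # ROCm builds of PyTorch also expose the "cuda" device name, so the same
        # script can target Nvidia or AMD hardware unchanged.
        device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

        attn = torch.nn.MultiheadAttention(embed_dim=256, num_heads=8, batch_first=True).to(device)
        x = torch.randn(4, 128, 256, device=device)    # (batch, sequence, embedding)
        out, _ = attn(x, x, x)                          # mapped to whatever kernels the backend provides
        print(out.shape)                                # torch.Size([4, 128, 256])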

    However, these frameworks are forever changing, and playing continual catch-up there still wouldn't be a great place to be, especially without a large staff dedicated to the effort (writing hand-optimized kernels), which AMD doesn't seem able or willing to muster.

    So, finally, perhaps the strategically best place for AMD to invest would be in compilers and software tools to allow kernels to be written in a high level language. Becoming a first class Mojo target wouldn't be a bad place to start, assuming they are not already in partnership.
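    For what it's worth, Triton is roughly this idea already: kernels written in (a subset of) Python that the compiler lowers to the target GPU, and it has grown an AMD backend. A rough, untested sketch of what such a kernel looks like:

        import torch
        import triton
        import triton.language as tl

        @triton.jit
        def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
            # each program instance handles one BLOCK_SIZE-wide slice of the vectors
            pid = tl.program_id(axis=0)
            offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
            mask = offsets < n_elements
            x = tl.load(x_ptr + offsets, mask=mask)
            y = tl.load(y_ptr + offsets, mask=mask)
            tl.store(out_ptr + offsets, x + y, mask=mask)

        x = torch.randn(4096, device="cuda")
        y = torch.randn(4096, device="cuda")
        out = torch.empty_like(x)
        grid = (triton.cdiv(x.numel(), 1024),)
        add_kernel[grid](x, y, out, x.numel(), BLOCK_SIZE=1024)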

    replies(1): >>39355582 #
    13. test6554 ◴[] No.39354718[source]
    Nvidia controls CUDA, the software spec; Nvidia also controls the hardware CUDA runs on. The industry adopts CUDA standards and uses the latest features.

    AMD cannot keep up with arbitrarily changing hardware and software while trying to please developers that want what was just released. They would always be a generation behind at tremendous expense.

    14. up2isomorphism ◴[] No.39354728[source]
    Your “lab” does not sound like a lab in the classical sense.
    15. abbra ◴[] No.39354982[source]
    This project relies on ROCm for all its cuDNN magic.
    16. hnfong ◴[] No.39355582[source]
    > However, these frameworks are forever changing, and playing continual catch-up there still wouldn't be a great place to be, especially without a large staff dedicated to the effort (writing hand-optimized kernels), which AMD doesn't seem able or willing to muster.

    The situation in reality is actually quite bad.

    Given that I have an M2 Max and no Nvidia cards, I've tried enough PyTorch-based ML libraries that at some point I basically expect them to flat out show an error saying CUDA 10.x+ is required once the dependencies are installed (e.g. the bitsandbytes library -- in fairness, there's apparently some effort to port that code to other platforms as well).
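    Roughly, the check I end up doing before trying any of these libraries looks like this (a plain-PyTorch sketch; the failures come from libraries whose custom kernels hard-require CUDA, not from PyTorch itself):

        import torch

        # Plain PyTorch is generally fine on Apple silicon via the MPS backend.
        if torch.cuda.is_available():
            device = torch.device("cuda")
        elif torch.backends.mps.is_available():
            device = torch.device("mps")
        else:
            device = torch.device("cpu")
        print(device)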

    As of today, the whole field is moving so fast that it's simply not worth it for a solo dev or even a small team to attempt getting a non-CUDA stack up and running, especially with the other major GPU vendors not hiring (or not able to hire?) people to port the hand-optimized CUDA kernels.

    Hopefully the situation will change after these couple of years of frenzy, but for the time being I don't see any viable way to avoid using a CUDA stack if one is serious about getting ML stuff done.

    17. ◴[] No.39360463[source]