Most active commenters
  • pjmlp(5)
  • coffeeaddict1(5)
  • keldaris(4)
  • fragmede(3)


Rust CUDA Project

(github.com)
146 points sksxihve | 19 comments
shmerl ◴[] No.43656833[source]
Looks like a dead end. Why CUDA? There should be some way to use Rust for GPU programming in a general fashion, without being tied to Nvidia.
replies(5): >>43656967 #>>43657008 #>>43657034 #>>43658709 #>>43659892 #
1. pjmlp ◴[] No.43657034[source]
Because others have so far failed to deliver anything worthwhile with the same tooling ecosystem as CUDA.
replies(3): >>43657851 #>>43658002 #>>43658007 #
2. coffeeaddict1 ◴[] No.43657851[source]
While I agree that CUDA is the best-in-class API for GPU programming, OpenCL, Vulkan compute shaders and SYCL are usable alternatives. For example, I'm using compute shaders to write GPGPU algorithms that work on Mac, AMD, Intel and Nvidia. It works OK. The debugging experience and ecosystem suck compared to CUDA, but being able to run the algorithms across platforms is a huge advantage over CUDA.
replies(3): >>43658021 #>>43658035 #>>43658602 #
3. ◴[] No.43658002[source]
4. shmerl ◴[] No.43658007[source]
To deliver, you need to make Rust target the GPU in a general way, via some IR, and then maybe compile that into GPU machine code for each GPU architecture specifically.

So this project is a dead end, because they themselves are those "others": they are the ones developing it, and they are doing it wrong.
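For what it's worth, a version of the IR pipeline described above already half-exists: nightly Rust can emit PTX directly via the nvptx64-nvidia-cuda target (PTX being Nvidia-only, a vendor-neutral variant of the same idea would emit SPIR-V, as the rust-gpu project does). A minimal, hypothetical sketch of such a kernel crate, assuming nightly Rust and the unstable abi_ptx feature:

```rust
// kernel.rs - hypothetical sketch; built with something like:
//   rustc --target nvptx64-nvidia-cuda -O --emit=asm kernel.rs -o kernel.ptx
// The emitted PTX plays the role of the "IR", which the driver then
// compiles into machine code for each concrete GPU architecture.
#![no_std]
#![no_main]
#![feature(abi_ptx)]

// Element-wise kernel. A real version would read the thread index from the
// nightly core::arch::nvptx intrinsics instead of taking `i` as a parameter.
#[no_mangle]
pub unsafe extern "ptx-kernel" fn add_one(data: *mut f32, i: u32) {
    *data.add(i as usize) += 1.0;
}

// A no_std crate must provide its own panic handler.
#[panic_handler]
fn panic(_: &core::panic::PanicInfo) -> ! {
    loop {}
}
```

This only compiles, of course; launching the kernel still requires a host-side runtime (the CUDA driver API or similar), which is exactly the ecosystem gap being discussed.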

replies(1): >>43658699 #
5. keldaris ◴[] No.43658021[source]
How are you writing compute shaders that work on all platforms, including Mac? Are you just writing Vulkan and relying on MoltenVK?

AFAIK, the only solution that actually works on all major platforms without additional compatibility layers today is OpenCL 1.2 - which also happens to be officially deprecated on macOS, but still works for now.

replies(2): >>43658633 #>>43658666 #
6. fragmede ◴[] No.43658035[source]
Why do you need to run across all those platforms? What's the cost-benefit of doing so?
replies(1): >>43658724 #
7. pjmlp ◴[] No.43658602[source]
No, they aren't, because they lack CUDA's polyglot support, and, as you acknowledge, the debugging experience and ecosystem suck.
8. pjmlp ◴[] No.43658633{3}[source]
And it is stuck with C99, versus C++20, Fortran, Julia, Haskell, C#, or anything else someone feels like targeting PTX with.
replies(1): >>43658760 #
9. coffeeaddict1 ◴[] No.43658666{3}[source]
Yes, MoltenVK works fine. Alternatively, you can also use WebGPU (there are native C++ and Rust libraries), which is a simpler but more limiting API.
replies(1): >>43658775 #
10. pjmlp ◴[] No.43658699[source]
Plus IDE support, Nsight-level debugging, and GPU libraries. Yes, it is most likely bound to fail unless Nvidia, as happened with other languages, sees enough business value to lend a helping hand.

They are already using Rust in Dynamo, even though the public API is Python.

11. coffeeaddict1 ◴[] No.43658724{3}[source]
Well, it really depends on the kind of work you're doing. My (non-AI) software lets users run my algorithms on whatever server-side GPU or local device they have. This is a big advantage, IMO.
replies(1): >>43659681 #
12. keldaris ◴[] No.43658760{4}[source]
Technically, OpenCL can also include inline PTX assembly in kernels (unlike any compute shader API I've ever seen), which is relevant for targeting things like tensor cores. You're absolutely right about the language limitation, though.
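A hedged sketch of what that looks like in practice. Since OpenCL kernel source is compiled by the driver at run time, the host can define a macro (NV_PTX_PATH here is a made-up name, passed via the clBuildProgram options string) only when the platform is Nvidia, keeping a portable fallback for everyone else:

```c
/* OpenCL C kernel source - a fragment handed to clBuildProgram at run time.
   NV_PTX_PATH is a hypothetical macro the host defines (e.g. by passing
   "-DNV_PTX_PATH" in the build options) only on Nvidia platforms. */
__kernel void scale(__global float *buf, float k)
{
    size_t i = get_global_id(0);
#ifdef NV_PTX_PATH
    /* Nvidia-only path: inline PTX, same syntax as CUDA's asm() blocks. */
    float v = buf[i];
    asm volatile("mul.f32 %0, %1, %2;" : "=f"(v) : "f"(v), "f"(k));
    buf[i] = v;
#else
    /* Portable path for every other vendor. */
    buf[i] *= k;
#endif
}
```

The same pattern scales to the tensor-core case: the vendor-specific kernel variant lives behind one macro, and the other 95% of the codebase stays shared.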
replies(1): >>43662463 #
13. keldaris ◴[] No.43658775{4}[source]
WebGPU has no support for tensor cores (or their Apple Silicon equivalents). Vulkan has an Nvidia extension for them; is there any way to make MoltenVK use simdgroup_matrix instructions in compute shaders?
replies(1): >>43658912 #
14. coffeeaddict1 ◴[] No.43658912{5}[source]
AFAIK, MoltenVK doesn't. Dawn (Google's C++ WebGPU implementation) does have some experimental support for it [0][1].

[0] https://issues.chromium.org/issues/348702031

[1] https://github.com/gpuweb/gpuweb/issues/4195

15. fragmede ◴[] No.43659681{4}[source]
Interesting! Can you say more about what kind of algorithms your software runs?
replies(1): >>43662446 #
16. coffeeaddict1 ◴[] No.43662446{5}[source]
My work is primarily the processing of medical images (which are usually large 3D images). Doing this on the GPU can be up to 10-20x faster.
replies(1): >>43667306 #
17. pjmlp ◴[] No.43662463{5}[source]
At which point, why bother? PTX is CUDA.
replies(1): >>43666392 #
18. keldaris ◴[] No.43666392{6}[source]
Generally, the reason to bother with this approach is if you have a project that only needs tensor cores in a tiny part of the code and otherwise benefits from the cross platform nature of OpenCL, so you have a mostly shared codebase with a small vendor-specific optimization in a kernel or two. I've been in that situation and do find that approach valuable, but I'll be the first to admit the modern GPGPU landscape is full of unpleasant compromises whichever way you look.
19. fragmede ◴[] No.43667306{6}[source]
But what about that work makes it want to be multi-platform, instead of picking one platform and specializing, probably picking up some more optimizations along the way?