    Rust CUDA Project (github.com)
    146 points by sksxihve | 12 comments
    1. the__alchemist ◴[] No.43656376[source]
    Summary, from someone who uses CUDA with Rust in several projects (computational chemistry and cosmology simulations):

      - This lib has been in an unusable and unmaintained state for years; to get it working, you need specific, several-years-old versions of both rustc and CUDA.
      - It was recently rebooted. I haven't tried the GitHub branch, but there isn't a release yet. Has anyone verified whether it works on current rustc and CUDA?
      - The Cudarc library (https://github.com/coreylowman/cudarc) is actively maintained and works well. It does not, however, let you share data structures between host and device; you [de]serialize them as a byte stream using functions the lib provides (see the sketch below). It works with any reasonably recent CUDA version and GPU.
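
    As a rough illustration of the [de]serialization point above, here is a minimal sketch of a host/device round trip with cudarc. Treat it as an assumption rather than a reference: the method names (CudaDevice::new, htod_copy, dtoh_sync_copy) are from the cudarc releases I've used, and the current API may differ.

        // Minimal sketch (untested here): plain data crosses the host/device
        // boundary as a flat buffer; no struct layout is shared between sides.
        use cudarc::driver::CudaDevice;

        fn main() -> Result<(), Box<dyn std::error::Error>> {
            let dev = CudaDevice::new(0)?;                        // bind GPU 0
            let host = vec![1.0f32, 2.0, 3.0];
            let on_device = dev.htod_copy(host)?;                 // host -> device
            let back: Vec<f32> = dev.dtoh_sync_copy(&on_device)?; // device -> host
            assert_eq!(back, [1.0, 2.0, 3.0]);
            Ok(())
        }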
    
    I highlight this as a trend I see in software libs, in Rust more than elsewhere: the projects that are promoted the most are often not the most practical or well-managed ones. It's not clear from the description, but maybe Rust-CUDA intends to allow shared data structures between host and device? That would be nice.
    replies(5): >>43656540 #>>43656624 #>>43656639 #>>43658890 #>>43659897 #
    2. sksxihve ◴[] No.43656540[source]
    I think that's true of most newer languages; there's always a rush of libraries once a language starts to get popular. For example, Go has lots of HTTP client libraries even though it also has an HTTP library in the standard library.

    Relevant xkcd: https://xkcd.com/927/

    replies(1): >>43656696 #
    3. hobofan ◴[] No.43656624[source]
    Damn. I transferred ownership of the cudnn and cudnn-sys crates (by now almost 10-year-old crates that I'm certain nobody ever managed to use for anything useful) to the maintainers a few years back, as the project looked to be on a good trajectory, but it seems like they never managed to actually release the crates. Hope the reboot pulls through!
    4. gbin ◴[] No.43656639[source]
    We observed the same thing here at Copper Robotics, where we absolutely need good CUDA bindings for our customers; in general, the lack thereof has been holding back Rust in robotics for years. Finally, with cudarc, we have some hope for a stable project that keeps up with the ecosystem. The remaining interesting question is why Nvidia is not investing in the Rust ecosystem.
    replies(2): >>43657179 #>>43658584 #
    5. pests ◴[] No.43656696[source]
    I think this was also due in small part to them (Rob Pike perhaps? Or Brad) live-streaming the creation of an HTTP server back in the early days, which made for good tutorial fodder.
    6. adityamwagh ◴[] No.43657179[source]
    I was talking to one person from the CUDA Core Compute Libraries team. They hinted that in the next 5 years, NVIDIA could support Rust as a language to program CUDA GPUs.

    I also read a comment on a post on r/Rust that Rust’s safe nature makes it hard to use for programming GPUs. I don’t know the specifics.

    Let’s see how it happens!

    7. pjmlp ◴[] No.43658584[source]
    They kind of are, but not in CUDA directly.

    https://github.com/ai-dynamo/dynamo

    > NVIDIA Dynamo is a high-throughput low-latency inference framework designed for serving generative AI and reasoning models in multi-node distributed environments.

    > Built in Rust for performance and in Python for extensibility,

    Says right there where they see Rust currently.

    replies(1): >>43690124 #
    8. efnx ◴[] No.43658890[source]
    I’m a rust-gpu maintainer and can say that shared types on host and GPU are definitely intended. We’ve mostly been focused on graphics, but are shifting efforts toward more general compute. There’s a lot of work, though, and we all have day jobs, so we’re looking for help. If you’re interested in helping, you should say so on our GitHub.
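
    To make the shared-types idea concrete, here is a conceptual sketch, not rust-gpu's or Rust-CUDA's actual API: the intent is that one plain-old-data type definition gets compiled into both the host crate and the GPU crate, so buffers of it can cross the boundary without hand-written [de]serialization.

        // Conceptual sketch only, not a specific crate's API. The same
        // #[repr(C)] type would be compiled for both the host target and the
        // GPU target, so both sides agree on the buffer's memory layout.
        #[repr(C)]
        #[derive(Clone, Copy, Debug, PartialEq)]
        pub struct Particle {
            pub position: [f32; 3],
            pub mass: f32,
        }

        fn main() {
            // On the host, a buffer of Particles is just bytes with a known
            // layout; a kernel built from the same definition reads it directly.
            let particles = vec![Particle { position: [0.0; 3], mass: 1.0 }; 4];
            println!(
                "{} particles, {} bytes",
                particles.len(),
                std::mem::size_of_val(particles.as_slice())
            );
        }
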
    replies(1): >>43659137 #
    9. the__alchemist ◴[] No.43659137[source]
    What is the intended distinguisher between this and WGPU for graphics? I didn't realize that was a goal; I've seen it mostly discussed in the context of CUDA. There doesn't have to be one, but I'm curious, as the CUDA/GPGPU side of the ecosystem is less developed, while catching up to WGPU may be a tall order. From a skim of its main page, it seems like it may also focus on writing shaders in Rust.

    Tangent: what is the intended distinguisher between Rust-CUDA and Cudarc? Rust shaders with shared data structures is my guess for the big one. That would be great! Of course there doesn't have to be one; more tools to choose from encourages progress all around.

    replies(1): >>43660060 #
    10. LegNeato ◴[] No.43659897[source]
    Maintainer here. It works on recent Rust and the latest CUDA. See https://rust-gpu.github.io/blog/2025/03/18/rust-cuda-update
    11. LegNeato ◴[] No.43660060{3}[source]
    wgpu is the CPU side; rust-gpu is the GPU side. The projects work together (our latest post uses wgpu, and we fixed bugs in it: https://rust-gpu.github.io/blog/2025/04/10/shadertoys)
    12. adityamwagh ◴[] No.43690124{3}[source]
    Oh, pretty cool. I didn’t know about this development.