
43 points robertnishihara | 8 comments
1. lz400 ◴[] No.44394984[source]
Unfortunately uv alone is often insufficient for certain ML deployments in Python. It's a real pain to install PyTorch/CUDA with all the necessary drivers and C++ dependencies, so people tend to fall back to conda.

Any modern tips / life hacks for this situation?

replies(4): >>44395089 #>>44395516 #>>44395563 #>>44395700 #
2. devjab ◴[] No.44395089[source]
https://docs.astral.sh/uv/guides/integration/pytorch/#automa...

doesn't work?

replies(1): >>44395302 #
3. lz400 ◴[] No.44395302[source]
The problem is that you still need to install all the low-level stuff manually; conda does it automatically.
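
For reference, a minimal sketch of what that looks like with conda (package and channel names follow the PyTorch install matrix, but treat them as assumptions; conda pulls the CUDA runtime libraries in as packages, so only the NVIDIA kernel driver is needed on the host):

    # create an isolated env, then install PyTorch with bundled CUDA libs
    conda create -n torch-env python=3.11
    conda activate torch-env
    conda install pytorch pytorch-cuda=12.1 -c pytorch -c nvidia
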
replies(2): >>44395469 #>>44395818 #
4. gcarvalho ◴[] No.44395469{3}[source]
I was pleasantly surprised to try the guide out and see that it just worked:

    λ uv venv && uv pip install torch --torch-backend=auto
    λ uv run python -c 'import torch; print(torch.cuda.is_available())'
    True
This is on Debian stable, and I don't remember doing any special setup other than installing the proprietary nvidia driver.
5. Kydlaw ◴[] No.44395516[source]
You should give https://pixi.sh/latest/ a try (I am not involved in the project).

They are a little more focused on scientific computing than uv, which is more general. They might be a better option in your case.
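
As a rough sketch, a pixi project manifest might look like this (the project name and the conda-forge pytorch-gpu package are assumptions; check the pixi docs for the current manifest format). Because pixi resolves conda packages, the CUDA libraries come along as dependencies:

    # pixi.toml (sketch)
    [project]
    name = "ml-env"                   # hypothetical project name
    channels = ["conda-forge"]
    platforms = ["linux-64"]

    [dependencies]
    pytorch-gpu = "*"                 # CUDA-enabled build from conda-forge

Then `pixi install` sets up the environment and `pixi run python ...` uses it.
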

6. miohtama ◴[] No.44395563[source]
Would it be possible to use Docker to manage native dependencies?
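
One common pattern along those lines is an NVIDIA CUDA base image with uv layered on top. A hedged Dockerfile sketch (image tags and file names are assumptions; the host still needs the NVIDIA driver plus nvidia-container-toolkit, and the container is started with `docker run --gpus all`):

    # Sketch: CUDA runtime libs live in the base image,
    # Python deps are managed by uv inside the container.
    FROM nvidia/cuda:12.4.1-runtime-ubuntu22.04
    COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /bin/
    WORKDIR /app
    COPY pyproject.toml uv.lock ./
    RUN uv sync --frozen
    CMD ["uv", "run", "python", "train.py"]
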
7. rsfern ◴[] No.44395700[source]
Are there particular libraries that make your setup difficult? I just manually set the index and source following the docs (didn't know about the auto backend feature) and pin a specific version if I really have to with `uv add "torch==2.4"`. This works pretty well for me for projects that use dgl, which heavily uses C++ extensions and can be pretty finicky about working with particular versions.
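
The manual index pinning described above can be sketched in pyproject.toml like this (the index name is illustrative; the cu124 index URL follows uv's PyTorch integration guide, so adjust it to your CUDA version):

    [project]
    name = "example"
    version = "0.1.0"
    dependencies = ["torch==2.4"]

    [tool.uv.sources]
    torch = [{ index = "pytorch-cu124" }]

    [[tool.uv.index]]
    name = "pytorch-cu124"
    url = "https://download.pytorch.org/whl/cu124"
    explicit = true

With `explicit = true`, only torch is resolved from the PyTorch index; everything else still comes from PyPI.
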

This is in a conventional HPC environment, and I've found it way better than conda since the dependency solves are so much faster and I no longer experience PyTorch silently getting downgraded to the CPU version if I install a new library. Maybe I've been using conda poorly though?

8. pcwelder ◴[] No.44395818{3}[source]
This script has been sufficient for me to configure GPU drivers on fresh Ubuntu machines. It's just `uv add torch` after this.

https://cloud.google.com/compute/docs/gpus/install-drivers-g... (NOTE: not gcloud specific)