
200 points by simonw | 1 comment
syntaxing No.45649620
Ehh, is it cool and time-saving that it figured it out? Yes. But the solution was to get a “better” prebuilt PyTorch wheel. That’s a relatively “easy” problem to solve (though figuring out that this was the problem does take time). The painful part (probably — I can’t afford one) comes when you want to upgrade the CUDA version or pin a specific one: unlike on a typical PC, you’re going to need to build a new image and flash it. I’d be more impressed when an LLM can do that end to end for you.
sh3rl0ck No.45652848
PyTorch + CUDA is a headache I've seen a lot of people hit at my uni, and one I've never had to deal with thanks to uv. Good tooling really does go a long way in these things.
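
For what it's worth, a minimal sanity check that the installed wheel actually matches your CUDA setup looks roughly like this (a sketch; the cu121 index mentioned in the comment is just an example — pick the index for your CUDA version):

  # Minimal sketch: verify the installed PyTorch wheel matches the CUDA runtime.
  # Assumes torch is installed, e.g. from a CUDA-specific wheel index such as
  # https://download.pytorch.org/whl/cu121 (example index, not prescriptive).
  import torch

  print("torch:", torch.__version__)            # e.g. 2.3.1+cu121; a "+cpu" suffix means a CPU-only wheel
  print("built for CUDA:", torch.version.cuda)  # None on CPU-only builds
  print("CUDA available:", torch.cuda.is_available())
  if torch.cuda.is_available():
      print("device:", torch.cuda.get_device_name(0))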

Although I must say that for certain Docker pass-through cases, the debugging logs just aren't as detailed.

ComputerGuru No.45661767
uv doesn’t fundamentally solve the issues. It didn’t invent venv or pip.

What fundamentally solves the issue is using an ONNX version of the model.
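
A minimal sketch of what that looks like with onnxruntime ("model.onnx" and the input shape are placeholders for illustration):

  # Sketch: run an exported model with onnxruntime.
  import numpy as np
  import onnxruntime as ort

  # Prefer CUDA when that execution provider is installed, else fall back to CPU.
  session = ort.InferenceSession(
      "model.onnx",
      providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
  )

  input_name = session.get_inputs()[0].name
  dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder input
  outputs = session.run(None, {input_name: dummy})
  print(outputs[0].shape)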

simonw No.45661872
Do you know if it's possible to run ONNX versions of models on a Mac?

I should try those on the NVIDIA Spark; it'd be interesting to see if they're easy to work with on ARM64.

ComputerGuru No.45664128
Yup. The beauty of it is that the underlying AI accelerator/hardware is completely abstracted away. There’s a CoreML ONNX execution provider, though I haven’t used it.

No more fighting with hardcoded cuda:0 everywhere.
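
Roughly, the abstraction looks like this — a sketch assuming the relevant execution providers are installed (CoreML only ships in macOS builds of onnxruntime, and as said above I haven't exercised that path):

  # Sketch: pick an execution provider by preference instead of hardcoding cuda:0.
  import onnxruntime as ort

  preferred = ["CUDAExecutionProvider", "CoreMLExecutionProvider", "CPUExecutionProvider"]
  available = ort.get_available_providers()
  providers = [p for p in preferred if p in available]

  session = ort.InferenceSession("model.onnx", providers=providers)
  print("running on:", session.get_providers()[0])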

The only pain point is that you’ll often have to manually convert a PyTorch model from Hugging Face to ONNX unless it’s very popular.
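
For a simple model the manual conversion is a single torch.onnx.export call — a sketch with made-up names below; real Hugging Face models often have multiple inputs, and the optimum exporter tends to be the easier route for those:

  # Sketch: manually export a PyTorch model to ONNX.
  # The model, input shape, and file name are placeholders.
  import torch

  model = torch.nn.Linear(16, 4)   # stand-in for a real model
  model.eval()
  dummy = torch.randn(1, 16)       # example input used to trace the graph

  torch.onnx.export(
      model,
      dummy,
      "model.onnx",
      input_names=["input"],
      output_names=["output"],
      dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
  )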