
326 points amazonhut | 1 comments | | HN request time: 0.241s | source
untrimmed ◴[] No.45248154[source]
As someone who has spent days wrestling with Python dependency hell just to get a model running, a simple cargo run feels like a dream. But I'm wondering, what was the most painful part of NOT having a framework? I'm betting my coffee money it was debugging the backpropagation logic.
replies(5): >>45248223 #>>45248315 #>>45248416 #>>45248640 #>>45248972 #
ricardobeat ◴[] No.45248416[source]
Have you tried uv [1]? It has removed 90% of the pain of running python projects for me.

[1] https://github.com/astral-sh/uv
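
For anyone who hasn't tried it, the basic uv workflow looks roughly like this (a sketch using uv's standard subcommands; the project name and script are made up for illustration):

```shell
uv init demo-project        # scaffold a new project with a pyproject.toml
cd demo-project
uv add torch                # resolve, pin, and install the dependency
uv run python train.py      # run inside the managed virtual environment
```

uv creates and manages the virtual environment itself, so there is no manual venv activation step.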

replies(4): >>45248587 #>>45248888 #>>45249600 #>>45250338 #
DiabloD3 ◴[] No.45248587[source]
uv is great, but I think the real fix is just abandoning Python.

The culture that language maintains is rather hostile to maintainable development; it's easier to just switch to Rust and write better code by default.

replies(6): >>45248612 #>>45248634 #>>45248782 #>>45249308 #>>45249966 #>>45252079 #
airza ◴[] No.45248612[source]
There's not really another game in town if you want to do fast ML development :/
replies(3): >>45248718 #>>45249873 #>>45252088 #
DiabloD3 ◴[] No.45248718[source]
Dunno, almost all of the people I know anywhere in the ML space are on the C and Rust end of the spectrum.

Lack of types, lack of static analysis, lack of... well, lack of everything Python doesn't provide, and fights users on, costs too much developer time. It is a net negative to continue pouring time and money into anything Python-based.

The sole exception I've seen in my social circle is people working at companies that don't directly do ML, but provide drivers/hardware/supporting software to ML people in academia, and have to try to fix their cursed shit for them.

Also, fwiw, there is no reason why Triton is Python. I dislike Triton for a lot of reasons, but it's just a matmul kernel DSL; there is nothing inherent in it that has to be, or benefits from, being Python. It takes the DSL in, outputs shader text, then has the vendor's API run it (i.e. CUDA, ROCm, etc). It, too, would benefit from becoming Rust.

replies(4): >>45249457 #>>45249615 #>>45249713 #>>45251217 #
airza ◴[] No.45249713[source]
Okay. Humor me. I want to write a transformer-based classifier for a project. I am accustomed to the pytorch and tensorflow libraries. What is the equivalent using C?
replies(1): >>45250444 #
adastra22 ◴[] No.45250444[source]
You do know that tensorflow was written in C++ and the Python API bolted on top?
replies(2): >>45251116 #>>45253253 #
airza ◴[] No.45253253[source]
I am. Are you suggesting that as an alternative to the Python bindings I should use C to invoke the C++ ABI for TensorFlow?
replies(1): >>45254276 #
adastra22 ◴[] No.45254276[source]
> Okay. Humor me. I want to write a transformer-based classifier for a project. I am accustomed to the pytorch and tensorflow libraries. What is the equivalent using C?

Use the C++ bindings in libtorch or TensorFlow. If you actually mean C, and not C++, then you would need a shim wrapper. Wrapping C++ in a C API is pretty easy to do.