
alkh ◴[] No.44978054[source]
I enjoy using uv a lot, but I'm getting afraid that it is getting bloated for no reason. For example, the number of niche flags that many subcommands support is very high, and some of them seemingly achieve the same result (uv run --no-project and uv run --active). I'd rather they worked on improving existing tools and documentation than adding new (redundant) functionality.
benreesman ◴[] No.44978543[source]
It's really difficult to do Python projects in a sound, reproducible, reasonably portable way. uv sync is, in general, only able to build you a package set that it can promise to build again.

But it can't in general build torch-tensorrt or flash-attn, because it has no way of knowing whether Mercury was in retrograde when you ran pip. They are trying to thread a delicate and economically pivotal needle: the Python community prizes privatizing the profits and socializing the costs of "works on my box".

The cost of making the software deployable, secure, repeatable, and reliable didn't go away! It just became someone else's problem at a later time in a different place with far fewer degrees of freedom.

Doing this in a way that satisfies serious operations people without alienating the "works on my box...sometimes" crowd is The Lord's Work.

kouteiheika ◴[] No.44980418[source]
> But it can't in general build torch-tensorrt or flash-attn because it has no way of knowing if Mercury was in retrograde when you ran pip.

This is a self-inflicted wound, since flash attention insists on building a native C++ extension, which is completely unnecessary in this case.

What you can do is the following:

1) Compile your CUDA kernels offline.
2) Include those compiled kernels in a package you push to PyPI.
3) Call into the kernels with pure Python, without going through a C++ extension.

I do this for the CUDA kernels I maintain and it works great.
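
For illustration, a minimal sketch of what step 3 can look like (not necessarily the exact loader used here; this one goes through CuPy's RawModule, and my_kernels.cubin / add_one are made-up names):

    # Minimal sketch of "ship a precompiled kernel, launch it from pure Python".
    # Assumes my_kernels.cubin was built offline (e.g. nvcc --cubin) and shipped
    # as a data file inside the wheel -- no C++ extension anywhere.
    import cupy as cp

    module = cp.RawModule(path="my_kernels.cubin")
    add_one = module.get_function("add_one")  # __global__ void add_one(float*, int)

    x = cp.zeros(1024, dtype=cp.float32)
    n = x.size
    add_one(((n + 255) // 256,), (256,), (x, cp.int32(n)))  # grid, block, args
    print(x[:4])  # [1. 1. 1. 1.]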

Flash attention currently publishes 48 (!) different packages[1] for different combinations of pytorch and C++ ABI. With this approach it would only have to publish one, and it would work for every combination of Python and pytorch.

[1] - https://github.com/Dao-AILab/flash-attention/releases/tag/v2...

twothreeone ◴[] No.44981179[source]
While shipping binary kernels may be a workaround for some users, it goes against what many people would consider "good etiquette" for various valid reasons, such as hackability, security, or providing free (as in liberty) software.
rkangel ◴[] No.44982899[source]
Shipping binary artifacts isn't inherently a bad thing - that's what (most) Linux distros do, after all! The important distinction is how that binary package was arrived at. If it's a mystery and the source isn't available, that's bad. If it's all in the open-source repo, part of the Python package build, and completely reproducible, that's great.
benreesman ◴[] No.44983515[source]
GP's solution is the one that I (and I think most people) ultimately wind up using. But it's not a nice time in the usual "oh we'll just compile it" sense of a typical package.

flash-attn in particular has its build so badly misconfigured, and is so heavy, that it will lock up a modern Zen 5 machine with 128GB of DDR5 if you don't re-nice ninja (assuming, of course, that you remembered it just won't work without a pip-visible ninja). It can't build a wheel (at least not obviously) that works correctly on both Ampere and Hopper, and it declares its dependencies incorrectly, so it demands torch at build time even if torch is already in your pyproject.toml, and you end up breaking build isolation.
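
To give a flavour of what "just compile it" means in practice, a sketch (illustrative only; MAX_JOBS and --no-build-isolation are the knobs flash-attn's own install notes point at, and the right job count depends on your machine):

    # Illustrative sketch: babysitting a flash-attn source build.
    # MAX_JOBS caps the parallel nvcc/ninja jobs (torch's cpp_extension build
    # respects it), and --no-build-isolation works around the fact that torch
    # isn't declared as a build dependency.
    import os
    import shutil
    import subprocess
    import sys

    # Without a pip-visible ninja the build falls back to something far slower.
    assert shutil.which("ninja"), "install ninja first"

    env = dict(os.environ, MAX_JOBS="4")  # pick a value your RAM can survive
    subprocess.check_call(
        [sys.executable, "-m", "pip", "install", "flash-attn", "--no-build-isolation"],
        env=env,
    )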

So now you've got gigabytes of fragile wheel that won't run on half your cards; time to stand up a wheel registry. Oh, and everything in machine learning needs it: half of diffusers crashes at runtime without it. Runtime.

The dirty little secret of these $50MM offers at AI companies is that way more people understand the math (which is actually pretty light compared to, say, graduate physics) than can build and run NVIDIA wheels at scale. The wizards Zuckerberg will fellate are people who know some math and can run Torch on a mixed Hopper/Blackwell fleet.

And this, I think, is Astral's endgame: pyx is going to fix this at scale, and they're going to abruptly become more troublesome to NVIDIA than George Hotz or GamersNexus.

agos ◴[] No.44986833[source]
Dumb question from an outsider - why do you think this is so bad? Is it because so much of the ML-adjacent code is written by people with backgrounds in academia and data science instead of software engineering? Or is it just Python being bad at this?
structural ◴[] No.44988888{5}[source]
1. If you want to advance the state of the art as quickly as possible (or have many, many experiments to run), being able to iterate quickly is the primary concern.

2. If you are publishing research, any time spent beyond what's necessary for the paper is a net loss, because you could be working on the next advancement instead of doing all the work of making the code more portable and reusable.

3. If you're trying to use that research in an internal effort, you'll take the next step to "make it work on my cloud", and any effort beyond that is also a net loss for your team.

4. If the amount of work necessary to prototype something that you can write a paper with is 1x, the amount of work necessary to scale that on one specific machine configuration is something like >= 10x, and the amount of work to make that a reusable library that can build and run anywhere is more like 100x.

So it really comes down to - who is going to do the majority of this work? How is it funded? How is it organized?

This can be compared to other human endeavours, too. Take the nuclear industry and its development as a parallel. The actual physics of nuclear fission is understood at this point and can be taught to lots of people. But to get from there to building a functional prototype reactor is a larger project (10x scale), and then scaling that to an entire powerplant complex that can be built in a variety of physical locations and run safely for decades is again orders of magnitude larger.