Off topic, but I wonder why that phrase gets used rather than 10x, which is much shorter.
- 10x is a meme
- what if it's 12x better
Order of magnitude faces less of that baggage, until it does :)
A metal wheel is still just a wheel. A faster package manager is still just a package manager.
My primary vehicle has off-road capable tires that offer as much grip as a road-only tire would have 20-25 years ago, thanks to technology allowing Michelin to reinvent what a dual-purpose tire can be!
In common conversation, the multiplier can vary from 2x to 10x. In the context of some algorithms, an order of magnitude can be over the delta rather than the absolute. E.g., an algorithm sees a 1.1x improvement over the previous 10 years; a change that shows a 1.1x improvement by itself overshadows an order of magnitude more effort.
For salaries, I've used order-of-magnitude to mean 2x. Good way to show a step change in a person's perceived value in the market.
The good thing about reinventing the wheel is that you can get a round one.
https://scripting.wordpress.com/2006/12/20/scripting-news-fo...
The improvements came from lots of work across the entire Python build-system ecosystem, and from consensus building.
Rust's speed advantages typically come from one of a few places:
1. Fast start-up times, thanks to pre-compiled native binaries.
2. Large amounts of CPU-level concurrency with many fewer bugs. I'm willing to do ridiculous threading tricks in Rust I wouldn't dare try in C++.
3. Much lower levels of malloc/free in Rust compared to some high-level languages, especially if you're willing to work a little for it. Calling malloc in a multithreaded system is basically like watching the Millennium Falcon's hyperdrive fail. Also, Rust encourages abusing the stack to a ridiculous degree, which further reduces allocation. It's hard to "invisibly" call malloc in Rust, even compared to a language like C++.
4. For better or worse, Rust exposes a lot of the machinery behind memory layout and passing references. This means there's a permanent "Rust tax" where you ask yourself "Do I pass this by value or by reference? Who owns this, and who just borrows it?" But the payoff for that work is good memory locality.
So if you put in a modest amount of effort, it's fairly easy to make Rust run surprisingly fast. It's not an absolute guarantee, and there are a couple of traps for the unwary (like accidentally forgetting to buffer I/O, or benchmarking debug binaries).
Sure, other tools could handle the situation, but being baked into the tooling makes it much easier to bootstrap different configurations.
uv does the Python ecosystem better than any other tool, but it's still the standard Python ecosystem as defined in the relevant PEPs.
Conda rewrote their package resolver for similar reasons
tl;dw Rust, a fast SAT solver, micro-optimisation of key components, caching, and hardlinks/CoW.
Can you share more about this? What has changed between tires of 2005 and 2025?
https://www.caranddriver.com/features/a15078050/we-drive-the...
> In the last decade, the spiciest street-legal tires have nearly surpassed the performance of a decade-old racing tire, and computer modeling is a big part of the reason
(written about 8 years ago)
Even on a single core, this turns out to be simply false. It isn't that hard to either A: be doing enough actual computation that faster languages are in fact perceptibly faster, even, yes, in a web page handler or other such supposedly I/O-blocked computation, or B: without realizing it, have stacked up so many expensive abstractions on top of each other in your scripting language that you're multiplying the off-the-top ~40x slowdown by another set of multiplicative penalties, which can take you into effectively arbitrarily slow computations.
If you've never profiled a mature scripting-language program, it's worth your time (a minimal sketch of what that looks like follows this comment). Especially if nobody on your team has ever profiled it before. It can be an eye-opener.
Then it turns out that, for historical path reasons, dynamic scripting languages are also really bad at multithreading and using multiple cores, and if you can write a program that leverages that, you can just blow away the dynamic scripting languages. It's not even hard... it pretty much just happens.
(I say historical path reasons because I don't think an inability to multithread is intrinsic to the dynamic scripting languages. It's just they all came out in an era when they could assume single core, it got ingrained into them for a couple of decades, and the reality is, it's never going to come fully out. I think someone could build a new dynamic language that threaded properly from the beginning, though.)
You really can see big gains just taking a dynamic scripting language program and turning it into a compiled language with no major changes to the algorithms. The 40x-ish penalty off the top is often in practice an underestimate, because that number is generally from highly optimized benchmarks in which the dynamic language implementation is highly tuned to avoid expensive operations; real code that takes advantage of all the conveniences and indirection and such can have even larger gaps.
This is not to say that dynamic scripting languages are bad. Performance is not the only thing that matters. They are quite obviously fast enough for a wide variety of tasks, by the strongest possible proof of that statement. That said, I think it is the case that there are a lot of programmers who have no idea how much performance they are losing in dynamic scripting languages, which can result in suboptimal engineering decisions. It is completely possible to replace a dynamic scripting language program with a compiled one and possibly see 100x+ performance improvements on very realistic code, before adding in multithreading. It is hard for that not to manifest in some sort of user experience improvement. My pitch here is not to give up dynamic scripting languages, but to have a more realistic view of the programming language landscape as a whole.
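For what it's worth, here is a minimal sketch of the profiling step mentioned above, using only the standard library; `handle_request` is a hypothetical stand-in for whatever your real hot path is:

```python
import cProfile
import pstats

def handle_request():
    # Hypothetical stand-in for a real request handler / hot path.
    return sum(i * i for i in range(100_000))

profiler = cProfile.Profile()
profiler.enable()
for _ in range(100):
    handle_request()
profiler.disable()

# Sorting by cumulative time tends to surface the stacked-up abstraction
# layers the comment above is talking about.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(20)
```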
I don't know Python, but in JavaScript, triggering 1000 downloads in parallel is trivial. Decompressing them, as in Python, means calling out to some native function. Decompressing them in parallel in JS would also be trivial (no idea about Python). Writing them in parallel is also trivial.
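For the Python side, a rough sketch (URLs and output paths are placeholders, error handling omitted): a thread pool is enough, because the network I/O releases the GIL and the decompression is handed off to zlib in C.

```python
import concurrent.futures
import gzip
import urllib.request
from pathlib import Path

def fetch_and_unpack(url: str, dest: Path) -> Path:
    # The socket reads release the GIL, so the 1000 downloads overlap;
    # gzip.decompress hands the heavy lifting to zlib (C code).
    with urllib.request.urlopen(url) as resp:
        data = resp.read()
    dest.write_bytes(gzip.decompress(data))
    return dest

urls = [f"https://example.com/pkg-{i}.gz" for i in range(1000)]  # placeholder URLs
with concurrent.futures.ThreadPoolExecutor(max_workers=32) as pool:
    futures = [pool.submit(fetch_and_unpack, u, Path(f"out-{i}.bin"))
               for i, u in enumerate(urls)]
    paths = [f.result() for f in futures]
```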
What would a dynamic scripting language look like that wasn't subject to this limitation? Any examples? I don't know of contenders in this design space--- I am not up on it.
At this point XML is the backbone of many important technologies that many people no longer use, or at least no longer use directly.
This wasn't the case circa 2010, when I doubt any dev could have really avoided XML for a bunch of years.
I do like XML, though.
But because of the way cache coherency for shared, mutated memory works, parallel refcounting is slow as molasses and will always remain so.
I think Ruby has always used a tracing GC, but it also still has a GIL for some reason?
Hopefully this can disabuse others of the same mistaken memory.
1. The way they get the metadata for a package.
Packages are in zip files, and zip files have their TOC at the end. So instead of downloading the entire zip, they just get the end of the file, read the TOC, and from that download just the metadata part.
I've written that code before for my own projects.
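A minimal sketch of that trick (not uv's actual code; the wheel URL is a placeholder and error handling is omitted): one suffix Range request to find the end-of-central-directory record, then a second request for just the central directory.

```python
import struct
import urllib.request

EOCD_SIG = b"PK\x05\x06"  # end-of-central-directory signature

def fetch_range(url: str, range_spec: str) -> bytes:
    req = urllib.request.Request(url, headers={"Range": f"bytes={range_spec}"})
    with urllib.request.urlopen(req) as resp:
        return resp.read()

def central_directory(url: str) -> bytes:
    # The EOCD record is 22 bytes plus an optional comment, so the final
    # 64 KiB is more than enough in practice.
    tail = fetch_range(url, "-65536")
    eocd = tail.rfind(EOCD_SIG)
    (_sig, _disk, _cd_disk, _n_disk, _n_total,
     cd_size, cd_offset, _comment_len) = struct.unpack_from("<IHHHHIIH", tail, eocd)
    # Second range request: just the central directory ("TOC"), which lists
    # every member; from there you can fetch only the *.dist-info/METADATA
    # entry instead of the whole wheel.
    return fetch_range(url, f"{cd_offset}-{cd_offset + cd_size - 1}")

# toc = central_directory("https://example.com/some_pkg-1.0-py3-none-any.whl")  # placeholder
```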
2. They cache the unzipped packages and then link the files into your environment.
This means no files are copied on the second install. Just links.
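A tiny sketch of the cache-and-link idea (again, not uv's code; the paths are hypothetical): unpack once into a cache, then hard-link into each environment, falling back to a copy when cache and environment live on different filesystems.

```python
import os
import shutil
from pathlib import Path

def link_into_env(cache_file: Path, env_file: Path) -> None:
    env_file.parent.mkdir(parents=True, exist_ok=True)
    try:
        os.link(cache_file, env_file)       # no data copied, just a new directory entry
    except OSError:
        shutil.copy2(cache_file, env_file)  # cross-device link or unsupported filesystem

def install_from_cache(cache_dir: Path, site_packages: Path) -> None:
    for src in cache_dir.rglob("*"):
        if src.is_file():
            link_into_env(src, site_packages / src.relative_to(cache_dir))
```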
Both of those are huge time wins that would be possible in any language.
3. They store their metadata as a memory dump
So, on loading there is nothing to parse.
Admittedly this is hard (impossible?) in many languages. Certainly not possible in Python and JavaScript. You could load binary data but it won't be useful without copying it into native numbers/strings/ints/floats/doubles etc...
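A small illustration of that limitation (the file name and record layout here are made up): even reading straight out of an mmap'd buffer, Python still has to build fresh objects for every field, so a raw dump doesn't buy what it buys in C/C++.

```python
import mmap
import struct

with open("metadata.bin", "rb") as f:  # hypothetical metadata dump
    buf = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)

records = []
# Pretend each record is three little-endian u32s: (name_off, version_off, flags).
for off in range(0, len(buf) - 11, 12):
    # unpack_from reads directly from the mapped pages, but the resulting ints
    # are newly allocated Python objects; unlike a mmap'd C struct, the data
    # can't be used in place.
    records.append(struct.unpack_from("<III", buf, off))
```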
I've done this in game engines to reduce load times in C/C++ and to save memory.
It'd be interesting to write some benchmarks for the first 2. The 3rd is a win but I suspect the first 2 are 95% of the speedup.
The comma rules introduce diff noise on unrelated lines.
This is the entire purpose of the standards.
> This is the entire purpose of the standards.
That seems to amount to saying that the purpose of the standards is to prevent progress and ensure that the mistakes of early Python project management tools are preserved forever. (Which would explain some things about the last ~25 years of Python project management I guess). The parts of uv that follow standards aren't the parts that people are excited about.
“Find the dependencies — and eliminate them.” When you're working on a really, really good team with great programmers, everybody else's code, frankly, is bug-infested garbage, and nobody else knows how to ship on time.
We had a similar attitude, although I'd say that we were a bit more humble. We didn't think that everyone else was producing garbage, but we also didn't assume that we couldn't produce, for a tenth of the cost, something comparable to what we could buy. From talking to folks at some competitors, there was a pretty big cultural difference between how we operated and how they operated. It simply didn't occur to them that they didn't have to buy into the standard American business logic that you should focus on your core competencies, that you can think through whether or not it makes sense to do something in-house on the merits of the particular thing instead of outsourcing your thinking to a pithy saying.[0]
I disagree. Had uv not followed these standards and instead gone off and done its own thing entirely, it could not function as a drop-in replacement for pip and venv and wouldn't have gotten anywhere near as much traction. I can use uv personally to work on projects that officially have to support pip and venv and have it all be transparent.
Unfortunately, there seems to be a problem here.
When reality and theory conflict, reality wins.
It sounds like you've drunk the same Kool-Aid I was referring to in my post. It's not true. When you're playing with 50x-100x slowdowns, if not more, it's really quite easy to run into user-perceptible slowdowns. A lot of engineers grotesquely underestimate how slow these languages are. I suspect it may be getting worse over time due to evaporative cooling, as engineers who do understand it also tend to have one reason or another to leave the language community at some point, and I believe (though I cannot prove) that as a result the dynamic scripting language communities are actually getting worse and worse at realizing how slow their languages are. They're really quite slow.
I watched the video linked above on uv. They went over the optimizations. The big wins had nothing to do with rust and everything to do with design/algo choices.
You could have also done without the insults. You have no idea who I am or what my experience is. I've shipped several AAA games written in C/C++ and assembly. I know how to optimize. I also know how dynamic languages work. I also know when people are making up bullshit about "it's fast because it's in Rust!" No, that is not why it's fast.
Instead of "It's fast because it's in Rust", I'd say: "It's fast because they chose to use Rust for their Python tool, which means they care a lot about speed."