2. Subject to the terms and conditions of this License Agreement, PSF hereby grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce, analyze, test, perform and/or display publicly, prepare derivative works, distribute, and otherwise use Python alone or in any derivative version, provided, however, that PSF's License Agreement and PSF's notice of copyright, i.e., "Copyright (c) 2001-2024 Python Software Foundation; All Rights Reserved" are retained in Python alone or in any derivative version prepared by Licensee.
I get that this must be one aspect of the necessity of the GIL, but I mean, C++ also has eager-free behavior due to RAII and threads work fine there, as long as you know what you're doing. Perhaps that's the rub, though: it's pretty easy to crash or deadlock in C++, and we blame the programmer rather than the language.
Ridiculous accusation bordering on paranoid.
I feel the same way about Ruff, for example. One day it was "black all the things" and the next it's "btw we just reimplemented the entire Python formatting/linting ecosystem in Rust, and it's 100x faster, no biggie".
What's happening? Is it just so much easier to write stuff in Rust that projects like these pop out of people's heads, fully-formed? It boggles the mind.
Yes, RustPython has been in development since at least 2018.
> Wouldn't this be making waves much earlier in its development process?
It's been posted on HN several times before: https://hn.algolia.com/?q=rustpython
(3) is important because if it were written in JavaScript or Java or Python or .NET or many other languages, I'd have to learn something about the runtimes of those environments to get it working. If it were written in Python, it would have to deal with the bootstrapping problem: it ought to have its own Python installation, separate from the one it is manipulating, so it can't have conflicts with that environment. (e.g. how many times have I busted my poetry?) I can use "uv" or "ruff" without learning anything about Rust!
As for (2), the speed of "uv" has as much to do with better algorithms and caching as it does with being written in Rust and thus much faster than Python. I think you could have done better than Poetry in Python, but "uv" is transformative in that it can often build an environment in seconds or less, whereas with "poetry" or "pip" or "conda" I might have time to pound out a few posts on HN. I used to avoid creating new Python environments as much as possible, but now it is fast, easy, and even fun.
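A minimal sketch of what that workflow looks like in practice (assuming a project with a plain requirements.txt; timings obviously depend on the machine and on whether the cache is warm):

$ uv venv                               # create a fresh virtual environment
$ uv pip install -r requirements.txt    # resolve and install; with a warm cache this is often seconds or less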
I bet it is more work to write "uv" in Rust than a similar tool in Python, but the impact on the community is so huge because we can finally put problem (1) behind us, and do it with speed, reliability, and grace. I had notes on how to build a better Python package management system and sometimes thought about trying it, but I'd become convinced that the social problem of too many people finding half-baked tools like "pip" and "poetry" acceptable was intractable. Thanks to "uv", nobody will ever have to write one.
Every time I want to rewrite a shell function in python, I always hesitate due to the slow startup.
In Python, by contrast, all variables default to object references, and so nearly everything you do involves updating a refcount.
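To make that concrete, a tiny illustration (note that sys.getrefcount itself adds one temporary reference for its argument):

    import sys

    x = []
    print(sys.getrefcount(x))  # count for the list, including the temporary argument reference
    y = x                      # binding another name to the same object bumps the count
    print(sys.getrefcount(x))  # one higher than before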
"Rye supports two systems to manage dependencies: uv and pip-tools. It currently defaults to uv"
I've been evaluating it lately and it has pretty much the same CLI commands as Poetry except it's faster and comes with complete Python interpreter management (which is to me the real killer feature as I don't really care about speed of dependency resolution, but I do care about the DX).
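If the tool being evaluated here is Rye (per the quote above), the interpreter-management part looks roughly like this sketch; the version number is just illustrative:

$ rye pin 3.12    # record the interpreter version for the project
$ rye sync        # fetch that interpreter if needed and build the environment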
This is not different from the Python 2 days. Jython has always had subtly different semantics from Python (e.g. it uses Java strings instead of Python ones, there's no C API, and it relies on the Java GC so there's no eager free), so many common libraries wouldn't work with it. Just try to run NumPy on Jython - you can't, despite the same developer authoring both Jython and NumPy's predecessor.
It’s “I’m making a Python interpreter in Rust”: claims emitted into the void, with engagement increasing as it grows in usefulness.
Edit: and you can even see that in the HN search above. Every year it’s had a little more functionality and a little more engagement than the last.
$ time A=1 B=1 python -c "import os; print(int(os.getenv('A'))+int(os.getenv('B')))"
2
real 0m0.068s
user 0m0.029s
sys 0m0.026s
That said, my experience has been that adding business features in Rust apps is quite fast indeed!
Still, the maintainers stated that they don’t plan to implement Python’s readline module because they already have a Rust implementation of readline. A similar argument could apply here - use native Rust implementations of dependencies and expose them via the expected Python APIs. This would break some ambitious Python programs, but those probably wouldn’t consider alternative runtimes anyway.
Running it on hardened Linux, OpenBSD, or FreeBSD was a start. A Rust implementation might help.
I also miss setups like the eCos RTOS, where a GUI determined which features got compiled in. Strip each Python app down to just what it needs in the interpreter. It might squeeze into L1-L2 cache that way, too. Aside from embedded (e.g. MicroPython), has anyone seen anything like that for use on servers?
If not, is it at all possible to get NumPy and other libraries written in native code to work? I see that RustPython also works in wasm - but what about compiling NumPy's native code to wasm as well?
So a fast SSD will help, and somewhat surprisingly putting it inside Docker helps (in an HPC context; I’m not so sure of its implications here, as we’re talking about short scripts).
But the context here is porting shell scripts to Python, so I’m not sure how a huge number of imports matters.
And it is probably an intrinsic problem of the language: unless we start talking about compiling the Python program (and somehow statically) rather than interpreting it, no implementation of the language is likely to help the situation.
Lastly, if the startup cost of the script becomes relevant, perhaps the orchestration is wrong. This is an infamous problem for Julia, and the practice there is to just keep the Julia instance alive and use it as “the shell”. Similarly, you can do so in Python: rather than having the shell call a script once per item across millions of items, write a wrapper script that starts the Python instance once. Memory leaks could be a problem if it or its dependencies are not well written, but even in that case you have ways to deal with that.
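A minimal sketch of that pattern (the filename and the process function are placeholders; items are assumed to arrive one per line on stdin):

    import sys

    def process(item: str) -> None:
        # stand-in for the real per-item work
        print("processed", item)

    # Startup cost is paid once; the shell streams work in, e.g.:
    #   find . -name '*.log' | python worker.py
    for line in sys.stdin:
        process(line.rstrip("\n"))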
It's still a lot of work, but they only need to implement the "built in" parts of the language, and that's a much smaller subset.
Example of what I'm talking about: https://github.com/RustPython/RustPython/pull/3858
Never question the modern developer's ability to import 1500 heavy libraries to accomplish something that only takes 10 lines of code.