
122 points phsilva | 2 comments
VWWHFSfQ ◴[] No.43111138[source]
Will Python ever get fast? Or even _reasonably_ fast?

The answer is no, it will not. Instead they'll just keep adding more and more syntax. And more and more ways to do the same old things. And they'll say that if you want "fast" then write a native module that we can import and use.

So then what's the point? Is Python really just a glue language like all the rest?

replies(4): >>43111179 #>>43111277 #>>43111282 #>>43111343 #
IgorPartola ◴[] No.43111179[source]
Python is fast enough for a whole class of problems AND it is a pretty, easy-to-read-and-write language. I do think it could probably hit pause on adding more syntax, but at least everything it adds is backwards compatible. You won’t be writing a 3D FPS game engine in Python, but you can do a whole lot of real-time data processing, batch processing, scientific computing, web and native applications, etc. before you need to start considering a faster interpreter.

If your only metric for a language is speed, then nothing really beats hand-crafted assembly. All this memory safety at runtime is just overhead. If you also consider language ergonomics, Python is suddenly not a bad choice at all.

replies(4): >>43111252 #>>43111698 #>>43111794 #>>43112435 #
1. VWWHFSfQ ◴[] No.43111252[source]
I guess I'm wondering what the point of tail-call optimizations, or even async/await, is when it's all super slow and bounded by the runtime itself? There are basically no improvements whatsoever to the core CPython runtime. So really, what is all this for? Some theoretical future version of Python that can actually use these features in an optimal way?
replies(1): >>43112072 #
2. throwaway81523 ◴[] No.43112072[source]
This TCO is about how the CPython interpreter itself is written, not about making Python tail-recursive. Some of the C code in the interpreter has been reorganized to put the bytecode-dispatch calls in tail position, where the C compiler turns them into jumps. That avoids some call/return overhead and makes the interpreter run a little faster. It's still interpreting the same language with the same semantics.
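
To make the idea concrete, here is a toy dispatch loop in C. This is not CPython's actual code; the opcodes, the single-accumulator VM, and all the names are made up for illustration. Each opcode handler ends with a tail call into the handler for the next opcode, which an optimizing C compiler can lower to a plain jump (clang can guarantee it with its musttail attribute), so interpreting a long bytecode stream never grows the C stack and skips the call/return overhead of a conventional dispatch loop.

    /* Toy tail-call interpreter sketch (hypothetical, not CPython). */
    #include <stdint.h>
    #include <stdio.h>

    typedef struct VM {
        const uint8_t *ip;  /* instruction pointer into the bytecode */
        long acc;           /* single accumulator "register"         */
    } VM;

    typedef int (*Handler)(VM *vm);

    enum { OP_PUSH1, OP_ADD1, OP_PRINT, OP_HALT, OP_COUNT };

    static Handler table[OP_COUNT];

    /* Fetch the next opcode and tail-call its handler.  Because the call
     * is in tail position, an optimizing compiler can emit a jump instead
     * of a call; clang's musttail attribute would make that a guarantee. */
    static int dispatch(VM *vm) {
        return table[*vm->ip++](vm);
    }

    static int op_push1(VM *vm) { vm->acc = 1;  return dispatch(vm); }
    static int op_add1 (VM *vm) { vm->acc += 1; return dispatch(vm); }
    static int op_print(VM *vm) { printf("%ld\n", vm->acc); return dispatch(vm); }
    static int op_halt (VM *vm) { return (int)vm->acc; }  /* stop: no further tail call */

    int main(void) {
        table[OP_PUSH1] = op_push1;
        table[OP_ADD1]  = op_add1;
        table[OP_PRINT] = op_print;
        table[OP_HALT]  = op_halt;

        const uint8_t code[] = { OP_PUSH1, OP_ADD1, OP_ADD1, OP_PRINT, OP_HALT };
        VM vm = { .ip = code, .acc = 0 };
        return dispatch(&vm);  /* prints 3 */
    }

The same shape, applied to CPython's real bytecode handlers, is what the reorganized interpreter does; Python-level code still runs with unchanged semantics.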