255 points rbanffy | 88 comments
1. AlexanderDhoore ◴[] No.44003888[source]
Am I the only one who sort of fears the day when Python loses the GIL? I don't think Python developers know what they’re asking for. I don't really trust complex multithreaded code in any language. Python, with its dynamic nature, I trust least of all.
replies(19): >>44003924 #>>44003936 #>>44003940 #>>44003943 #>>44003945 #>>44003958 #>>44003971 #>>44004203 #>>44004251 #>>44004431 #>>44004501 #>>44005012 #>>44005100 #>>44005259 #>>44005773 #>>44006165 #>>44007388 #>>44011009 #>>44011917 #
2. DHolzer ◴[] No.44003924[source]
I was thinking that too. I am really not a professional developer though.

OFC it would be nice to just write Python and have everything be 12x accelerated, but I don't see how there would not be any drawbacks that would interfere with what makes Python so approachable.

3. NortySpock ◴[] No.44003936[source]
I hope at least the option remains to enable the GIL, because I don't trust myself to write thread-safe code on the first few attempts.
4. txdv ◴[] No.44003940[source]
How does the language being dynamic negatively affect the complexity of multithreading?
replies(4): >>44003967 #>>44005360 #>>44005981 #>>44006794 #
5. miohtama ◴[] No.44003943[source]
GIL or no-GIL concerns only people who want to run multicore workloads. If you are not already spending time threading or multiprocessing your code, there is practically no change. Most race condition issues you need to think about are there regardless of the GIL.
replies(3): >>44004241 #>>44005583 #>>44011886 #
6. quectophoton ◴[] No.44003945[source]
I don't want to add more to your fears, but also remember that LLMs have been trained on decades worth of Python code that assumes the presence of the GIL.
replies(1): >>44006677 #
7. dotancohen ◴[] No.44003958[source]
As a Python dabbler, what should I be reading to ensure my multi-threaded code in Python is in fact safe?
replies(2): >>44004045 #>>44004577 #
8. nottorp ◴[] No.44003967[source]
Is there so much legacy python multithreaded code anyway?

Considering everyone knew about the GIL, I'm thinking most people just wouldn't bother.

replies(1): >>44004034 #
9. bayindirh ◴[] No.44003971[source]
More realistically, as it happened in ML/AI scene, the knowledgeable people will write the complex libraries and will hand these down to scientists and other less experienced, or risk-averse developers (which is not a bad thing).

With the critical mass Python acquired over the years, the GIL becomes a very sore bottleneck in some cases. This is why I decided to learn Go, for example: a properly threaded (and green-threaded) language, higher level than C/C++ but lower than Python, which lets me do things I can't do with Python. Compilation is another reason, but it was secondary to threading.

10. toxik ◴[] No.44004034{3}[source]
There is, and what's worse, it assumes a global lock will keep things synchronized.
replies(1): >>44004133 #
11. cess11 ◴[] No.44004045[source]
The literature on distributed systems is huge, and what you ought to do depends a lot on your use case. If you're lucky you can avoid shared state, as in no race conditions at either end of your executions.

https://www.youtube.com/watch?v=_9B__0S21y8 is fairly concise and gives some recommendations for literature and techniques, obviously making an effort in promoting PlusCal/TLA+ along the way but showcases how even apparently simple algorithms can be problematic as well as how deep analysis has to go to get you a guarantee that the execution will be bug free.

replies(1): >>44004178 #
12. rowanG077 ◴[] No.44004133{4}[source]
Does it? The GIL only ensured each interpreter instruction is atomic. But any group of instruction is not protected. This makes it very hard to rely on the GIL for synchronization unless you really know what you are doing.
replies(1): >>44004265 #
13. dotancohen ◴[] No.44004178{3}[source]
My current concern is a CRUD interface that transcribes audio in the background. The transcription is triggered by user action. I need the "transcription" field disabled until the transcript is complete and stored in the database, then allow the user to edit the transcription in the UI.

Of course, while the transcription is in action the rest of the UI (Qt via Pyside) should remain usable. And multiple transcription requests should be supported - I'm thinking of a pool of transcription threads, but I'm uncertain how many to allocate. Half the quantity of CPUs? All the CPUs under 50% load?

Advice welcome!

replies(2): >>44004418 #>>44004966 #
14. ◴[] No.44004203[source]
15. immibis ◴[] No.44004241[source]
With the GIL, multithreaded Python gives concurrent I/O without worrying about data structure concurrency (unless you do I/O in the middle of it). It's a lot like async in this way: data structure manipulation is atomic between "await" expressions, except the "await" is implicit, and you might have written one without realizing it, in which case you have a bug. Meanwhile you still get to use threads to handle several concurrent I/O operations. I bet a lot of Python code is written this way and will start randomly crashing if the data manipulation becomes non-atomic.
replies(3): >>44004284 #>>44005054 #>>44005728 #
16. jillesvangurp ◴[] No.44004251[source]
You are not the only one who is afraid of changes and a bit change resistant. I think the issue here is that the reasons for this fear are not very rational, and the interest of the wider community is to deal with technical debt. And the GIL is pure technical debt. Defensible 30 years ago, a bit awkward 20 years ago, and downright annoying and embarrassing now that world + dog has done all their AI data processing with Python at scale for the last 10 years. It had to go in the interest of future-proofing the platform.

What changes for you? Nothing unless you start using threads. You probably weren't using threads anyway, because there is little to no point to using them in Python. Most Python code bases completely ignore the threading module and instead use non-blocking IO, async, or similar things. The GIL thing only kicks in if you actually use threads.

If you don't use threads, removing the GIL changes nothing. There's no code that will break. All those C libraries that aren't thread safe are still single threaded, etc. Only if you now start using threads do you need to pay attention.

There's some threaded python code of course that people may have written in python somewhat naively in the hope that it would make things faster that is constantly hitting the GIL and is effectively single threaded. That code now might run a little faster. And probably with more bugs because naive threaded code tends to have those.

But a simple solution to address your fears: simply don't use threads. You'll be fine.

Or learn how to use threads. Because now you finally can and it isn't that hard if you have the right abstractions. I'm sure those will follow in future releases. Structured concurrency is probably high on the agenda of some people in the community.

replies(4): >>44004471 #>>44004545 #>>44005797 #>>44005830 #
17. immibis ◴[] No.44004265{5}[source]
AFAIK a group of instructions is only non-protected if one of the instructions does I/O. Explicit I/O - page faults don't count.
replies(1): >>44004683 #
18. rowanG077 ◴[] No.44004284{3}[source]
Afaik the only guarantee is that a bytecode instruction is atomic. Built-in data structures are mostly safe, I think, on a per-operation level. But combining them is not. I think by default the interpreter checks every few milliseconds for other threads to run, even if there is no IO or async action. See `sys.getswitchinterval()`
replies(2): >>44004571 #>>44005901 #
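The point about combining operations can be made concrete. A minimal sketch (names illustrative, not from the thread): `counter += 1` compiles to separate load, add, and store bytecodes, so even under the GIL a thread switch can land between them; only an explicit lock makes the compound read-modify-write a single unit.

```python
import threading

N = 100_000
counter = 0
lock = threading.Lock()

def locked_inc():
    global counter
    for _ in range(N):
        with lock:  # makes the read-modify-write one indivisible unit
            counter += 1

threads = [threading.Thread(target=locked_inc) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Without the lock, a switch between the LOAD/ADD/STORE bytecodes can
# lose updates; with it, the total is deterministic.
assert counter == 4 * N
```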
19. realreality ◴[] No.44004418{4}[source]
Use `concurrent.futures.ThreadPoolExecutor` to submit jobs, and `Future.add_done_callback` to flip the transcription field when the job completes.
replies(2): >>44008065 #>>44010314 #
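A minimal sketch of that pattern, with `transcribe` standing in for the real transcription work (the function names and file name here are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

results = []

def transcribe(audio_path):
    # stand-in for the real transcription work
    return f"transcript of {audio_path}"

def on_done(future):
    # called when the job completes (in a worker thread); a GUI would
    # re-enable the transcription field here, via the UI thread
    results.append(future.result())

with ThreadPoolExecutor(max_workers=4) as pool:
    future = pool.submit(transcribe, "interview.wav")
    future.add_done_callback(on_done)
# exiting the `with` block waits for submitted jobs to finish

assert results == ["transcript of interview.wav"]
```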
20. zem ◴[] No.44004431[source]
this looks extremely promising https://microsoft.github.io/verona/pyrona.html
21. ◴[] No.44004471[source]
22. freeone3000 ◴[] No.44004501[source]
I'm sure you'll be happy using the last language that has to fork() in order to thread. We've only had consumer-level multicore processors for 20 years, after all.
replies(2): >>44005842 #>>44011887 #
23. HDThoreaun ◴[] No.44004545[source]
> But a simple solution to address your fears: simply don't use threads. You'll be fine.

I'm not worried about new code. I'm worried about stuff written 15 years ago by a monkey who had no idea how threads work and just read something on Stack Overflow that said to use threading. This code will likely break when run post-GIL. I suspect there is actually quite a bit of it.

replies(5): >>44004632 #>>44004665 #>>44004939 #>>44008198 #>>44010469 #
24. hamandcheese ◴[] No.44004571{4}[source]
This is the nugget of information I was hoping for. So indeed even GIL threaded code today can suffer from concurrency bugs (more so than many people here seem to think).
25. HDThoreaun ◴[] No.44004577[source]
Honestly, unless you're willing to devote a solid 4+ hours to learning about multithreading, stick with asyncio.
replies(1): >>44010312 #
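For the concurrent-I/O case asyncio's shape is small; a minimal sketch (the `transcribe` coroutine and file names are stand-ins):

```python
import asyncio

async def transcribe(name):
    await asyncio.sleep(0)  # stand-in for awaiting real I/O
    return f"transcript of {name}"

async def main():
    # concurrency without threads: one event loop, switches only at awaits
    return await asyncio.gather(transcribe("a.wav"), transcribe("b.wav"))

out = asyncio.run(main())
assert out == ["transcript of a.wav", "transcript of b.wav"]
```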
26. bgwalter ◴[] No.44004632{3}[source]
If it is C-API code: Implicit protection of global variables by the GIL is a documented feature, which makes writing extensions much easier.

Most C extensions that will break are not written by monkeys, but by conscientious developers that followed best practices.

27. bayindirh ◴[] No.44004665{3}[source]
Software rots, software tools evolve. When Intel released performance primitives libraries which required recompilation to analyze multi-threaded libraries, we were amazed. Now, these tools are built into processors as performance counters and we have way more advanced tools to analyze how systems behave.

Older code will break, but code breaks all the time. A language changes how something behaves in a new revision, and suddenly 20-year-old bedrock tools are getting massively patched to accommodate both new and old behavior.

Is it painful, ugly, unpleasant? Yes, yes and yes. However change is inevitable, because some of the behavior was rooted in inability to do some things with current technology, and as hurdles are cleared, we change how things work.

My father's friend told me that length of a variable's name used to affect compile/link times. Now we can test whether we have memory leaks in Rust. That thing was impossible 15 years ago due to performance of the processors.

replies(4): >>44005661 #>>44005802 #>>44007054 #>>44010622 #
28. kfrane ◴[] No.44004683{6}[source]
If I understand that correctly, it would mean that running a function like this on two threads as f(1) and f(2) would produce a list of 1s and 2s without interleaving.

  import threading

  l, N = [], 1_000_000

  def f(x):
      for _ in range(N):
          l.append(x)
I've tried it out and they start interleaving when N is set to 1000000.
29. actinium226 ◴[] No.44004939{3}[source]
If code has been unmaintained for more than a few years, it's usually such a hassle to get it working again that 99% of the time I'll just write my own solution, and that's without threads.

I feel some trepidation about threads, but at least for debugging purposes there's only one process to attach to.

30. sgarland ◴[] No.44004966{4}[source]
Just use multiprocessing. If each job is independent and you aren’t trying to spread it out over multiple workers, it seems much easier and less risky to spawn a worker for each job.

Use SharedMemory to pass the data back and forth.
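A minimal sketch of the `SharedMemory` mechanics (for brevity both sides run in one process here; in real use the second side runs in the worker process and attaches by the block's name — the payload is illustrative):

```python
from multiprocessing import shared_memory

# Writer side: create a named block and copy result bytes into it.
shm = shared_memory.SharedMemory(create=True, size=64)
payload = b"transcript: hello world"
shm.buf[:len(payload)] = payload

# Reader side: in real use this runs in another process, attaching
# to the block by the same name.
view = shared_memory.SharedMemory(name=shm.name)
received = bytes(view.buf[:len(payload)])
assert received == payload

# Cleanup: close both handles, unlink once.
view.close()
shm.close()
shm.unlink()
```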

31. bratao ◴[] No.44005012[source]
This is a common mistake and very badly communicated. The GIL does not make Python code thread-safe. It only protects the internal CPython state. Multi-threaded Python code is not thread-safe today.
replies(3): >>44005107 #>>44005204 #>>44007726 #
32. imtringued ◴[] No.44005054{3}[source]
You start talking about GIL and then you talk about non-atomic data manipulation, which happen to be completely different things.

The only code that is going to break because of "no GIL" is C extensions, and for a very obvious reason: C code can now be called from multiple threads, which wasn't possible before. Python code could always be called from multiple Python threads, even in the presence of the GIL.

33. tialaramex ◴[] No.44005100[source]
You're not the only one. David Baron's note certainly applies: https://bholley.net/blog/2015/must-be-this-tall-to-write-mul...

In a language conceived for this kind of work it's not as easy as you'd like. In most languages you're going to write nonsense which has no coherent meaning whatsoever. Experiments show that humans can't successfully understand non-trivial programs unless they exhibit Sequential Consistency - that is, they can be understood as if (which is not reality) all the things which happen do happen in some particular order. This is not the reality of how the machine works, for subtle reasons, but without it merely human programmers are like "Eh, no idea, I guess everything is computer?". It's really easy to write concurrent programs which do not satisfy this requirement in most of these languages, you just can't debug them or reason about what they do - a disaster.

As I understand it Python without the GIL will enable more programs that lose SC.

34. amelius ◴[] No.44005107[source]
Well, I think you can manipulate a dict from two different threads in Python, today, without any risk of segfaults.
replies(2): >>44005807 #>>44011904 #
35. porridgeraisin ◴[] No.44005204[source]
Internal cpython state also includes say, a dictionary's internal state. So for practical purposes it is safe. Of course, TOCTOU, stale reads and various race conditions are not (and can never be) protected by the GIL.
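The per-operation vs. check-then-act distinction can be sketched in a few lines (the cache and helper names are illustrative): each individual dict operation is executed as a unit, but a check followed by a store leaves a window, while a single atomic call like `setdefault` does not.

```python
cache = {}

def default_value():
    # stand-in for an expensive computation
    return object()

def get_or_create_racy(key):
    # TOCTOU: another thread can insert `key` between the check and the
    # store, so two callers may build and observe different values
    if key not in cache:
        cache[key] = default_value()
    return cache[key]

def get_or_create_atomic(key):
    # one dict operation, executed as a unit by the interpreter
    return cache.setdefault(key, default_value())

first = get_or_create_atomic("job-1")
second = get_or_create_atomic("job-1")
assert first is second  # both callers see the same stored object
```

(Note `setdefault` still evaluates its default argument every call; it only guarantees a single winner for the store.)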
36. qznc ◴[] No.44005259[source]
Worst case is probably that it is like a "Python4": Things break when people try to update to non-GIL, so they rather stay with the old version for decades.
37. breadwinner ◴[] No.44005360[source]
When the language is dynamic there is less rigor. Statically checked code is more likely to be correct. When you add threads to "fast and loose" code things get really bad.
replies(1): >>44005849 #
38. fulafel ◴[] No.44005583[source]
A lot of Python usage is leveraging libraries with parallel kernels inside written in other languages. A subset of those is bottlenecked on Python side speed. A sub-subset of those are people who want to try no-GIL to address the bottleneck. But if non-GIL becomes pervasive, it could mean Python becomes less safe for the "just parallel kernels" users.
replies(1): >>44005769 #
39. delusional ◴[] No.44005661{4}[source]
> Software rots

No it does not. I hate that analogy so much because it leads to such bad behavior. Software is a digital artifact that does not degrade. With the right attitude, you'd be able to execute the same binary on new machines for as long as you desired. That is not true of organic matter that actually rots.

The only reason we need to change software is that we trade that off against something else. Instructions are reworked, because chasing the universal Turing machine takes a few sacrifices. If all software has to run on the same hardware, those two artifacts have to have a dialogue about what they need from each other.

If we didn't want the universal machine to do anything new, and we had a valuable product, we could just keep making the machine that executes that product. It never rots.

replies(6): >>44005751 #>>44005771 #>>44005775 #>>44006313 #>>44006656 #>>44010640 #
40. OskarS ◴[] No.44005728{3}[source]
That doesn't match with my understanding of free-threaded Python. The GIL is being replaced with fine-grained locking on the objects themselves, so sharing data-structures between threads is still going to work just fine. If you're talking about concurrency issues like this causing out-of-bounds errors:

    if len(my_list) > 5:
        print(my_list[5])
(i.e. because a different thread can pop from the list in-between the check and the print), that could just as easily happen today. The GIL makes sure that only one python interpreter runs at once, but it's entirely possible that the GIL is released and switches to a different thread after the check but before the print, so there's no extra thread-safety issue in free-threaded mode.

The problems (as I understand it, happy to be corrected), are mostly two-fold: performance and ecosystem. Using fine-grained locking is potentially much less efficient than using the GIL in the single-threaded case (you have to take and release many more locks, and reference count updates have to be atomic), and many, many C extensions are written under the assumption that the GIL exists.

41. kstrauser ◴[] No.44005751{5}[source]
That’s not what the phrase implies. If you have a C program from 1982, you can still compile it on a 1982 operating system and toolchain and it’ll work just as before.

But if you tried to compile it on today’s libc, making today’s syscalls… good luck with that.

Software “rots” in the sense that it has to be updated to run on today’s systems. They’re a moving target. You can still run HyperCard on an emulator, but good luck running it unmodded on a Mac you buy today.

replies(1): >>44010649 #
42. kccqzy ◴[] No.44005769{3}[source]
Yes sure. Thought experiment: what happens when these parallel kernels suddenly need to call back in to Python? Let's say you have a multithreaded sorting library. If you are sorting numbers then fine nothing changes. But if you are sorting objects you need to use a single thread because you need to call PyObject_RichCompare. These new parallel kernels will then try to call PyObject_RichCompare from multiple threads.
43. dahcryn ◴[] No.44005771{5}[source]
yes it does.

If software is implicitly built on wrong understanding, or undefined behaviour, I consider it rotting when it starts to fall apart as those undefined behaviours get defined. We do not need to sacrifice a stable future because of a few 15 year old programs. Let the people who care about the value that those programs bring, manage the update cycle and fix it.

44. odiroot ◴[] No.44005773[source]
It's called job security. We'll be rewriting decades of code that's broken by that transition.
45. eblume ◴[] No.44005775{5}[source]
Software is written with a context, and the context degrades. It must be renewed. It rots, sorry.
replies(1): >>44006626 #
46. dkarl ◴[] No.44005797[source]
> What changes for you? Nothing unless you start using threads

Coming from the Java world, you don't know what you're missing. Looking inside an application and seeing a bunch of threadpools managed by competing frameworks, debugging timeouts and discovering that tasks are waiting more than a second to get scheduled on the wrong threadpool, tearing your hair out because someone split a tiny sub-10μs bit of computation into two tasks and scheduling the second takes a hundred times longer than the actual work done, adding a library for a trivial bit of functionality and discovering that it spins up yet another threadpool when you initialize it.

(I'm mostly being tongue in cheek here because I know it's nice to have threading when you need it.)

47. cestith ◴[] No.44005802{4}[source]
My only concern is this kind of change in semantics for existing syntax is more worthy of a major revision than a point release.
replies(2): >>44009055 #>>44012014 #
48. pansa2 ◴[] No.44005807{3}[source]
You can do so in free-threaded Python too, right? The dict is still protected by a lock, but one that’s much more fine-grained than the GIL.
replies(1): >>44005911 #
49. rbanffy ◴[] No.44005830[source]
> There's some threaded python code of course

A fairly common pattern for me is to start a terminal UI updating thread that redraws the UI every second or so while one or more background threads do their thing. Sometimes, it’s easier to express something with threads and we do it not to make the process faster (we kind of accept it will be a bit slower).

The real enemy is state that can be mutated from more than one place. As long as you know who can change what, threads are not that scary.

50. im3w1l ◴[] No.44005842[source]
You have to understand that people come from very different angles with Python. Some people write web servers in Python, where speed equals money saved. Other people write little UI apps where speed is a complete non-issue. Yet others write AI/ML code that spends most of its time in GPU code, but then they want to do just a little data massaging in Python, which can easily bottleneck the whole thing. And some people write scripts that don't use a .env but rather OS libraries.
51. jaoane ◴[] No.44005849{3}[source]
Unless your claim is that the same error can happen more times per minute because threading can execute more code in the same timespan, this makes no sense.
replies(1): >>44007326 #
52. ynik ◴[] No.44005901{4}[source]
Bytecode instructions have never been atomic in Python's past. It was always possible for the GIL to be temporarily released, then reacquired, in the middle of operations implemented in C. This happens because C code is often manipulating the reference count of Python objects, e.g. via the `Py_DECREF` macro. But when a reference count reaches 0, this might run a `__del__` function implemented in Python, which means the "between bytecode instructions" thread switch can happen inside that reference-counting-operation. That's a lot of possible places!

Even more fun: allocating memory could trigger Python's garbage collector, which would also run `__del__` functions. So every allocation was also a possible (but rare) thread switch.

The GIL was only ever intended to protect Python's internal state (esp. the reference counts themselves); any extension modules assuming that their own state would also be protected were likely already mistaken.

replies(1): >>44006404 #
53. amelius ◴[] No.44005911{4}[source]
Sounds good, yes.
54. jerf ◴[] No.44005981[source]
I have a hypothesis that being dynamic has no particular effect on the complexity of multithreading. I think the apparent effect is a combination of two things: 1. All our dynamic scripting languages in modern use date from the 1990s before this degree of threading was a concern for the languages and 2. It is really hard to retrofit code written for not being threaded to work in a threaded context, and the "deeper" the code in the system the harder it is. Something like CPython is about as "deep" as you can go, so it's really, really hard.

I think if someone set out to write a new dynamic scripting language today, from scratch, multithreading it would not pose any particular challenge. Beyond the fact that it's naturally a difficult problem, I mean, but nothing special compared to the many other languages that have implemented threading. It's all about all that code from before the threading era that's the problem, not the threading itself. And Python has a loooot of that code.

55. almostgotcaught ◴[] No.44006165[source]
Do you understand what you're implying?

"Python programmers are so incompetent that Python succeeds as a language only because it lacks features they wouldn't know to use"

Even if it's circumstantially true, doesn't mean it's the right guiding principle for the design of the language.

56. indymike ◴[] No.44006313{5}[source]
> Software rots

> No it does not.

I'm thankful that it does, or I would have been out of work long ago. It's not that the files change (literal rot), it is that hardware, OSes, libraries, and everything else changes. I'm also thankful that we have not stopped innovating on all of the things the software I write depends on. You know, another thing changes - what we are using the software for. The accounting software I wrote in the late 80s... would produce financial reports that were what was expected then, but would not meet modern GAAP requirements.

57. rowanG077 ◴[] No.44006404{5}[source]
Well I didn't think of this myself. It's literally what the python official doc says:

> A global interpreter lock (GIL) is used internally to ensure that only one thread runs in the Python VM at a time. In general, Python offers to switch among threads only between bytecode instructions; how frequently it switches can be set via sys.setswitchinterval(). Each bytecode instruction and therefore all the C implementation code reached from each instruction is therefore atomic from the point of view of a Python program.

https://docs.python.org/3/faq/library.html#what-kinds-of-glo...

If this is not the case, please let the official Python team know their documentation is wrong. It does state that if Py_DECREF is invoked, all bets are off. But a ton of operations never do that.

58. igouy ◴[] No.44006626{6}[source]
You said it's the context that rots.
replies(1): >>44006736 #
59. rocqua ◴[] No.44006656{5}[source]
Fair point, but there is an interesting question posed.

Software doesn't rot, it remains constant. But the context around it changes, which means it loses usefulness slowly as time passes.

What is the name for this? You could say 'software becomes anachronistic'. But is there a good verb for that? It certainly seems like something that a lot more than just software experiences. Plenty of real world things that have been perfectly preserved are now much less useful because the context changed. Consider an Oxen-yoke, typewriters, horse-drawn carriages, envelopes, phone switchboards, etc.

It really feels like this concept should have a verb.

replies(1): >>44008334 #
60. rocqua ◴[] No.44006677[source]
This could, indeed, be quite catastrophic.

I wonder if companies will start adding this to their system prompts.

replies(1): >>44010658 #
61. bayindirh ◴[] No.44006736{7}[source]
It's a matter of perspective, I guess...

When you look from the program's perspective, the context changes and becomes unrecognizable, IOW, it rots.

When you look from the context's perspective, the program changes by not evolving and keeping up with the context, IOW, it rots.

Maybe we anthropomorphize both and say "they grow apart". :)

replies(1): >>44008270 #
62. rocqua ◴[] No.44006794[source]
Dynamic(ally typed) languages, by virtue of not requiring strict typing, often lead to more complicated function signatures. Such functions are generally harder to reason about. Because they tend to require inspection of the function to see what is really going on.

Multithreaded code is incredibly hard to reason about, and reasoning about it becomes a lot easier if you have certain guarantees (e.g. this argument or return value always has this type, so I can always do this to it). Code written in dynamic languages will more often lack such guarantees, because of the complicated signatures. This makes it even harder to reason about multithreaded code, increasing the risk it poses.

63. spookie ◴[] No.44007054{4}[source]
The other day I compiled a 1989 C program and it did the job.

I wish more things were like that. Tired of building things on shaky grounds.

replies(1): >>44009051 #
64. breadwinner ◴[] No.44007326{4}[source]
Some statically checked languages and tools can catch potential data races at compile time. Example: Rust's ownership and borrowing system enforces thread safety at compile time. Statically typed functional languages like Haskell or OCaml encourage immutability, which reduces shared mutable state — a common source of concurrency bugs. Statically typed code can enforce usage of thread-safe constructs via types (e.g., Sync/Send in Rust or ConcurrentHashMap in Java).
65. frollogaston ◴[] No.44007388[source]
What reliance did you have in mind? All sorts of calls in Python can release the GIL, so you already need locking, and there are race conditions just like in most languages. It's not like JS where your code is guaranteed to run in order until you "await" something.

I don't fully understand the challenge with removing it, but thought it was something about C extensions, not something most users have to directly worry about.

66. kevingadd ◴[] No.44007726[source]
This should not have been downvoted. It's true that the GIL does not make python code thread-safe implicitly, you have to either construct your code carefully to be atomic (based on knowledge of how the GIL works) or make use of mutexes, semaphores, etc. It's just memory-safe and can still have races etc.
67. ptx ◴[] No.44008065{5}[source]
Although keep in mind that the callback will be "called in a thread belonging to the process" (say the docs), presumably some thread that is not the UI thread. So the callback needs to post an event to the UI thread's event queue, where it can be picked up by the UI thread's event loop and only then perform the UI updates.

I don't know how that's done in Pyside, though. I couldn't find a clear example. You might have to use a QThread instead to handle it.

replies(1): >>44010363 #
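In Qt the usual mechanism is a Signal: cross-thread emissions are delivered as queued events, so the connected slot runs on the receiver's thread. The general shape can be sketched framework-free, with a `queue.Queue` standing in for the UI event loop (all names here are illustrative):

```python
import queue
import threading

ui_events = queue.Queue()  # stand-in for the UI thread's event queue

def worker(job_id):
    text = f"transcript {job_id}"                        # background work
    ui_events.put(("transcription_done", job_id, text))  # post; never touch widgets here

t = threading.Thread(target=worker, args=(1,))
t.start()
t.join()

# "UI thread": drain the queue and apply widget updates there
event = ui_events.get_nowait()
assert event == ("transcription_done", 1, "transcript 1")
```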
68. dhruvrajvanshi ◴[] No.44008198{3}[source]
> I'm not worried about new code. I'm worried about stuff written 15 years ago by a monkey who had no idea how threads work and just read something on Stack Overflow that said to use threading. This code will likely break when run post-GIL. I suspect there is actually quite a bit of it.

I was with OP's point but then you lost me. You'll always have to deal with that coworker's shitty code, GIL or not.

Could they make a worse mess with multithreading? Sure. Is their single-threaded code as bad anyway, because at the end of the day you can't even begin to understand it? Absolutely.

But yeah I think python people don't know what they're asking for. They think GIL less python is gonna give everyone free puppies.

69. igouy ◴[] No.44008270{8}[source]
We say the context has breaking changes.

We say the context is not backwards compatible.

70. igouy ◴[] No.44008334{6}[source]
obsolescence
71. rbanffy ◴[] No.44009051{5}[source]
If you go into mainframes, you'll compile code that was written 50 years ago without issue. In fact, you'll run code that was compiled 50 years ago and all that'll happen is that it'll finish much sooner than it did on the old 360 it originally ran on.
72. rbanffy ◴[] No.44009055{5}[source]
It's opt-in at the moment. It won't be the default behavior for a couple releases.

Maybe we'll get Python 4 with no GIL.

/me ducks

73. dotancohen ◴[] No.44010312{3}[source]
I'm willing to invest an afternoon learning. That's been the premise of my entire career!
74. dotancohen ◴[] No.44010314{5}[source]
Thank you.
75. dotancohen ◴[] No.44010363{6}[source]
Thank you. Perhaps I should trigger the transcription thread from the UI thread, then? It is a UI button that initiates it after all.
replies(1): >>44010633 #
76. zahlman ◴[] No.44010469{3}[source]
> I'm worried about stuff written 15 years ago

Please don't - it isn't relevant.

15 years ago, new Python code was still dominantly for 2.x. Even code written back then with an eye towards 3.x compatibility (or, more realistically, lazily run through `2to3` or `six`) will have quite little chance of running acceptably on 3.14 regardless. There have been considerable removals from the standard library, and `async` is no longer a valid identifier name (you laugh, but that broke Tensorflow once). The attitude taken towards """strings""" in a lot of 2.x code results in constructs that can be automatically made into valid syntax that appears to preserve the original intent, but which are not at all automatically fixed.

Also, the modern expectation is of a lock-step release cadence. CPython only supports up to the last 5 versions, released annually; and whenever anyone publishes a new version of a package, generally they'll see no point in supporting unsupported Python versions. Nor is anyone who released a package in the 3.8 era going to patch it if it breaks in 3.14 - because support for 3.14 was never advertised anyway. In fact, in most cases, support for 3.9 wasn't originally advertised, and you can't update the metadata for an existing package upload (you have to make a new one, even if it's just a "post-release") even if you test it and it does work.

Practically speaking, pure-Python packages usually do work in the next version, and in the next several versions, perhaps beyond the support window. But you can really never predict what's going to break. You can only offer a new version when you find out that it's going to break - and a lot of developers are going to just roll that fix into the feature development they were doing anyway, because life's too short to backport everything for everyone. (If there's no longer active development and only maintenance, well, good luck to everyone involved.)

If 5 years isn't long enough for your purposes, practically speaking you need to maintain an environment with an outdated interpreter, and find a third party (RedHat seems to be a popular choice here) to maintain it.

77. zahlman ◴[] No.44010622{4}[source]
> A language changes how something behaves in a new revision, suddenly 20 year old bedrock tools are getting massively patched to accommodate both new and old behavior.

In my estimation, the only "20 year old bedrock tools" in Python are in the standard library - which currently holds itself free to deprecate entire modules in any minor version, and remove them two minor versions later - note that this is a pseudo-calver created by a coincidentally annual release cadence. (A bunch of stuff that old was taken out recently, but it can't really be considered "bedrock" - see https://peps.python.org/pep-0594/).

Unless you include NumPy's predecessors when dating it (https://en.wikipedia.org/wiki/NumPy#History). And the latest versions of NumPy don't even support Python 3.9 which is still not EOL.

Requests turns 15 next February (https://pypi.org/project/requests/#history).

Pip isn't 20 years old yet (https://pypi.org/project/pip/#history) even counting the version 0.1 "pyinstall" prototype (not shown).

Setuptools (which generally supports only the Python versions supported by CPython, hasn't supported Python 2.x since version 45 and is currently on version 80) only appears to go back to 2006, although I can't find release dates for versions before what's on PyPI (their own changelog goes back to 0.3a1, but without dates).

78. ptx ◴[] No.44010633{7}[source]
The tricky part is coming back onto the UI thread when the background work finishes. Your transcription thread has to somehow trigger the UI work to be done on the UI thread.

It seems the way to do it in Qt is with signals and slots, emitting a signal from your QThread and binding it to a slot in the UI thread, making sure to specify a "queued connection" [1]. There's also a lower-level postEvent method [2] but people disagree [3] on whether that's OK to call from a regular Python thread or has to be called from a QThread.

So I would try doing it with Qt's thread classes, not with concurrent.futures.

[1] https://doc.qt.io/qt-5/threads-synchronizing.html#high-level...

[2] https://doc.qt.io/qt-6/qcoreapplication.html#postEvent

[3] https://www.mail-archive.com/pyqt@riverbankcomputing.com/msg...
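For what it's worth, the underlying pattern (the worker never touches UI state; it posts results to a queue that only the UI thread drains) can be sketched without Qt at all, which is essentially what a queued signal/slot connection does for you under the hood. This is just an illustration; `transcribe` and the clip name are made up:

```python
import queue
import threading

# Results travel from the worker back to the UI thread via a queue.
# In Qt, a queued connection plays the role of this queue plus the
# event loop that drains it.
results = queue.Queue()

def transcribe(audio):
    # Stand-in for slow transcription work running off the UI thread.
    text = f"transcript of {audio}"
    results.put(text)  # hand the result back; no UI calls here

worker = threading.Thread(target=transcribe, args=("clip.wav",))
worker.start()
worker.join()

# In a real app, the UI thread's event loop would poll this (or Qt
# would deliver it as a queued signal); here we just drain it once.
text = results.get_nowait()
print(text)
```

The key property is the same as with Qt's signals: all UI mutation happens on the thread that owns the UI.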

79. zahlman ◴[] No.44010640{5}[source]
>execute the same binary

Only if you statically compile or don't upgrade your dependencies. Or don't allow your dependencies to innovate.

80. zahlman ◴[] No.44010649{6}[source]
> You can still run HyperCard on an emulator, but good luck running it unmodded on a Mac you buy today.

I grew up with HyperCard, so I had a moment of sadness here.

replies(1): >>44011007 #
81. zahlman ◴[] No.44010658{3}[source]
Suppose they do. How is the LLM supposed to build a model of what will or won't break without a GIL purely from a textual analysis?

Especially when they've already been force-fed with ungodly amounts of buggy threaded code that has been mistakenly advertised as bug-free simply because nobody managed to catch the problem with a fuzzer yet (and which is more likely to expose its faults in a no-GIL environment, even though it's still fundamentally broken with a GIL)?

82. kstrauser ◴[] No.44011007{7}[source]
We all have our own personal HyperCard.
83. seabrookmx ◴[] No.44011009[source]
While it certainly has its rough edges, I'm a big asyncio user. So I'll be over here happily writing concurrent python that's single threaded, i.e. pretending my Python is nodejs.

For the web/network workloads most of us write, I'd highly recommend this.
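A minimal sketch of that style: two awaits overlap in time on a single thread, so shared state needs no locks. `fetch`, the names, and the delays here are hypothetical stand-ins for real network I/O:

```python
import asyncio

async def fetch(name: str, delay: float) -> str:
    # asyncio.sleep stands in for awaiting a real network response.
    await asyncio.sleep(delay)
    return f"{name} done"

async def main() -> list[str]:
    # Both coroutines make progress concurrently on one thread;
    # the event loop interleaves them at await points only.
    return await asyncio.gather(fetch("a", 0.01), fetch("b", 0.01))

results = asyncio.run(main())
print(results)  # a list, in the order the coroutines were passed
```

Because control only changes hands at explicit `await` points, the usual preemptive-threading hazards largely disappear for I/O-bound code.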

84. monkeyelite ◴[] No.44011886[source]
When you launch processes to do work, you get multi-core workload balancing for free.
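For example, a `multiprocessing.Pool` sidesteps the GIL entirely: each worker is a separate interpreter process, and the OS schedules them across cores. A minimal sketch (the `square` workload is just a placeholder for real CPU-bound work):

```python
from multiprocessing import Pool

def square(n: int) -> int:
    # CPU-bound work; each worker process has its own interpreter
    # and its own GIL, so these run in parallel across cores.
    return n * n

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        results = pool.map(square, range(8))
    print(results)  # squares of 0..7, in order
```

The trade-off is that data crossing the process boundary must be pickled, which is why this works best for chunky, independent tasks.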
85. monkeyelite ◴[] No.44011887[source]
I don’t understand this argument. My python program isn’t the only program on the system - I have a database, web server, etc. It’s already multi-core.
86. spacechild1 ◴[] No.44011904{3}[source]
It's memory safe, but it's not necessarily free of race conditions! It's not only C extensions that release the GIL, the Python interpreter itself releases the GIL after a certain number of instructions so that other threads can make progress. See https://docs.python.org/3/library/sys.html#sys.getswitchinte....

Certain operations that look atomic to the user actually consist of multiple bytecode instructions. If you are unlucky, the interpreter releases the GIL and yields to another thread in the middle of such a sequence. You won't get a segfault, but you may get unexpected results.

See also https://github.com/google/styleguide/blob/91d6e367e384b0d8aa...
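You can see this directly with the standard library's `dis` module: an augmented assignment that looks like one step compiles to separate load, add, and store instructions, and a thread switch can land between any two of them. A quick sketch:

```python
import dis

def bump(count):
    # Looks atomic, but compiles to several bytecode instructions:
    # load the value, add one, store it back. A thread switch between
    # the load and the store is how two threads lose updates to a
    # shared counter, GIL or no GIL.
    count += 1
    return count

dis.dis(bump)  # prints the load / add / store instruction sequence
```

This is why shared mutable state still needs a `threading.Lock` (or a queue) even under the GIL; the GIL only guarantees that individual bytecode instructions don't interleave, not whole Python statements.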

87. monkeyelite ◴[] No.44011917[source]
Good engineering design is about making unbalanced tradeoffs where you get huge wins for low costs. These kinds of decisions are opinionated and require you to say no to some edge cases to get a lot back on the important cases.

One lesson I have learned is that good design cannot survive popularity and bureaucracy that comes with it. Over time people just beat down your door with requests to do cases you explicitly avoided. You’re blocking their work and not being pragmatic! Eventually nobody is left to advocate for them.

And part of that is the community has more resources and can absorb some more complexity. But this is also why I prefer tools with smaller communities.

88. necovek ◴[] No.44012014{5}[source]
Python already has a history of "misrepresenting" the scope of a change (like changing the behaviour of one of the core data types and calling it just a major version change; that's really a new language, IMHO).

Still, that's only a marketing move; technically the choice was still the right one, just like this one is.