We have an imperative code flow where we perform a series of tasks that involve IO, and apply the effects sequentially. Here the biggest problem is holding a lock for a long transaction and starving the rest of the system. So we break it up into a finite state machine where the lock is held mostly during the synchronous parts.
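A minimal asyncio sketch of that split; `fetch_remote` and `apply_locally` are hypothetical stand-ins for the IO step and the synchronous effect:

import asyncio

lock = asyncio.Lock()

async def fetch_remote(key):
    await asyncio.sleep(0.01)      # stand-in for real IO
    return key.upper()

def apply_locally(key, value):     # the short synchronous effect
    print("applied", key, "->", value)

async def long_transaction(keys):
    # State-machine style: each iteration is "fetching" (no lock held)
    # followed by "applying" (lock held only for the synchronous part),
    # instead of one long transaction that holds the lock across IO.
    for key in keys:
        value = await fetch_remote(key)   # IO happens outside the lock
        async with lock:
            apply_locally(key, value)

asyncio.run(long_transaction(["a", "b"]))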
The other pattern is asking a lot of questions and then making a decision based on the sum of the answers. These actually happen in parallel, and we often have to relax the effective isolation levels to make this work. But it always seems to work better if the parallel task can be treated as a pure function. Purity removes side effects, which removes the need for write locks, which, if applied consistently, removes the Dining Philosophers problem. "Applied consistently" is the hard part, because it requires not just personal discipline but team and organizational discipline.
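As a sketch of that second pattern, assuming the per-question work really is pure and read-only (`answer` and the questions are made up):

from concurrent.futures import ThreadPoolExecutor

def answer(question):
    # Pure function: no shared state and no side effects, so no write
    # locks, and the workers cannot deadlock each other.
    return len(question) > 0       # stand-in for a real read-only query

questions = ["is the cache warm?", "is the replica caught up?"]
with ThreadPoolExecutor() as pool:
    answers = list(pool.map(answer, questions))

# Make the decision based on the sum of the answers.
print("proceed" if all(answers) else "abort")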
> There is usually not much of a point in writing a finalizer that touches only the object being finalized, since such object updates wouldn’t normally be observable. Thus useful finalizers must touch global shared state.
That seems like an “Abandon hope, all ye who enter here.”
This is just about syntax sugar hiding function calls.
When I write async code I use a single-threaded, multi-process pattern. Look ma! No locks!
Well, that's not very fair. The best async code I've written was embarrassingly parallel, no-sync-needed, read-only stuff. If I were writing an RDBMS I would very much need locks, even with the single-threaded, multi-process pattern. But even then, my finalizers would mainly drop locks rather than acquire them.
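For concreteness, a minimal sketch of the single-threaded, multi-process pattern; the worker logic is a made-up stand-in, and all coordination goes through queues:

import multiprocessing as mp

def worker(inbox, outbox):
    # The worker is single-threaded and owns its state outright,
    # so there is nothing to lock.
    total = 0
    for item in iter(inbox.get, None):   # None is the shutdown sentinel
        total += item
        outbox.put(total)

if __name__ == "__main__":
    inbox, outbox = mp.Queue(), mp.Queue()
    proc = mp.Process(target=worker, args=(inbox, outbox))
    proc.start()
    for n in (1, 2, 3):
        inbox.put(n)
        print("running total:", outbox.get())
    inbox.put(None)                      # tell the worker to exit
    proc.join()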
I'm not sure you can write a simple example in Python, because Rust's futures architecture and Python's are different. `futurelock` is an issue of cancellation safety, which is a stranger concept (related to finalizers, but not in the way OP has described).
Personally, I think `tokio::select!` is dangerous and I don't use it in my code - it's very easy to deadlock yourself or create weird performance issues. I think the interface is too close to Go's, and if you don't understand what is going on, you can create deadlocks. That said, even if you avoid `tokio::select!`, I think cancellation safety is one of those dragons that exist in async Rust.
import threading

mutex = threading.Lock()

def gen_1():
    yield 1
    print("acquiring")
    mutex.acquire()
    print("acquired")
    yield 2
    print("releasing")
    mutex.release()
    yield 3

def gen_2():
    yield "a"

def do_something_else():
    print("im gonna do something else")
    mutex.acquire()   # deadlocks: gen_1 is suspended holding the lock
    print("acquired")
    mutex.release()
    print("done")

a = gen_1()
b = gen_2()
zipped_data = zip(a, b)
for num, letter in zipped_data:
    # zip pulls from a first, so the second iteration advances gen_1
    # past mutex.acquire() before gen_2 raises StopIteration.
    print("output", num, letter)
do_something_else()
print("done")
Here you can see that `gen_1` "holds" the lock even though we are done with it, and `gen_1` won't release it until `next` is called on it again. The problem is that before `do_something_else` is called, someone has to keep calling `next` on `a` until the lock is released; merely destroying `a` doesn't help here, because GeneratorExit is raised at the `yield` and, with no try/finally in `gen_1`, `mutex.release()` never runs. However, just from reading the code, the fact that this hazard exists can be difficult to see.
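One way out, sketched against the example above: write the generator so the lock is released whenever the generator unwinds, and close it explicitly before anyone else needs the lock:

def gen_1_safe():
    yield 1
    with mutex:              # released on normal exit *and* on GeneratorExit
        yield 2
    yield 3

a = gen_1_safe()
print(next(a), next(a))      # a is now suspended at "yield 2", holding the lock
a.close()                    # GeneratorExit unwinds the with-block, releasing it
do_something_else()          # no deadlock this time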
class Resource:
    def __init__(self):
        self._closed = False

    def close(self):
        self._closed = True
        self.do_interesting_finalisation_stuff()

    def __del__(self):
        # Diagnostic only: complain if the owner forgot to call .close().
        if not self._closed:
            print("Programming error! Forgot to .close()", self)
If you do anything the slightest bit more interesting than that in your __del__, then you are likely to regret it. Every time I've written a __del__ that did more, it has been trouble, and I've ended up whittling it down to a simple diagnostic. With one notable exception: a __del__ that put a termination notification into a queue.Queue which a different thread was listening to. That one worked great: if the other thread was still alive and listening, then it would get the message. If not, then the message would just get garbage-collected along with the Queue, but the message would have been redundant anyway, so that would be fine.
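A minimal sketch of that one exception, relying on CPython running __del__ promptly when the last reference is dropped (`Watched` and the message text are made up for illustration):

import queue
import threading

notifications = queue.Queue()

class Watched:
    def __del__(self):
        # Only enqueue: put() on an unbounded Queue never blocks, and we
        # touch no other shared state from the finalizer.
        notifications.put("watched object is gone")

def listener():
    print("listener saw:", notifications.get())

t = threading.Thread(target=listener)
t.start()
w = Watched()
del w        # drops the last reference; __del__ enqueues the notification
t.join()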