44 points | 16 comments
1. aw1621107 ◴[] No.45904075[source]
Dupe of [0], though there's only 1 comment on that submission as of this comment.

[0]: https://news.ycombinator.com/item?id=45898923

2. hinkley ◴[] No.45904726[source]
We use async code in two modes which have very different consequences for concurrency issues.

We have an imperative code flow where we perform a series of tasks that involve IO, and apply the effects sequentially. Here the biggest problem is holding a lock for a long transaction and starving the rest of the system. So we break it up into a finite state machine where the lock is held mostly during the synchronous parts.
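
That first mode can be sketched in asyncio (all names here are hypothetical, not from the original system): the IO runs outside the lock, and the lock is held only for the short synchronous apply step.

```python
import asyncio

lock = asyncio.Lock()
state = {"value": 0}

async def fetch_delta():
    # Stand-in for the IO part of the transaction; no lock held here.
    await asyncio.sleep(0)
    return 1

async def step():
    delta = await fetch_delta()   # long-running, lock-free part
    async with lock:              # lock held only while applying
        state["value"] += delta   # the effect synchronously

async def main():
    await asyncio.gather(*(step() for _ in range(10)))
    return state["value"]

print(asyncio.run(main()))  # 10
```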

The other is asking a lot of questions and then making a decision based on the sum of the answers. These actually happen in parallel, and we often have to relax the effective isolation levels to make this work. But it always seems to work better if the parallel task can be treated as a pure function. Purity removes side effects, which removes the need for write locks, which if applied consistently removes the Dining Philosophers problem. "Applied consistently" is the hard part, because it requires not just personal discipline but team and organizational discipline.
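
A sketch of that second mode, with invented names: each question is a pure function of its inputs, so the fan-out needs no write locks, and the decision is made from the sum of the answers.

```python
import asyncio

async def ask(source):
    # Pure with respect to shared state: reads its argument, writes nothing.
    await asyncio.sleep(0)  # stand-in for a read-only IO call
    return len(source)

async def decide(sources):
    # Ask all the questions in parallel, then decide on the sum.
    answers = await asyncio.gather(*(ask(s) for s in sources))
    return sum(answers) > 10

print(asyncio.run(decide(["alpha", "beta", "gamma"])))  # True
```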

> There is usually not much of a point in writing a finalizer that touches only the object being finalized, since such object updates wouldn’t normally be observable. Thus useful finalizers must touch global shared state.

That seems like an “Abandon hope, all ye who enter here.”

replies(1): >>45905849 #
3. wrcwill ◴[] No.45904737[source]
Unless I'm missing something, this has nothing to do with asynchronous code. The delete is just synchronous code running, same as if we called a function/closure right there.

This is just about syntax sugar hiding function calls.

replies(2): >>45905113 #>>45905158 #
4. hinkley ◴[] No.45905113[source]
I think it says if your async code holds locks you’re gonna have a bad time. Async and optimistic locks probably should go hand in hand.

I would think finalizers and async code magnify problems that are already there.
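
Optimistic locking in async code might look like this sketch (the record and its version counter are invented for illustration): read a version, compute without holding anything, and retry if another task committed first.

```python
import asyncio

record = {"version": 0, "value": 0}  # hypothetical shared record

def try_write(expected_version, new_value):
    # Commit only if nobody else wrote since we read; no lock is ever held.
    if record["version"] != expected_version:
        return False
    record["version"] += 1
    record["value"] = new_value
    return True

async def increment():
    while True:
        version, value = record["version"], record["value"]
        await asyncio.sleep(0)  # yield point: another task may win the race
        if try_write(version, value + 1):
            return

async def main():
    await asyncio.gather(*(increment() for _ in range(5)))
    return record["value"]

print(asyncio.run(main()))  # 5: no update is lost, with no locks held
```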

replies(1): >>45905887 #
5. ltratt ◴[] No.45905158[source]
I'm assuming you're referring to the Python finaliser example? If so, there's no syntax sugar hiding function calls to finalisers: you can verify that by running the code on PyPy, where the point at which the finaliser is called is different. Indeed, for this short-running program, the most likely outcome is that PyPy won't call the finaliser before the program completes!
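
A small illustration of that point: with CPython's reference counting the finaliser runs right at the `del`, whereas on PyPy it would typically run later, if at all.

```python
log = []

class Tracked:
    def __del__(self):
        log.append("finalised")

obj = Tracked()
del obj                  # CPython: refcount hits zero, __del__ runs now
log.append("after del")
# On CPython this prints ['finalised', 'after del']; on PyPy the
# finaliser is usually deferred past this point.
print(log)
```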
6. cryptonector ◴[] No.45905849[source]
There's a reason Java got rid of finalizers. Removing them forces the programmer to choose between synchronous cleanup (`AutoCloseable`) and asynchronous cleanup on a thread (`Cleaner`).
7. cryptonector ◴[] No.45905887{3}[source]
If you use a single-threaded executor then you don't need locks in your async code. Well, you might use external locks, but not thread synchronization primitives.

When I write async code I use a single-threaded multi-process pattern. Look ma'! No locks!

Well, that's not very fair. The best async code I've written was embarrassingly parallel, no-sync-needed, read-only stuff. If I was writing an RDBMS I would very much need locks, even if using the single-threaded/multi-processed pattern. But also then my finalizers would mainly drop locks rather than acquire them.
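
The single-threaded/multi-process shape for that embarrassingly parallel, read-only case might look like this sketch (the word-count workload is invented): each worker is single-threaded and pure, so no locks appear anywhere.

```python
from multiprocessing import Pool

def word_count(text):
    # Pure, read-only work: no shared mutable state, hence no locks.
    return len(text.split())

def main():
    docs = ["one two three", "four five", "six"]
    # Coordination happens only through the queues the Pool manages;
    # each worker process runs this function single-threaded.
    with Pool(processes=2) as pool:
        return pool.map(word_count, docs)

if __name__ == "__main__":
    print(main())  # [3, 2, 1]
```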

replies(2): >>45906307 #>>45906736 #
8. nemothekid ◴[] No.45906103[source]
While I think the problem highlighted in the article is a longstanding problem for Rust[1], I don't think the example, or finalizers, were the problem with Futurelock as described by Oxide.

I'm not sure you can write a simple example in Python, because Rust's futures architecture and Python's are different. `futurelock` is an issue of cancellation safety, which is a stranger concept (related to finalizers, but not in the way OP has described).

Personally, I think `tokio::select!` is dangerous and I don't use it in my code - it's very easy to deadlock yourself or create weird performance issues. I think the interface is too close to Go's, and if you don't understand what is going on, you can create deadlocks. That said, even if you avoid `tokio::select!`, I think cancellation safety is one of those dragons that exists in async Rust.

[1] https://without.boats/blog/poll-drop/

replies(1): >>45906291 #
9. nemothekid ◴[] No.45906291[source]
The `futurelock` is probably closer to something like:

    import threading
    mutex = threading.Lock()

    def gen_1():
        yield 1
        print("acquiring")
        mutex.acquire()
        print("acquired")
        yield 2
        print("releasing")
        mutex.release()
        yield 3


    def gen_2():
        yield "a"

    def do_something_else():
        print("im gonna do something else")
        mutex.acquire()
        print("acquired")
        mutex.release()
        print("done")

    a = gen_1()
    b = gen_2()
    zipped_data = zip(a, b)
    for num, letter in zipped_data:
        print("output", num, letter)

    do_something_else()
    print("done")
Here you can see that `gen_1` "holds" the lock, even though we are done with it, and `gen_1` won't release it until `next` is called again.

The problem is that before `do_something_else` is called, either `a` must be destroyed or someone has to call `next` on it. However, just from reading the code, it can be difficult to see that this constraint exists.
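
One hedged way to make the suspended generator safe to destroy is to pair the acquire with try/finally; then closing `a` (which raises GeneratorExit at the paused `yield`) still releases the lock. A reworked `gen_1`:

```python
import threading

mutex = threading.Lock()

def gen_1():
    yield 1
    mutex.acquire()
    try:
        yield 2  # close() raises GeneratorExit here, hitting the finally
    finally:
        mutex.release()
    yield 3

a = gen_1()
next(a)    # 1
next(a)    # 2: the suspended generator now holds the lock
a.close()  # GeneratorExit unwinds through the finally, releasing it
print(mutex.locked())  # False
```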

10. hinkley ◴[] No.45906307{4}[source]
You do have to be careful that all of your data updates are transitive, or you have to hold all of the updates until you can apply them in sequential order. One of my favorite tricks there is to use a throttling or limiting library, start all of the tasks, and then run a for loop to await each answer in order. You still have head-of-line blocking issues, but you can make as much forward progress as can be made.
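
That trick might be sketched like this in asyncio (the semaphore stands in for whatever throttling library is used): start everything, then await the answers in submission order.

```python
import asyncio

async def fetch(i, limiter):
    async with limiter:         # throttle: at most 3 tasks in flight
        await asyncio.sleep(0)  # stand-in for real IO
        return i * 10

async def main():
    limiter = asyncio.Semaphore(3)
    tasks = [asyncio.create_task(fetch(i, limiter)) for i in range(5)]
    results = []
    for t in tasks:              # await each answer in order, so effects
        results.append(await t)  # can be applied sequentially
    return results

print(asyncio.run(main()))  # [0, 10, 20, 30, 40]
```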
11. keeganpoppen ◴[] No.45906699[source]
props to the author - this post is extremely well-written
12. keeganpoppen ◴[] No.45906736{4}[source]
that isn’t the panacea you describe it to be. you just happen to write a lot of code where writing it that way doesn’t result in consistency problems.
13. munch117 ◴[] No.45907087[source]
A __del__ that does any kind of real work is asking for trouble. Use it to print a diagnostic reminding you to call .close() or .join() or use a with statement, and nothing else. For example:

    class Resource:
        def __init__(self):
            self._closed = False

        def close(self):
            self._closed = True
            self.do_interesting_finalisation_stuff()

        def __del__(self):
            if not self._closed:
                print("Programming error! Forgot to .close()", self)
If you do anything the slightest bit more interesting than that in your __del__, then you are likely to regret it.

Every time I've written a __del__ that did more, it has been trouble and I've ended up whittling it down to a simple diagnostic. With one notable exception: a __del__ that put a termination notification into a queue.Queue which a different thread was listening to. That one worked great: if the other thread was still alive and listening, then it would get the message. If not, then the message would just get garbage-collected with the Queue, but the message would be redundant anyway, so that would be fine.
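
That exception might look like this sketch (class and message are invented): the only thing `__del__` does is drop a message into a `queue.Queue`; if nobody is listening, the message is simply collected along with the queue.

```python
import queue

notifications = queue.Queue()

class Worker:
    def __del__(self):
        # The entire finaliser: one put, no locks, no real work.
        notifications.put("worker gone")

w = Worker()
del w  # CPython finalises immediately; a listening thread would see this
print(notifications.get_nowait())  # worker gone
```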

replies(1): >>45907276 #
14. TinkersW ◴[] No.45907185[source]
The Python example looks fixable with a reentrant mutex; no idea if that translates to the Rust issue.
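
On the same-thread interleaving above, it is: with a `threading.RLock` the owning thread just re-acquires recursively, so `do_something_else` no longer blocks (though the suspended generator still notionally holds the lock). A minimal sketch:

```python
import threading

mutex = threading.RLock()  # reentrant: the owning thread may re-acquire

def gen_1():
    yield 1
    mutex.acquire()
    yield 2
    mutex.release()
    yield 3

a = gen_1()
next(a)  # 1
next(a)  # 2: the suspended generator holds the lock (count 1)
# With a plain Lock this acquire would deadlock; the RLock just bumps
# the recursion count for the owning thread and carries on.
with mutex:
    print("no deadlock")
```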
15. anticodon ◴[] No.45907276[source]
Yep, a __del__ in the Redis client code caused almost-random deadlocks at my job for several years. Manual intervention was required to restart stuck Celery jobs. It took me about 2-3 weeks to find the culprit (I had to deploy a Python interpreter compiled with debug info to production, wait for the deadlock to happen again, attach with gdb, and find where it happened). One of the most difficult production issues I've had to solve in my life (because it happened randomly and it was impossible to even remotely guess what was causing it).