    93 points | by 0x1997 | 21 comments
    1. AceJohnny2 ◴[] No.45788117[source]
    Because I had to look it up:

    SPSC = Single Producer Single Consumer

    MPMC = Multiple Producer Multiple Consumer

    (I'll let you deduce the other abbreviations)

    replies(2): >>45789417 #>>45789913 #
    2. enricozb ◴[] No.45788158[source]
    When reading this project's wiki [0], it mentions that Kanal (another channel implementation) uses an optimization that "makes [the] async API not cancellation-safe". I wonder if this is the same as, or related to, the issue in the recent HN thread on "futurelock" [1]. I hadn't heard of this cancellation-safety issue prior to that other HN thread.

    [0]: https://github.com/frostyplanet/crossfire-rs/wiki#kanal

    [1]: https://news.ycombinator.com/item?id=45774086
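
    For anyone else new to the term: a recv future is cancellation-safe if dropping it before completion can't lose a message. A minimal sketch of where the hazard sits, using tokio's mpsc (whose recv is documented as cancel-safe; a channel optimized to dequeue eagerly in an earlier poll would not be):

        use tokio::sync::mpsc;
        use tokio::time::{sleep, Duration};

        // `select!` drops (cancels) the losing future. If `recv()` had already
        // pulled a message off the queue in an earlier poll, cancelling it here
        // would silently discard that message. Tokio's mpsc recv() is documented
        // as cancel-safe, so no message is lost.
        async fn recv_with_timeout(rx: &mut mpsc::Receiver<u64>) -> Option<u64> {
            tokio::select! {
                msg = rx.recv() => msg,
                _ = sleep(Duration::from_millis(100)) => None, // recv future dropped here
            }
        }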

    replies(2): >>45788455 #>>45789568 #
    3. paholg ◴[] No.45788455[source]
    Futurelock is not about cancellation safety (cancellation is actually one solution to futurelock), though the related issues that are linked in that post are.
    4. truth_seeker ◴[] No.45788769[source]
    For Java/JVM:

    https://github.com/JCTools/JCTools

    5. Hamuko ◴[] No.45789274[source]
    I feel like testing whether it'd be faster than tokio::sync::mpsc in my project, but in the context of a WebSocket client, the performance of just using tokio is already pretty good. Existing CPU usage is negligible (under a minute of CPU time in >110 hours of real time).
    6. endorphine ◴[] No.45789417[source]
    S = Single

    M = Multi

    ---

    C = Consumer

    P = Producer

    7. andrepd ◴[] No.45789568[source]
    Cancellation safety is another thing entirely, but one about which there's also an Oxide RFD: https://rfd.shared.oxide.computer/rfd/400
    8. efskap ◴[] No.45789798[source]
    Using stuff like this, does it make sense to use Rust in a Go style where, instead of async and its function colouring, you spawn coroutines and synchronize over channels?
    replies(2): >>45789835 #>>45791783 #
    9. thrance ◴[] No.45789835[source]
    It doesn't matter if you use channels or mutexes to communicate between tasks, you still need your function to be async to spawn it as a coroutine. Your only choice is between coroutines (async tasks spawned on an executor) or regular OS threads. Channels work with both, the rule of thumb is to use async when your workload is IO-bound, and threads when it is compute-bound. Then, it's up to you whether you communicate by sharing memory or share memory by communicating.
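
    As a sketch of the two options (assuming tokio as the executor), the channel pattern looks nearly identical either way:

        use std::sync::mpsc;
        use std::thread;

        // OS threads + std channel: no executor, no async.
        fn thread_version() {
            let (tx, rx) = mpsc::channel::<u64>();
            let producer = thread::spawn(move || {
                for i in 0..3 {
                    tx.send(i).unwrap();
                }
            });
            for v in rx {
                println!("got {v}");
            }
            producer.join().unwrap();
        }

        // Coroutines + async channel: same shape, but spawned on an executor.
        async fn task_version() {
            let (tx, mut rx) = tokio::sync::mpsc::channel::<u64>(16);
            let producer = tokio::spawn(async move {
                for i in 0..3 {
                    tx.send(i).await.unwrap();
                }
            });
            while let Some(v) = rx.recv().await {
                println!("got {v}");
            }
            producer.await.unwrap();
        }
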
    replies(2): >>45790697 #>>45791456 #
    10. wartywhoa23 ◴[] No.45789913[source]
    They could have named these 1:1, 1:n, n:1, and n:m, but deemed that too old and unfashionable.
    replies(1): >>45790606 #
    11. surajrmal ◴[] No.45790606{3}[source]
    Which side is producer and which is consumer? I don't think fashion has anything to do with it.
    replies(1): >>45793809 #
    12. surajrmal ◴[] No.45790697{3}[source]
    It does matter. Using channels makes control flow much harder to follow, but it lets you avoid wrapping everything in its own mutex (or RefCell), and local reasoning becomes easier. There is also a difference in latency and CPU utilization, both of which still matter in IO-bound workloads. I honestly don't think it's one or the other; optimal usage is a mix of both, depending on the specifics of the use case. Channels are great for things you want decoupled from each other, but a design needs to hit a certain level of abstraction/complexity before they're worth it.

    Even folks who write modern Go try to avoid overusing channels. It's quite common to see Go codebases with few or no channels.
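
    A toy sketch of the trade-off (hypothetical names, plain std types): the mutex version shares the map everywhere, while the channel version gives it a single owner.

        use std::collections::HashMap;
        use std::sync::mpsc;
        use std::sync::{Arc, Mutex};
        use std::thread;

        // Shared-state style: every participant locks the map directly.
        fn mutex_style() {
            let counts = Arc::new(Mutex::new(HashMap::<String, u64>::new()));
            let c = Arc::clone(&counts);
            thread::spawn(move || {
                *c.lock().unwrap().entry("hits".into()).or_insert(0) += 1;
            })
            .join()
            .unwrap();
        }

        // Channel style: one owner holds the map; everyone else sends messages.
        fn channel_style() {
            let (tx, rx) = mpsc::channel::<String>();
            let owner = thread::spawn(move || {
                let mut counts = HashMap::<String, u64>::new();
                for key in rx {
                    *counts.entry(key).or_insert(0) += 1;
                }
                counts // returned once all senders are dropped
            });
            tx.send("hits".into()).unwrap();
            drop(tx); // closing the channel ends the owner's loop
            let _counts = owner.join().unwrap();
        }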

    replies(1): >>45793079 #
    13. reitzensteinm ◴[] No.45790769[source]
    I'm a little nervous about the correctness of the memory orderings in this project. For example:

    Two acquires back to back are unnecessary here; in general, fetch_sub and fetch_add should give enough guarantees for this file even with Relaxed ordering. https://github.com/frostyplanet/crossfire-rs/blob/master/src...
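
    To illustrate the Relaxed point with a toy example unrelated to crossfire's actual code: a pure event counter needs no ordering, because no other memory is published through it, and the read-modify-write is atomic regardless.

        use std::sync::atomic::{AtomicUsize, Ordering};

        static EVENTS: AtomicUsize = AtomicUsize::new(0);

        // The increment itself is atomic even with Relaxed; stronger orderings
        // would only matter if other data had to become visible alongside it.
        fn record_event() {
            EVENTS.fetch_add(1, Ordering::Relaxed);
        }

        fn events_so_far() -> usize {
            // An atomic snapshot of the count, with no happens-before
            // relationship to the writers.
            EVENTS.load(Ordering::Relaxed)
        }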

    Congest is never written to with Release, so the Acquire is never used to form a release chain: https://github.com/frostyplanet/crossfire-rs/blob/dd4a646ca9...

    The queue appears to close the channel twice (once per rx/tx), which is discordant with the apparent care taken with the fencing. https://github.com/frostyplanet/crossfire-rs/blob/dd4a646ca9...

    The author also proposes an incorrect optimization to Tokio here, which suggests a lack of understanding of the specific guarantees involved: https://github.com/tokio-rs/tokio/pull/7622

    The tests do not appear to simulate the queue in Loom, which would be a very, very good idea.

    This stuff is hard. I almost certainly made a mistake in what I've written above (edit: I did!). In practice, the queue is probably fine to use, but I wouldn't be shocked if there's a heisenbug lurking in this codebase that manifests something like: it all works fine now, but in the next LLVM version an optimization pass is added which breaks it on ARM in release mode, and after that the queue yields duplicate values in a busy loop every few million reads which is only triggered on Graviton processors.

    Or something. Like I said, this stuff is hard. I wrote a very detailed simulator for the Rust/C++ memory model, have implemented dozens of lockless algorithms, and I still make a mistake every time I go to write code. You need to simulate it with something like Loom to have any hope of a robust implementation.

    For anyone interested in learning about Rust's memory model, I can't recommend Rust Atomics and Locks enough:

    https://marabos.nl/atomics/

    replies(1): >>45792211 #
    14. nicoburns ◴[] No.45791456{3}[source]
    > Your only choice is between coroutines (async tasks spawned on an executor) or regular OS threads.

    That's not true. There are stackful coroutine libraries in Rust too; I believe there's one called "may". They're admittedly not that widely used, but they are available.

    replies(1): >>45791939 #
    15. mamcx ◴[] No.45791783[source]
    If you are more "parallel" than "async", totally yes!

    here "parallel" is used in the most broad sense where you have (probably unrelated) tasks that are mostly independent for each other and run to completion. In that case "async" is an anti-pattern. So if you work more process-based that switch-based go!

    16. steveklabnik ◴[] No.45791939{4}[source]
    May has unresolvable soundness issues, which is part of why it’s not popular.
    17. tontinton ◴[] No.45792205[source]
    Can I select over multiple receivers concurrently, similar to select(2) on Linux?
    18. embedding-shape ◴[] No.45792211[source]
    > The tests do not appear to simulate the queue in Loom, which would be a very, very good idea.

    Loom is apparently this: https://github.com/tokio-rs/loom

    I've used tokio a bit in the past but wasn't aware of that tool at all. It looks really useful, and I'm probably not alone in never having heard of it before. Any tips & tricks or gotchas one should know beforehand?
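
    For anyone else new to it: Loom re-runs a closure under every interleaving the memory model allows, so a test looks like an ordinary unit test wrapped in loom::model, using Loom's shadow types instead of std's. A minimal sketch:

        use loom::sync::atomic::{AtomicUsize, Ordering};
        use loom::sync::Arc;
        use loom::thread;

        #[test]
        fn counter_is_exact() {
            // loom::model explores every legal interleaving of the closure.
            loom::model(|| {
                let counter = Arc::new(AtomicUsize::new(0));
                let handles: Vec<_> = (0..2)
                    .map(|_| {
                        let c = Arc::clone(&counter);
                        thread::spawn(move || {
                            c.fetch_add(1, Ordering::Relaxed);
                        })
                    })
                    .collect();
                for h in handles {
                    h.join().unwrap();
                }
                assert_eq!(counter.load(Ordering::Relaxed), 2);
            });
        }

    The usual gotchas: you have to swap std's sync types for Loom's (typically behind a cfg flag), the model only supports a small number of threads, and the state space explodes quickly, so each test needs to stay tiny.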

    19. thrance ◴[] No.45793079{4}[source]
    I meant that when it comes to choosing between threads and async/await, it doesn't matter whether you use channels or something else; both can be used with either. My original comment wasn't very clear, it seems.

    Of course it matters what synchronization primitives you choose, for the reasons you gave.

    20. wartywhoa23 ◴[] No.45793809{4}[source]
    How do you read "X:Y": "X to Y" or "Y to X"? And how do you treat "to": as direction from an origin to a destination, or vice versa? As in "I send it to you".