Multi-Core by Default

(www.rfleury.com)
70 points by kruuuder | 3 comments
1. MontyCarloHall | No.45538212
This post underscores how traditional imperative language syntax just isn't that well-suited to elegantly expressing parallelism. On the other hand, this is exactly where array languages like APL/J/etc. or array-based frameworks like NumPy/PyTorch/etc. really shine.

The list summation task in the post is just a list reduction, and a reduction can be automatically parallelized for any associative operator. The gory parallelization details in the post are only something the user needs to care about in a purely imperative language that lacks native array operations like reduction. In an array language, the `reduce` function can detect whether the reduction operator is associative and, if so, automatically handle the parallelization logic behind the scenes. Thus `reduce(values, +)` and `reduce(values, *)` would execute seamlessly without the user needing to explicitly implement the subdivision of work. On the other hand, `reduce(values, /)` would run serially, since division is not associative. Custom binary operators would just need to declare whether they're associative (and possibly commutative, depending on how the parallel scheduler works internally), and they'd be parallelized out of the box.
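A minimal sketch of that dispatch idea in Python (not the syntax of any actual array language; `reduce_par` and the `ASSOCIATIVE` registry are hypothetical). Known-associative operators are split into chunks, reduced in parallel, and the partial results combined; anything else falls back to a serial left fold. Note that with pure-Python operators the GIL limits real speedup, so this only illustrates the dispatch logic:

```python
import operator
from concurrent.futures import ThreadPoolExecutor
from functools import reduce as seq_reduce

# Hypothetical registry: operators declared associative by the user/library
ASSOCIATIVE = {operator.add, operator.mul}

def reduce_par(values, op, workers=4):
    """Parallelize the reduction only when op is known to be associative."""
    if op not in ASSOCIATIVE or len(values) < 2 * workers:
        return seq_reduce(op, values)          # serial fallback, e.g. for /
    chunk = (len(values) + workers - 1) // workers
    parts = [values[i:i + chunk] for i in range(0, len(values), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(lambda p: seq_reduce(op, p), parts))
    return seq_reduce(op, partials)            # combine partial results

print(reduce_par(list(range(1, 101)), operator.add))      # sum 1..100
print(reduce_par([1, 2, 3, 4, 5], operator.mul))          # product
print(reduce_par([8.0, 4.0, 2.0], operator.truediv))      # serial: (8/4)/2
```

Associativity is what makes the chunked combine legal: `(a+b)+(c+d)` equals `((a+b)+c)+d`, so the partial sums can be merged in any grouping, which is exactly what division lacks.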

replies(2): >>45540671 #>>45540715 #
2. procaryote | No.45540671
Sometimes this approach yields very compact code where you need a lot of research and mental modelling to work out whether it actually ends up executing in parallel, and whether that is even desirable.
3. wbpaelias | No.45540715
If you're willing to let go of imperative syntax, Interaction Nets[0] might be interesting to maximize parallelism where possible. I think Bend[1] is probably the most mature implementation of that idea.

[0]: https://en.wikipedia.org/wiki/Interaction_nets
[1]: https://github.com/HigherOrderCO/Bend