Multi-Core by Default

(www.rfleury.com)
70 points kruuuder | 10 comments
1. jerf ◴[] No.45539056[source]
If the author has not already, I would commend to them a search of the literature (or the relevant blog summaries) for the term "implicit parallelism". This was an academic topic from a few years back (my brain does not do this sort of thing very well but I want to say 10-20 years go) where the hope was that we could just fire some sort of optimization technique at normal code which would automatically extract all the implicit parallelism in the code and parallelize it, resulting in massive essentially-free gains.

In order to do this, the first thing researchers did was analyze existing source code to determine the maximum amount of implicit parallelism present in it, assuming exploiting it was free. The attempt basically failed right there. Intuitively, we all expect our code to have tons of implicit parallelism that could be exploited. It turns out our intuition is wrong: the maximum parallelism that could be extracted was often only around 2x, which, even if the parallelization were free, is only a marginal improvement.
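The ~2x ceiling described here lines up with Amdahl's law. A quick sanity check (the numbers are illustrative, not from the studies mentioned):

```rust
// Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the
// parallelizable fraction of the program and n is the number of cores.
fn amdahl_speedup(p: f64, n: f64) -> f64 {
    1.0 / ((1.0 - p) + p / n)
}

fn main() {
    // If only half the work is parallelizable, even an absurd number of
    // cores caps out at 2x -- consistent with the ~2x ceiling above.
    for &n in &[2.0, 4.0, 16.0, 1_000_000.0] {
        println!("p = 0.5, n = {:>9}: {:.3}x", n, amdahl_speedup(0.5, n));
    }
}
```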

Moreover, it is often not terribly amenable to human optimization either.

A game engine might be the best-case scenario for this sort of code, but once you put the coordination costs back into the charts, those charts start looking a lot less impressive in practice. I have a sort of rule of thumb that the key to high-performance multithreading is that the cost of the payload of a given bit of coordination overhead needs to be substantially greater than the cost of the coordination itself, and a game engine will not necessarily have that characteristic... it may have lots of tasks that could be done in parallel, but if each one is individually cheaper than the coordination needed to dispatch it, the parallelism does not pay.
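That rule of thumb can be put into a toy cost model (my sketch, with made-up units): if each batch of work costs `payload` units and every hand-off to another thread costs a fixed `coord` units, parallel time over n workers is roughly payload/n + coord.

```rust
// Toy model of the payload-vs-coordination rule of thumb.
// serial time  ~ payload
// parallel time ~ payload / n + coord   (the hand-off is on the critical path)
fn modeled_speedup(payload: f64, coord: f64, n: f64) -> f64 {
    payload / (payload / n + coord)
}

fn main() {
    // Payload >> coordination: near-linear speedup on 8 workers.
    println!("{:.2}x", modeled_speedup(1000.0, 1.0, 8.0));
    // Payload ~ coordination: parallelism barely pays at all.
    println!("{:.2}x", modeled_speedup(2.0, 1.0, 8.0));
}
```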

replies(4): >>45539945 #>>45540340 #>>45541602 #>>45542254 #
2. tomsmeding ◴[] No.45539945[source]
There is some recent work on this too: https://dl.acm.org/doi/10.1145/3632880
3. jayd16 ◴[] No.45540340[source]
I guess you can argue that instruction reordering and SMT/hyper-threading are already eating the easy wins there. And as you said, it seems like the gains taper off around 2x.

I'm not sure why games would be a good target. They're traditionally very much tied to a single thread because, ironically, passing data to the graphics and display hardware and to multithreaded subroutines like physics all has to be synchronized.

The easiest way to do that without locking a bunch of threads is to let a single thread go as fast as possible through all that main thread work.

If you really want a game focused parallelization framework, look into the Entity Component System pattern. The developer defines the data and mutability flow of various features in the game.

Because the execution ordering is fully known, the framework can chunk, schedule, reorder, and fan out the work across threads with less waiting and fewer cache misses.
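A minimal sketch of that idea using only the Rust standard library (a toy, not any real ECS framework; `integrate` and the array layout are my inventions): one "system" reads velocities and writes positions stored in parallel arrays, so its data access is fully known and the work can be chunked across scoped threads.

```rust
use std::thread;

// ECS-flavored sketch: components live in parallel arrays, and this
// system only reads `velocities` and writes `positions`. Because the
// reads/writes are declared by the slice types, it is safe to chunk
// the arrays and fan the chunks out across scoped threads.
fn integrate(positions: &mut [f32], velocities: &[f32], dt: f32, workers: usize) {
    let w = workers.max(1);
    let chunk = ((positions.len() + w - 1) / w).max(1); // ceil-divide, never 0
    thread::scope(|s| {
        for (ps, vs) in positions.chunks_mut(chunk).zip(velocities.chunks(chunk)) {
            s.spawn(move || {
                for (p, v) in ps.iter_mut().zip(vs) {
                    *p += v * dt; // the only write this system performs
                }
            });
        }
    }); // scope joins all worker threads here
}

fn main() {
    let mut pos = vec![0.0f32; 8];
    let vel = vec![1.0f32; 8];
    integrate(&mut pos, &vel, 0.5, 4);
    println!("positions after one step: {:?}", pos);
}
```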

replies(3): >>45541199 #>>45541223 #>>45541462 #
4. jerf ◴[] No.45541199[source]
"I'm not sure why games would be a good target..."

"If you really want a game focused parallelization framework, look into the Entity Component System pattern."

Exactly that. You can break a lot of modern games nicely into a lot of little things being done to discrete entities very quickly. But there the problem is that it's easy for the things to be too small, meaning you don't have a lot of time to be "clever" in the code.

I'm ignoring the GPU and just looking at CPU for this. GPU is in its own world where parallelization is forced on you comprehensively, in a space forced to be amenable to that.

replies(1): >>45542940 #
5. rcxdude ◴[] No.45541223[source]
Yeah, ECS is the approach I've heard can get games to sufficient parallelism, though only if you are careful about sticking to it properly and manage the dataflow between systems carefully. I think the main thing is that, apart from the difficulty of doing the analysis in the first place, if you write the code without thinking about parallelism, you will tend to introduce data dependencies all over the place, and changing that structure later is very difficult.
replies(1): >>45543013 #
6. _aavaa_ ◴[] No.45541462[source]
ECS is not a game specific thing. It’s a redrawing of encapsulation boundaries which can be applied to systems broadly.

https://youtube.com/watch?v=wo84LFzx5nI

7. zackmorris ◴[] No.45541602[source]
One thing that might help: state space explosion is mainly caused by mutability, because each mutation potentially creates branching that can be difficult or impossible to statically analyze.

In other words, imperative programming with const is directly transpilable to/from functional programming.

Using modern techniques like higher-order methods, scatter-gather arrays (similar to map-reduce), passing by value via copy-on-write, etc., code can be written that works like piping data between Unix executables. Everything becomes a spreadsheet, basically.

Which also has implications for multiprocessing. When all data is const, many of the async edge cases that we're normally forced to deal with go away. Loosely, that means we can use techniques similar to double buffering to create new mutated copies where only the part that changed is actually copied; the rest of the data can reference one source of truth. In practice, this turns out to most closely approximate optimal algorithms, whereas best-effort imperative code may paint itself into a corner: logic design choices get dictated by the realities of the runtime instead of the problem domain itself, so the optimizer loses opportunities to simplify.
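That copy-only-what-changed idea can be sketched in Rust with `Arc` (my illustration, not the commenter's code): producing the next version of the state deep-copies only the chunk being mutated, while untouched chunks stay physically shared with the old version.

```rust
use std::sync::Arc;

// Each version of the world is a list of chunks behind Arc. Cloning the
// World clones only the Arc pointers; Arc::make_mut deep-copies a chunk
// only when another version still references it.
#[derive(Clone)]
struct World {
    chunks: Vec<Arc<Vec<i32>>>,
}

impl World {
    // Build the next version, mutating a single chunk via `f`.
    fn with_updated(&self, chunk_idx: usize, f: impl Fn(&mut Vec<i32>)) -> World {
        let mut next = self.clone(); // shallow: shares all chunks
        f(Arc::make_mut(&mut next.chunks[chunk_idx])); // copies just this chunk
        next
    }
}

fn main() {
    let old = World { chunks: vec![Arc::new(vec![1, 2]), Arc::new(vec![3, 4])] };
    let new = old.with_updated(1, |c| c[0] = 30);
    // Chunk 0 is physically shared between both versions...
    assert!(Arc::ptr_eq(&old.chunks[0], &new.chunks[0]));
    // ...while chunk 1 diverged and the old version is untouched.
    assert_eq!(*old.chunks[1], vec![3, 4]);
    assert_eq!(*new.chunks[1], vec![30, 4]);
    println!("structural sharing ok");
}
```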

I find that in my daily work, nearly all of my time is spent untangling highly-imperative mutable code. Because I must hold the entire state in my mind when mutable references (mostly to objects) are passed between functions. My higher-order method solutions often end up shorter, more readable, faster, even more efficient, but harder to explain to junior developers. So I would really like a compiler that implicitly converts imperative code like for() loops and references into const/functional code whose intermediate code (i-code) can be reduced to its simplest form.

This is also my main concern about Rust and other mainstream imperative languages like C# and even JavaScript: they encourage bare-hands methods that go away with const/functional code, if the user is willing to accept a doubling or more of memory usage. That becomes less of an issue over time as memory costs decrease.

Edit: I maybe should have said fork-join instead of scatter-gather, but they are related concepts.

8. variadix ◴[] No.45542254[source]
What about pipelining? Something that has clear serial dependencies could still be pipelined to execute separate (but dependent at work boundaries) units of work in parallel.
9. jayd16 ◴[] No.45542940{3}[source]
Sure, there's work you can throw into ECS, but it's a paradigm shift that is not implicit, and it also highlights how much doesn't work.
10. jayd16 ◴[] No.45543013{3}[source]
Agreed. It exemplifies what parallelism is on the table but also how many more guarantees need to be enforced to get it.

You're almost swinging the pendulum back to a fixed pipeline and I don't think you can get that for free.