In order to do this, the first step was to analyze existing source code and determine the maximum amount of implicit parallelism present in the code, assuming exploiting it was free. The attempt basically failed right there. Intuitively we all expect our code to contain tons of implicit parallelism just waiting to be exploited. It turns out our intuition is wrong: the maximum amount of parallelism that could be extracted was often in the 2x range, which, even if the parallelization were free, is only a marginal improvement.
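To put a number on why a 2x ceiling is so deflating, Amdahl's law is the standard lens: with a parallelizable fraction p and n workers, speedup is 1/((1-p) + p/n), which tops out at 1/(1-p) no matter how many workers you throw at it. So a measured maximum of ~2x with free, unbounded parallelism implies only about half the code was parallelizable at all. A minimal sketch of the arithmetic (the numbers here are illustrative, not from the original study):

    #include <cstdio>
    #include <initializer_list>

    // Amdahl's law: speedup for a workload where fraction p is
    // perfectly parallelizable across n workers and coordination is free.
    double amdahl_speedup(double p, double n) {
        return 1.0 / ((1.0 - p) + p / n);
    }

    int main() {
        // If a limit study sees ~2x with effectively infinite free workers,
        // the parallel fraction p must satisfy 1 / (1 - p) = 2, i.e. p = 0.5.
        double p = 0.5;
        for (double n : {2.0, 4.0, 16.0, 1e9}) {
            std::printf("n = %10.0f  speedup = %.3fx\n", n, amdahl_speedup(p, n));
        }
        // Even with a billion free cores, the speedup never exceeds 2x.
    }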
Moreover, it is often not something terribly amenable to human optimization either.
A game engine might be the best-case scenario for this sort of code, but once you put the coordination costs back into the charts, those charts start looking a lot less impressive in practice. I have a rule of thumb that the key to high-performance multithreading is that the cost of the payload carried by a given bit of coordination overhead needs to be substantially greater than the cost of the coordination itself, and a game engine will not necessarily have that characteristic... it may have lots of tasks that can be done in parallel, but if they are individually cheap relative to the coordination needed to dispatch them, the parallelism buys you very little.
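You can see the rule of thumb directly by benchmarking the two regimes. The sketch below is mine, not from the original discussion: the function name payload and the iteration counts are made up for illustration, and thread-per-task is deliberately naive to make the coordination cost visible. When the payload is tiny, thread creation and joining dominate and the "parallel" version loses; when the payload dwarfs the coordination, threading wins.

    #include <atomic>
    #include <chrono>
    #include <cstdio>
    #include <thread>
    #include <vector>

    // Hypothetical payload: the useful work inside one task.
    // Atomic sink keeps the compiler from discarding the loop
    // and avoids a data race when tasks run concurrently.
    std::atomic<long> sink{0};
    void payload(long iterations) {
        long acc = 0;
        for (long i = 0; i < iterations; ++i) acc += i;
        sink += acc;
    }

    double run_serial(int tasks, long iters) {
        auto t0 = std::chrono::steady_clock::now();
        for (int i = 0; i < tasks; ++i) payload(iters);
        auto t1 = std::chrono::steady_clock::now();
        return std::chrono::duration<double, std::milli>(t1 - t0).count();
    }

    double run_thread_per_task(int tasks, long iters) {
        auto t0 = std::chrono::steady_clock::now();
        std::vector<std::thread> pool;
        for (int i = 0; i < tasks; ++i) pool.emplace_back(payload, iters);
        for (auto& t : pool) t.join();  // coordination overhead lives here
        auto t1 = std::chrono::steady_clock::now();
        return std::chrono::duration<double, std::milli>(t1 - t0).count();
    }

    int main() {
        // Many tiny payloads: coordination (create/join) dominates.
        std::printf("tiny  serial %8.2f ms  threaded %8.2f ms\n",
                    run_serial(1000, 100), run_thread_per_task(1000, 100));
        // Few big payloads: the work dwarfs the coordination.
        std::printf("big   serial %8.2f ms  threaded %8.2f ms\n",
                    run_serial(8, 50'000'000), run_thread_per_task(8, 50'000'000));
    }

A real engine would amortize the coordination with a thread pool and task batching, but the underlying constraint is the same: batching only helps because it raises the payload-to-coordination ratio.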