
287 points | shadaj | 1 comment
1. lifeisstillgood No.43197836
This is a massive coming issue - I am not sure "distributed" can be exactly replaced by "parallel processing", but it's close.

So to simplify: from 1985 to roughly 2005 you could keep sequential software exactly the same and it just ran faster with each new hardware generation. Still one CPU, but transistors got smaller and clocks got faster (hand-wavy: on-chip RAM, pipelining, and so on).

Then, roughly from the mid-2000s on, single CPUs just stopped magically doubling in speed. You got more cores instead, but using them meant parallel or distributed programming. The software that served 100 people in 1995 was the same software serving 10,000 people in 2000; by 2015 we needed new coding - NoSQL, MapReduce, Facebook-scale data centres.
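
As a rough illustration of what that "new coding" shift looks like, here is a toy map/reduce word count in plain Python - the names are my own, not from any particular framework:

    from collections import Counter
    from multiprocessing import Pool

    def map_count(chunk):
        # "map" phase: count words within one chunk, independently of all the others
        return Counter(chunk.split())

    def word_count(chunks):
        # fan the map phase out across cores, then "reduce" by merging partial counts
        with Pool() as pool:
            partials = pool.map(map_count, chunks)
        return sum(partials, Counter())

The per-chunk work parallelises cleanly across cores or machines; the merge at the end is the part that doesn't.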

But the hardware kept growing

Cerebras now builds wafer-scale chips (fabbed at TSMC) with about 900,000 cores - but my non-parallel, non-distributed code won't run a million times faster on them. Amdahl's law just won't let it.
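
To put a number on that, here is a minimal sketch - the 99% parallel fraction is just an assumption for illustration:

    # Amdahl's law: speedup on n cores when a fraction p of the runtime is parallelisable
    def amdahl_speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    print(amdahl_speedup(0.99, 900_000))  # ~100x: 900,000 cores buy you roughly 100x
    print(amdahl_speedup(0.99, 100))      # ~50x: so the extra 899,900 cores only double that

Even at 99.9% parallel the ceiling is about 1,000x - nowhere near the core count.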

So yeah - nobody wants to buy new chips with a million cores if you aren't going to get the speed-ups. Why buy an expensive data centre with 100x the cores if you can't sell 100x the usage?