460 points pieterr | 8 comments

__turbobrew__ (No.42159121)
It’s interesting: SICP and many other “classic” texts talk about designing programs, but these days I think the much more important skill is designing systems.

I don’t know if distributed systems is considered part of “Computer Science”, but it is a much more common problem that I see needing to be solved.

I try to write systems in the simplest way possible and then use observability tools to figure out where the design is deficient; then maybe I will pull out a data structure or some other “computer sciency” thing to solve that problem. It turns out that big O notation and runtime complexity don’t matter the majority of the time, and you can solve most problems with arrays and fast CPUs. And even when you do have runtime problems, you should profile the program to find the hot spots.

What computer science doesn’t teach you is how memory caching works in CPUs. Your fancy graph algorithm may have good runtime complexity, but if it hoses the CPU cache, you may have been able to go faster with a plain array and good cache usage.
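
As a toy illustration of that point (not a rigorous benchmark; the names and sizes are made up for the sketch), compare summing the same values through a contiguous array versus chasing pointers through list nodes:

    /* Toy illustration, not a rigorous benchmark: sequential array
       access vs. pointer chasing over the same values. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (1 << 23)   /* ~8M elements; sizes are illustrative */

    struct node { long value; struct node *next; };

    int main(void) {
        long *arr = malloc(N * sizeof *arr);
        struct node *head = NULL;
        for (long i = 0; i < N; i++) {
            arr[i] = i;
            struct node *n = malloc(sizeof *n);
            n->value = i;
            n->next = head;
            head = n;
        }

        clock_t t0 = clock();
        long s1 = 0;
        for (long i = 0; i < N; i++)
            s1 += arr[i];           /* sequential: prefetcher streams cache lines */
        clock_t t1 = clock();

        long s2 = 0;
        for (struct node *p = head; p; p = p->next)
            s2 += p->value;         /* pointer chasing: each hop can miss cache */
        clock_t t2 = clock();

        printf("array %ld in %.3fs, list %ld in %.3fs\n",
               s1, (double)(t1 - t0) / CLOCKS_PER_SEC,
               s2, (double)(t2 - t1) / CLOCKS_PER_SEC);
        return 0;
    }

And here the list nodes are even allocated back to back; in a long-lived heap they end up scattered, and the gap widens further.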

The much more common problems I have are how to deal with fault tolerance, correctness in distributed locks and queues, and system scalability.
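
To make the lock-correctness point concrete, here is a minimal single-process sketch of the fencing-token idea; the names are hypothetical, and a real system would get its tokens from a lock service such as ZooKeeper or etcd. The service hands out a monotonically increasing token with each lease, and the storage side rejects writes carrying an older token:

    /* Minimal sketch of fencing tokens for distributed locks
       (hypothetical names; single-process simulation only). */
    #include <stdio.h>

    static long next_token = 0;     /* lock-service state */
    static long highest_seen = 0;   /* storage-side state */

    static long acquire_lock(void) {
        return ++next_token;        /* every new lease gets a larger token */
    }

    static int write_with_token(long token, const char *data) {
        if (token < highest_seen) { /* stale holder whose lease expired */
            printf("rejected stale write (token %ld): %s\n", token, data);
            return -1;
        }
        highest_seen = token;
        printf("accepted write (token %ld): %s\n", token, data);
        return 0;
    }

    int main(void) {
        long a = acquire_lock();               /* client A holds the lock */
        long b = acquire_lock();               /* A's lease expires; B acquires */
        write_with_token(b, "B's update");     /* accepted */
        write_with_token(a, "A's late write"); /* rejected: A no longer holds it */
        return 0;
    }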

Maybe I am just biased because I have a computer/electrical engineering background.

seanmcdirmid (No.42161980)
> but these days I think the much more important skill is designing systems.

It is hard to design systems if you don't have the perspective of implementing them. Yes, you move up the value chain to designing things, but no, you don't get to skip gaining experience lower down the value chain.

> What computer science doesn’t teach you is how memory caching works in CPUs.

That was literally my first quarter of my CS undergrad 30 years ago: the old Hennessy and Patterson book, which I believe is still used today. Are things so different now?

> The much more common problems I have is how to deal with fault tolerance, correctness in distributed locks and queues, and system scalability.

All of that was covered in my CS undergrad, and I didn't even come from a fancy computer engineering/EE background.

__turbobrew__ (No.42162822)
I think CS 30 years ago was closer to computer engineering today.

At my uni 10 years ago the CS program didn’t touch anything related to hardware; hell, CS students didn’t even need to take multivariable calculus. In my computer engineering program we covered solid-state physics, electromagnetism, digital electronics design, digital signal processing, CPU architecture, compiler design, OS design, algorithms, software engineering, and distributed systems design.

The computer engineering program took you all the way from solid-state physics and transistor design up to Paxos.

The CS program was much more focused on logic, proofs, and formalism, and it never touched anything hardware-adjacent.

I realize this differs between programs, but from what I read and hear, many CS programs these days start at Java and never go down in abstraction level.

I do agree with you that learning the fundamentals is important, but I would argue that a SICP-type course is not fundamental; physics is fundamental. And once you learn how we use physics to build CPUs, you learn that fancy algorithms and complex solutions are unnecessary most of the time, given how fast computers are today. If you can keep the CPU pipeline full, with high cache, branch-prediction, and prefetch hit rates, plus SIMD, you can easily brute-force many problems.
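
A sketch of what hardware-friendly brute force can look like (illustrative; whether it actually vectorizes depends on the compiler, flags, and target): a branch-free linear pass the compiler can turn into SIMD compares.

    /* Branch-free linear scan: the comparison becomes a predicated/SIMD
       compare rather than a branch the predictor can miss. */
    #include <stddef.h>
    #include <stdint.h>

    /* Count how many values fall below `limit`. */
    size_t count_below(const int32_t *v, size_t n, int32_t limit) {
        size_t count = 0;
        for (size_t i = 0; i < n; i++)
            count += (v[i] < limit);   /* pure data dependency, no branch */
        return count;
    }

Built with cc -O3, a loop like this typically scans data at close to memory bandwidth, which is often faster end to end than building and maintaining a cleverer index.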

And for the 10% of problems that cannot be brute-forced, 90% can be solved with profiling and memoization; and of the problems you cannot solve with memoization, 90% can be solved with B-trees.
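
A minimal memoization sketch, with Fibonacci standing in for a genuinely expensive pure function: cache each result so repeated inputs cost a lookup instead of a recompute.

    /* Minimal memoization sketch: Fibonacci as the stand-in workload. */
    #include <stdio.h>

    #define MAX_N 64                 /* fib(64) still fits in 64 bits */
    static long long memo[MAX_N];    /* 0 means "not computed yet" */

    static long long fib(int n) {
        if (n < 2) return n;         /* base cases */
        if (memo[n]) return memo[n]; /* hit: skip the whole subtree */
        return memo[n] = fib(n - 1) + fib(n - 2);
    }

    int main(void) {
        printf("%lld\n", fib(50));   /* instant; exponential time without the memo */
        return 0;
    }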

seanmcdirmid (No.42162880)
A top-tier CS program is going to make you learn computer architecture alongside automata and proofs. MIT went the extra mile with SICP; it was honestly a hole I didn’t have access to in my top-tier program, but I only realized this because I studied PL in grad school. You should go through it if you haven’t; I think it would have made my undergrad experience better, and I definitely benefited from a great, well-rounded curriculum already (UW CSE is still no slouch, but it isn’t MIT!).

If you are into physics and mechanics, then you have to check out SICM (SICP’s less famous cousin) as well. Again, MIT went the extra mile with that one.

jltsiren (No.42164027)
Computers today are slower than they have ever been. And tomorrow's computers are going to be even slower.

In many applications, the amount of data grows at least as quickly as computer performance. If the time complexity of an algorithm is superlinear, today's computer needs more time to run it with today's data than yesterday's computer did with yesterday's data. Algorithms that used to be practical get more and more expensive to run, until they eventually become impractical.
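
To make that arithmetic concrete (illustrative numbers): suppose data size n and machine speed s both double each hardware generation. The runtime of an O(n^2) algorithm scales as n^2 / s, so

    T_new / T_old = ((2n)^2 / (2s)) / (n^2 / s) = 4 / 2 = 2

The newer, faster machine takes twice as long on its contemporary workload; only algorithms that are linear or better hold steady when data grows as fast as hardware.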

The more data you have, the more you have to think about algorithms. Brute-forcing can be expensive in terms of compute costs and wall-clock time, while low-level optimizations can take a lot of developer time.

richiebful1 (No.42164039)
We (at a public research university in the US) designed a rudimentary CPU, wrote MIPS assembly, and studied computer architecture for our CS degree. I graduated 6 years ago.

Edit: we also did formal methods and proofs as part of the core curriculum.

nyolfen (No.42164524)
If we're talking about relative, human-experienced performance, the slowest computers I ever owned were in the early-to-mid '00s. They sped up as multicore and SSDs entered the picture, and plateaued about ten years ago in my experience.
WillAdams (No.42165372)
The most striking performance observation of my experience was that Apple took a system, OpenStep 4.2, which ran okay on a 33 MHz 68040 (and acceptably on my 25 MHz Cube), and made it run only a little bit better on a 400 MHz G3 as the Mac OS X Public Beta.

The difference, of course, was anti-aliasing, much greater bit depth, and running multiple programming environments/toolkits (Carbon and Java).

chamomeal (No.42166028)
The CS degree at my school was pretty much just Java. Mostly UIs with Java. And applets, lol.

The only kids who learned anything else learned C++ so they could get jobs with DoD contractors.
