837 points | turrini | 3 comments

titzer | No.43971962
I like to point out that since ~1980, computing power has increased about 1000X.

If dynamic array bounds checking cost 5% (narrator: it is far less than that), and we turned it on everywhere, we could have computers that are just a mere 950X faster.
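
A quick sanity check on the arithmetic, assuming the 5% bounds-checking overhead applies uniformly to the whole gain:

    $1000 \times (1 - 0.05) = 950$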

If you went back in time to 1980 and offered the following choice:

I'll give you a computer that runs 950X faster, doesn't have a huge class of memory-safety vulnerabilities, and lets you debug your programs orders of magnitude more easily. Or I'll give you a computer that runs 1000X faster, but software will be just as buggy, or worse, and debugging will be even more of a nightmare.

People would have their minds blown at 950X. You wouldn't even have to offer 1000X. But guess what we chose...

Personally I think the 1000Xers kinda ruined things for the rest of us.

_aavaa_ | No.43972050
Except we've squandered that 1000x not on bounds checking but on countless layers of abstractions and inefficiency.
grumpymuppet | No.43972130
This is something I've wished to eliminate too. Maybe we just cast the past 20 years as the "prototyping phase" of modern infrastructure.

It would be interesting to put together a roadmap for optimizing software at scale -- where is the low-hanging fruit? What are the prime "offenders"?

Call it a power saving initiative and get environmentally-minded folks involved.

sgarland | No.43972912
IMO, the prime offender is simply not understanding fundamentals. It shows up in simple things like “a network call is orders of magnitude slower than a local disk, which is orders of magnitude slower than RAM…” (and, moreover, not understanding that EBS et al. are networked disks, albeit highly specialized and optimized ones), or in doing insertions to a DB by looping over a list and writing each row individually.
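
To illustrate that last point, here is a minimal JDBC sketch (the `users` table and its column are made up) contrasting one round-trip per row with client-side batching; exact numbers depend on the driver and the network, but the batched form typically turns N round-trips into a handful.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.util.List;

    public class InsertExample {
        // Anti-pattern: one network round-trip (and, with autocommit, one transaction) per row.
        static void insertOneByOne(Connection conn, List<String> names) throws SQLException {
            try (PreparedStatement ps = conn.prepareStatement("INSERT INTO users(name) VALUES (?)")) {
                for (String name : names) {
                    ps.setString(1, name);
                    ps.executeUpdate();          // blocks on the network for every single row
                }
            }
        }

        // Better: accumulate rows client-side and flush them in batches inside one transaction.
        static void insertBatched(Connection conn, List<String> names) throws SQLException {
            conn.setAutoCommit(false);
            try (PreparedStatement ps = conn.prepareStatement("INSERT INTO users(name) VALUES (?)")) {
                int pending = 0;
                for (String name : names) {
                    ps.setString(1, name);
                    ps.addBatch();
                    if (++pending == 1000) {     // send every 1000 rows in one go
                        ps.executeBatch();
                        pending = 0;
                    }
                }
                ps.executeBatch();               // flush the remainder
                conn.commit();
            }
        }
    }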

I have struggled against this long enough that I don’t think there is an easy fix. My current company is the first I’ve been at that is taking it seriously, and that’s only because we had a spate of SEV0s. It’s still not easy, because (a) I and the other technically-minded people have to find the problems and then figure out how to explain them, and (b) at its heart, it’s a culture war. Properly normalizing your data model is harder than chucking everything into JSON, even if the former will save you headaches months down the road. Learning how to profile code (and fix the problems) may not be exactly hard, but it’s certainly harder than just adding more pods to your deployment.

mike_hearn | No.43976066
Use of underpowered databases and abstractions that don't eliminate round-trips is a big one. The hardware is fast, but apps take seconds to load because the backend makes many round-trips to the DB, and the query mix is unoptimized because there are no DBAs anymore.

It's the sort of thing that can be handled via better libraries, if people use them. Instead of Hibernate, use a mapper like Micronaut Data. Turn on round-trip diagnostics in your JDBC driver and look for places where trips can be eliminated, e.g. by using stored procedures. Have someone whose job is to look out for slow queries and optimize them, or pay for a commercial DB that can do that by itself. Also: use a database that lets you pipeline queries on a connection and receive the results asynchronously, along with server languages that make it easy to exploit that for additional latency wins.
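
To make the round-trip point concrete, here is a rough sketch in plain JDBC (the orders/customers schema is invented) of the classic N+1 pattern that naïve ORM use tends to produce, next to the single joined query that eliminates the extra trips; join-fetching in a mapper or a stored procedure gets you the same effect.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class RoundTrips {
        // N+1 pattern: one query for the orders, then one more round-trip per order for its customer.
        static void nPlusOne(Connection conn) throws SQLException {
            try (Statement st = conn.createStatement();
                 ResultSet orders = st.executeQuery("SELECT id, customer_id FROM orders")) {
                while (orders.next()) {
                    try (PreparedStatement ps =
                             conn.prepareStatement("SELECT name FROM customers WHERE id = ?")) {
                        ps.setLong(1, orders.getLong("customer_id"));
                        try (ResultSet rs = ps.executeQuery()) {
                            rs.next();           // every order row costs another network round-trip
                        }
                    }
                }
            }
        }

        // One round-trip: let the database do the join and ship everything back at once.
        static void singleJoin(Connection conn) throws SQLException {
            String sql = "SELECT o.id, c.name FROM orders o JOIN customers c ON c.id = o.customer_id";
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery(sql)) {
                while (rs.next()) {
                    // all rows arrive in a single result set
                }
            }
        }
    }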