
837 points by turrini | 4 comments
titzer No.43971962
I like to point out that since ~1980, computing power has increased about 1000X.

If dynamic array bounds checking cost 5% (narrator: it is far less than that), and we turned it on everywhere, we could have computers that are a mere 950X faster.

If you went back in time to 1980 and offered the following choice:

I'll give you a computer that runs 950X faster, doesn't have a huge class of memory safety vulnerabilities, and lets you debug your programs orders of magnitude more easily. Or you can have a computer that runs 1000X faster, but software will be just as buggy, or worse, and debugging will be even more of a nightmare.

People would have their minds blown at 950X. You wouldn't even have to offer 1000X. But guess what we chose...

Personally I think the 1000Xers kinda ruined things for the rest of us.
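
For concreteness, here is roughly what the check in question amounts to, plus the arithmetic from above, sketched in Python (illustrative only; in a compiled language the check is a single compare-and-branch per access):

    # A bounds-checked read is just a compare-and-branch before the access.
    # (Python lists already do this internally; written out here for illustration.)
    def checked_read(arr, i):
        if i < 0 or i >= len(arr):
            raise IndexError(f"index {i} out of range for length {len(arr)}")
        return arr[i]

    # The arithmetic above: a 5% overhead on a 1000X speedup still leaves ~950X.
    remaining_speedup = 1000 * (1 - 0.05)  # 950.0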

_aavaa_ No.43972050
Except we've squandered that 1000x not on bounds checking but on countless layers of abstractions and inefficiency.
pydry No.43972103
Most of it was exchanged for abstractions which traded runtime speed for the ability to create apps quickly and cheaply.

The market mostly didn't want 50% faster code as much as it wanted an app that didn't exist before.

If I look at the apps I use on a day-to-day basis that are dog slow and should have been optimized (e.g. Slack, Jira), the core problem isn't really a lack of engineering capability in the industry to speed things up; it's just an instance of the principal-agent problem - i.e. I'm not the one buying, I don't get to choose not to use them, and dog-slow is just one of the many dimensions in which they're terrible.

ffsm8 No.43972262
> Most of it was exchanged for abstractions which traded runtime speed for the ability to create apps quickly and cheaply.

Really? Because while abstractions like that exist (e.g. web server frameworks, reactivity, SQL and ORMs, etc.), I would argue those aren't the abstractions that cause the most maintenance and performance issues. Those usually live in the domain/business logic, and they're often not something that made anything quicker to develop; they were instead created by a developer who just couldn't help themselves.

tonyarkles No.43972341
I think they’re referring to Electron.

Edit: and probably writing backends in Python or Ruby or JavaScript.

Zak No.43972456
The backend programming language usually isn't a significant bottleneck; the usual culprit is running dozens of database queries in sequence, often compounded by inefficient queries, inappropriate indexing, and the like.
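
Roughly the shape being described, sketched with Python's built-in sqlite3 and an invented orders table; the point is the number of round trips, not the language running them:

    import sqlite3

    conn = sqlite3.connect("app.db")  # hypothetical database

    # Slow shape: one query per user, dozens of sequential round trips.
    def load_orders_slow(user_ids):
        rows = []
        for uid in user_ids:
            rows += conn.execute(
                "SELECT id, total FROM orders WHERE user_id = ?", (uid,)
            ).fetchall()
        return rows

    # Same data in one round trip; the backend language is no longer what matters.
    def load_orders_fast(user_ids):
        placeholders = ",".join("?" * len(user_ids))
        return conn.execute(
            f"SELECT id, total FROM orders WHERE user_id IN ({placeholders})",
            list(user_ids),
        ).fetchall()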
sgarland No.43973012
Yep. I’m a DBRE and can confirm: it’s almost always the DB, with the explicit caveat that it’s rarely the fault of the DB itself, but rather of poor schema and query design.

Queries I can sometimes rewrite, and there’s nothing more satisfying than handing a team a 99% speed-up with a couple of lines of SQL. Sometimes I can’t, and it’s both painful and frustrating to explain that the reason the dead-simple single-table SELECT is slow is that they have accumulated billions of rows, all bloated with JSON and low-cardinality strings, and that short of, at a minimum, table partitioning (with concomitant query rewrites to include the partition key), there is nothing anyone can do. This has happened on giant instances, where I know the entire working set they’re dealing with is in memory. Computers are fast, but there is a limit.
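
To make the partition-key point concrete, a hypothetical sketch (table and column names invented): once a huge table is partitioned, only queries whose predicates include the partition key let the planner prune the partitions that can't match.

    # Hypothetical: an events table range-partitioned by month on created_at.

    # Without the partition key, every partition still has to be searched:
    slow = "SELECT * FROM events WHERE user_id = 42"

    # With the partition key in the predicate, the planner can prune to one partition:
    fast = """
        SELECT * FROM events
        WHERE user_id = 42
          AND created_at >= '2024-05-01'
          AND created_at <  '2024-06-01'
    """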

The other way the DB gets blamed is row lock contention. That’s almost always due to someone opening a transaction (e.g. SELECT… FOR UPDATE) and then holding it needlessly while doing other stuff, but sometimes it’s due to the dev not being aware of the DB’s locking quirks, like MySQL’s use of gap locks if you don’t include a UNIQUE column as a search predicate. Read docs, people!
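
A rough sketch of that lock-holding anti-pattern in Python-style DB-API code (all names hypothetical):

    def do_slow_external_work():
        ...  # stand-in for whatever else is being done while the lock is held

    # Anti-pattern: the row lock from SELECT ... FOR UPDATE is held across slow work.
    def settle_slow(conn, account_id):
        cur = conn.cursor()
        cur.execute("START TRANSACTION")
        cur.execute("SELECT balance FROM accounts WHERE id = %s FOR UPDATE",
                    (account_id,))
        amount = do_slow_external_work()  # lock held during network I/O, etc.
        cur.execute("UPDATE accounts SET balance = balance + %s WHERE id = %s",
                    (amount, account_id))
        conn.commit()

    # Better: do the slow work first, keep the locking transaction as short as possible.
    def settle_fast(conn, account_id):
        amount = do_slow_external_work()
        cur = conn.cursor()
        cur.execute("START TRANSACTION")
        cur.execute("SELECT balance FROM accounts WHERE id = %s FOR UPDATE",
                    (account_id,))
        cur.execute("UPDATE accounts SET balance = balance + %s WHERE id = %s",
                    (amount, account_id))
        conn.commit()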

Zak No.43973274
It seems to me most developers don't want to learn much about the database and would prefer to hide it behind the abstractions used by their language of choice. I can relate to a degree; I was particularly put off by SQL's syntax (and still dislike it), but eventually came to see the value of leaning into the database's capabilities.