Most active commenters
  • maccard(16)
  • vel0city(14)
  • sorcerer-mar(10)
  • CyberDildonics(10)
  • ryao(9)
  • Aurornis(7)
  • monkeyelite(6)
  • dijit(5)
  • sgarland(5)
  • bluGill(5)


837 points turrini | 284 comments
1. titzer ◴[] No.43971962[source]
I like to point out that since ~1980, computing power has increased about 1000X.

If dynamic array bounds checking cost 5% (narrator: it is far less than that), and we turned it on everywhere, we could have computers that are just a mere 950X faster.

If you went back in time to 1980 and offered the following choice:

I'll give you a computer that runs 950X faster and doesn't have a huge class of memory safety vulnerabilities, and you can debug your programs orders of magnitude more easily, or you can have a computer that runs 1000X faster and software will be just as buggy, or worse, and debugging will be even more of a nightmare.

People would have their minds blown at 950X. You wouldn't even have to offer 1000X. But guess what we chose...

Personally I think the 1000Xers kinda ruined things for the rest of us.
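For readers unfamiliar with the term: a bounds check is just a compare and a (usually well-predicted) branch inserted before each array access. A minimal Python sketch of what a memory-safe language does for you automatically (the function name is mine, purely illustrative):

```python
def checked_store(arr, i, value):
    # The guard a memory-safe language inserts before every array access:
    # one compare plus one (usually well-predicted) branch.
    if not (0 <= i < len(arr)):
        raise IndexError(f"index {i} out of range for length {len(arr)}")
    arr[i] = value

buf = [0] * 4
checked_store(buf, 2, 99)      # in range: writes normally
try:
    checked_store(buf, 9, 1)   # out of range: caught, not silent memory corruption
except IndexError as e:
    print("caught:", e)
```

That compare-and-branch is the entire runtime cost being debated in this thread.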

replies(20): >>43971976 #>>43971990 #>>43972050 #>>43972107 #>>43972135 #>>43972158 #>>43972246 #>>43972469 #>>43972619 #>>43972675 #>>43972888 #>>43972915 #>>43973104 #>>43973584 #>>43973716 #>>43974422 #>>43976383 #>>43977351 #>>43978286 #>>43978303 #
2. ngangaga ◴[] No.43971976[source]
I don't think it's that deep. We are just stuck with browsers now, for better and worse. Everything else trails.
replies(1): >>43972138 #
3. vrighter ◴[] No.43971990[source]
Don't forget the law of large numbers. A 5% performance hit on one system is one thing; a 5% hit across almost all of the current computing landscape is still a pretty huge amount.
replies(2): >>43972020 #>>43972063 #
4. titzer ◴[] No.43972020[source]
It's about 5%.

Cost of cyberattacks globally[1]: O($trillions)

Cost of average data breach[2][3]: ~$4 million

Cost of lost developer productivity: unknown

We're really bad at measuring the secondary effects of our short-sightedness.

[1] https://iotsecurityfoundation.org/time-to-fix-our-digital-fo...

[2] https://www.internetsociety.org/resources/doc/2023/how-to-ta...

[3] https://www.ibm.com/reports/data-breach

replies(1): >>43979329 #
5. _aavaa_ ◴[] No.43972050[source]
Except we've squandered that 1000x not on bounds checking but on countless layers of abstractions and inefficiency.
replies(6): >>43972103 #>>43972130 #>>43972215 #>>43974876 #>>43976159 #>>43983438 #
6. pron ◴[] No.43972063[source]
But it's not free for the taking. The point is that we'd get more than that 5%'s worth in exchange. So sure, we'll get significant value "if software optimization was truly a priority", but we get even more value by making other things a priority.

Saying "if we did X we'd get a lot in return" is similar to the fallacy of inverting logical implication. The question isn't, will doing something have significant value, but rather, to get the most value, what is the thing we should do? The answer may well be not to make optimisation a priority even if optimisation has a lot of value.

replies(1): >>43972157 #
7. pydry ◴[] No.43972103[source]
Most of it was exchanged for abstractions which traded runtime speed for the ability to create apps quickly and cheaply.

The market mostly didn't want 50% faster code as much as it wanted an app that didn't exist before.

If I look at the apps I use on a day to day basis that are dog slow and should have been optimized (e.g. slack, jira), the core problem wasn't really a lack of the industry's engineering capability to speed things up; it's just an instance of the principal-agent problem - i.e. I'm not the one buying, I don't get to choose not to use it, and dog-slow is just one of the many dimensions in which they're terrible.

replies(3): >>43972127 #>>43972262 #>>43975855 #
8. fsloth ◴[] No.43972107[source]
The problem is 1000xers are a rarity.

The software desktop users have to put up with is slow.

replies(1): >>43972609 #
9. fsloth ◴[] No.43972127{3}[source]
I don’t think abundance vs speed is the right lens.

No user actually wants abundance. They use a few programs and would benefit if those programs were optimized.

Established apps could be optimized to the hilt.

But they seldom are.

replies(2): >>43972199 #>>43972414 #
10. grumpymuppet ◴[] No.43972130[source]
This is something I've wished to eliminate too. Maybe we just cast the past 20 years as the "prototyping phase" of modern infrastructure.

It would be interesting to collect a roadmap for optimizing software at scale -- where is there low hanging fruit? What are the prime "offenders"?

Call it a power saving initiative and get environmentally-minded folks involved.

replies(2): >>43972912 #>>43976066 #
11. justincormack ◴[] No.43972135[source]
Most programming languages have array bounds checking now.
replies(1): >>43972762 #
12. slowmovintarget ◴[] No.43972138[source]
We're stuck with browsers now until the primary touch with the internet is assistants / agent UIs / chat consoles.

That could end up being Electron (VS Code), though that would be a bit sad.

replies(2): >>43972185 #>>43972270 #
13. vrighter ◴[] No.43972157{3}[source]
It depends on whether the fact that software can be finished will ever be accepted. If you're constantly redeveloping the same thing to "optimize and streamline my experience" (please don't), then yes, the advantage is dubious. But if not, then the saved value in operating costs keeps increasing as time goes on. It won't make much difference in my homelab, but at datacenter scale it does.
replies(1): >>43972321 #
14. scotty79 ◴[] No.43972158[source]
Since 1980, maybe. But since 2005 it has increased maybe 5x, and even that's generous. And that's half of the time that has passed: two decades.

https://youtu.be/m7PVZixO35c?si=px2QKP9-80hDV8Ui

replies(2): >>43975037 #>>43983023 #
15. scotty79 ◴[] No.43972185{3}[source]
I don't think we are gonna go there. Talking is cumbersome. There's a reason, besides social anxiety, that people prefer to use self-checkout and electronically order fast food. There are easier ways to do a lot of things than with words.

I'd bet on maybe ad hoc AI-designed UIs you click, with a voice search for when you are confused about something.

replies(3): >>43972233 #>>43972590 #>>43975552 #
16. pydry ◴[] No.43972199{4}[source]
>No user actually wants abundance.

No, all users just want the few programs which they themselves need. The market is not one user, though. It's all of them.

replies(2): >>43972407 #>>43972495 #
17. Gigachad ◴[] No.43972215[source]
Am I taking crazy pills or are programs not nearly as slow as HN comments make them out to be? Almost everything loads instantly on my 2021 MacBook and 2020 iPhone. Every program is incredibly responsive. 5 year old mobile CPUs load modern SPA web apps with no problems.

The only thing I can think of that’s slow is Autodesk Fusion starting up. Not really sure how they made that so bad but everything else seems super snappy.

replies(40): >>43972245 #>>43972248 #>>43972259 #>>43972269 #>>43972273 #>>43972292 #>>43972294 #>>43972349 #>>43972354 #>>43972450 #>>43972466 #>>43972520 #>>43972548 #>>43972605 #>>43972640 #>>43972676 #>>43972867 #>>43972937 #>>43973040 #>>43973065 #>>43973220 #>>43973431 #>>43973492 #>>43973705 #>>43973897 #>>43974192 #>>43974413 #>>43975741 #>>43975999 #>>43976270 #>>43976554 #>>43978315 #>>43978579 #>>43981119 #>>43981143 #>>43981157 #>>43981178 #>>43981196 #>>43983337 #>>43984465 #
18. ◴[] No.43972233{4}[source]
19. mjburgess ◴[] No.43972245{3}[source]
People conflate the insanity of running a network cable through every application with the poor performance of their computers.
replies(1): >>43972796 #
20. ngneer ◴[] No.43972246[source]
I agree with the sentiment and analysis that most humans prefer short term gains over long term ones. One correction to your example, though. Dynamic bounds checking does not solve security. And we do not know of a way to solve security. So, the gains are not as crisp as you are making them seem.
replies(3): >>43972540 #>>43972554 #>>43989097 #
21. sorcerer-mar ◴[] No.43972248{3}[source]
I think it's a very theoretical argument: we could of course theoretically make everything even faster. It's nowhere near the most optimal use of the available hardware. All we'd have to give up is squishy hard-to-measure things like "feature sets" and "engineering velocity."
replies(2): >>43973093 #>>43973523 #
22. high_na_euv ◴[] No.43972259{3}[source]
Yup, people run software on shitty computers and blame all the software.

The only slow (local) software I know is llvm and cpp compilers

Others are pretty fast

replies(1): >>43973540 #
23. ffsm8 ◴[] No.43972262{3}[source]
> Most of it was exchanged for abstractions which traded runtime speed for the ability to create apps quickly and cheaply.

Really? Because while abstractions like that exist (e.g. web server frameworks, reactivity, SQL and ORMs, etc.), I would argue that these aren't the abstractions that cause the most maintenance and performance issues. Those usually live in the domain/business logic, and are often not something that made anything quicker to develop, but were instead created by a developer who just couldn't help themselves.

replies(2): >>43972341 #>>43972785 #
24. flohofwoe ◴[] No.43972269{3}[source]
I guess you don't need to wrestle with Xcode?

Somehow the Xcode team managed to make startup and some features in newer Xcode versions slower than older Xcode versions running on old Intel Macs.

E.g. the ARM Macs are a perfect illustration that software gets slower faster than hardware gets faster.

After a very short 'free lunch' right after the Intel => ARM transition we're now back to the same old software performance regression spiral (e.g. new software will only be optimized until it feels 'fast enough', and that 'fast enough' duration is the same no matter how fast the hardware is).

Another excellent example is the recent release of the Oblivion Remaster on Steam (which uses the brand new UE5 engine):

On my somewhat medium-level PC I have to reduce the graphics quality in the Oblivion Remaster so much that the result looks worse than 14-year old Skyrim (especially outdoor environments), and that doesn't even result in a stable 60Hz frame rate, while Skyrim runs at a rock-solid 60Hz and looks objectively better in the outdoors.

E.g. even though the old Skyrim engine isn't nearly as technologically advanced as UE5 and had plenty of performance issues at launch on a ca. 2010 PC, the Oblivion Remaster (which uses a "state of the art" engine) looks and performs worse than its own 14-year-old predecessor.

I'm sure the UE5-based Oblivion remaster can be properly optimized to beat Skyrim both in looks and performance, but apparently nobody cared about that during development.

replies(1): >>43973055 #
25. ngangaga ◴[] No.43972270{3}[source]
I think it'd be pretty funny if to book travel in 2035 you need to use a travel agent that's objectively dumber than a human. We'd be stuck in the eighties again, but this time without each other to rely on.

Of course, that would be suicide for the industry. But I'm not sure investors see that.

26. mschild ◴[] No.43972273{3}[source]
A mix of both. There are large number of websites that are inefficiently written using up unnecessary amounts of resources. Semi-modern devices make up for that by just having a massive amount of computing power.

However, you also need to consider 2 additional factors. Macbooks and iPhones, even 4 year old ones, have usually been at the upper end of the scale for processing power. (When compared to the general mass-market of private end-consumer devices)

Try doing the same on a 4 year old 400 Euro laptop and it might look a bit different. Also consider your connection speed and latency. I usually have no loading issue either. But I have a 1G fiber connection. My parents don't.

27. _aavaa_ ◴[] No.43972292{3}[source]
I'd wager that a 2021 MacBook, like the one I have, is stronger than the laptop used by majority of people in the world.

Life on an entry or even mid level windows laptop is a very different world.

replies(3): >>43972406 #>>43972522 #>>43975444 #
28. tjader ◴[] No.43972294{3}[source]
I just clicked on the network icon next to the clock on a Windows 11 laptop. A gray box appeared immediately, about one second later all the buttons for wifi, bluetooth, etc appeared. Windows is full of situations like this, that require no network calls, but still take over one second to render.
replies(4): >>43973061 #>>43973911 #>>43973999 #>>43975898 #
29. pron ◴[] No.43972321{4}[source]
Even the fact that value keeps increasing doesn't mean it's a good idea. It's a good idea if it keeps increasing more than other value. If a piece of software is more robust against attacks then the value in that also keeps increasing over time, possibly more than the cost in hardware. If a piece of software is easier to add features to, then that value also keeps increasing over time.

If what we're asking is whether value => X, i.e. to get the most value we should do X, you cannot answer that in the positive by proving X => value. If optimising something is worth a gazillion dollars, you still should not do it if doing something else is worth two gazillion dollars.

30. tonyarkles ◴[] No.43972341{4}[source]
I think they’re referring to Electron.

Edit: and probably writing backends in Python or Ruby or JavaScript.

replies(1): >>43972456 #
31. subjectsigma ◴[] No.43972349{3}[source]
I have a 2019 Intel MacBook and Outlook takes about five seconds to load and constantly sputters
32. CelestialMystic ◴[] No.43972354{3}[source]
You are using a relatively high-end computer and mobile device. Go and find a cheap x86 laptop and try doing the same. It will be extremely painful. Most of this is due to a combination of Windows 11 being absolute trash and JavaScript being used extensively in applications/websites. JavaScript is a memory hog and can be extremely slow depending on how it is written (how you deal with loops massively affects the performance).

What is frustrating, though, is that until relatively recently these devices would work fine with JS-heavy apps and work really well with anything using a native toolkit.

33. josephg ◴[] No.43972406{4}[source]
Yep. Developers make programs run well enough on the hardware sitting on our desks. So long as we’re well paid (and have decent computers ourselves), we have no idea what the average computing experience is for people still running 10yo computers which were slow even for the day. And that keeps the treadmill going. We make everyone need to upgrade every few years.

A few years ago I accidentally left my laptop at work on a Friday afternoon. Instead of going into the office, I pulled out a first generation raspberry pi and got everything set up on that. Needless to say, our nodejs app started pretty slowly. Not for any good reason - there were a couple modules which pulled in huge amounts of code which we didn’t use anyway. A couple hours work made the whole app start 5x faster and use half the ram. I would never have noticed that was a problem with my snappy desktop.
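The app in the story above was Node.js, but the same fix applies in any language: defer heavy imports until the feature that needs them is actually used. A hedged Python sketch of the idea (here the lightweight `json` module merely stands in for a genuinely heavy dependency):

```python
import importlib

_plotting = None  # placeholder for a heavy dependency, loaded on first use


def get_plotting():
    # Lazy import: startup doesn't pay for modules most runs never touch.
    # "json" is a lightweight stand-in for a genuinely heavy library.
    global _plotting
    if _plotting is None:
        _plotting = importlib.import_module("json")
    return _plotting


# Nothing heavy happens at startup; the cost is paid on first use only.
assert _plotting is None
mod = get_plotting()
print(mod.dumps({"started": "fast"}))
```

The point isn't the mechanism, it's the habit: measure what your startup path actually pulls in before shipping it to a machine slower than yours.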

replies(1): >>43975470 #
34. skydhash ◴[] No.43972407{5}[source]
But each vendor only develops a few pieces of software and generally supports only three platforms, plus or minus one. It's so damning when I see projects reaching for Electron when they only support macOS and Windows. And software like Slack has no excuse for being this slow on anything other than a latest-gen CPU and a 1Gb internet connection.
replies(1): >>43973648 #
35. infogulch ◴[] No.43972414{4}[source]
> They use few programs

Yes but it's a different 'few programs' than 99% of all other users, so we're back to square one.

36. g-mork ◴[] No.43972450{3}[source]
Lightroom non-user detected
37. Zak ◴[] No.43972456{5}[source]
The backend programming language usually isn't a significant bottleneck; running dozens of database queries in sequence is the usual bottleneck, often compounded by inefficient queries, inappropriate indexing, and the like.
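The indexing point is easy to see with SQLite's query planner; a small sketch (table and index names are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INT, total REAL)")


def access_path(sql):
    # The last column of EXPLAIN QUERY PLAN describes how SQLite will read the table.
    return conn.execute("EXPLAIN QUERY PLAN " + sql).fetchone()[-1]


q = "SELECT total FROM orders WHERE user_id = 7"
before = access_path(q)  # full table scan: every row examined, per query
conn.execute("CREATE INDEX idx_orders_user ON orders (user_id)")
after = access_path(q)   # index search: logarithmic lookup instead
print(before, "->", after)
```

Run dozens of such lookups in sequence per request and the scan-vs-search difference dominates everything the backend language does.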
replies(1): >>43973012 #
38. alnwlsn ◴[] No.43972466{3}[source]
It depends. Can Windows 3.11 be faster than Windows 11? Sure, maybe even in most cases: https://jmmv.dev/2023/06/fast-machines-slow-machines.html
39. dist-epoch ◴[] No.43972469[source]
It's more like 100,000X.

Just the clockspeed increased 1000X, from 4 MHz to 4 GHz.

But then you have 10x more cores, 10x more powerful instructions (AVX), 10x more execution units per core.

40. bluGill ◴[] No.43972495{5}[source]
Users only want 5% of the features of the few programs they use. However everyone has a different list of features and a different list of programs. And so to get a market you need all the features on all the programs.
41. xnorswap ◴[] No.43972520{3}[source]
It vastly depends on what software you're forced to use.

Here's some software I use all the time, which feels horribly slow, even on a new laptop:

Slack.

Switching channels on slack, even when you've just switched so it's all cached, is painfully slow. I don't know if they build in a 200ms or so delay deliberately to mask when it's not cached, or whether it's some background rendering, or what it is, but it just feels sluggish.

Outlook

Opening an email gives a spinner before it's opened. Emails are about as lightweight as it gets, yet you get a spinner. It's "only" about 200ms, but that's still 200ms of waiting for an email to open. Plain text emails were faster 25 years ago. Adding a subset of HTML shouldn't have caused such a massive regression.

Teams

Switching tabs on Teams has the same delayed feeling as Slack. Every interaction feels like it's waiting 50-100ms before actioning. Clicking an empty calendar slot to book a new event gives 30-50ms of what I've mentally internalised as "Electron blank-screen" but there's probably a real name out there for basically waiting for a new dialog/screen to even have a chrome, let alone content. Creating a new calendar event should be instant; it should not take 300-500ms or so of waiting for the options to render.

These are basic "productivity" tools in which every single interaction feels like it's gated behind at least a 50ms debounce waiting period, with often extra waiting for content on top.

Is the root cause network hops or telemetry? Is it some corporate antivirus stealing the computer's soul?

Ultimately the root cause doesn't actually matter, because no matter the cause, it still feels like I'm wading through treacle trying to interact with my computer.

replies(3): >>43972617 #>>43974933 #>>43975990 #
42. thfuran ◴[] No.43972522{4}[source]
I've found so many performance issues at work by booting up a really old laptop or working remotely from another continent. It's pretty straightforward to simulate either poor network conditions or generally low performance hardware, but we just don't generally bother to chase down those issues.
replies(1): >>43972720 #
43. HappMacDonald ◴[] No.43972540[source]
You don't have to "solve" security in order to improve security hygiene by a factor of X, and thus reduce the risk of negative consequences by that same factor.
replies(1): >>43991085 #
44. makeitdouble ◴[] No.43972548{3}[source]
To note, people will have wildly different tolerance to delays and lag.

On the extreme end, my retired parents don't feel the difference between 5s and 1s when loading a window or clicking somewhere. I offered to switch them to a new laptop, cloning their data, and they didn't give a damn and just opened whichever laptop was closest to them.

Most people aren't that desensitized, but for some a 600ms delay is instantaneous, while for others it's 500ms too slow.

45. bluGill ◴[] No.43972554[source]
Bounds checking solves one tiny subset of security. There are hundreds of other subsets that we know how to solve. However, these days the majority of the bad attacks are social, and no technology is likely to solve them - as more than 10,000 years of the same attack has shown. Technology makes the attacks worse because they now scale, but social attacks have been happening since before recorded history (well, there is every reason to believe that; there is unlikely to be evidence going back that far).
replies(1): >>43975205 #
46. bluGill ◴[] No.43972590{4}[source]
If you know what you want, then not talking to a human is faster. However, if you are not sure, a human can figure it out. I'm not sure I'd trust a voice assistant - the value in the human is an informed opinion, which is hard to program, while it is easy to program a recommendation for whatever makes the most profit. Of course humans often don't have an informed opinion either, but at least sometimes they do, and they will also sometimes admit it when they don't.
replies(1): >>43973164 #
47. maccard ◴[] No.43972605{3}[source]
Slack, Teams, VS Code, Miro, Excel, Rider/IntelliJ, Outlook, and Photoshop/Affinity are all applications I use every day that take 20+ seconds to launch. My corporate VPN app takes 30 seconds to go from a blank screen to deciding if it's going to prompt me for credentials or remember my login, every morning. This is on an i9 with 64GB RAM and 1Gb fiber.

On the website front - Facebook, Twitter, Airbnb, Reddit, and most news sites all take 10+ seconds to load or be functional, and their core functionality has regressed significantly in the last decade. I'm not talking about features that I prefer, but as an example: if you load two links in Reddit in two different tabs, my experience has been that it's 50/50 whether they'll both actually load or whether one gets stuck showing loading skeletons.

replies(11): >>43972862 #>>43972991 #>>43974559 #>>43975093 #>>43975226 #>>43975364 #>>43976220 #>>43976593 #>>43978681 #>>43981815 #>>43984373 #
48. HappMacDonald ◴[] No.43972609[source]
You can always install DOS as your daily driver and run 1980's software on any hardware from the past decade, and then tell me how that's slow.

The 1000x referred to hardware capability, and that is not a rarity - it is here.

The trouble is how software has since wasted a majority of that performance improvement.

Some of it has been quality of life improvements, leading nobody to want to use 1980s software or OS when newer versions are available.

But the lion's share of the performance benefit got chucked into the bin with poor design decisions, layers of abstractions, too many resources managed by too many different teams that never communicate making any software task have to knit together a zillion incompatible APIs, etc.

replies(2): >>43973086 #>>43989387 #
49. maccard ◴[] No.43972617{4}[source]
I’d take 50ms but in my experience it’s more like 250.
replies(1): >>43972865 #
50. noobermin ◴[] No.43972619[source]
The first reply is essentially right. This isn't what happened at all, just because C is still prevalent. All the inefficiency is everything down the stack, not in C.
51. viraptor ◴[] No.43972640{3}[source]
What timescale are we talking about? Many DOS stock and accounting applications were basically instantaneous. There are some animations on iPhone that you can't disable that take longer than a series of keyboard actions by a skilled operator in the 90s. Windows 2k with a stripped shell was way more responsive than today's systems, as long as you didn't need to hit the hard drives.

The "instant" today is really laggy compared to what we had. Opening Slack takes 5s on a flagship phone and opening a channel which I just had open and should be fully cached takes another 2s. When you type in JIRA the text entry lags and all the text on the page blinks just a tiny bit (full redraw). When pages load on non-flagship phones (i.e. most of the world), they lag a lot, which I can see on monitoring dashboards.

52. card_zero ◴[] No.43972675[source]
I don't trust that shady-looking narrator. 5% of what exactly? Do you mean that testing for x >= start and < end is only 5% as expensive as assigning an int to array[x]?

Or would bounds checking in fact more than double the time to insert a bunch of ints separately into the array, testing where each one is being put? Or ... is there some gimmick to avoid all those individual checks, I don't know.

replies(2): >>43975158 #>>43989229 #
53. sgarland ◴[] No.43972676{3}[source]
I’m sure you know this, but a reminder that modern devices cache a hell of a lot, even when you “quit” such that subsequent launches are faster. Such is the benefit of more RAM.

I could compare Slack to, say, HexChat (or any other IRC client). And yeah, it’s an unfair comparison in many ways – Slack has far more capabilities. But from another perspective, how many of them do you immediately need at launch? Surely the video calling code could be delayed until after the main client is up, etc. (and maybe it is, in which case, oh dear).

A better example is Visual Studio [0], since it’s apples to apples.

[0]: https://youtu.be/MR4i3Ho9zZY

replies(1): >>43976402 #
54. _aavaa_ ◴[] No.43972720{5}[source]
Oh yeah, I didn't even touch on devs being used to working on super fast internet.

If you're on Mac, go install Network Link Conditioner and crank that download and upload speed way down. (Xcode > Open Developer Tools > More Developer Tools... > "Additional Tools for Xcode {Version}").

55. oblio ◴[] No.43972762[source]
Most programming languages are written in C, which doesn't.

Fairly sure that was OP's point.

replies(1): >>43986597 #
56. Tainnor ◴[] No.43972785{4}[source]
> ORMs

Certain ORMs such as Rails's ActiveRecord are part of the problem because they create the illusion that local memory access and DB access are the same thing. This can lead to N+1 queries and similar issues. The same goes for frameworks that pretend that remote network calls are just a regular method access (thankfully, such frameworks seem to have become largely obsolete).
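The N+1 pattern is easy to show directly in SQL, no ORM required. A minimal sqlite3 sketch (schema and data are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, user_id INT, title TEXT);
    INSERT INTO users VALUES (1, 'ana'), (2, 'bob');
    INSERT INTO posts VALUES (1, 1, 'hi'), (2, 1, 'again'), (3, 2, 'yo');
""")

# N+1 pattern the "DB access looks like memory access" illusion encourages:
# one query for the users, then one more query per user.
users = conn.execute("SELECT id, name FROM users").fetchall()
n_plus_1 = {
    name: [t for (t,) in conn.execute(
        "SELECT title FROM posts WHERE user_id = ?", (uid,))]
    for uid, name in users
}  # 1 + N round trips to the database

# Same result in a single round trip with a JOIN.
joined = {}
for name, title in conn.execute(
        "SELECT u.name, p.title FROM users u JOIN posts p ON p.user_id = u.id"):
    joined.setdefault(name, []).append(title)

assert n_plus_1 == joined
print(joined)
```

Locally the two are indistinguishable; over a network, the first version's latency grows linearly with the row count.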

replies(1): >>43973711 #
57. sgarland ◴[] No.43972796{4}[source]
Correction: devs have made the mistake of turning everything into remote calls, without having any understanding as to the performance implications of doing so.

Sonos’ app is a perfect example of this. The old app controlled everything locally, since the speakers set up their own wireless mesh network. This worked fantastically well. Someone at Sonos got the bright idea to completely rewrite the app such that it wasn’t even backwards-compatible with older hardware, and everything is now a remote call. Changing volume? Phone —> Router —> WAN —> Cloud —> Router —> Speakers. Just… WHY. This failed so spectacularly that the CEO responsible stepped down / was forced out, and the new one claims that fixing the app is his top priority. We’ll see.

replies(2): >>43972959 #>>43974448 #
58. aloha2436 ◴[] No.43972862{4}[source]
I'm on a four year old mid-tier laptop and opening VS Code takes maybe five seconds. Opening IDEA takes five seconds. Opening twitter on an empty cache takes perhaps four seconds and I believe I am a long way from their servers.

On my work machine slack takes five seconds, IDEA is pretty close to instant, the corporate VPN starts nearly instantly (although the Okta process seems unnecessarily slow I'll admit), and most of the sites I use day-to-day (after Okta) are essentially instant to load.

I would say that your experiences are not universal, although snappiness was the reason I moved to apple silicon macs in the first place. Perhaps Intel is to blame.

replies(5): >>43973037 #>>43974066 #>>43974668 #>>43975101 #>>43975345 #
59. xnorswap ◴[] No.43972865{5}[source]
You're probably right, I'm likely massively underestimating the time, it's long enough to be noticable, but not so long that it feels instantly frustrating the first time, it just contributes to an overall sluggishness.
60. lenkite ◴[] No.43972867{3}[source]
2021 MacBook and 2020 iPhone are not "old". Still using 2018 iPhone. Used a 2021 Macbook until a month ago.
61. ricardo81 ◴[] No.43972888[source]
>Personally I think the 1000Xers kinda ruined things for the rest of us.

Reminds me of when NodeJS came out and bridged client- and server-side coding. And apparently their repos can be a bit of a security nightmare nowadays, so the minimalist languages with limited codebases do have their pros.

62. sgarland ◴[] No.43972912{3}[source]
IMO, the prime offender is simply not understanding fundamentals. From simple things like “a network call is orders of magnitude slower than a local disk, which is orders of magnitude slower than RAM…” (and moreover, not understanding that EBS et al. are networked disks, albeit highly specialized and optimized), or doing insertions to a DB by looping over a list and writing each row individually.

I have struggled against this long enough that I don’t think there is an easy fix. My current company is the first I’ve been at that is taking it seriously, and that’s only because we had a spate of SEV0s. It’s still not easy, because a. I and the other technically-minded people have to find the problems, then figure out how to explain them; b. At its heart, it’s a culture war. Properly normalizing your data model is harder than chucking everything into JSON, even if the former will save you headaches months down the road. Learning how to profile code (and fix the problems) may not be exactly hard, but it’s certainly harder than just adding more pods to your deployment.
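The row-at-a-time insert anti-pattern mentioned above, next to its batched equivalent. This is a local SQLite sketch (table name invented), so it only shows per-statement overhead; against a real networked database, each looped statement also pays a round trip, so the gap is far larger:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
rows = [(i, f"event-{i}") for i in range(10_000)]

# Anti-pattern: loop over the list and write each row individually.
t0 = time.perf_counter()
for r in rows:
    conn.execute("INSERT INTO events VALUES (?, ?)", r)
conn.commit()
looped = time.perf_counter() - t0

conn.execute("DELETE FROM events")
conn.commit()

# Batched: one prepared statement executed over the whole list.
t0 = time.perf_counter()
conn.executemany("INSERT INTO events VALUES (?, ?)", rows)
conn.commit()
batched = time.perf_counter() - t0

print(f"looped: {looped:.3f}s, batched: {batched:.3f}s")
```

Most client libraries offer some multi-row form (executemany, bulk copy, multi-value INSERT); the fundamentals argument is knowing it exists and why it matters.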

63. monkeyelite ◴[] No.43972915[source]
> If dynamic array bounds checking cost 5% (narrator: it is far less than that)

It doesn’t work like that. If an image processing algorithm takes 2 instructions per pixel, adding a check to every access could 3-4x the cost.

This is why if you dictate bounds checking then the language becomes uncompetitive for certain tasks.

The vast majority of cases it doesn’t matter at all - much less than 5%. I think safe/unsafe or general/performance scopes are a good way to handle this.
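The arithmetic behind that "3-4x" claim, spelled out with made-up but plausible numbers (every constant here is an assumption, not a measurement):

```python
# Toy cost model for a tight per-pixel loop; all numbers are assumptions.
ops_per_pixel = 2     # e.g. one load + one arithmetic op, as in the comment
checked_accesses = 2  # one read and one write per pixel
check_cost = 2        # compare + branch added per checked access

unchecked = ops_per_pixel
checked = ops_per_pixel + checked_accesses * check_cost
print(f"slowdown: {checked / unchecked:.1f}x")  # 3.0x, within the claimed 3-4x
```

In code where real work dwarfs the per-access check (the "vast majority of cases" above), the same model gives a ratio near 1.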

replies(3): >>43973436 #>>43975046 #>>43976715 #
64. api ◴[] No.43972937{3}[source]
They're comparing these applications to older applications that loaded instantly on much slower computers.

Both sides are right.

There is a ton of waste and bloat and inefficiency. But there's also a ton of stuff that genuinely does demand more memory and CPU. An incomplete list:

- Higher DPI displays use intrinsically more memory and CPU to paint and rasterize. My monitor's pixel array uses 4-6X more memory than my late 90s PC had in the entire machine.

- Better font rendering is the same.

- Today's UIs support Unicode, right to left text, accessibility features, different themes (dark/light at a minimum), dynamic scaling, animations, etc. A modern GUI engine is similar in difficulty to a modern game engine.

- Encryption everywhere means that protocols are no longer just opening a TCP connection but require negotiation of state and running ciphers.

- The Web is an incredibly rich presentation platform that comes with the overhead of an incredibly rich presentation platform. It's like PostScript meets a GUI library meets a small OS meets a document markup layer meets...

- The data sets we deal with today are often a lot larger.

- Some of what we've had to do to get 1000X performance itself demands more overhead: multiple cores, multiple threads, 64 bit addressing, sophisticated MMUs, multiple levels of cache, and memory layouts optimized for performance over compactness. Those older machines were single threaded machines with much more minimal OSes, memory managers, etc.

- More memory means more data structure overhead to manage that memory.

- Larger disks also demand larger structures to manage them, and modern filesystems have all kinds of useful features like journaling and snapshots that also add overhead.

... and so on.
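The first bullet is easy to sanity-check with back-of-envelope arithmetic (display size, pixel format, and buffer count are assumptions):

```python
# One 4K RGBA framebuffer, 8 bits per channel.
w, h, bytes_per_px = 3840, 2160, 4
mib = w * h * bytes_per_px / 2**20
print(f"{mib:.0f} MiB per buffer")  # double/triple buffering multiplies this
```

A single buffer already rivals the total RAM of many late-90s machines, which is the comparison the comment is making.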

replies(1): >>43974620 #
65. mjburgess ◴[] No.43972959{5}[source]
Presumably they wanted the telemetry. It's not clear that this was a dev-initiated switch.

Perhaps we can blame the 'statistical monetization' policies of adtech and then AI for all this -- i'm not entirely sold on developers.

What, after all, is the difference between an `/etc/hosts` set of loop'd records vs. an ISP's dns -- as far as the software goes?

replies(2): >>43973026 #>>43974584 #
66. yetihehe ◴[] No.43972991{4}[source]
> are all applications I use every day that take 20+ seconds to launch.

I suddenly remembered some old Corel Draw version circa 2005, which had a loading screen enumerating the random things it was loading and computing, until a final message: "Less than a minute now...". It most often indeed took less than a minute to show the interface :).

67. sgarland ◴[] No.43973012{6}[source]
Yep. I’m a DBRE, and can confirm, it’s almost always the DB, with the explicit caveat that it’s also rarely the fault of the DB itself, but rather the fault of poor schema and query design.

Queries I can sometimes rewrite, and there’s nothing more satisfying than handing a team a 99% speed-up with a couple of lines of SQL. Sometimes I can’t, and it’s both painful and frustrating to explain that the reason the dead-simple single-table SELECT is slow is because they have accumulated billions of rows that are all bloated with JSON and low-cardinality strings, and short of at a minimum table partitioning (with concomitant query rewrites to include the partition key), there is nothing anyone can do. This has happened on giant instances, where I know the entire working set they’re dealing with is in memory. Computers are fast, but there is a limit.

The other way the DB gets blamed is row lock contention. That’s almost always due to someone opening a transaction (e.g. SELECT… FOR UPDATE) and then holding it needlessly while doing other stuff, but sometimes it’s due to the dev not being aware of the DB’s locking quirks, like MySQL’s use of gap locks if you don’t include a UNIQUE column as a search predicate. Read docs, people!
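
The held-open-transaction failure mode is easy to reproduce. Here's a minimal sketch using SQLite for portability (table name, states, and timeout are made up; MySQL's gap-lock specifics are engine-specific and not shown, but the "transaction held open while doing other stuff" pattern is the same):

```python
import os
import sqlite3
import tempfile

def demo_lock_contention():
    """Session 1 opens a write transaction and holds it while 'doing
    other stuff'; session 2 then blocks on the same table and gives up."""
    path = os.path.join(tempfile.mkdtemp(), "demo.db")
    setup = sqlite3.connect(path)
    setup.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, state TEXT)")
    setup.execute("INSERT INTO jobs VALUES (1, 'queued')")
    setup.commit()
    setup.close()

    writer = sqlite3.connect(path, isolation_level=None)
    writer.execute("BEGIN IMMEDIATE")  # grab the write lock...
    writer.execute("UPDATE jobs SET state = 'running' WHERE id = 1")
    # ...and keep the transaction open while doing unrelated work.

    other = sqlite3.connect(path, timeout=0.1)  # give up after 100ms
    try:
        other.execute("UPDATE jobs SET state = 'done' WHERE id = 1")
        blocked = False
    except sqlite3.OperationalError:  # "database is locked"
        blocked = True

    writer.execute("COMMIT")  # releasing promptly unblocks everyone
    other.execute("UPDATE jobs SET state = 'done' WHERE id = 1")
    other.commit()
    writer.close()
    other.close()
    return blocked
```

Every second the first session spends "doing other stuff" inside that transaction is a second every other writer spends waiting.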

replies(1): >>43973274 #
68. sgarland ◴[] No.43973026{6}[source]
You’re right, and I shouldn’t necessarily blame devs for the idea, though I do blame their CTO for not standing up to it if nothing else.

Though it’s also unclear to me in this particular case why they couldn’t collect commands being issued, and then batch-send them hourly, daily, etc. instead of having each one route through the cloud.

69. Cthulhu_ ◴[] No.43973037{5}[source]
VS Code defers a lot of tasks to the background at least. This is a bit more visible in intellij; you seem to measure how long it takes to show its window, but how long does it take for it to warm up and finish indexing / loading everything, or before it actually becomes responsive?

Anyway, five seconds is long for a text editor; 10, 15 years ago, sublime text loaded and opened up a file in <1 second, and it still does today. Vim and co are instant.

Also keep in mind that desktop computers haven't gotten significantly faster for tasks like opening applications in the past years; they're more efficient (especially the M line CPUs) and have more hardware for specialist workloads like what they call AI nowadays, but not much innovation in application loading.

You use a lot of words like "pretty close to", "nearly", "essentially", but 10, 20 years ago they WERE instant; applications from 10, 20 years ago should be so much faster today than they were on hardware from back then.

I wish the big desktop app builders would invest in native applications. I understand why they go for web technology (it's the crossplatform GUI technology that Java and co promised and offers the most advanced styling of anything anywhere ever), but I wish they invested in it to bring it up to date.

replies(3): >>43973596 #>>43975270 #>>43978628 #
70. afavour ◴[] No.43973040{3}[source]
I think it’s a little more nuanced than the broad takes make it seem.

One of the biggest performance issues I witness is that everyone assumes a super fast, always on WiFi/5G connection. Very little is cached locally on device so even if I want to do a very simple search through my email inbox I have to wait on network latency. Sometimes that’s great, often it really isn’t.

Same goes for many SPA web apps. It’s not that my phone can’t process the JS (even though there’s way too much of it), it’s poor caching strategies that mean I’m downloading and processing >1MB of JS way more often than I should be. Even on a super fast connection that delay is noticeable.

71. jayd16 ◴[] No.43973055{4}[source]
You're comparing the art(!) of two different games, that targeted two different sets of hardware while using the ideal hardware for one and not the other. Kind of a terrible example.
replies(1): >>43973172 #
72. Cthulhu_ ◴[] No.43973061{4}[source]
It's strange: visibly loading the buttons indicates they use async technology that can use multithreaded CPUs effectively... but it's slower than the old synchronous UI stuff.

I'm sure it's significantly more expensive to render than Windows 3.11 - XP were - rounded corners and scalable vector graphics instead of bitmaps or whatever - but surely not that much? And the resulting graphics can be cached.

replies(3): >>43974741 #>>43974846 #>>43981230 #
73. jayd16 ◴[] No.43973065{3}[source]
A lot of nostalgia is at work here. Modern tech is amazing. If the old tools were actually better people would actually use them. Its not like you can't get them to work.
replies(1): >>43982401 #
74. 3036e4 ◴[] No.43973086{3}[source]
The sad thing is that even running DOS software in DOSBox (or in QEMU+FreeDOS), or Amiga software in UAE, is much faster than any native software I have run in many years on any modern systems. They also use more reasonable amounts of storage/RAM.

Animations is part of it of course. A lot of old software just updates the screen immediately, like in a single frame, instead of adding frustrating artificial delays to every interaction. Disabling animations in Android (an accessibility setting) makes it feel a lot faster for instance, but it does not magically fix all apps unfortunately.

75. Cthulhu_ ◴[] No.43973093{4}[source]
> All we'd have to give up is squishy hard-to-measure things like "feature sets" and "engineering velocity."

Would we? Really? I don't think giving up performance needs to be a compromise for the number of features or speed of delivering them.

replies(1): >>43973231 #
76. CyberDildonics ◴[] No.43973104[source]
Clock speeds are 2000x higher than the 80s.

IPC could be 80x higher when taking into account SIMD and then you have to multiply by each core. Mainstream CPUs are more like 1 to 2 million times faster than what was there in the 80s.

You can get full refurbished office computers that are still in the million times faster range for a few hundred dollars.

The things you are describing don't have much to do with computers being slow and feeling slow, but they are happening anyway.

Scripting languages that constantly allocate memory for every small operation and pointer-chase every variable because the types are dynamic are part of the problem; then you have people writing extremely inefficient programs in an already terrible environment.

Most programs are written now in however way the person writing them wants to work, not how someone using it wishes they were written.

Most people have actually no concept of optimization or what runs faster than something else. The vast majority of programs are written by someone who gets it to work and thinks "this is how fast this program runs".

The idea that the same software can run faster is a niche thought process, not even everyone on hacker news thinks about software this way.

77. scotty79 ◴[] No.43973164{5}[source]
> the value in the human is an informed opinion which is hard to program

I don't think I ever used a human for that. They are usually very uninformed about everything that's not their standard operational procedure or some current promotional materials.

replies(1): >>43973219 #
78. flohofwoe ◴[] No.43973172{5}[source]
> You're comparing the art(!)

The art direction, modelling and animation work is mostly fine, the worse look results from the lack of dynamic lighting and ambient occlusion in the Oblivion Remaster when switching Lumen (UE5's realtime global illumination feature) to the lowest setting, this results in completely flat lighting for the vegetation but is needed to get an acceptable base frame rate (it doesn't solve the random stuttering though).

Basically, the best art will always look bad without good lighting (and even baked or faked ambient lighting like in Skyrim looks better than no ambient lighting at all).

Digital Foundry has an excellent video about the issues:

https://www.youtube.com/watch?v=p0rCA1vpgSw

TL;DR: the 'ideal hardware' for the Oblivion Remaster doesn't exist, even if you get the best gaming rig money can buy.

replies(2): >>43973450 #>>43973742 #
79. bluGill ◴[] No.43973219{6}[source]
20 years ago when I was at McDonalds there would be several customers per shift (so maybe 1 in 500?) who didn't know what they wanted and asked for a recommendation. Since I worked there, I ate there often enough to know whether the special was something I liked or not.
replies(1): >>43973439 #
80. mbac32768 ◴[] No.43973220{3}[source]
In Carmack's Lex Fridman interview he says he knows C++ devs who still insist on using some ancient version of MSVC because it's *so fast* compared to the latest, on the latest hardware.
81. sorcerer-mar ◴[] No.43973231{5}[source]
People make higher-order abstractions for funzies?
82. Zak ◴[] No.43973274{7}[source]
It seems to me most developers don't want to learn much about the database and would prefer to hide it behind the abstractions used by their language of choice. I can relate to a degree; I was particularly put off by SQL's syntax (and still dislike it), but eventually came to see the value of leaning into the database's capabilities.
83. OtherShrezzing ◴[] No.43973431{3}[source]
Spotify takes 7 seconds from clicking on its icon to playing a song on a 2024 top-of-the-range MacBook Pro. Navigating through albums saved on your computer can take several seconds. Double clicking on a song creates a 1/4sec pause.

This is absolutely remarkable inefficiency considering the application's core functionality (media players) was perfected a quarter century ago.

replies(1): >>43975390 #
84. miloignis ◴[] No.43973436[source]
It's not that simple either - normally, if you're doing some loops over a large array of pixels, say, to perform some operation to them, there will only be a couple of bounds checks before the loop starts, checking the starting and ending conditions of the loops, not re-doing the bounds check for every pixel.

So very rarely should it be anything like 3-4x the cost, though some complex indexing could cause it to happen, I suppose. I agree scopes are a decent way to handle it!
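
The hoisting can also be written out by hand; a tiny illustrative sketch (function and names made up), where one up-front range check replaces a per-element check, which is exactly the transformation an optimizing compiler applies when it can prove the loop's index range at entry:

```python
def brighten(pixels, start, end, delta):
    """Add `delta` to pixels[start:end], validating bounds once up front."""
    if start < 0 or end > len(pixels) or start > end:
        raise IndexError("range out of bounds")
    for i in range(start, end):  # no further per-element checks needed
        pixels[i] += delta
    return pixels
```

One branch before the loop, zero branches inside it, regardless of how many pixels get touched.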

replies(1): >>43976542 #
85. scotty79 ◴[] No.43973439{7}[source]
Bless your souls. I'm not saying it doesn't happen. I just personally had only bad experiences so I actively avoid human interactive input in my commercial activity.
replies(1): >>43984456 #
86. KronisLV ◴[] No.43973450{6}[source]
> …when switching Lumen (UE5's realtime global illumination feature) to the lowest setting, this results in completely flat lighting for the vegetation but is needed to get an acceptable base frame rate (it doesn't solve the random stuttering though).

This also happens to many other UE5 games like S.T.A.L.K.E.R. 2, where they try to push the graphics envelope with expensive techniques and most people without expensive hardware have to turn the settings way down (even using things like upscaling and framegen, which degrade the experience further, at least when the starting point is very bad and you have to use them as a crutch), often making these modern games look worse than something a decade old.

Whatever UE5 is doing (or rather, how so many developers choose to use it) is a mistake now and might be less of a mistake in 5-10 years when the hardware advances further and becomes more accessible. Right now it feels like a ploy by Big GPU to force people to upgrade to overpriced hardware if they want to enjoy any of these games; or, silliness aside, it's an attempt by studios to save resources by making the artists spend less time faking and optimizing effects and detail that can just be brute-forced by the engine.

In contrast, most big CryEngine and idTech games run great even on mid range hardware and still look great.

replies(1): >>43975682 #
87. asciimov ◴[] No.43973492{3}[source]
One example is Office. Microsoft is going back to preloading office during Windows Boot so that you don't notice it loading. With the average system spec 25 years ago it made sense to preload office. But today, what is Office doing that it needs to offload its startup to running at boot?
88. CyberDildonics ◴[] No.43973523{4}[source]
> we could of course theoretically make everything even faster. It's nowhere near the most optimal use of the available hardware. All we'd have to give up is squishy hard-to-measure things like "feature sets" and "engineering velocity."

Says who? Who are these experienced people that know how to write fast software that think it is such a huge sacrifice?

The reality is that people who say things like this don't actually know much about writing fast software because it really isn't that difficult. You just can't grab Electron and the latest JavaScript React framework craze.

These kinds of myths get perpetuated by people who repeat it without having experienced the side of just writing native software. I think mostly it is people rationalizing not learning C++ and sticking to javascript or python because that's what they learned first.

replies(2): >>43973895 #>>43975467 #
89. maleldil ◴[] No.43973540{4}[source]
You have stories of people running 2021 MacBooks and complaining about performance. Those are not shitty computers.
90. wustus ◴[] No.43973584[source]
And this is JavaScript. And you. are. going. to. LOVE IT!
91. _Algernon_ ◴[] No.43973596{6}[source]
>Anyway, five seconds is long for a text editor; 10, 15 years ago, sublime text loaded and opened up a file in <1 second, and it still does today. Vim and co are instant.

Do any of those do the indexing that cause the slowness? If not it's comparing apples to oranges.

replies(1): >>43974979 #
92. pydry ◴[] No.43973648{6}[source]
slack is shit along all sorts of dimensions (not just speed and bloat) because you're not the customer.
93. yifanl ◴[] No.43973705{3}[source]
The Nintendo Switch on a chipset that was outdated a decade ago can run Tears of the Kingdom. It's not sensible that modern hardware is anything less than instant.
replies(1): >>43981909 #
94. _aavaa_ ◴[] No.43973711{5}[source]
The fact that this was seen as an acceptable design decision both by the creators, and then taken up by the industry is in an of itself a sign of a serious issue.
95. billfor ◴[] No.43973716[source]
I made a vendor run their buggy, slow software on a SPARC 20, against their strenuous complaints to just let them have an Ultra. When they eventually did optimize their software to run efficiently on the 20, it helped set the company up for success in the wider market. Optimization should be treated as a competitive advantage, perhaps in some cases one of the most important.
replies(1): >>43984314 #
96. jayd16 ◴[] No.43973742{6}[source]
I haven't really played it myself but it sounds like from the video you posted the remasters a bit of an outlier in terms of bad performance. Again it seems like a bad example to pull from.
97. sorcerer-mar ◴[] No.43973895{5}[source]
> These kinds of myths get perpetuated by people who repeat it without having experienced the side of just writing native software. I think mostly it is people rationalizing not learning assembly and sticking to C++ or PERL because that's what they learned first.

Why stop at C++? Is that what you happen to be comfortable with? Couldn't you create even faster software if you went down another level? Why don't you?

replies(2): >>43974060 #>>44001898 #
98. andy12_ ◴[] No.43973897{3}[source]
Online Word (or Microsoft 365, or whatever it is called) regularly took me 2 minutes to load a 120 page document. I'm being very literal here. You could see it load in real time approximately 1 page a second. And it wasn't a network issue, mind you. It was just that slow.

Worse, the document strained my laptop so much as I used it, I regularly had to reload the web-page.

99. RajT88 ◴[] No.43973911{4}[source]
This one drives me nuts.

I have to stay connected to VPN to work, and if I see VPN is not connected I click to reconnect.

If the VPN button hasn't loaded you end up turning on Airplane mode. Ouch.

100. buzzerbetrayed ◴[] No.43973999{4}[source]
Yep. I suspect GP has just gotten used to this and it is the new “snappy” to them.

I see this all the time with people who have old computers.

“My computer is really fast. I have no need to upgrade”

I press cmd+tab and watch it take 5 seconds to switch to the next window.

That’s a real life interaction I had with my parents in the past month. People just don’t know what they’re missing out on if they aren’t using it daily.

replies(1): >>43974819 #
101. CyberDildonics ◴[] No.43974060{6}[source]
> Couldn't you create even faster software if you went down another level? Why don't you?

No, and if you understood what makes software fast you would know that. Most software is allocating memory inside hot loops, and taking that out is extremely easy and can easily be a 7x speedup. Looping through contiguous memory instead of chasing pointers through heap-allocated variables is another 25x-100x speed improvement at least. This is all after switching from a scripting language, which is about 100x in itself if the language is Python.

It isn't about the instructions it is about memory allocation and prefetching.
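
A sketch of the two layouts (Python is used here just to show the shapes; the actual cache-miss cost only materializes in a native language, since CPython boxes everything anyway):

```python
import array

class Node:
    """One heap allocation per element; following .next is pointer chasing."""
    __slots__ = ("value", "next")
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def sum_linked(head):
    # Each .next hop lands on a separate heap object: in a native
    # language, every step is a potential cache miss.
    total = 0
    while head is not None:
        total += head.value
        head = head.next
    return total

def sum_contiguous(values):
    # One flat block of machine ints: the kind of layout a hardware
    # prefetcher can stream through.
    buf = array.array("q", values)
    total = 0
    for v in buf:
        total += v
    return total
```

Same answer either way; the difference is that one layout lets the memory system work with you and the other fights it.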

replies(1): >>43974682 #
102. ◴[] No.43974066{5}[source]
103. joaohaas ◴[] No.43974192{3}[source]
Try forcefully closing VSCode and your browser, and see how long it takes to open them again. The same is true for most complex webpages/'webapps' (Slack, Discord, etc).

A lot of other native Mac stuff is also less than ideal. Terminal keeps getting stuck all the time, Mail app can take a while to render HTML emails, Xcode is Xcode, and so on.

104. bjourne ◴[] No.43974413{3}[source]
Apple, unlike the other Silicon Valley giants, has figured out that latency >>> throughput. Minimizing latency is much more important for making a program "feel" fast than maximizing throughput. Some of the apps I interact with daily are Slack, Teams (ugh), Gmail, and YouTube, and they are all slow as dogshit.
105. AtlasBarfed ◴[] No.43974422[source]
I think a year-2001 1 GHz CPU should be a performance benchmark that every piece of basic non-high-performance software should execute acceptably on.

This has kind of been a disappointment to me with AI when I've tried it. LLMs should be able to port things. They should be able to rewrite things with the same interface. They should be able to translate from inefficient languages to more efficient ones.

It should even be able to optimize existing code bases automatically, or at least diagnose or point out poor algorithms, cache optimization, etc.

Heck, I remember PowerBuilder in the mid 90s running pretty well on 200 MHz CPUs, and that was interpreted p-code, no less. It's just amazing how slow stuff is. Do rounded corners and CSS really consume that much CPU power?

My limited experience was trying to take the Unix sed source code and have AI port it to a JVM language; it could do the most basic operations but utterly failed at even the intermediate sed capabilities. And then optimize? Nope.

Of course there's no desire for something like that. Which really shows what the purpose of all this is. It's to kill jobs. It's not to make better software. And it means AI is going to produce a flood of bad software. Really bad software.

replies(1): >>43979673 #
106. LocalPCGuy ◴[] No.43974448{5}[source]
We (probably) can guess the why - tracking and data opportunities which companies can eventually sell or utilize for profit is some way.
107. crubier ◴[] No.43974559{4}[source]
HOW does Slack take 20s to load for you? My huge corporate Slack takes 2.5s to cold load.

I'm so dumbfounded. Maybe non-MacOS, non-Apple silicon stuff is complete crap at that point? Maybe the complete dominance of Apple performance is understated?

replies(3): >>43974998 #>>43975421 #>>43975873 #
108. skydhash ◴[] No.43974584{6}[source]
> Presumably they wanted the telemetry

Why not log them to a file and cron a script to upload the data? Even if the feature request is nonsensical, you can architect a solution that respect the platform's constraints. It's kinda like when people drag in React and Next.js just to have a static website.

replies(1): >>43977729 #
109. skydhash ◴[] No.43974620{4}[source]
Then you install Linux and get all that without the mess that is Win11. Inefficient software is inefficient software.
110. vel0city ◴[] No.43974668{5}[source]
It's probably more so that any corporate Windows box has dozens of extra security and metrics agents, installed by IT teams, interrupting and blocking every network request, file open, and OS syscall, while the Macs have some very basic MDM profile applied.
replies(1): >>43983833 #
111. sorcerer-mar ◴[] No.43974682{7}[source]
Sorry but it is absolutely the case that there are optimizations available to someone working in assembly that are not available to someone working in C++.

You are probably a lazy or inexperienced engineer if you choose to work in C++.

In fact, there are optimizations available at the silicon level that are not available in assembly.

You are probably a lazy or inexperienced engineer if you choose to work in assembly.

replies(1): >>43978276 #
112. vel0city ◴[] No.43974741{5}[source]
Windows 3.1 wasn't checking WiFi, Bluetooth, energy saving profile, night light setting, audio devices, current power status and battery level, and more when clicking the non-existent icon on the non-existent taskbar. Windows XP didn't have this quick setting area at all. But I do recall having the volume slider take a second to render on XP from time to time, and that was only rendering a slider.

And FWIW this stuff is then cached. I hadn't clicked that setting area in a while (maybe the first time this boot?) and did get a brief gray box that then a second later populated with all the buttons and settings. Now every time I click it again it appears instantly.

replies(2): >>43976317 #>>43981060 #
113. vel0city ◴[] No.43974819{5}[source]
Yeah, I play around with retro computers all the time. Even with IO devices that are unthinkably performant compared to storage hardware actually common at the time these machines are often dog slow. Just rendering JPEGs can be really slow.

Maybe if you're in a purely text console doing purely text things 100% in memory it can feel snappy. But the moment you do anything graphical or start working on large datasets its so incredibly slow.

I still remember trying to do photo editing on a Pentium II with a massive 64MB of RAM. Or trying to get decent resolutions scans off a scanner with a Pentium III and 128MB of RAM.

replies(2): >>43975298 #>>43977784 #
114. jandrese ◴[] No.43974846{5}[source]
Honestly it behaves like the interface is some Electron app that has to load the visual elements from a little internal webserver. That would be a very silly way to build an OS UI though, so I don't know what Microsoft is doing.
115. bitmasher9 ◴[] No.43974876[source]
The major slowdown of modern applications is network calls. Spend 50-500ms a pop for a few kilos of data. Many modern applications will spin up a half dozen blocking network calls casually.
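
The difference is easy to sketch, with sleeps standing in for network latency (all endpoint names hypothetical):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(endpoint, latency=0.05):
    # Stand-in for a network call: the calling thread blocks for
    # `latency` seconds before "data" comes back.
    time.sleep(latency)
    return f"data:{endpoint}"

def load_sequential(endpoints):
    # Half a dozen blocking calls, one after another:
    # total time is the SUM of the latencies.
    return [fetch(e) for e in endpoints]

def load_concurrent(endpoints):
    # The same calls overlapped on a thread pool:
    # total time is roughly the SLOWEST single call.
    with ThreadPoolExecutor(max_workers=len(endpoints)) as pool:
        return list(pool.map(fetch, endpoints))
```

Four calls at 50ms each is 200ms sequentially but about 50ms overlapped; the casual version is the first one.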
116. vel0city ◴[] No.43974933{4}[source]
I don't get any kind of spinner on Outlook opening emails. Especially emails which are pure text or only lightly stylized open instantly. Even emails with calendar invites load really fast, I don't see any kind of spinner graphic at all.

Running latest Outlook on Windows 11, currently >1k emails in my Inbox folder, on an 11th gen i5, while also on a Teams call a ton of other things active on my machine.

This is also a machine with a lot of corporate security tools sapping a lot of cycles.

replies(1): >>43975844 #
117. maccard ◴[] No.43974979{7}[source]
Riders startup time isn’t including indexing. Indexing my entire project takes minutes but it does it in the background.
118. bflesch ◴[] No.43974998{5}[source]
Most likely the engineers at many startups only use Apple computers themselves and therefore only optimize performance for those systems. It's a shame, but IMO that's a result of their incompetence, not of some magic Apple performance gains.
119. jandrese ◴[] No.43975037[source]
2005 was Pentium 4 era.

For comparison: https://www.cpubenchmark.net/compare/1075vs5852/Intel-Pentiu...

That's about a 168x difference. That was from before Moore's law started petering out.

For only a 5x speed difference you need to go back to the 4th or 5th generation Intel Core processors from about 10 years ago.

It is important to note that the speed figure above is computed by adding all of the cores together and that single core performance has not increased nearly as much. A lot of that difference is simply from comparing a single core processor with one that has 20 cores. Single core performance is only about 8 times faster than that ancient Pentium 4.

120. timbit42 ◴[] No.43975046[source]
Your argument is exactly why we ended up with the abominations of C and C++ instead of the safety of Pascal, Modula-2, Ada, Oberon, etc. Programmers at the time didn't realize how little impact safety features like bounds checking have. The bounds only need to be checked once for a for loop, not on each iteration.
replies(1): >>43976530 #
121. xboxnolifes ◴[] No.43975093{4}[source]
That sounds like a corporate anti-virus slowing everything down to me. vscode takes a few seconds to launch for me from within WSL2, with extensions. IntelliJ on a large project takes a while I'll give you that, but just intelliJ takes only a few seconds to launch.
replies(1): >>43975122 #
122. maccard ◴[] No.43975101{5}[source]
This is my third high-end workstation computer in the last 5 years, and my experience has been roughly consistent.

My corporate vpn app is a disaster on so many levels, it’s an internally developed app as opposed to Okta or anything like that.

I would likewise say that your experience is not universal, and that in many circumstances the situation is much worse. My wife is running an i5 laptop from 2020 and her work intranet is a 60 second load time. Outlook startup and sync are measured in minutes including mailbox fetching. You can say this is all not the app developers fault, but the crunch that’s installed on her machine is slowing things down by 5 or 10x and that slowdown wouldn’t be a big deal if the apps had reasonable load times in the first place.

123. maccard ◴[] No.43975122{5}[source]
Vscode is actually 10 seconds, you’re right.

I have no corp antivirus or MDM on this machine, just windows 11 and windows defender.

124. timbit42 ◴[] No.43975158[source]
You only need to bounds check once before a for loop starts, not every iteration.
replies(2): >>43976236 #>>43988812 #
125. titzer ◴[] No.43975205{3}[source]
> However these days the majority of the bad attacks are social

You're going to have to cite a source for that.

Bounds checking is one mechanism that addresses memory safety vulnerabilities. According to MSFT and CISA[1], nearly 70% of CVEs are due to memory safety problems.

You're saying that we shouldn't solve one (very large) part of the (very large) problem because there are other parts of the problem that the solution wouldn't address?

[1] https://www.cisa.gov/news-events/news/urgent-need-memory-saf...

replies(2): >>43982703 #>>43983400 #
126. maccard ◴[] No.43975226{4}[source]
For all the people who are doubting that applications are slow and that it must just be me - here [0] is a debugger that someone has built from the ground up that compiles, launches, attaches a debugger and hits a breakpoint in the same length of time that visual studio displays the splash screen for.

[0] https://x.com/ryanjfleury/status/1747756219404779845

127. rtkwe ◴[] No.43975270{6}[source]
Sublime Text isn't an IDE though so comparing it to VS Code is comparing grapes and apples. VS Code is doing a lot more.
replies(2): >>43975459 #>>43982775 #
128. titzer ◴[] No.43975298{6}[source]
64MB is about the size of (a big) L3 cache. Today's L3 caches have a latency of 3-12ns and throughput measured in hundreds of gigabytes per second. And yet we can't manage to get responsive UIs because of tons of crud.
replies(1): >>43975492 #
129. thewebguyd ◴[] No.43975345{5}[source]
5 seconds is a lot for a machine with an M4 Pro, and tons of RAM and a very fast SSD.

There's native apps just as, if not more, complicated than VSCode that open faster.

The real problem is electron. There's still good, performant native software out there. We've just settled on shipping a web browser with every app instead.

replies(1): >>43975416 #
130. conradfr ◴[] No.43975364{4}[source]
All those things take 4 seconds to launch or load on my M1. Not great, not bad.
replies(1): >>43978597 #
131. everdrive ◴[] No.43975390{4}[source]
And on RhythmBox, on a 2017 laptop it works instantaneously. These big monetized apps were a huge mistake.
replies(1): >>43975488 #
132. maccard ◴[] No.43975416{6}[source]
There is snappy electron software out there too, to be fair. If you create a skeleton electron app it loads just fine. A perceptible delay but still quick.

The problem is when you load it and then React and all its friends, design your software for everything to be asynchronous, and develop it on a 0-latency connection over localhost with a team of 70 people where nobody is holistically considering "how long does it take from clicking the button to doing the thing I want it to do".

133. bloomca ◴[] No.43975421{5}[source]
I use Windows alongside my Mac Mini, and I would say they perform pretty similarly (but M-chip is definitely more power efficient).

I don't use Slack, but I don't think anything takes 20 seconds for me. Maybe XCode, but I don't use it often enough to be annoyed.

replies(1): >>43976352 #
134. BobaFloutist ◴[] No.43975444{4}[source]
When I bought my current laptop, it was the cheapest one Costco had with 8 gigs of memory, which was at the time plenty for all but specialized uses. I've since upgraded it to 16, which feels like the current standard for that.

But...why? Why on earth do I need 16 gigs of memory for web browsing and basic application use? I'm not even playing games on this thing. But there was an immediate, massive spike in performance when I upgraded the memory. It's bizarre.

replies(1): >>43976527 #
135. maccard ◴[] No.43975459{7}[source]
I disagree. Vs code uses plugins for all its heavy lifting. Even a minimal plugin setup is substantially slower to load than sublime is, which can also have an LSP plugin.
136. BobaFloutist ◴[] No.43975467{5}[source]
I mean do you think JavaScript and Python aren't easier than C++? Then why do they exist?
replies(1): >>43978308 #
137. thewebguyd ◴[] No.43975470{5}[source]
> Yep. Developers make programs run well enough on the hardware sitting on our desks. So long as we’re well paid (and have decent computers ourselves), we have no idea what the average computing experience is for people still running 10yo computers which were slow even for the day. And that keeps the treadmill going. We make everyone need to upgrade every few years.

Same thing happens with UI & Website design. When the designers and front-end devs all have top-spec MacBooks, with 4k+ displays, they design to look good in that environment.

Then you ship to the rest of the world which are still for the most part on 16:9 1920x1080 (or god forbid, 1366x768), low spec windows laptops and the UI looks like shit and is borderline unstable.

Now I don't necessarily think things should be designed for the lowest common denominator, but at the very least we should be taking into consideration that the majority of users probably don't have super high end machines or displays. Even today you can buy a brand new "budget" windows laptop that'll come with 8GB of RAM, and a tiny 1920x1080 display, with poor color reproduction and crazy low brightness - and that's what the majority of people are using, if they are using a computer at all and not a phone or tablet.

138. thewebguyd ◴[] No.43975488{5}[source]
> These big monetized apps were a huge mistake.

It's Electron. Electron was a mistake.

139. vel0city ◴[] No.43975492{7}[source]
My modern machine running a modern OS is still way snappier while actually loading the machine and doing stuff. Sure, if I'm directly on a tty and just running vim on a small file its super fast. The same on my modern machine. Try doing a few things at once or handle some large dataset and see how well it goes.

My older computers would completely lock up when given a large task to do, often for many seconds. Scanning an image would take over the whole machine for like a minute per page! Applying a filter to an image would lock up the machine for several seconds, even for a much smaller image and a much simpler filter. The computer couldn't even play MP3s and keep a word processor responsive; if you really wanted to listen to music while writing a paper, you'd better have it pass the audio through from a CD, much less think about streaming it from some remote location through a whole encrypted TCP stream and decompression.

These days I can have lots of large tasks running at the same time and still have more responsiveness.

I have fun playing around with retro hardware and old applications, but "fast" and "responsive" are not adjectives I'd use to describe them.

replies(1): >>43984268 #
140. slowmovintarget ◴[] No.43975552{4}[source]
Search is being replaced by LLM chat. Agent workflows are going to get us to a place where people can rally software to their own purposes. At that point, they don't have to interact with the web front end, they can interact with their own personal front-end that is able to navigate your backend.

Today a website is easier. But just like there's a very large percentage of people doing a great many things from their phone instead of tying themselves to a full-blown personal computer, there will be an increasing number of people who send their agents off to get things done. In that scenario, the user interface is further up the stack than a browser, if there's a browser as typically understood in the stack at all.

141. flohofwoe ◴[] No.43975682{7}[source]
It's like (usable) realtime global illumination is the fusion power of rendering, always just 10 years away ;)

I remember that UE4 also hyped a realtime GI solution which then was hardly used in real-world games because it had too big a performance hit.

142. KapKap66 ◴[] No.43975741{3}[source]
There's a problem when people who aren't very sensitive to latency try to track it: their perception of what "instant" actually means is wrong. For them, instant is like one second. For someone who cares about latency, instant is less than 10 milliseconds, or whatever threshold makes the difference between input and result imperceptible. People have the same problem judging video game framerates, because they don't compare them back to back very often (there are perceptual differences between framerates of 30, 60, 120, 300, and 500, at the minimum, even on displays incapable of refreshing at these higher speeds), but you'll often hear people say that 60 fps is "silky smooth," which is not true whatsoever lol.

If you haven't compared high and low latency directly next to each other then there are good odds that you don't know what it looks like. There was a twitter video from awhile ago that did a good job showing it off that's one of the replies to the OP. It's here: https://x.com/jmmv/status/1671670996921896960

Sorry if I'm too presumptuous, however; you might be completely correct and instant is instant in your case.

replies(3): >>43975762 #>>43975835 #>>43976654 #
143. JoeAltmaier ◴[] No.43975762{4}[source]
I fear that such comments are similar to the old 'a monster cable makes my digital audio sound more mellow!'

The eye perceives at about 10 Hz. That's 100ms per capture. All the rest, I'd have to see a study that shows how any higher framerate can possibly be perceived or useful.

replies(4): >>43976063 #>>43977562 #>>43981646 #>>43985975 #
144. bpshaver ◴[] No.43975835{4}[source]
Sure, but there's no limit to what people can decide to care about. There will always be people who want more speed and less latency, but the question is: are they right to do so?

I'm with the person you're responding to. I use the regular suite of applications and websites on my 2021 M1 MacBook. Things seem to load just fine.

145. xnorswap ◴[] No.43975844{5}[source]
I guess I shall screen record it; this is a new-ish Windows 11 laptop.

(This might also be a "new Outlook" vs "old Outlook" thing?)

replies(1): >>43975945 #
146. pona-a ◴[] No.43975855{3}[source]
Did people make this exchange or did __the market__? I feel like we're assigning a lot of intention to a self-accelerating process.

You add a new layer of indirection to fix that one problem on the previous layer, and repeat it ad infinitum until everyone is complaining about having too many layers of indirection, yet nobody can avoid interacting with them, so the only short-term solution is a yet another abstraction.

147. mike_hearn ◴[] No.43975873{5}[source]
Yes it is and the difference isn't understated, I think everyone knows by now that Apple has run away with laptop/desktop performance. They're just leagues ahead.

It's a mix of better CPUs, better OS design (e.g. much less need for aggressive virus scanners), a faster filesystem, less corporate meddling, high end SSDs by default... a lot of things.

replies(1): >>43982677 #
148. mike_hearn ◴[] No.43975898{4}[source]
Windows 11 shell partly uses React Native in the start button flyout. It's not a heavily optimized codebase.
replies(2): >>43976282 #>>43981074 #
149. vel0city ◴[] No.43975945{6}[source]
I am using New Outlook.

I don't doubt it's happening to you, but I've never experienced it. And I'm not exactly using bleeding edge hardware here. A several year old i5 and a Ryzen 3 3200U (a cheap 2019 processor in a cheap Walmart laptop).

Maybe your IT team has something scanning every email on open. I don't know what to tell you, but it's not the experience out of the box on any machine I've used.

150. mike_hearn ◴[] No.43975990{4}[source]
Some of this is due to the adoption of React. GUI optimization techniques that used to be common are hard to pull off in the React paradigm. For instance, pre-rendering parts of the UI that are invisible doesn't mesh well with the React model in which the UI tree is actually being built or destroyed in response to user interactions and in which data gets loaded in response to that, etc. The "everything is functional" paradigm is popular for various legitimate reasons, although React isn't really functional. But what people often forget is that functional languages have a reputation for being slow...
151. zahlman ◴[] No.43975999{3}[source]
How long did your computer take to start up, from power off (and no hibernation, although that presumably wasn't a thing yet), the first time you got to use a computer?

How long did it take the last time you had to use an HDD rather than SSD for your primary drive?

How long did it take the first time you got to use an SSD?

How long does it take today?

Did literally anything other than the drive technology ever make a significant difference in that, in the last 40 years?

> Almost everything loads instantly on my 2021 MacBook

Instantly? Your applications don't have splash screens? I think you've probably just gotten used to however long it does take.

> 5 year old mobile CPUs load modern SPA web apps with no problems.

"An iPhone 11, which has 4GB of RAM (32x what the first-gen model had), can run the operating system and display a current-day webpage that does a few useful things with JavaScript".

This should sound like clearing a very low bar, but it doesn't seem to.

152. zahlman ◴[] No.43976063{5}[source]
>The eye percieves at about 10 hz. That's 100ms per capture. All the rest, I'd have to see a study that shows how any higher framerate can possibly be perceived or useful.

It takes effectively no effort to conduct such a study yourself. Just try re-encoding a video at different frame rates up to your monitor refresh rate. Or try looking at a monitor that has a higher refresh rate than the one you normally use.

153. mike_hearn ◴[] No.43976066{3}[source]
Use of underpowered databases and abstractions that don't eliminate round-trips is a big one. The hardware is fast but apps take seconds to load because on the backend there's a lot of round-trips to the DB and back, and the query mix is unoptimized because there are no DBAs anymore.

It's the sort of thing that can be handled via better libraries, if people use them. Instead of Hibernate use a mapper like Micronaut Data. Turn on roundtrip diagnostics in your JDBC driver, look for places where they can be eliminated by using stored procedures. Have someone whose job is to look out for slow queries and optimize them, or pay for a commercial DB that can do that by itself. Also: use a database that lets you pipeline queries on a connection and receive the results asynchronously, along with server languages that make it easy to exploit that for additional latency wins.

154. TiredOfLife ◴[] No.43976159[source]
And text that is not a pixely or blurry mess. And Unicode.
replies(1): >>43976412 #
155. conductr ◴[] No.43976220{4}[source]
IMO they just don't think of "initial launch speed" as a meaningful performance stat to base their entire tech stack upon. Most of these applications and even websites, once opened, are going to be used for several hours/days/weeks before being closed by most of their users
156. card_zero ◴[] No.43976236{3}[source]
If they're all being inserted contiguously.

Anyway that's a form of saying "I know by reasoning that none of these will be outside the bounds, so let's not check".

157. stronglikedan ◴[] No.43976270{3}[source]
Mine open instantly, as long as I only have one open at a time. The power users on HN likely encounter a lot of slow loading apps, like I do.
158. tjader ◴[] No.43976282{5}[source]
That's the point. It's so bloated that an entirely local operation that should be instantaneous takes over 1 second.
159. tjader ◴[] No.43976317{6}[source]
But is this cache trustworthy or will it eventually lead you to click in the wrong place because the situation changed and now there's a new button making everything change place?

And even if every piece of information takes a bit to figure out, it doesn't excuse taking a second to even draw the UI. If checking Bluetooth takes a second, then draw the button immediately but disable interaction and show a loading icon, and when you get the Bluetooth information, update the button, and so on for everything else.

replies(2): >>43976371 #>>43980021 #
160. maccard ◴[] No.43976352{6}[source]
I have an i9 Windows machine with 64GB RAM and an M1 Mac. I’d say in day-to-day responsiveness the Mac is head and shoulders above the Windows machine, although it's getting worse. I’m not sure if the problem is that the ARM Electron apps are getting slower or if my machine is just aging.
replies(1): >>43978432 #
161. vel0city ◴[] No.43976371{7}[source]
As someone who routinely hops between WiFi networks, I've never seen a wrong value here.

And OK, we'll draw a tile with all the buttons with greyed out status for that half second and then refresh to show the real status. Did that really make things better, or did it make it worse?

And if we bothered keeping all that in memory, and kept using the CPU cycles to make sure it was actually accurate and up to date for the click six hours later, wouldn't people then complain about how obviously bloated it was? How is this not a constant battle of being unable to appease any critics until we're back at the Win 3.1 state of things with no Bluetooth devices, no WiFi networks, no dynamic switching of audio devices, etc.?

And remember, we're comparing this to just rendering a volume slider which still took a similar or worse amount of time and offered far less features.

replies(2): >>43976427 #>>43979016 #
162. mikewarot ◴[] No.43976383[source]
The realization that a string could hold a gigabyte of text might have killed off null-terminated strings and saved us all a f@ckton of grief.
163. anthk ◴[] No.43976402{4}[source]
Compare it to qutecom, or any other xmpp client.
164. anthk ◴[] No.43976412{3}[source]
Unicode has worked since Plan 9. And antialiasing dates from the early '90s.
165. tjader ◴[] No.43976427{8}[source]
> And OK, we'll draw a tile with all the buttons with greyed out status for that half second and then refresh to show the real status. Did that really make things better, or did it make it worse?

Clearly better. Most of the buttons should also work instantly, most of the information should also be available instantly. The button layout is rendered instantly, so I can already figure out where I want to click without having to wait one second even if the button is not enabled yet, and by the time my mouse reaches it it will probably be enabled.

> And remember, we're comparing this to just rendering a volume slider which still took a similar or worse amount of time and offered far less features.

I've never seen the volume slider in Windows 98 take one second to render. Not even the start menu, which is much more complex, and which in Windows 11 often takes a second, and search results also show up after a random amount of time and shuffle the results around a few times, leading to many misclicks.

replies(1): >>43976488 #
166. vel0city ◴[] No.43976488{9}[source]
It doesn't even know if the devices are still attached (as it potentially hasn't tried interfacing them for hours) but should instantly be able to allow input to control them and fully understand their current status. Right. Makes sense.

And if you don't remember the volume slider taking several seconds to render on XP you must be much wealthier than me or have some extremely rose colored glasses. I play around with old hardware all the time and get frustrated with the unresponsiveness of old equipment with period accurate software, and had a lot of decent hardware (to me at least) in the 90s and 00s. I've definitely experienced lots of times of the start menu painting one entry after the other at launch, taking a second to roll out, seeking on disk for that third level menu in 98, etc.

Rose colored glasses, the lot of you. Go use an old 386 for a month. Tell me how much more productive you are after.

167. aspenmayer ◴[] No.43976527{5}[source]
Most cheap laptops these days ship with only one stick of RAM, and thus are only operating in single-channel mode. By adding another memory module, you can operate in dual-channel mode which can increase performance a lot. You can see the difference in performance by running a full memory test in single-channel mode vs multi-channel mode with a program like memtest86 or memtest86+ or others.
168. monkeyelite ◴[] No.43976530{3}[source]
> The bounds only need to be checked once for a for loop, not on each iteration.

This is a theoretical argument. It depends on the compiler being able to see that’s what you’re doing and prove that there is no other mutation.
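To make the two cases concrete, here is a minimal C++ sketch (hypothetical functions, purely for illustration): when the bound is visible and loop-invariant, one up-front check covers every iteration; when indices come from runtime data, each access has to be checked individually.

```cpp
#include <cstddef>
#include <stdexcept>
#include <vector>

// Provable case: the bound is checked once up front, so the loop body
// can use unchecked access safely. A compiler can make the same hoist
// only if it can prove nothing mutates v or n inside the loop.
long sum_first_n(const std::vector<int>& v, std::size_t n) {
    if (n > v.size()) throw std::out_of_range("n exceeds size");
    long total = 0;
    for (std::size_t i = 0; i < n; ++i)
        total += v[i];  // in bounds by the single check above
    return total;
}

// Unprovable case: each index comes from runtime data, so every access
// needs its own check (std::vector::at does exactly that).
long gather(const std::vector<int>& v, const std::vector<std::size_t>& idx) {
    long total = 0;
    for (std::size_t i : idx)
        total += v.at(i);  // per-access check cannot be hoisted
    return total;
}
```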

> abominations of C and C++

Sounds like you don’t understand the design choices that made these languages successful.

replies(1): >>43978367 #
169. monkeyelite ◴[] No.43976542{3}[source]
You’re describing a situation where I (or a very smart compiler) can choose when to bounds check or not, to make that intelligent realization.
170. Aurornis ◴[] No.43976554{3}[source]
I can never tell if all of these comments are exaggerations to make a point, or if some people really have computers so slow that everything takes 20 seconds to launch (like the other comment claims).

I'm sure some of these people are using 10 year old corporate laptops with heavy corporate anti-virus scanning, leading to slow startup times. However, I think a lot of people are just exaggerating. If it's not instantly open, it's too long for them.

I, too, can get programs like Slack and Visual Studio Code to launch in a couple seconds at most, in contrast to all of these comments claiming 20 second launch times. I also don't quit these programs, so the only time I see that load time is after an update or reboot. Even if every program did take 20 seconds to launch and I rebooted my computer once a week, the net time lost would be measured in a couple of minutes.

replies(1): >>43978546 #
171. Aurornis ◴[] No.43976593{4}[source]
> Slack, teams, vs code, miro, excel, rider/intellij, outlook, photoshop/affinity are all applications I use every day that take 20+ seconds to launch.

> On the website front - Facebook, twitter, Airbnb, Reddit, most news sites, all take 10+ seconds to load or be functional

I just launched IntelliJ (first time since reboot). Took maybe 2 seconds to the projects screen. I clicked a random project and was editing it 2 seconds after that.

I tried Twitter, Reddit, AirBnB, and tried to count the loading time. Twitter was the slowest at about 3 seconds.

I have a 4 year old laptop. If you're seeing 10 second load times for every website and 20 second launch times for every app, you have something else going on. You mentioned corporate VPN, so I suspect you might have some heavy anti-virus or corporate security scanning that's slowing your computer down more than you expect.

replies(1): >>43978436 #
172. Aurornis ◴[] No.43976654{4}[source]
> For someone who cares about latency, instant is less than 10 milliseconds

Click latency of the fastest input devices is about 1ms and with a 120Hz screen you're waiting 8.3ms between frames. If someone is annoyed by 10ms of latency they're going to have a hard time in the real world where everything takes longer than that.

I think the real difference is that 1-3 seconds is completely negligible launch time for an app when you're going to be using it all day or week, so most people do not care. That's effectively instant.

The people who get irrationally angry that their app launch took 3 seconds out of their day instead of being ready to go on the very next frame are just never going to be happy.

replies(1): >>43982197 #
173. Aurornis ◴[] No.43976715[source]
> It doesn’t work like that. If an image processing algorithm takes 2 instructions per pixel, adding a check to every access could 3-4x the cost.

Your understanding of how bounds checking works in modern languages and compilers is not up to date. You're not going to find a situation where bounds checking causes an algorithm to take 3-4X longer.

A lot of people are surprised when the bounds checking in Rust is basically negligible, maybe 5% at most. In many cases if you use iterators you might not see a hit at all.
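If you want to sanity-check that overhead yourself, here's an illustrative C++ stand-in (hypothetical function names, not anyone's actual benchmark; `std::vector::at` plays the role of a per-access bounds check):

```cpp
#include <chrono>
#include <cstddef>
#include <cstdint>
#include <vector>

// The "image processing" scenario: ~2 ops per pixel, once with
// unchecked access and once with a bounds check on every access.
std::uint64_t brighten_unchecked(const std::vector<std::uint8_t>& px) {
    std::uint64_t acc = 0;
    for (std::size_t i = 0; i < px.size(); ++i)
        acc += px[i] + 1u;      // no per-access check
    return acc;
}

std::uint64_t brighten_checked(const std::vector<std::uint8_t>& px) {
    std::uint64_t acc = 0;
    for (std::size_t i = 0; i < px.size(); ++i)
        acc += px.at(i) + 1u;   // bounds check on every access
    return acc;
}

// Times both variants over a 10-megapixel buffer and returns
// checked-time / unchecked-time.
double checked_over_unchecked_ratio() {
    std::vector<std::uint8_t> px(10'000'000, 100);
    auto time_of = [&](auto f) {
        auto t0 = std::chrono::steady_clock::now();
        volatile std::uint64_t sink = f(px);
        (void)sink;
        return std::chrono::duration<double>(
            std::chrono::steady_clock::now() - t0).count();
    };
    double unchecked = time_of(brighten_unchecked);
    double checked = time_of(brighten_checked);
    return checked / unchecked;
}
```

On an optimized build the ratio typically lands well under the feared 3-4x; exact numbers depend on compiler, flags, and machine.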

Then again, if you have an image processing algorithm that is literally reading every single pixel one-by-one to perform a 2-instruction operation and calculating bounds check on every access in the year 2025, you're doing a lot of things very wrong.

> This is why if you dictate bounds checking then the language becomes uncompetitive for certain tasks.

Do you have any examples at all? Or is this just speculation?

replies(1): >>43976765 #
174. monkeyelite ◴[] No.43976765{3}[source]
> Your understanding of how bounds checking works in modern languages and compilers is not up to date.

One I am familiar with is Swift - which does exactly this because it’s a library feature of Array.

Which languages will always be able to determine through function calls, indirect addressing, etc whether it needs to bounds check or not?

And how will I know if it succeeded or whether something silently failed?

> if you have an image processing algorithm that is literally reading every single pixel one-by-one to perform a 2-instruction operation and calculating bounds check on every access in the year 2025, you're doing a lot of things very wrong

I agree. And note this is an example of a scenario you can encounter in other forms.

> Do you have any examples at all? Or is this just speculation?

Yes. Java and Python are not competitive for graphics and audio processing.

replies(2): >>43984858 #>>43989886 #
175. hello_computer ◴[] No.43977351[source]
> the 1000Xers kinda ruined things for the rest of us

Robert Barton (of Burroughs 5000 fame) once referred to these people as “high priests of a low cult.”

176. KapKap66 ◴[] No.43977562{5}[source]
Well if you believe that, start up a video game with a framerate limiter and set your game's framerate limit to 10 fps and tell me how much you enjoy the experience. By default your game will likely be running at either 60 fps or 120 fps if you're vertical synced (depends on your monitor's refresh rate). Make sure to switch back and forth between 10 and 60/120 to compare.

Even your average movie is captured at 24 fps. Again, very likely you've never actually compared these things for yourself back to back, as I mentioned originally.

replies(1): >>44009921 #
177. arolihas ◴[] No.43977729{7}[source]
someone out there now has a cool resume line item about doing real time cloud microservices on the edge
178. kristianp ◴[] No.43977784{6}[source]
The new-ish Windows photo viewer in Win 10 is painfully slow: it renders a lower-res preview first, but then the photo seems to shift when the full resolution is shown. The photo viewer in Windows 7 would prerender the next photo so the transition to the next one was instant. This is for 24-megapixel photos, maybe 4MB JPEGs.

So the quality has gone backwards in the process of rewriting the app into the touch-friendly style. A lot of core Windows apps are like that.

Note that the Windows file system is much slower than Linux's ext4; I don't know about Mac filesystems.

179. CyberDildonics ◴[] No.43978276{8}[source]
Go ahead and give me examples of what you mean.

I'm talking about speeding software up by 10x-100x by language choice, then 7x with extremely minimal adjustments (allocate memory outside of hot loops), then 25x - 100x with fairly minimal design changes (use vectors, loop through them straight).

I'm also not saying people are lazy, I'm saying they don't know that with something like modern C++ and a little bit of knowledge of how to write fast software MASSIVE speed gains are easy to get.

You are helping make my point here, most programmers don't realize that huge speed gains are low hanging fruit. They aren't difficult, they don't mean anything is contorted or less clear (just the opposite), they just have to stop rationalizing not understanding it.

I say this with knowledge of both sides of the story instead of guessing based on conventional wisdom.
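As one concrete sketch of the "allocate memory outside of hot loops" adjustment (hypothetical functions, just illustrating the pattern, not any real codebase): same result, one change in allocation pattern.

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Slow shape: a fresh heap allocation on every iteration.
std::size_t longer_than_slow(const std::vector<std::string>& words,
                             std::size_t len) {
    std::size_t hits = 0;
    for (const auto& w : words) {
        std::vector<char> buf(w.begin(), w.end());  // allocates each pass
        if (buf.size() > len) ++hits;
    }
    return hits;
}

// Fast shape: one buffer whose capacity is reused across iterations.
std::size_t longer_than_fast(const std::vector<std::string>& words,
                             std::size_t len) {
    std::vector<char> buf;                   // lives outside the hot loop
    std::size_t hits = 0;
    for (const auto& w : words) {
        buf.assign(w.begin(), w.end());      // reuses capacity once grown
        if (buf.size() > len) ++hits;
    }
    return hits;
}
```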

replies(1): >>43978366 #
180. dijit ◴[] No.43978286[source]
Maybe since 1980.

I recently watched a video that can be summarised quite simply as: "Computers today aren't that much faster than the computers of 20 years ago, unless you specifically code for them".

https://www.youtube.com/watch?v=m7PVZixO35c

It's a little bit ham-fisted, as the author also brushed aside decades of compiler optimisations, and it's not apples to apples since he's comparing desktop-class hardware with what is essentially laptop hardware; but it's interesting to see that a lot of the performance gains really weren't that great: he observes a doubling of performance in 15 years! Truth be told, most people use laptops now, and 20 years ago most people used desktops, so it's not totally unfair.

Maybe we've bought a lot into marketing.

181. nitwit005 ◴[] No.43978303[source]
The cost of bounds checking, by itself, is low. The cost of using safe languages generally can be vastly higher.

Garbage collected languages often consume several times as much memory. They aren't immediately freeing memory no longer being used, and generally require more allocations in the first place.

182. CyberDildonics ◴[] No.43978308{6}[source]
JavaScript was made in a few weeks so that some sort of programmability could be built into web pages. Python was made in the '90s as a more modern competitor to Perl for scripting.

Modern C++ and systems languages didn't exist, and neither was made with the intention that people would write general-purpose interactive programs that leveraged computers 1,000x faster so that software could run 1,000x slower.

183. dijit ◴[] No.43978315{3}[source]
You're a pretty bad sample, that machine you're talking about probably cost >$2,000 new; and if it's an M-series chip; well that was a multi-generational improvement.

I (very recently I might add) used a Razer Blade 18, with i9 13950HX and 64G of DDR5 memory, and it felt awfully slow, not sure how much of that is Windows 11's fault however.

My daily driver is an M2 Macbook Air (or a Threadripper 3970x running linux); but the workers in my office? Dell Latitudes with an i5, 4 real cores and 16G of RAM if they're lucky... and of course, Windows 11.

Don't even ask what my mum uses at home, it cost less than my monthly food bill; and that's pretty normal for people who don't love computers.

184. sorcerer-mar ◴[] No.43978366{9}[source]
So you agree there’s a trade off between developer productivity and optimization (coding in assembly isn’t worth it, but allocating memory outside of hot loops is)

You agree with my original point then?

replies(1): >>43978451 #
185. timbit42 ◴[] No.43978367{4}[source]
I understand the design choices and they're crap. Choosing a programming language shouldn't be a popularity contest.
replies(1): >>43979314 #
186. homebrewer ◴[] No.43978432{7}[source]
It's Windows. I'm on Linux 99% of the time and it's significantly more responsive on hardware from 2014 than Windows is on a high end desktop from 2023. I'm not being dramatic.

(Yes, I've tried all combinations of software to hardware and accounted for all known factors, it's not caused by viruses or antiviruses).

XP was the last really responsive Microsoft OS, it went downhill from then and never recovered.

replies(1): >>43978530 #
187. accrual ◴[] No.43978436{5}[source]
> heavy anti-virus or corporate security scanning that's slowing your computer down more than you expect.

Ugh, I personally witnessed this. I would wait to take my break until I knew the unavoidable, unkillable AV scans had started and would peg my CPU at 100%. I wonder how many human and energy resources are wasted checking for non-existent viruses on corp hardware.

replies(3): >>43978638 #>>43980296 #>>43983788 #
188. CyberDildonics ◴[] No.43978451{10}[source]
Are you seriously replying and avoiding everything we both said? I'll simplify it for you:

Writing dramatically fast software that is 1,000x or even 10,000x faster than a scripting language takes basically zero effort once you know how to do it, and these assembly optimizations are a myth; you would have already shown me them if you could.

replies(1): >>43978459 #
189. sorcerer-mar ◴[] No.43978459{11}[source]
“Zero effort once you know how to do it” is another way of saying “time and effort.”

Congratulations you’ve discovered the value of abstractions!

I mean, you’re the one who started this off with the insane claim that there’s no tradeoff, then claimed there are no optimizations available below C++ (i.e. C++ is the absolute most optimized code a person can write). Not my fault you stake out indefensible positions.

replies(1): >>43979077 #
190. maccard ◴[] No.43978530{8}[source]
My current machine I upgraded from win10 to win11 and I noticed an across the board overnight regression in everything. I did a clean install so if anything it should have been quicker but boot times, app launch times, compile times all took a nosedive on that update.

I still think there’s a lot of blame to go around for the “kitchen sink” approach to app development where we have entire OS’s that can boot faster than your app can get off a splash screen.

Unfortunately, my users are on windows and work has no Linux vpn client so a switch isn’t happening any time soon.

191. vdqtp3 ◴[] No.43978546{4}[source]
It's not an exaggeration.

I have a 12 core Ryzen 9 with 64GB of RAM, and clicking the emoji reaction button in Signal takes long enough to render the fixed set of emojis that I've begun clicking the empty space where I know the correct emoji will appear.

For years I've been hitting the Windows key, typing the three or four unique characters for the app I want and hitting enter, because the start menu takes too long to appear. As a side note, that no longer works since Microsoft decided that predictability isn't a valuable feature, and the list doesn't filter the same way every time or I get different results depending on how fast I type and hit enter.

Lots of people literally outpace the fastest hardware on the market, and that is insane.

replies(2): >>43982568 #>>43989918 #
192. nsagent ◴[] No.43978579{3}[source]
It really depends on the software. I have the top-of-the-line M4 Max laptop with 128GB of memory. I recently switched from Zotero [1] to using papis [2] at the command line.

Zotero would take 30 seconds to a minute to start up. papis has no startup time as it's a cli app and searching is nearly instantaneous.

There is no reason for Zotero to be so slow. In fact, before switching I had to cut down on the number of papers it was managing because at one point it stopped loading altogether.

It's great you haven't run into poorly optimized software, but but not everyone is so lucky.

[1]: https://www.zotero.org/ [2]: https://github.com/papis/papis

193. maccard ◴[] No.43978597{5}[source]
Even 4-5 seconds is long enough for me to honestly get distracted. That is just so much time even on a single core computer from a decade ago.

On my home PC, in 4 seconds I could download 500MB, load 12GB off an SSD, or perform 12 billion cycles (before pipelining) per core (and I have 24 of them), and yet Miro still manages to bring my computer to its knees for 15 seconds just to load an empty whiteboard.

194. maccard ◴[] No.43978628{6}[source]
> You use a lot of words like "pretty close to", "nearly", "essentially", but 10, 20 years ago they WERE instant; applications from 10, 20 years ago should be so much faster today than they were on hardware from back then.

11 years ago I put in a ticket to slack asking them about their resource usage. Their desktop app was using more memory than my IDE and compilers and causing heap space issues with visual studio. 10 years ago things were exactly the same. 15 years ago, my coworkers were complaining that VS2010 was a resource hog compared to 10 years ago. My memory of loading photoshop in the early 2000’s was that it took absolutely forever and was slow as molasses on my home PC.

I don’t think it’s necessarily gotten worse, I think it’s always been pathetically bad.

replies(1): >>43979772 #
195. maccard ◴[] No.43978638{6}[source]
In a previous job, I was benchmarking compile times. I came in on a Monday and everything was 10-15% slower. IT had installed carbon black on my machine over the weekend, which was clearly the culprit. I sent WPA traces to IT but apparently the sales guys said there was no overhead so that was that.
196. m-schuetz ◴[] No.43978681{4}[source]
How does your vscode take 20+ seconds to launch? Mine launches in 2 seconds.
197. ndriscoll ◴[] No.43979016{8}[source]
Rendering a volume slider or some icons shouldn't take half a second, regardless. e.g. speaking of Carmack, Wolfenstein: Enemy Territory hits a consistent 333 FPS (the max the limiter allows) on my 9 year old computer. That's 3 ms/full frame for a 3d shooter that's doing considerably more work than a vector wifi icon.

Also, you could keep the status accurate because it only needs to update on change events anyway, events that happen on "human time" (e.g. you plugged in headphones or moved to a new network location) last for a practical eternity in computer time, and your pre-loaded icon probably takes a couple kB of memory.

It seems absurd to me that almost any UI should fail to hit your monitor's refresh rate as its limiting factor in responsiveness. The only things that make sense for my computer to show its age are photo and video editing with 50 MB RAW photos and 120 MB/s (bytes, not bits) video off my camera.
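For scale, the per-frame time budgets at common frame rates, computed directly (the 333 fps limiter mentioned above leaves a ~3 ms budget):

```python
# Per-frame time budget is just 1000 ms divided by the frame rate.
for fps in (60, 120, 144, 333):
    print(f"{fps:>3} fps -> {1000 / fps:.2f} ms per frame")
```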

replies(1): >>43979470 #
198. CyberDildonics ◴[] No.43979077{12}[source]
Your original comment was saying you have to give up features and development speed to have faster software. I've seen this claim before many times, but it's always from people rationalizing not learning anything beyond the scripting languages they learned when they got in to programming.

I explained to you exactly why this is true, and it's because writing fast software just means doing some things slightly differently with a basic awareness of what makes programs fast, not because it is difficult or time consuming. Most egregiously bad software probably isn't even missing optimization basics; it's recomputing huge amounts of unnecessary results over and over.

What you said back is claims but zero evidence or explanation of anything. You keep talking about assembly language, but it has nothing to do with getting huge improvements for no time investment, because things like instruction count are not where the vast majority of speed improvements come from.

> I mean, you’re the one who started this off with the insane claim that there’s no tradeoff, then claimed there are no optimizations available below C++ (i.e. C++ is the absolute most optimized code a person can write).

This is a hallucination that has nothing to do with your original point. The vast majority of software could be sped up 100x to 1000x easily if they were written slightly different. Asm optimizations are extremely niche with modern CPUs and compilers and the gains are minuscule compared to C++ that is already done right. This is an idea that permeates through inexperienced programmers, that asm is some sort of necessity for software that runs faster than scripting languages.

Go ahead and show me what specifically you are talking about with C++, assembly or any systems language or optimization.

Show me where writing slow software saves someone so much time, show me any actual evidence or explanation of this claim.

replies(2): >>43981618 #>>43983666 #
199. timewizard ◴[] No.43979314{5}[source]
There are inevitably those who don't know how to program but are responsible for hiring those that can. Language popularity is an obvious metric with good utility for that case.

Even so, you haven't provided any compelling evidence that C or C++ made its decisions to be more appealing or more popular.

200. timewizard ◴[] No.43979329{3}[source]
> Cost of cyberattacks globally[1]: O($trillions)

That's a fairly worthless metric. What you want is "Cost of cyberattacks / Revenue from attacked systems."

> We're really bad at measuring the secondary effects of our short-sightedness.

We're really good at it. There's an entire industry that makes this its core competency... insurance. Which is great because it means you can rationalize risk. Which is also scary because it means you can rationalize risk.

201. vel0city ◴[] No.43979470{9}[source]
It's not the drawing an icon to a screen that takes the half second, it's querying out to hardware on driver stacks designed for PCI WiFi adapters from the XP era along with all the other driver statuses.

It's like how Wi-Fi drivers would cause lag from querying their status, lots of poorly designed drivers and archaic frameworks for them to plug in.

And I doubt any hardware you had when Wolfenstein:ET came out rendered the game that fast. I remember it running at less than 60fps back in '03 on my computer. So slow, poorly optimized, I get better frame rates in Half Life. Why would anyone write something so buggy, unoptimized, and slow?!

replies(1): >>43979484 #
202. ndriscoll ◴[] No.43979484{10}[source]
You don't need to query the hardware to know the network interface is up. A higher level of the stack already knows that along with info like addresses, routes, DNS servers, etc.

IIRC it ran at 76 fps (higher than monitor refresh, one of the locally optimal frame rates for move speed/trick jumps) for me back then on something like a GeForce FX 5200? As long as you had a dedicated GPU it could hit 60 just fine. I think it could even hit 43 (another optimal rate) on an iGPU, which were terrible back then.

In any case, modern software can't even hit monitor refresh latency on modern hardware. That's the issue.
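As a minimal sketch of the point that link state lives in the OS rather than the hardware: enumerating interfaces from Python's standard library is a single cheap syscall against state the kernel already tracks (timings will vary by system):

```python
import socket
import time

# Listing network interfaces queries state the kernel already holds;
# no hardware or driver round-trip is needed.
start = time.perf_counter()
interfaces = socket.if_nameindex()   # [(index, name), ...]
elapsed_ms = (time.perf_counter() - start) * 1000

for index, name in interfaces:
    print(f"{index}: {name}")
print(f"enumerated {len(interfaces)} interfaces in {elapsed_ms:.3f} ms")
```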

replies(1): >>43979528 #
203. vel0city ◴[] No.43979528{11}[source]
It's not just showing "is the interface up", it's showing current signal strength, showing current ssid, showing results from the recent poll of stations, etc.

And then doing the same for Bluetooth.

And then doing the same for screen rotation and rotation lock settings. And sound settings, And then another set of settings. And another set of settings. All from different places of the system configuration while still having the backwards compatibility of all those old systems.

It's not a slowness on painting it. It can do that at screen refresh rates no problem. It's a question of querying all these old systems which often result in actual driver queries to get the information.

43fps? Sure sounds slow to me. Why not 333fps on that hardware? So bloated, so slow.

replies(1): >>43979573 #
204. ndriscoll ◴[] No.43979573{12}[source]
You're just listing mechanisms for how it might be slow, but that doesn't really make it sensible. Why would the OS query hardware for something like screen rotation or volume? It knows these things. They don't randomly change. It also knows the SSID it's connected to and the results of the last poll (which it continuously does to see if it should move).

And yes it should cache that info. We're talking bytes. Less than 0.0001% of the available memory.

Things were different on old hardware because old hardware was over 1000x slower. On modern hardware, you should expect everything to be instantaneous.
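A minimal sketch of the event-driven cache being described; every key, value, and function name here is hypothetical and purely for illustration:

```python
import sys

# Hypothetical event-driven status cache. Values change only when a
# change event is delivered, so reading them for a UI panel is just a
# dictionary lookup, and the whole cache is a handful of bytes.
status = {"ssid": None, "signal_dbm": None, "volume": 50, "rotation_locked": False}

def on_change(key, value):
    # Would be invoked by a (hypothetical) OS notification subsystem.
    status[key] = value

on_change("ssid", "HomeNet")
on_change("signal_dbm", -48)

# Rendering the UI is now a lookup, not a driver query.
print(status["ssid"], status["signal_dbm"], status["volume"])
print(f"cache overhead: ~{sys.getsizeof(status)} bytes")
```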

replies(1): >>43979734 #
205. peatmoss ◴[] No.43979673[source]
I've pondered this myself without digging into the specifics. The phrase "sufficiently smart compiler" sticks in my head.

Shower thoughts include whether there are languages that have features, other than through their popularity and representation in training corpuses, help us get from natural language to efficient code?

I was recently playing around with a digital audio workstation (DAW) software package called Reaper that honestly surprised me with its feature set, portability (Linux, macOS, Windows), snappiness etc. The whole download was ~12 megabytes. It felt like a total throwback to the 1990s in a good way.

It feels like AI should be able to help us get back to small snappy software, and in so doing maybe "pay its own way" with respect to CPU and energy requirements. Spending compute cycles to optimize software deployed millions of times seems intuitively like a good bargain.

206. vel0city ◴[] No.43979734{13}[source]
And yet doing an `ipconfig` or `netsh wlan show interfaces` isn't always instantaneous depending on your hardware and the rest of your configuration. I can't tell you what all it's actually doing under the hood, but I've definitely seen variations of performance on different hardware.

Sometimes the devices and drivers just suck. Sometimes it's not the software's fault it's running at 43fps.

I'm hitting the little quick settings area on my exceptionally cheap and old personal laptop. I haven't experienced that slowness once. Once again I imagine the other stuff running interrupting all the OS calls and what not loading this information causes it to be slow.

replies(1): >>43984020 #
207. genewitch ◴[] No.43979772{7}[source]
Photoshop for Windows 3.11 loads in a couple of seconds on a 100 MHz Pentium. Checked two days ago.
replies(1): >>43981215 #
208. dleink ◴[] No.43980021{7}[source]
You hit on something there, I could type faster than my 2400 baud connection but barring a bad connection those connections were pretty reliable.
209. tbihl ◴[] No.43980296{6}[source]
I used to think that was the worst, but then my org introduced me to pegging HDD write at 100% for half an hour at a time. My dad likes to talk about how he used to turn on the computer, then go get coffee; in my case it was more like turn on machine, go for a run, shower, check back, coffee, and finally... maybe.
210. userbinator ◴[] No.43981060{6}[source]
For a more balanced comparison, observe how long it takes for the new "Settings" app to open and how long interactions take, compared to Control Panel, and what's missing from the former that the latter has had for literally decades.
replies(1): >>43985562 #
211. userbinator ◴[] No.43981074{5}[source]
No, it's a heavily pessimized codebase.
212. agilob ◴[] No.43981119{3}[source]
Watch this https://www.youtube.com/watch?v=GC-0tCy4P1U
213. ryao ◴[] No.43981143{3}[source]
You live in the UNIX world, where this insanity is far less prevalent. Here is an example of what you are missing:

https://www.pcworld.com/article/2651749/office-is-too-slow-s...

214. xlii ◴[] No.43981157{3}[source]
It really depends on what you look at.

You say snappy, but what is snappy? Right now I have a toy project in progress in Zig that uses user perception as a core concept.

Rarely can one react to 10ms of jank. But when you get down to bare-metal development, 10ms becomes 10 million reasonably high-level instructions that can be executed. Now go to a website and click. If you can sense a delay from JS, that jank is approximately 100ms; should clicking a button really take 100 million instructions?

When you look closely enough, you will find that not only is it 100 million instructions, but your operating system and processor performed tens of thousands of tricks in the background to minimize the jank, and yet you can still sense it.

Today even writing in non-optimized, unpopular languages like Prolog is viable because hardware is mindblowingly fast, and yet some things are slow, because we utilize that speed to decrease development costs.
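Assuming a ~3 GHz core (an assumption, not a figure from the comment), the cycle counts behind those instruction estimates look like this; one high-level instruction typically costs several cycles, which is why the instruction counts above are lower:

```python
# How much work fits inside human-perceptible jank on one ~3 GHz core.
ghz = 3.0
for jank_ms in (10, 100):
    cycles = ghz * 1e9 * (jank_ms / 1000)
    print(f"{jank_ms:>3} ms of jank = {cycles:.0e} cycles on one core")
```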

215. nwallin ◴[] No.43981178{3}[source]
The proliferation of Electron apps is one of the main things. Discord, Teams, Slack, all dogshit slow. Uses over a gigabyte of RAM, and uses it poorly. There's a noticeable pause any time you do user input; type a character, click a button, whatever it is, it always takes just barely too long.

All of Microsoft's suite is garbage. Outlook, Visual Studio, OneNote.

Edge isn't slow, (shockingly) but you know what is? Every webpage. The average web page has 200 dependencies it needs to load--frameworks, ads, libraries, spyware--and each of those dependencies has a 99th-percentile latency of 2 seconds, which means on average at least two of them take 2 seconds to load, and the page won't load until they do.

Steam is slow as balls. It's 2025 and it's a 32 bit application for some reason.

At my day job, our users complain that our desktop application is slow. It is slow. We talk about performance a lot and how it will be a priority and it's important. Every release, we get tons of new features, and the software gets slower.

My shit? My shit's fast. My own tiny little fiefdom in this giant rat warrens is fast. It could be faster, but it's pretty fast. It's not embarrassing. When I look at a flamegraph of our code when my code is running, I really have to dig in to find where my code is taking up time. It's fine. I'm--I don't feel bad. It's fine.

I love this industry. We are so smart. We are so capable of so many amazing things. But this industry annoys me. We so rarely do any of them. We're given a problem, and the solution is some god forsaken abomination of an electron app running javascript code on the desktop and pumping bytes into and out of a fucking DOM. The most innovative shit we can come up with is inventing a virtual dumbass and putting it into everything. The most value we create is division, hate, fear, and loathing on social media.

I'm not mad. I'm just disappointed.
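The "99% latency" claim above can be checked with a little probability; assuming 200 independent dependencies that each hit their 2-second tail 1% of the time:

```python
# 200 independent dependencies, each hitting its 2-second p99 tail
# 1% of the time. Compute the expected number of slow dependencies
# per page load and the chance that at least one is slow.
n, p = 200, 0.01
expected_slow = n * p
p_at_least_one = 1 - (1 - p) ** n
print(f"expected slow dependencies per load: {expected_slow:.1f}")
print(f"chance at least one is slow: {p_at_least_one:.1%}")
```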

216. jeroenhd ◴[] No.43981196{3}[source]
Your 2021 MacBook and 2020 iPhone are top of the line devices. They'll be fine.

Buy something for half that price or less, like most people would be able to, and see if you can still get the same results.

This is also why I'd recommend people with lower budgets to buy high-end second hand rather than recent mid/low tier hardware.

217. maccard ◴[] No.43981215{8}[source]
That was 30 years ago, not 10.
replies(1): >>43981543 #
218. jeroenhd ◴[] No.43981230{5}[source]
XP had gray boxes and laggy menus like you wouldn't believe. It didn't even do search in the start menu, and maybe that was for the best because even on an SSD its search functionality was dog slow.

A clean XP install in a VM for nostalgia's sake is fine, but XP as actually used by people for a while quickly ground to a halt because of all the third party software you needed.

The task bar was full of battery widgets, power management icons, tray icons for integrated drivers, and probably at least two WiFi icons, and maybe two Bluetooth ones as well. All of them used different menus that were slow in their own right, despite being 200KiB executables that looked like they were written in 1995.

And the random crashes, there were so many random crashes. Driver programs for basic features crashed all the time. Keeping XP running for more than a day or two by using sleep mode was a surefire way to get an unusable OS.

Modern Windows has its issues but the olden days weren't all that great, we just tolerated more bullshit.

219. genewitch ◴[] No.43981543{9}[source]
"early 2000s" was at least 22 years ago, as well. Sorry if this ruins your night. 100mhz 1994 vs 1000mhz in 2000, that's the only parallel i was drawing. 10x faster yet somehow adobe...
replies(1): >>43981810 #
220. ryao ◴[] No.43981618{13}[source]
For what it is worth, there is room for improvement in how people use scripting languages. I have seen Kodi extensions run remarkably slowly and upon looking at their source code to see why, I saw that everything was being done in a single thread with blocking on relatively slow network traffic. There was no concurrency being attempted at all, while all of the high performance projects I have touched in either C or C++ had concurrency. The plugin would have needed a major rewrite to speed things up, but it would have made things that took minutes take a few seconds if it were done. Unfortunately, doing the rewrite was on the wrong side of a simple “is it worth the time” curve, so I left it alone:

https://xkcd.com/1205/

Just today, I was thinking about the slow load times of a bloated Drupal site, which I heard were partially attributable to a YouTube embed. I then found this, which claims to give a 224x performance increase over YouTube’s stock embed (and shame on YouTube for not improving it):

https://github.com/paulirish/lite-youtube-embed

In the past, I have written electron applications (I had tried Qt first, but had trouble figuring out how to do what I wanted after 20 hours of trying, and got what I needed from electron in 10). The electron applications are part of appliances that are based on the Raspberry Pi CM4. The electron application loads in a couple seconds on the CM4 (and less than 1 second on my desktop). Rather than using the tools web developers often use that produce absurd amounts of HTML and JS, I wrote nearly every line of HTML and JavaScript by hand (as I would have done 25 years ago) such that it was exactly what I needed and there was no waste. I also had client side JavaScript code running asynchronously after the page loaded. To be fair, I did use a few third party libraries like express and an on-screen keyboard, but they were relatively lightweight ones.

Out of curiosity, I did a proof of concept port of one application from electron to WebKitGTK with around 100 lines of C. The proof of concept kept nodejs running as a local express server that was accessed by the client side JavaScript running in the WebKitGTK front end via HTTP requests. This cut memory usage in half and seemed to launch slightly faster (although I did not measure it). I estimated that memory usage would be cut in half again if I rewrote the server side JavaScript in C. Memory usage would likely have dropped even more and load times would have become even quicker if I taught myself how to use a GUI toolkit to eliminate the need for client side HTML and JavaScript, but I had more important things to do than spend many more hours to incrementally improve what already worked (and I suspect many are in the same situation).

To give a final example, I had a POSIX shell script that did a few tasks, such as polling a server on its LAN for configuration updates to certain files and doing HA failover of another system were down, among other things. I realized the script iterated too slowly, so I rewrote it to launch a subshell as part of its main loop that does polling (with file locking to prevent multiple sub shells from doing polling at the same time). This allowed me to guarantee HA failover always happens within 5 seconds of another machine going down, and all it took were using concepts from C (threading and locking). They were not as elegant as actual C code (since subshells are not LWPs and thus need IPC mechanisms like file locks), but they worked. I know polling is inefficient, but it is fairly foolproof (no need to handle clients being offline when it is time for a push), robustness was paramount and development time was needed elsewhere.

In any case, using C (or if you must, C++) is definitely better than a scripting language, provided you use it intelligently. If you use techniques from high performance C code in scripting languages, code written in them often becomes many times faster. I only knew how to do things in other languages relatively efficiently because I was replicating what I would be doing in C (or if forced, C++). If I could use C for everything, I would, but I never taught myself how to do GUIs in C, so I am using my 90s era HTML skills as a crutch. However, reading this exchange (and writing this reply) has inspired me to make an effort to learn.
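The single-threaded plugin pattern described above (serial blocking calls vs. overlapped ones) can be sketched like this; the sleep stands in for a blocking network request and the URLs are made up:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    # Stand-in for a blocking network call (real code would use
    # urllib/requests); the sleep models ~100 ms of network latency.
    time.sleep(0.1)
    return f"response from {url}"

urls = [f"https://example.invalid/item/{i}" for i in range(10)]

start = time.perf_counter()
serial = [fetch(u) for u in urls]              # requests one at a time
serial_s = time.perf_counter() - start

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=10) as pool:
    overlapped = list(pool.map(fetch, urls))   # requests overlap
overlapped_s = time.perf_counter() - start

print(f"serial: {serial_s:.2f}s, overlapped: {overlapped_s:.2f}s")
```

Same results, but the overlapped version finishes in roughly the time of one request instead of ten.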

221. Cthulhu_ ◴[] No.43981646{5}[source]
Modern operating systems run at 120 or 144 Hz screen refresh rates nowadays. I don't know if you're used to it yet, but try going back to 60; it should be pretty obvious when you move your mouse.
222. maccard ◴[] No.43981810{10}[source]
Ah sorry - I’m in my mid 30s so my early pc experiences as a “power user” were win XP, by which point photoshop had already bolted on the kitchen sink and autodesk required a blood sacrifice to start up.
223. Mashimo ◴[] No.43981815{4}[source]
Odd, I tested two news sites (tagesschau.de and bbc.com) and both load in 1 - 2 seconds. Airbnb in about 4 - 6 seconds though. My reddit never gets stuck, or if it does it's on all tabs because something goes wrong on their end.
224. astrange ◴[] No.43981909{4}[source]
That's because TOTK is designed to run on it, with careful compromises and a lot of manual tuning.

Nintendo comes up with a working game first and then adds the story - BotW/TotK are post-apocalyptic so they don't have to show you too many people on screen at once.

The other way you can tell this is that both games have the same story even though one is a sequel! Like Ganon takes over the castle/Hyrule and then Link defeats him, but then they go into the basement and somehow Ganon is there again and does the exact same thing again? Makes no sense.

replies(2): >>43983047 #>>43998156 #
225. dijit ◴[] No.43982197{5}[source]
I think you're right, maybe the disconnect is UI slowness?

I am annoyed at the startup time of programs that I keep closed and only open infrequently (Discord is one of those, the update loop takes a buttload of time because I don't use it daily), but I'm not annoyed when something I keep open takes 1-10s to open.

But when I think of getting annoyed it's almost always because an action I'm doing takes too long. I grew up in an era with worse computers than we have today, but clicking a new list was perceptibly instant- it was like the computer was waiting for the screen to catch up.

Today, it feels like the computer chugs to show you what you've clicked on. This is especially true with universal software, like chat programs, that everyone in an org is using.

I think Casey Muratori's point about the watch window in visual studio is the right one. The watch window used to be instant, but someone added an artificial delay to start processing so that the CPU wouldn't work when stepping fast through the code. The result is that, well, you gotta wait for the watch window to update... Which "feels bad".

https://www.youtube.com/watch?v=GC-0tCy4P1U

226. ryao ◴[] No.43982401{4}[source]
As a regular user of vim, tmux and cscope for programming in C, may I say that not only do I prefer the old tools, but I use them regularly.
227. ryao ◴[] No.43982568{5}[source]
I have a 16 core Ryzen 9 with 128GB of RAM. I have not noticed any slowness in Signal. This might be caused by differences in our operating systems. It sounds like you run Windows. I run Gentoo Linux.
228. TingPing ◴[] No.43982677{6}[source]
Qualcomm CPUs outperform Apple now, Apple was just early and had exclusivity for manufacturing 3nm at TSMC.
229. ryao ◴[] No.43982703{4}[source]
While I do not have data comparing them, I have a few remarks:

1. Scammer Payback and others are documenting on-going attacks that involve social engineering that are not getting the attention that they deserve.

2. You did not provide any actual data on the degree to which bounds checks are “large”. You simply said they were because they are a subset of a large group. There are diseases that affect fewer than 100 people in the world that do not get much attention. You could point out that the people affected are humans, which is a group that consists of all people in the world. Thus, you can say that one of these rare diseases affects a large number of people and thus should be a priority. At least, that is what you just did with bounds checks. I doubt that they are as rare as my analogy would suggest, but the point is that the percentage is somewhere between 0 and 70%, and without any real data, your claim that it is large is unsubstantiated. That being said, most C software I have touched barely uses arrays enough for bounds checks to be relevant, and when it does use arrays, it is for strings. There are safe string functions available like strlcpy() and strlcat() that largely solve the string issues by doing bounds checks. Unfortunately, people keep using the unsafe functions like strcpy() and strcat(). You would have better luck suggesting that people use safe string handling functions rather than suggesting compilers insert bounds checks.

3. Your link mentions CHERI, which is a hardware solution to this problem. It is a shame that AMD/Intel and ARM do not modify their ISAs to incorporate the extension. I do not mean the Morello processor, which is a proof of concept. I mean the ISA specifications used in all future processors. You might have more luck if you lobby for CHERI adoption by those companies.

230. makapuf ◴[] No.43982775{7}[source]
VScode isn't an IDE either, visual studio is one. After that it all depends what plugins you loaded in both of them.
231. voidspark ◴[] No.43983023[source]
As the other guy said, top of the line CPUs today are roughly ~100x faster than 20 years ago. A single core is ~10x faster (in terms of instructions per second) and we have ~10x the number of cores.
replies(1): >>43984153 #
232. Thedarkb ◴[] No.43983047{5}[source]
The framing device for The Legend of Zelda games is that it's a mythological cycle in which Link, Ganon, and Zelda are periodically reborn and the plot begins anew with new characters. It lets them be flexible with the setting, side quests, and characters as the series progresses and it's been selling games for just shy of forty years.
replies(1): >>43988432 #
233. bluGill ◴[] No.43983400{4}[source]
CVEs are never written for social attacks, which is fair given what they are trying to do. However, attacking the right humans rather than the software is easier.
234. josefx ◴[] No.43983438[source]
> on countless layers of abstractions

Even worse, our bottom most abstraction layers pretend that we are running on a single core system from the 80s. Even Rust got hit by that when it pulled getenv from C instead of creating a modern and safe replacement.

235. sorcerer-mar ◴[] No.43983666{13}[source]
So again, what you're saying is there is a tradeoff. You just think it should be made in a different place than where the vast majority of engineers in the world choose to make it. That's fine! It's probably because they're idiots and you're really smart, but it's obviously not because there's no tradeoff.

> that asm is some sort of necessity for software that runs faster than scripting languages.

It seems you're not tracking the flow of the conversation if you believe this is what I'm saying. I am saying there is always a way to make things faster by sacrificing other things developer productivity, feature sets, talent pool, or distribution methods. You agree with me, it turns out!

replies(1): >>43984814 #
236. CelestialMystic ◴[] No.43983788{6}[source]
Every Wednesday my PC becomes so slow it is barely usable. It is the Windows Defender scans. I tried doing a hack to put it on a lower priority but my hands are tied by IT.
replies(1): >>43985002 #
237. CelestialMystic ◴[] No.43983833{6}[source]
This is exactly it. My Debian Install on older hardware than my work machine is relatively snappy. The real killer is the Windows Defender Scans once a week. 20-30% CPU usage for the entire morning because it is trying to scan some CDK.OUT directory (if I delete the directory, the scan doesn't take nearly as long).
238. dijit ◴[] No.43984020{14}[source]
I don't know what operating system you're talking about, but the bottleneck on my linux machine for asking for interfaces is the fact that stdout is write blocking.

I routinely have shy of 100 network interfaces active and `ip a` is able to query everything in nanoseconds.

replies(1): >>43985895 #
239. topaz0 ◴[] No.43984153{3}[source]
And the memory quantity, memory speed, disk speed are also vastly higher now than 20 years ago.
240. dijit ◴[] No.43984268{8}[source]
I struggle because everything you're saying is your subjective truth, and mine differs.

Aside from the seminal discussion about text input latency from Dan Luu[0] there's very little we can do to disprove anything right now.

Back in the day asking my computer to "do" something was the thing I always dreaded, I could navigate, click around, use chat programs like IRC/ICQ and so on, and everything was fine, until I opened a program or "did" something that caused the computer to think.

Now it feels like there's no distinction between using a computer and asking it to do something heavy. The fact that I can't hear the hard disk screaming or the fan spinning up (and have it be tied to something I asked the computer to do) might be related.

It becomes expectation management at some point, and nominally a "faster computer" in those days meant that those times I asked the computer to do something the computer would finish it's work quicker. Now it's much more about how responsive the machine will be... for a while, until it magically slows down over time again.

[0]: https://danluu.com/input-lag/

replies(1): >>43985308 #
241. MonkeyClub ◴[] No.43984314[source]
> Optimization should be treated as competitive advantage

That's just so true!

The right optimizations at the right moment can have a huge boost for both the product and the company.

However the old tenet regarding premature optimization has been cargo-culted and expanded to encompass any optimization, and the higher-ups would rather have ICs churn out new features instead, shifting the cost of the bloat to the customer by insisting on more and bigger machines.

It's good for the economy, surely, but it's bad for both the users and the eventual maintainers of the piles of crap that end up getting produced.

242. jen20 ◴[] No.43984373{4}[source]
> This is on an i9

On which OS?

replies(1): >>44014430 #
243. MonkeyClub ◴[] No.43984456{8}[source]
> I actively avoid human interactive input in my commercial activity

Not to mention that the "human input" can be pre-scripted to urge you to purchase more, so it's not genuinely a human interaction, it's only a human delivering some bullshit "value add" marketing verbiage.

244. Mr_Minderbinder ◴[] No.43984465{3}[source]
I notice a pattern in the kinds of software that people are complaining about. They tend to be user-facing interactive software that is either corporate, proprietary, SaaS, “late-stage” or contains large amounts of telemetry. Since I tend to avoid such software, the vast majority of software I use I have no complaints about with respect to speed and responsiveness. The biggest piece of corporate bloatware I have is Chromium which (only) takes 1-2 seconds to launch and my system is not particularly powerful. In the corporate world bloat is a proxy for sophistication, for them it is a desirable feature so you should expect it. They would rather you use several JavaScript frameworks when the job could be done with plain HTML because it shows how rich/important/fashionable/relevant/high-tech they are.
245. CyberDildonics ◴[] No.43984814{14}[source]
> So again, what you're saying is there is a tradeoff. You just think it should be made in a different place than where the vast majority of engineers in the world choose to make it.

Show me what it is I said that makes you think that.

> That's fine! It's probably because they're idiots and you're really smart, but it's obviously not because there's no tradeoff.

Where did I say any of this? I could teach anyone to make faster software in an hour or two, but myths like the ones you are perpetuating make people think it's difficult or faster software is more complicated.

You originally said that making software faster 'decreases velocity and sacrifices features' but you can't explain or backup any of that.

> You agree with me, it turns out!

I think what actually happened is that you made some claims that get repeated but they aren't from your actual experience and you're trying to avoid giving real evidence or explanations so you keep trying to shift what you're saying to something else.

The truth is that if someone just learns to program with types and a few basic techniques, they can get away from writing slow software forever, and it doesn't cost any development speed; just a little learning up front that used to be considered the basics.

Next time you reply show me actual evidence of the slow software you need to write to save development time. I think the reality is that this is just not something you know a lot about, but instead of learning about it you want to pretend there is any truth to what you originally said. Show me any actual evidence or explanation instead of just making the same claims over and over.

replies(1): >>43984985 #
246. MonkeyClub ◴[] No.43984858{4}[source]
> Java and python

Java and Python are not on the same body of water, let alone the same boat.

You can see some comparisons here:

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

replies(1): >>43989659 #
247. sorcerer-mar ◴[] No.43984985{15}[source]
> I could teach anyone to make faster software in an hour or two,

Is one or two hours of two engineers' time more than zero hours, or no?

> just a little learning up front

Is a little learning more than zero learning, or no?

IMO your argument would hold a lot more weight if people felt, as users, like their software is slow, but many people do not. Save for a few applications, I would prefer they keep their same performance profile and improve their feature set rather than spend any time doing the reverse. And as you have said multiple times now: it does indeed take time!

If your original position was what it is now, which is "there's low hanging fruit," I wouldn't disagree. But what you said is there's no tradeoff. And of course now you are saying there is a tradeoff... so now we agree! Where any one person should land on that tradeoff is super project-specific, so not sure why you're being so assertive about this blanket statement lol.

replies(1): >>43985630 #
248. accrual ◴[] No.43985002{7}[source]
Same. I had nearly full administrative privs on the laptop, yet I get "Access denied" trying to deprioritize the scan. We got new hardware recently, so we should be good until the scanners catch up and consume even more resources...
replies(1): >>43989319 #
249. vel0city ◴[] No.43985308{9}[source]
> Back in the day asking my computer to "do" something was the thing I always dreaded, I could navigate, click around, use chat programs like IRC/ICQ and so on, and everything was fine, until I opened a program or "did" something that caused the computer to think.

This is exactly what I'm talking about. When I'm actually using my computer, its orders of magnitude faster. Things where I'd do one click and then practically have to walk away and come back to see if it worked happen in 100ms now. This is the machine being way faster and far more responsive.

Like, OK, some Apple IIe had 30ms latency on a key press compared to 50ms on a Haswell desktop with a decent refresh rate screen or 100ms on some Thinkpad from 2017, assuming these machines aren't doing anything.

But I'm not usually doing nothing when I want to press the key. I've got dozens of other things I want my computer to do. I want it listening for events on a few different chat clients. I want it to have several dozen web pages open. I want it to stream music. I want it to have several different code editors open with linters examining my code. I want it paying attention if I get new mail. I want it syncing directories from this machine to other machines and cloud storage. I want numerous background agents handling tons of different things. Any one of those tasks would cause that Apple IIe to crawl instantly and it doesn't even have the memory to render a tiny corner of my screen.

The computer is orders of magnitude "faster", in that it is doing many times as much work much faster even when it's seemingly just sitting there. Because that's what we expect from our computers these days.

Tell me how fast a button press is when you're on a video call on your Apple IIe while having a code linter run while driving a 4K panel and multiple virtual desktops. How's its Unicode support?

replies(1): >>44003263 #
250. vel0city ◴[] No.43985562{7}[source]
I'm far faster changing my default audio device with the new quick settings menu than going Start > Control Panel > Sound > Right click audio device > Set as Default. Now I just click the quick settings > the little sound device icon > chosoe a device.

I'm far faster changing my WiFi network with the new quick settings menu than going Start > Control Panel > Network and Sharing Center (if using Vista or newer) > Network Devices > right click network adapter > Connect / Disconnect > go through Wizard process to set up new network. Now I just click the quick settings, click the little arrow to list WiFi networks, choose the network, click connect. Way faster.

I'm also generally far faster finding whatever setting in the Settings menu over trying to figure out which tab on which little Control Panel widget some obscure setting is, because there's a useful search box that will pull up practically any setting these days. Sure, maybe if you had every setting in Control Panel memorized you could be faster, but I'm far faster just searching for the setting I'm looking for at the moment for anything I'm not regularly changing.

The new Settings area, now that it actually has most things, is generally a far better experience unless you had everything in Control Panel committed to muscle memory. I do acknowledge though there are still a few things that aren't as good, but I imagine they'll get better. For most things most users actually mess with on a regular basis, it seems to me the Settings app is better than Control Panel. The only thing that really frustrates me with Settings now on a regular basis is only being able to have one instance of the app open at a time, a dumb limitation.

Every time I'm needing to mess with something in ancient versions of Windows these days is now a pain despite me growing up with it. So many things nested in non-obvious areas, things hidden behind tab after tab of settings and menus. Right click that, go to properties, click that, go to properties on that, click that button, go to the Options tab, click Configure, and there you go that's where you set that value. Easy! Versus typing something like the setting you want to set into the search box in Settings and have it take you right to that setting.

251. CyberDildonics ◴[] No.43985630{16}[source]
So now learning something new for a few hours means we'd have to give up squishy, hard-to-measure things like "feature sets" and "engineering velocity"?

You made up stuff I didn't say, you won't back up your claims with any sort of evidence, you keep saying things that aren't relevant, what is the point of this?

This thread is John Carmack saying the world could get by with cheaper computers if software wasn't so terrible, and you are basically trying to argue with zero evidence that software needs to be terrible.

Why can't you give any evidence to back up your original claim? Why can't you show a single program fragment or give a single example?

replies(1): >>43986120 #
252. vel0city ◴[] No.43985895{15}[source]
Considering this whole conversation is about sometimes some people have a little bit of slowness drawing the quick settings area in Windows 11 and I gave commands like "netsh" it should be pretty dang obvious which OS we're talking about. But I guess some people have challenges with context clues.

And once again, on some Linux machines I've had over the years, doing an ip a command could hang or take a while if the device is in a bad state or being weird. It normally returns almost instantly, but sometimes has been slow to give me the information.

253. dahart ◴[] No.43985975{5}[source]
> The eye perceives at about 10 hz.

Not sure what this means; the eye doesn’t perceive anything. Maybe you’re thinking of saccades or round-trip response times or something else? Those are in the ~100ms range, but that’s different from whether the eye can see something.

This paper shows pictures can be recognized at 13ms, which is faster than 60hz, and that’s for full scenes, not even motion tracking or small localized changes. https://link.springer.com/article/10.3758/s13414-013-0605-z

replies(1): >>44009925 #
254. sorcerer-mar ◴[] No.43986120{17}[source]
Okay let's do it this way.

It's obviously true the world could get by with cheaper computers if software was more performant.

So why don't we?

replies(1): >>43986756 #
255. ryao ◴[] No.43986597{3}[source]
Secure string handling functions like strlcpy() and strlcat() do have bounds checks. Not everyone uses them sadly.
replies(1): >>43988353 #
256. CyberDildonics ◴[] No.43986756{18}[source]
Because people spread and believe misinformation about it being difficult to avoid writing grossly inefficient software.

We know that it isn't difficult because if it was you would have had a single shred of evidence after a dozen comments of me asking for it.

257. oblio ◴[] No.43988353{4}[source]
And that again, is the point. That stuff should be built-in and almost non-negotiable. It should be a lot more work to do the unsafe thing (see: Rust).
replies(1): >>43988550 #
258. astrange ◴[] No.43988432{6}[source]
ToTK is a direct sequel to BoTW set a few years later and supposedly starring literally the same people though.
259. ryao ◴[] No.43988550{5}[source]
They are negotiable in Rust too. Just use the unsafe keyword everywhere. If you want to use a language where they are non-negotiable, use JavaScript.
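A tiny sketch of that negotiability (the function name is mine): checked indexing is the default, and unsafe opts out of it.

```rust
// Default slice indexing is bounds-checked; get_unchecked opts out.
fn first_or_zero(v: &[i32]) -> i32 {
    if v.is_empty() {
        return 0;
    }
    // SAFETY: the slice was just checked to be non-empty.
    unsafe { *v.get_unchecked(0) }
}

fn main() {
    println!("{}", first_or_zero(&[7, 8, 9]));
}
```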

Secure string functions are built into C these days. The older error prone ones are as well. You can ban the older functions from code bases, but it is hard to justify outright bans for all of them. If you audit string function usage in a non-trivial codebase, you likely will find instances where the old, error prone, string functions are used in safe ways and leaving them alone makes sense. This is particularly the case when constant length strings are used with constant length buffers. Here is an example where the old string functions are just as safe as the secure string functions:

https://github.com/openzfs/zfs/blob/master/etc/systemd/syste...

I found it when I audited the string function usage in OpenZFS with the goal of removing all uses of the older string functions. While I replaced every instance old string function where the secure string functions mitigated even the slightest risk, I could not bring myself to remove these, since there was nothing wrong with them. After my patches, a few string functions were then banned from the codebase with a build system hack put in place to break the build process if someone tried to reintroduce the banned functions.

260. ryao ◴[] No.43988812{3}[source]
It depends on the loop. Here are a bunch of loops that need bounds checks on every loop iteration:

https://github.com/openzfs/zfs/commit/303678350a7253c7bee9d6...

You could argue otherwise and you would not be wrong since the codebase had not given this code inputs that would overrun the buffer. However, any case where you loop until the string is built needs a check on each iteration unless you can both prove that the buffer will always be long enough for any possible output and guarantee that future changes will preserve this property. The former is a pain (but doable) while the latter was infeasible, which is why the bounds checks were added.

That said, something as innocuous as printing a floating-point value could print 308 characters where you only expected 10 or less:

https://github.com/openzfs/zfs/commit/f954ea26a615cecc8573bb...

Edge cases causing things to print beyond what was expected are a good reason why people should use bounds checks when building strings, since truncation is fail-safe, while overrunning the buffer is fail-dangerous. I am a fan of the silent truncation done in both patches, since in both cases, truncation is harmless. For cases where silent truncation is not harmless, you can detect the truncation by checking the return value of `snprintf()` and react accordingly. It should be said that the scnprintf() function's return value does not permit truncation detection and `snprintf()` should be used if you want that.

Note that I am the author of those two patches, so it is natural that I like the approach I used. They were written as part of an audit of the OpenZFS codebase I did when bug hunting for fun.

261. fsflover ◴[] No.43989097[source]
> And we do not know of a way to solve security.

Security through compartmentalization approach actually works. Compare the number of CVEs of your favorite OS with those for Qubes OS: https://www.qubes-os.org/security/qsb/

replies(1): >>43991031 #
262. ◴[] No.43989229[source]
263. CelestialMystic ◴[] No.43989319{8}[source]
You basically have no control over it. I don’t mind it doing a virus scan, but couldn’t it do it out of hours?

People wonder why I don’t run Windows outside of gaming, and it’s because I don’t really know what the system is doing anymore.

264. ryao ◴[] No.43989387{3}[source]
> The trouble is how software has since wasted a majority of that performance improvement.

It has been sacrificed in the name of developer productivity, since hardware is cheap compared to the cost of writing efficient software outside of performance critical situations. The result is that the world is drowning in inefficient software.

265. monkeyelite ◴[] No.43989659{5}[source]
Yes. And why is that?
replies(1): >>43989892 #
266. Aurornis ◴[] No.43989886{4}[source]
> Yes. Java and python are not competitive for graphics and audio processing.

Because Python is an interpreted language and Java has garbage collection, not because they have bounds checking.

Using a language like Rust the bounds checking overhead compared to C code is usually minimal, like 0-5%. The compiler can even optimize away many bounds checks (safely) if you structure your code properly, such as with iterators.

Here's an example article : https://shnatsel.medium.com/how-to-avoid-bounds-checks-in-ru...

Make sure you read the note about how the author was mistaken about the 15% speedup from bounds checking. The actual bounds check change was much less, like a couple percent. You can even structure your code so that the bounds check disappears altogether when the compiler can confirm safety at compile time.
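A minimal illustration of the iterator point (my own toy functions, not taken from the linked article):

```rust
// Indexed loop: each v[i] carries a bounds check unless the
// optimizer can prove i < v.len() (it often can here, but not always).
fn sum_indexed(v: &[i32]) -> i32 {
    let mut total = 0;
    for i in 0..v.len() {
        total += v[i];
    }
    total
}

// Iterator form: no index at all, so there is no check to elide.
fn sum_iter(v: &[i32]) -> i32 {
    v.iter().sum()
}

fn main() {
    let data = [1, 2, 3, 4];
    assert_eq!(sum_indexed(&data), sum_iter(&data));
}
```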

267. Aurornis ◴[] No.43989892{6}[source]
Interpreted languages and garbage collection.

Bounds checking barely registers as a factor.

replies(1): >>43991013 #
268. Aurornis ◴[] No.43989918{5}[source]
> It's not an exaggeration.

The comment I quoted was about 20 second load times, not a slight delay before something is clickable. That's the exaggeration.

FWIW, I don't see the same slowness in Signal, like the other poster.

269. monkeyelite ◴[] No.43991013{7}[source]
You can write Java that uses arrays that are allocated once - will it have loops as fast as C? Why not?

And furthermore, I don’t suspect you’re proposing we should be using C + bounds checking (that’s already a GCC flag?). But rather bounds checking is one of many safety features.

The whole pitch of Java is exactly what OP said - let’s just pay 10-30% cost to get these nice portability and memory benefits over C++, and it didn’t work that way especially as memory speeds have diverged from CPU.

270. ngneer ◴[] No.43991031{3}[source]
Playing devil's advocate, compare their popularity. You may have fallen prey to the base rate fallacy.
replies(1): >>43992387 #
271. ngneer ◴[] No.43991085{3}[source]
I am not suggesting we refuse to close one window because another window is open. That would be silly. Of course we should close the window. Just pointing out that the "950X" example figure cited fails to account for the full cost (or overestimates the benefit).
272. fsflover ◴[] No.43992387{4}[source]
https://forum.qubes-os.org/t/are-we-safe-because-we-re-a-rel...

tl;dr: Qubes security relies on Xen, which is quite popular. But even Xen has more vulnerabilities than Qubes thanks to a clever design: https://www.qubes-os.org/security/xsa/#statistics.

replies(1): >>43998248 #
273. yifanl ◴[] No.43998156{5}[source]
> That's because TOTK is designed to run on it, with careful compromises and a lot of manual tuning.

Should I draw from this conclusion that modern software is not designed to run on modern hardware?

replies(1): >>43999234 #
274. ngneer ◴[] No.43998248{5}[source]
Touché
275. astrange ◴[] No.43999234{6}[source]
Indeed it isn't. That's hard when everything is fighting against it.

Nintendo can do it because they only have one hardware target (unlike multiplatform games) and they can spend an extra year or two doing performance work.

276. const_cast ◴[] No.44001898{6}[source]
> Couldn't you create even faster software if you went down another level? Why don't you?

No, actually, C++ is pretty much as low as you can go.

Well, I mean, you can go lower, but you're not going to get performance benefits. You could go down to C. But C is going to be more of the same or, more likely, slower. You're going to be swapping templates for things like void *.

You could go to assembly, but let's be honest - who can write better assembly than a C++ compiler?

replies(1): >>44004851 #
277. Dylan16807 ◴[] No.44003263{10}[source]
But I can see that all the background stuff is using less than one core. That is not an excuse for bad foreground performance.

The stuff that used to be slow involved hard drive access, but today even when programs don't need to touch the disk they often manage to rack up significant delays. Not to mention how SSDs have 100x less latency than hard drives.

And if unicode support is causing serious delays when I'm only using one block of simple-rendering characters, then the library was designed badly.

278. sorcerer-mar ◴[] No.44004851{7}[source]
> You could go to assembly, but let's be honest - who can write better assembly than a C++ compiler?

In other words, yes, you can eke out more performance at lower levels. Then again at the silicon level. No, it's not worth it in 99.9999% of scenarios (because there are tradeoffs).

279. JoeAltmaier ◴[] No.44009921{6}[source]
Sure, that can all be true, and it still doesn't make 500Hz a particle of use.
280. JoeAltmaier ◴[] No.44009925{6}[source]
From that, then, we conclude that somehow 500Hz is important or meaningful?
replies(1): >>44015152 #
281. maccard ◴[] No.44014430{5}[source]
Windows 11. It was windows 10 before that and it was still bad but definitely got worse with win11. Unsure if it win10 vs win11 was the culprit or a windows defender change happened at the same time.
282. dahart ◴[] No.44015152{7}[source]
Is it only 500 or 10, and nothing in between? You could have argued against 500 with the GP comment instead of countering with something that’s demonstrably untrue. I handed you the study you asked for.

Movies’ 24Hz is too slow, just watch a horizontal pan. 24Hz is good enough for slow things, but it was chosen that low for cost reasons, not because it’s the limit of perception. US TV’s 60Hz interlace isn’t the limit either, which is also shown with horizontal pans. 60Hz progressive looks different than 30Hz, just watch YouTube or turn on frame interpolation on a modern TV.

The limit of meaningful motion tracking perception might be in the 100-200Hz range. The reason 500Hz is meaningful to gamers is, I think, because of latency rather than frequency. Video systems often have multiple frames of latency, so there actually is a perceptible difference to them between 60Hz and 500Hz.

replies(1): >>44017480 #
283. ajolly ◴[] No.44017480{8}[source]
I find a big difference between running my desktop at 60Hz versus 144Hz in how smooth the mouse moves and how easy it is to click on a small area of the screen with fast mouse movement.
replies(1): >>44018115 #
284. dahart ◴[] No.44018115{9}[source]
Yep, and I think a lot of people who’ve tried it would agree. I’ve heard the same from others too, and I believe it.

I don’t know what typical display latency is for just browsing files on a monitor these days. I’d guess it’s probably a few frames, and I’d bet we would be able to feel the difference between 3 frames of latency at 144Hz and 1 frame of latency at 144Hz. I’m also curious if mouse cursor motion blur would make any difference.