Too many people have internalized the "Premature optimization is the root of all evil" quote to the point that they won't even consider criticisms or suggestions.
And while they might be right about the small stuff, it piles up: because you chose several times not to optimize, your technology choices and architecture decisions add up to a bloated mess anyway, one that can't be salvaged.
Like, when you choose a web framework for a desktop app, the install size, memory footprint, slower performance, etc. might not matter individually, but in the end they can easily add up, and your solution might just suck without much benefit to you. Pragmatism seems to be the hardest thing for most developers to learn, and so many solutions get blown out of proportion instantly.
It's good enough, and React Native, for example, has spent years and millions on optimizations to make its "good enough" faster; the work they do is well beyond my pay grade. (https://reactnative.dev/blog/2025/10/08/react-native-0.82#ex...)
Do we need a dozen components of half a million lines each maintained by a separate team for the hotdesk reservation page? I'm not sure, but I'm definitely not willing to endure the conversation that would follow from asking.
Indeed, if a language and framework has slow code execution, but facilitates efficient querying, then it can still perform relatively well.
And the answer is almost always "nothing" because "good enough" is fine.
People like to shit on development tools like Electron, but the reality is that if the app is shitty on Electron, it'd probably be just as shitty on native code, because it is possible to write good Electron apps.
Yes. I’ve been working for years on building a GPU-based scientific visualization library entirely in C, [1] carefully minimizing heap allocations, optimizing tight loops and data structures, shaving off bytes of memory and microseconds of runtime wherever possible. Meanwhile, everyone else seems content with Electron-style bloat weighing hundreds of megabytes, with multi-second lags and 5-FPS interfaces. Sometimes I wonder if I’m just a relic from another era. But comments like this remind me that I’m simply working in a niche where these optimizations still matter.
Any different interpretation in my opinion leads to slow, overbloated software.
For customer facing stuff, I think it's worth looking into frameworks that do backend templating and then doing light DOM manipulation to add dynamism on the client side. Frameworks like Phoenix make this very ergonomic.
It's a useful tool to have in the belt.
A 500MB Electron app can be easily a 20MB Tauri app.
In physical disciplines, like mechanical engineering, civil engineering, or even industrial design, there is a natural push towards simplicity. Each new revision is slimmer & more unified–more beautiful because it gets closer to being a perfect object that does exactly what it needs to do, and nothing extra. But in software, possibly because it's difficult to see into a computer, we don't have the drive for simplicity. Each new LLVM binary is bigger than the last, each new HTML spec longer, each new JavaScript framework more abstract, each new Windows revision more bloated.
The result is that it's hard to do basic things. It's hard to draw to the screen manually because the graphics standards have grown so complicated & splintered. So you build a web app, but it's hard to do that from scratch because the pure JS DOM APIs aren't designed for app design. So you adopt a framework, which itself is buried under years of cruft and legacy decisions. This is the situation in many areas of computer science–abstractions on top of abstractions and within abstractions, like some complexity fractal from hell. Yes, each layer fixes a problem. But all together, they create a new problem. Some software bloat is OK, but all software bloat is bad.
Security, accessibility, and robustness are great goals, but if we want to build great software, we can't just tack these features on. We need to solve the difficult problem of fitting in these requirements without making the software much more complex. As engineers, we need to build a culture around being disciplined about simplicity. As humans, we need to support engineering efforts that aren't bogged down by corporate politics.
I had to write an Android app recently. I don't like bloat, so I disabled all libraries. I did it, but I was jumping through many hoops: Android development presumes that you're using the appcompat libraries and some others. In the end my APK was 30 KB and worked on every smartphone I was interested in (from Android 8 to Android 16). The Android Studio Hello World APK is about 2 MB, if I remember correctly. This is truly madness.
We could go further and have a language designed to run in a sandboxed VM built especially for this, with a GUI library designed for the task instead of one derived from a document format.
Or actually not; the list doesn't get us beyond "users have more resources, so it's just easier to waste more resources."
> Layers & frameworks
There are a million of these, with performance differences spanning orders of magnitude, so an empty reference to them explains nothing about bloat.
But also
> localization, input, vector icons, theming, high-DPI
It's not bloat if it allows users to read text in an app! Or read one that's not blurry! Or one that doesn't "burn his eyes"
> Robustness & error handling / reporting.
Same thing: are you talking about a washing machine sending gigabytes of data per day for no improvement whatsoever "in robustness"? Or are you talking about some virtualized development environment with perfect time travel/reproduction, where whatever hardware "bloat" is needed wouldn't even affect the user? What is the actual difference from error handling in the past, besides easy sending of your crash dumps?
> Engineering trade-offs. We accept a larger baseline to ship faster, safer code across many devices.
But we do not do that! The code is too often slower precisely because people have a ready list of empty statements like this
> Hardware grew ~three orders of magnitude. Developer time is often more valuable than RAM or CPU cycles
What about the value of your users' time and resources? Why ignore the reality outside this simplistic dichotomy? Or will the devs not even see the suffering, because the "robust error handling and reporting" is nothing of the sort and mostly /dev/nulls a lot of user experience?
Can you elaborate more on how this works? Do you mean JS loading server generated HTML into the DOM?
Note that with this approach you don't need to "render" anything; the browser has already done it for you. You're merely attaching functionality to DOM elements in the form of Component instances.
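For anyone wondering what this looks like in practice, here's a minimal sketch in TypeScript. The /fragments/orders endpoint, element IDs, and data attributes are all hypothetical; the point is just the shape of the pattern: the server returns rendered HTML, and the client only injects it and wires up behavior.

    // Minimal sketch: fetch server-rendered HTML, insert it, attach behavior.
    // The endpoint, IDs, and attributes are hypothetical.
    async function loadOrders(): Promise<void> {
      const res = await fetch("/fragments/orders");
      const html = await res.text();

      // No client-side rendering: the server already produced the markup.
      const container = document.getElementById("orders")!;
      container.innerHTML = html;

      // "Attach functionality" to the DOM elements the server produced.
      container
        .querySelectorAll<HTMLButtonElement>("button[data-order-id]")
        .forEach((btn) => {
          btn.addEventListener("click", () => {
            console.log(`cancel order ${btn.dataset.orderId}`);
          });
        });
    }

    loadOrders();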
Right off the bat it'll save hundreds of MB in app size, with a noticeable drop in startup time, so no, it won't be just as shitty.
> because it is possible to write good Electron apps.
The relevant issue is the difficulty in doing that, not the mere possibility.
I entirely agree. It is what I do when I have to - although I mostly do simple JS, as I am really a backend developer, and if I do any front end it's "HTML plus a bit of JS": I just write JS loading stuff into divs by ID.
When I have worked with front end developers doing stuff in React, it has been a horrible experience. In the very worst case they used Next.js to write a second backend that sat between my existing Django backend (which had been done earlier) and the front end. Great for latency! It was an extreme example, but it really soured my attitude to complex front ends. The project died.
Putting React with those two is a wild take.
> 99% of websites would work a lot better with SSR and a few lines of JavaScript here and there and there is zero reason to bring anything like React to the table.
Probably, but as soon as you have a modicum of logic in your page, the primitives of the web are a pain to use.
Also, I must be able to build stuff in the 1% space. I actually did it before: I built an app that's entirely client-side, with Vue, and "serverless" in the sense that it's distributed in the form of one single HTML file. Although we changed that in the last few months to host it on a proper server.
The level of psychological trauma that some back-end devs seem to endure is hilarious though. Like I get it, software sucks and it's sad but no need to be dramatic about it.
And btw, re forbidding stuff: no library, no process, no method can ever substitute for actually knowing what you're doing.
I've never seen a real-world Electron app with a large userbase that actually has that many dependencies, or performance issues that would be resolved by writing it as a native app. It's baffling to me how many developers don't realize how much latency is added and memory is used by requiring many concurrent HTTP requests. If you have a counterexample, I'd love to see it.
That's hilarious.
Casey Muratori truly is right when he says to "non-pessimize" software (i.e. make it do what it should do and not more) before optimizing it.
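A contrived TypeScript sketch of the distinction (function names made up): the pessimized version does work the task never asked for on every call; non-pessimizing just stops doing that work, no profiler or clever tricks required.

    // Pessimized: copies and sorts the whole list on every lookup,
    // work the task never asked for.
    function nthCheapestPessimized(prices: number[], n: number): number {
      const sorted = [...prices].sort((a, b) => a - b);
      return sorted[n];
    }

    // Non-pessimized: sort once, then every lookup is a plain index.
    function makeNthCheapest(prices: number[]): (n: number) => number {
      const sorted = [...prices].sort((a, b) => a - b);
      return (n) => sorted[n];
    }

    const nthCheapest = makeNthCheapest([5, 1, 9, 3]);
    console.log(nthCheapest(0)); // 1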
One example is skirt length. You have fashion and the only thing about it is change. If everybody's wearing short skirts, then longer skirts will need to be launched in fashion magazines and manufactured and sent to shops in order to sell more. The actual products have not functionally changed in centuries.
Databases in particular, since that’s my job. “This query runs in 2 msec, it’s fast enough.” OK, but it gets called 10x per flow because the ORM is absurdly stupid; if you cut it down by 500 microseconds, you’d save 5 msec. Or if you’d make the ORM behave, you could save 18 msec, plus the RTT for each query you neglected to account for.
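The shape of the problem is roughly this (a TypeScript sketch with a hypothetical DB interface and schema, standing in for no particular ORM): the naive version issues one query per row, the fixed version one per flow.

    // Hypothetical minimal DB interface, standing in for the ORM.
    interface DB {
      query(sql: string, params: unknown[]): Promise<any[]>;
    }

    // What the ORM effectively does: one extra ~2 ms query (plus RTT)
    // per line item, ~10x per flow.
    async function totalNaive(db: DB, orderId: number): Promise<number> {
      const lines = await db.query("SELECT id FROM line WHERE order_id = ?", [orderId]);
      let total = 0;
      for (const line of lines) {
        const [row] = await db.query("SELECT price FROM line WHERE id = ?", [line.id]);
        total += row.price;
      }
      return total;
    }

    // Making the ORM behave: one query, one round trip.
    async function totalBatched(db: DB, orderId: number): Promise<number> {
      const [row] = await db.query(
        "SELECT SUM(price) AS total FROM line WHERE order_id = ?",
        [orderId],
      );
      return row.total;
    }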
Turns out modern Ubuntu will only install Firefox as a snap. And snap will then automatically grow to fill your entire hard drive for no good reason.
I'm not quite sure how people decided this was an approach to package management that made sense.
Yeah I find it frustrating how many people interpret that quote as "don't bother optimizing your software". Here's the quote in context from the paper it comes from:
> Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.
> Yet we should not pass up our opportunities in that critical 3%. A good programmer will not be lulled into complacency by such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified.
Knuth isn't saying "don't bother optimizing", he's saying "don't bother optimizing before you profile your code". These are two very different points.
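In practice, "look carefully at the critical code... only after that code has been identified" just means measure first. A toy sketch in TypeScript (a real profiler such as Node's --prof or the browser devtools is the proper tool; this only shows the idea, and the workload is made up):

    // Time candidate hot spots before touching anything.
    function timeIt(label: string, fn: () => void): void {
      const start = performance.now();
      fn();
      console.log(`${label}: ${(performance.now() - start).toFixed(1)} ms`);
    }

    const data = Array.from({ length: 1_000_000 }, () => Math.random());

    timeIt("stringify", () => data.map(String));          // suspected hot spot
    timeIt("sum", () => data.reduce((a, b) => a + b, 0)); // probably noncritical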
If we worked hard to keep OS requirements to a minimum- could we be looking at unimaginably improved battery life? Hyper reliable technology that lasts many years? Significantly more affordable hardware?
We know that software bloat wastes RAM and CPU, but we can't know what alternatives we could have had if we hadn't already spent our metaphorical budget on bloat.
Quoting Knuth without the entire context is also contributing to bloat.
> Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered.
> We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.
That said, all of engineering is a tradeoff, and tradeoffs mean accepting some amount of bad in exchange for some amount of good.
In these times, though, companies seem to be very willing to accept bloat for marginal or nonexistent returns, and this is one of the reasons why, in my opinion, so much of the software being released these days is poor.
My boss (and mentor) from 25 years ago told me to think of the problems I was solving with a 3-step path:
1. Get a solution working
2. Make the solution correct
3. Make the solution efficient
Most importantly, he emphasized that the work must be done in that order. I've taken that everywhere with me.
I think one of the problems is that quite often, due to business pressure to ship, step 3 is simply skipped. Often, software is shipped half-way through step 2 -- software that is at best partially correct.
This pushes the problem down to the user, who might be building a system around the shipped code. That compounds the problem of software bloat, as all the gaps have to be bridged.
Stack enough layers - framework on library on abstraction on dependency - and nobody understands what the system does anymore. Can't hold it in your head. Debugging becomes archaeology through 17 layers of indirection. Features work. Nobody knows why. Nobody dares touch them.
TFA touches this when discussing complexity ("people don't understand how the entire system works"). But treats it as a separate issue. It's not. Bloat creates unknowable systems. Unknowable systems are unmaintainable by definition.
The "developer time is more valuable than CPU cycles" argument falls apart here. You're not saving time. You're moving the cost. The hours you "saved" pulling in that framework? You pay them back with interest every time someone debugs a problem spanning six layers of abstraction they don't understand
In either case you end up with a fresh instance of the browser (unless things have changed in Tauri since I last looked), distinct from the one generally serving you as an actual browser, so both have the same memory footprint in that respect. You are right that this is an issue for both options, but IME people away from development seem more troubled by the package size than by interactive RAM use. Tauri apps are also likely to start faster from cold, as Electron loads a complete new browser for which every last byte used needs to be read from disk; I think the average non-dev user will be more concerned about that than about memory use.
There have been a couple of projects trying to be Electron, complete with NodeJS, but using the user's currently installed default browser like Tauri does, and some others that replace the back end with something lighter-weight, even more like Tauri, but most of them are currently unmaintained, still officially alpha, or otherwise incomplete/unstable/both. Electron has the properties of being here, being stable and maintained, and being good enough until it isn't (and once it isn't, those moving off it tend to go for something completely different rather than another system very like it). It is difficult for a newer, similar project to compete with the momentum Electron has when the "escape route" from it is generally to something else entirely.
If, with a reasonable battery, standby mode can only last a few weeks and active use at best a few days, then you might as well add a fairly beefy CPU, and with a beefy CPU, OS optimizations only go so far. This is why eInk devices can end up with such noticeably longer battery life: they have a reason to put in a weak CPU and do some optimization, because the possibility of long battery life is a huge potential selling point.
The library you built looks fucking awesome, by the way. However, I think even you acknowledged on the page that Matplotlib may well be good enough for many use cases. If someone knows an existing tool extremely well, any replacement needs to be a major step change to solve a problem that couldn't be solved in existing, inefficient, tools.
As a former CS major (30 years ago) that went into IT for my first career, I've wondered about bloat and this article gave me the layman explanation.
I am still blown away by the comparison pointing out that a WEBP image of Super Mario is larger than the Super Mario game itself!
Volunteer-supported UNIX-like OSes, e.g., NetBSD, represent the closest to this ideal for me.
I am able to use an "old" operating system with new hardware. No forced "upgrades" or remotely-installed "updates". I decide when I want to upgrade software. No new software is pre-installed.
This allows me to observe and enjoy the speed gains from upgrading hardware in a way I cannot with a corporate operating system. The latter will usurp new hardware resources in large part for its own commercial purposes. It has business goals that may conflict with the non-commercial interests of the computer owner.
It would be nice if software did not always grow in size. It happens to even the simplest of programs. Look at the growth of NetBSD's init over time, for example.
Why not shrink programs instead of growing them?
Programmers who remove code may be the "heroes", as McIlroy once suggested ("The hero is the negative coder").
It often feels to me like we’ve gone far down the framework road, and frameworks create leaky abstractions. I think frameworks are often understood as saving time, simplifying, and offloading complexity. But they come with a commitment to align your program to the framework’s abstractions. That is a complicated commitment to make, with deep implications, that is hard to unwind.
Many frameworks can be made to solve any problem, which makes things worse. It invites the “when all you’ve got is a hammer, everything looks like a nail” mentality. The quickest route to a solution is no longer the straight path, but to make the appropriate incantations to direct the framework toward that solution, which necessarily becomes more abstract, more complex, and less efficient.
Is this actually a problem, though? The blog dedicates an entire section to engineering tradeoffs, and perceived performance is one of them.
You complain about the UI not keeping up with keystrokes. As a counterexample I point to Visual Studio Code. Its UI is not as snappy as native GUI frameworks, but we get a top-notch user experience that's consistent across operating systems and desktop environments. That's a win, isn't it? How many projects can make that claim?
The blog post also has a section on how a significant part of the bloat is bad.
The problem was that the front end developers involved decided to use Next.js to replace the front end of a mostly complete Django site. I think it was very much a case of someone just wanting to use what they knew regardless of whether it was a good fit - the "when all you have is a hammer, everything looks like a nail" effect.
This is specious reasoning, as "optimized" implementations typically resort to performance hacks that make code completely unreadable.
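A small made-up example of that tension in TypeScript (whether the "fast" version actually wins depends on the engine, which is rather the point):

    // Readable: the intent is obvious.
    function average(xs: number[]): number {
      return xs.reduce((a, b) => a + b, 0) / xs.length;
    }

    // "Optimized": two accumulators and manual unrolling. Maybe faster
    // on some engines; definitely harder to read and easier to break.
    function averageFast(xs: number[]): number {
      let s0 = 0, s1 = 0;
      const n = xs.length, m = n & ~1; // n & ~1: round n down to even
      for (let i = 0; i < m; i += 2) { s0 += xs[i]; s1 += xs[i + 1]; }
      if (n & 1) s0 += xs[n - 1]; // pick up the odd element, if any
      return (s0 + s1) / n;
    }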
> TFA touches this when discussing complexity ("people don't understand how the entire system works"). But treats it as a separate issue. It's not. Bloat creates unknowable systems.
I think you're confusing things. Bloat and lack of a clear software architecture are not the same thing. Your run-of-the-mill app developed around a low-level GUI framework like the win32 API tends to be far more convoluted and worse to maintain than equivalent apps built around high-level frameworks, including Electron apps. If you develop an app into a big ball of mud, you will have a bad time figuring it out regardless of what framework you're using (or not using).
Bloat (do you mean code duplication here?) can be both a cause and a symptom of a maintainability problem. It's like a vicious cycle. A spaghetti-code mess (not the same thing as bloat) will be prone to future bloat, because developers don't know what they are doing with it, in the bad sense. You can be unfamiliar with the entire system, but if the code is well organized, reusable, modular, and testable, you can still work with it relatively comfortably and have little worry of introducing horrible regressions, unlike with spaghetti code. You can also refactor much more easily. Badly managed spaghetti code, meanwhile, is much less testable and reusable; when developers work with such code they often don't want to reuse what exists, because it is already fragile, so for each feature they prefer to create a new function or duplicate an old one.
This is a vicious cycle: the code starts to rot, becoming more and more unmaintainable, duplicated, fragile, and, very likely, inefficient. This is what I meant.
Nothing about a cross-platform UI requires that it not be snappy. Or that Electron is the best option possible.
Did VSCode do a good job with the options available? Maybe, maybe not. But the options are where I think we should focus.
Having to trade off between two bad options means you’ve already lost.
Is it a win? Why? Consistency across platforms is a branding and business goal, not an engineering one. Consistency itself doesn't specify a direction; it just makes things more familiar and easier to adopt without effort. It's easier to sit all day and never exercise.
"It's what everybody does" or "it's what everybody uses" has never translated into it being good.
Notably, the engineers I respect the most, the ones making things that I enjoy using, none of them use VSCode. I'm sure most will read this as an attack against their editor of choice, SHUN THE NON BELIEVER! But hopefully enough will realize that it's not actually an attack on them or their editor; I'm advocating for the best possible option, not the easiest to use. Could they use VSCode? Obviously yes, they could. They don't, because the more experience you have, the easier it is to see that bloat gets in the way.
The time needed from the moment you launched the game (clicked on the .exe) to the moment you entered the server (to the map view) with all assets 100% loaded was about 1 second. Literally! You click the icon on your desktop and BAM! you're already on the server and you can start shooting. But that was written by John Carmack in C :-)
From other examples - I have a "ModRetro Chromatic" at home which is simply an FPGA version of the Nintendo Game Boy. On this device, you don't see the falling "Nintendo" text with the iconic sound known from normal Game Boys. When I insert a cartridge and flip the Power switch, I'm in the game INSTANTLY. There's simply absolute zero delay here. You turn it on and you're in the game, literally just like with that Quake.
For comparison - I also have a Steam Deck, whose boot time is so long that I sometimes finish my business on the toilet before it even starts up. The difference is simply colossal between what I remember from the old days and what we have today. On old Windows 2000, everything seemed lighter than on modern machines. I really miss that.
I'm saying: those same layers create a different maintainability problem that TFA ignores. When you stack framework on library on abstraction, you create systems nobody can hold in their head. That's a real cost.
You can have clean architecture and still hit this problem. A well-designed 17-layer system is still 17 layers of indirection between "user clicks button" and "database updates."