
50 points senfiaj | 8 comments
1. adamzwasserman ◴[] No.45811656[source]
TFA lists maintainability as a benefit of bloat ("modularity, extensibility, code patterns make it easier to maintain"). Completely ignores how bloat harms maintainability by making code unknowable.

Stack enough layers - framework on library on abstraction on dependency - and nobody understands what the system does anymore. Can't hold it in your head. Debugging becomes archaeology through 17 layers of indirection. Features work. Nobody knows why. Nobody dares touch them.

TFA touches this when discussing complexity ("people don't understand how the entire system works"). But treats it as a separate issue. It's not. Bloat creates unknowable systems. Unknowable systems are unmaintainable by definition.

The "developer time is more valuable than CPU cycles" argument falls apart here. You're not saving time. You're moving the cost. The hours you "saved" pulling in that framework? You pay them back with interest every time someone debugs a problem spanning six layers of abstraction they don't understand

replies(3): >>45811849 #>>45812998 #>>45813114 #
2. frisbee6152 ◴[] No.45811849[source]
A well-optimized program is often a consequence of a deep understanding of the problem domain, good scoping, and mindfulness.

It often feels to me like we’ve gone far down the framework road, and frameworks create leaky abstractions. I think frameworks are often understood as saving time, simplifying, and offloading complexity. But they come with a commitment to align your program to the framework’s abstractions. That is a complicated commitment to make, with deep implications, that is hard to unwind.

Many frameworks can be made to solve any problem, which makes things worse. It invites the “when all you’ve got is a hammer, everything looks like a nail” mentality. The quickest route to a solution is no longer the straight path, but to make the appropriate incantations to direct the framework toward that solution, which necessarily becomes more abstract, more complex, and less efficient.

replies(2): >>45812116 #>>45813549 #
3. adamzwasserman ◴[] No.45812116[source]
I completely agree. This is the point I make here: https://hackernoon.com/framework-or-language-get-off-my-lawn...
4. locknitpicker ◴[] No.45812998[source]
> Stack enough layers - framework on library on abstraction on dependency - and nobody understands what the system does anymore.

This is specious reasoning, as "optimized" implementations typically resort to performance hacks that make code completely unreadable.

> TFA touches this when discussing complexity ("people don't understand how the entire system works"). But treats it as a separate issue. It's not. Bloat creates unknowable systems.

I think you're confusing things. Bloat and lack of a clear software architecture are not the same thing. Your run-of-the-mill app developed around a low-level GUI framework like the win32 API tends to be far more convoluted and worse to maintain than equivalent apps built around high-level frameworks, including Electron apps. If you develop an app into a big ball of mud, you will have a bad time figuring it out regardless of what framework you're using (or not using).

replies(2): >>45813940 #>>45814369 #
5. senfiaj ◴[] No.45813114[source]
I mean there are different kinds of bloat. Some of it is justifiable, some is not, and some is just a symptom of other problems (the last two are not mutually exclusive), like mismanagement or incompetence (from management, developers, team leads, etc.). This is somewhat similar to cholesterol: there are different types, some might be really bad, some might be harmless, etc.

Bloat (do you mean code duplication here?) can be both a cause and a symptom of a maintainability problem. A spaghetti-code mess (not the same thing as bloat) will be prone to future bloat because developers don't know what they are doing, in the bad sense. You can still be unfamiliar with the entire system, but if the code is well organized, reusable, modular, and testable, you can work relatively comfortably with it and have little fear of introducing horrible regressions. Meanwhile, badly managed spaghetti code is much less testable and reusable; when developers work with such code, they often don't want to reuse the existing code, because it is already fragile and not reusable, so for each feature they prefer to create a new function or duplicate an existing one.

This is a vicious cycle: the code starts to rot, becoming more and more unmaintainable, duplicated, fragile, and, very likely, inefficient. This is what I meant.

6. ElevenLathe ◴[] No.45813549[source]
The main point of a framework is to keep developers interchangeable, and therefore to suppress wages. All mature industries have things like this: practices that aren't "optimal" (in a very narrow sense), but whose standardization means that, through competition and economies of scale, they are still cheaper than the alternative, better-in-theory solution.
7. adamzwasserman ◴[] No.45813940[source]
I'm not advocating for unreadable optimization hacks. I'm working within TFA's own framework. TFA argues that certain bloat (frameworks, layers, abstractions) is justified because it improves maintainability through "modularity, extensibility, code patterns."

I'm saying: those same layers create a different maintainability problem that TFA ignores. When you stack framework on library on abstraction, you create systems nobody can hold in their head. That's a real cost.

You can have clean architecture and still hit this problem. A well-designed 17-layer system is still 17 layers of indirection between "user clicks button" and "database updates".

8. gwbas1c ◴[] No.45814369[source]
> This is specious reasoning, as "optimized" implementations typically resort to performance hacks that make code completely unreadable.

That really depends on context, and you're generalizing based on assumptions that don't hold true:

Replacing bloated ORM code with hand-written SQL can be significantly more readable if it boils down to a simple query that returns rows that neatly map to objects. It could also boil down to a very complicated, hard-to-follow query that requires gymnastics to populate an object graph.
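
To make the contrast concrete, here's a minimal sketch (Python with the standard-library sqlite3 module; the schema and the commented ORM-style call are hypothetical, not from any particular codebase):

    import sqlite3
    from dataclasses import dataclass

    @dataclass
    class User:
        id: int
        name: str
        email: str

    # Hand-written SQL: the query is visible and the rows map straight onto the object.
    def users_by_domain(conn: sqlite3.Connection, domain: str) -> list[User]:
        rows = conn.execute(
            "SELECT id, name, email FROM users WHERE email LIKE ?",
            (f"%@{domain}",),
        )
        return [User(*row) for row in rows]

    # An ORM-style equivalent (hypothetical API) hides the same query behind the
    # framework's abstractions; whether that is clearer depends on how well the
    # mapping fits the object graph you actually need:
    # users = session.query(User).filter(User.email.endswith("@" + domain)).all()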

The same can be said for optimizing CPU usage. It might be a case of removing unneeded complexity, or it could be a case of micro-optimizations that require unrolling loops and copy-and-paste code.
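
A rough illustration of that second case (a made-up numeric kernel, purely for the shape of the code):

    # Straightforward version: easy to read and usually fast enough.
    def dot(xs: list[float], ys: list[float]) -> float:
        return sum(x * y for x, y in zip(xs, ys))

    # Hand-unrolled version: the kind of micro-optimization that duplicates logic
    # and obscures intent for a (possibly marginal) speedup.
    def dot_unrolled(xs: list[float], ys: list[float]) -> float:
        total = 0.0
        n = len(xs) - len(xs) % 4   # largest multiple of 4 we can process in blocks
        i = 0
        while i < n:
            total += (xs[i] * ys[i] + xs[i + 1] * ys[i + 1]
                      + xs[i + 2] * ys[i + 2] + xs[i + 3] * ys[i + 3])
            i += 4
        for j in range(n, len(xs)):  # leftover elements
            total += xs[j] * ys[j]
        return total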

---

I should point out that I've lived the ORM issue: I removed an ORM from a product and it became industry-leading for performance, and the code was so clean that newcomers would compliment me on how easy it was to understand data access. In contrast, the current product that I work on is a clear example of when an ORM is justified.

I've also lived the CPU usage issue: I had to refactor code that stored numeric timestamps as strings, with complicated code that parsed the strings whenever it needed to do math on the timestamps. The refactor replaced the strings with a defined type. Not only was it faster, the code was easier to follow because the timestamps were well encapsulated.
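
Not the actual code, but a sketch of the shape of that refactor (Python, names hypothetical):

    from dataclasses import dataclass

    # Before (roughly): timestamps round-tripped through strings, with parsing
    # scattered wherever arithmetic was needed.
    def elapsed_ms_stringly(start: str, end: str) -> int:
        return int(end) - int(start)

    # After: a defined type encapsulates the representation, so callers do
    # arithmetic on values instead of parsing strings.
    @dataclass(frozen=True)
    class Timestamp:
        millis: int

        def elapsed_since(self, other: "Timestamp") -> int:
            return self.millis - other.millis

    start = Timestamp(millis=1_700_000_000_000)
    end = Timestamp(millis=1_700_000_001_500)
    print(end.elapsed_since(start))  # 1500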