1070 points dondraper36 | 51 comments
codingwagie ◴[] No.45069135[source]
I think this works in simple domains. After working in big tech for a while, I am still shocked by the required complexity. Even the simplest business problem may take a year to solve, and constantly break due to the astounding number of edge cases and scale.

Anyone proclaiming simplicity just hasn't worked at scale. Even rewrites that have a decade-old code base to draw inspiration from often fail due to the sheer number of things to consider.

A classic, Chesterton's Fence:

"There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don’t see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.”"

replies(44): >>45069141 #>>45069264 #>>45069348 #>>45069467 #>>45069470 #>>45069871 #>>45069911 #>>45069939 #>>45069969 #>>45070101 #>>45070127 #>>45070134 #>>45070480 #>>45070530 #>>45070586 #>>45070809 #>>45070968 #>>45070992 #>>45071431 #>>45071743 #>>45071971 #>>45072367 #>>45072414 #>>45072570 #>>45072634 #>>45072779 #>>45072875 #>>45072899 #>>45073114 #>>45073174 #>>45073183 #>>45073201 #>>45073291 #>>45073317 #>>45073516 #>>45073758 #>>45073768 #>>45073810 #>>45073812 #>>45073942 #>>45073964 #>>45074264 #>>45074642 #>>45080346 #
1. sodapopcan ◴[] No.45070127[source]
This is the classic misunderstanding where software engineers can't seem to communicate well with each other.

We can even just look at the title here: Do the simplest thing POSSIBLE.

You can't escape complexity when a problem is complex. You could certainly still complicate it even more than necessary, though. Nowhere does this article say you can avoid complexity altogether; it says that many of us tend to over-complicate problems for no good reason.

replies(7): >>45070394 #>>45070713 #>>45072375 #>>45072947 #>>45073130 #>>45074955 #>>45079503 #
2. lll-o-lll ◴[] No.45070394[source]
> We can even just look at the title here: Do the simplest thing POSSIBLE.

I think the nuance here is that “the simplest thing possible” is not always the “best solution”. As an example, it is possible to solve very many business or operational problems with a simple service sitting in front of a database. At scale, you can continue to operate, but the amount of man-hours going into keeping the lights on can grow exponentially. Is the simplest thing possible still the DB?

Complexity is more than just the code or the infrastructure; it needs to run the entire gamut of the solution. That includes looking at the incidental complexity that goes into scaling, operating, maintaining, and migrating (if a temporary ‘too simple but fast to get going’ stack was chosen).

Measure twice, cut once. Understand what you are trying to build, and work out a way to get there in stages that provide business value at each step. Easier said than done.

Edit: Replies seem to be getting hung up over the “DB” reference. This is meant to be a hypothetical where the reader infers a scenario of a technology that “can solve all problems, but is not necessarily the best solution”. Substitute for “writing files to the file system” if you prefer.

replies(8): >>45070526 #>>45070559 #>>45070597 #>>45070639 #>>45070889 #>>45070897 #>>45072898 #>>45073722 #
3. tbrownaw ◴[] No.45070526[source]
Consider, for example, computerizing a currently manual process. And the 80/20 rule.

Do you handle one "everything is perfect" happy path, and use a manual exception process for odd things?

Do you handle "most" cases, which is more tech work but shrinks the number of people you need handling one-off things?

Or do you try to computerize everything no matter how rare?

replies(1): >>45070928 #
4. qaq ◴[] No.45070559[source]
Is the simplest thing possible still the DB? Yes, that's why Google spent a decent amount of resources building out Spanner: for many business domains, even at hyperscale, it's still the DB.
5. XorNot ◴[] No.45070597[source]
I have worked at too many companies where the effort spent not using a simple database was an exponential drag on everything.

Hell, I just spent a week doing something which should've taken 5 minutes because, rather than a settings database, someone has just been maintaining a giant ball of copy+pasted Terraform code.

replies(1): >>45073504 #
6. sodapopcan ◴[] No.45070639[source]
Right, and again this is reading too much into it. The simplest thing possible does not mean the best solution. If the solution that worked really well yesterday no longer scales today, it's no longer the correct solution, and a more complex one is now required.
replies(2): >>45071988 #>>45072897 #
7. hammock ◴[] No.45070713[source]
Yes. I like to distinguish between “complex” (by nature) and “complicated” (by design)
replies(1): >>45070895 #
8. quietbritishjim ◴[] No.45070889[source]
> At scale, you can continue to operate, but the amount of man-hours going into keeping the lights on can grow exponentially. Is the simplest thing possible still the DB?

Don't worry, the second half of the title has this covered:

> ... that could possibly work

In the scenario you've described, the technology is not working, in the complete sense including business requirements of reasonable operating costs.

Perhaps it really did work at first, in the complete sense, when the number of users was quite small. That's where the actual content of the article kicks in: it suggests you really do use that simple solution, because maybe you'll never need to scale after all, or you'll need to rewrite everything by then anyway, or you'll have access to more engineering talent by then, etc. I'd tend to agree, but with the caveat that you should feel free to break the rule so long as you're doing it consciously. But none of that implies that you should end up in the situation you described.

replies(2): >>45071232 #>>45073767 #
9. zdragnar ◴[] No.45070895[source]
The distinction you make is known to me as natural complexity (the base level due to the nature of the domain) and accidental complexity (that which is added unnecessarily on top of it).

Your definition rubs up against what a UX designer taught me years ago, which is that simple and complex are one spectrum, similar to but different from easy and hard.

Often, simple is confused for easy, and complex for hard. However, simple interfaces can hide a lot of information in unintuitive ways, while complex interfaces can present more information and options up front.

replies(1): >>45072993 #
10. neonrider ◴[] No.45070897[source]
> I think the nuance here is that “the simplest thing possible” is not always the “best solution”.

The programmer's mind is the faithful ally of the perfect in its war waged against the good enough.

The "best" solution for most people that have a problem is the one they can use right now.

replies(1): >>45073305 #
11. tonyarkles ◴[] No.45070928{3}[source]
My favourite example of this from my own career... automating timesheet -> payroll processing in a unionized environment. As we're converting the collective bargaining agreement into code, we discover that there are a pair of rules that seem contradictory. Go talk to someone in the payroll department to try to figure out how it's handled. Get an answer that makes decent sense, but have a bit of a lingering doubt about the interpretation. Talk to someone else in the same department... they tell us the alternative interpretation.

Bring the problem back to our primary contact and they've got no clue what to do. They're on like year 2 of a 7 year contract and they've just discovered that their payroll department has been interpreting the ambiguous rules somewhat randomly. No one wants to commit to an interpretation without a memorandum of understanding from the union, and no one wants to start the process of negotiating that MoU because it's going to mean backdating 2 years of payroll for an unknown number of employees, who may have been affected by it one month but not the next, depending on who processed their paystub that month.

That was fun :D

replies(2): >>45076449 #>>45076856 #
12. lll-o-lll ◴[] No.45071232{3}[source]
> Perhaps it really did work at first, in the complete sense, when the number of users was quite small. That's where the actual content of the article kicks in: it suggests you really do use that simple solution, because maybe you'll never need to scale after all, or you'll need to rewrite everything by then anyway, or you'll have access to more engineering talent by then, etc.

This is where I am arguing nuance. These decisions are contextual, and the superficially more complicated solution may be solving inherent complexity in the problem space whose benefit only shows up over a longer time period.

As an example, some team might decide to forgo a database and read/write directly to the file system. This may enable a release in less time and that might be the right decision in certain contexts. Or it could be a terrible decision as the externalised costs begin to manifest and the business fails because of loss of customer trust.

My point is that you cannot look only at what is right in front of you; you also need to plan ahead tactically. In the big-org context, you also need to plan ahead strategically.

13. achierius ◴[] No.45071988{3}[source]
But sometimes it IS better to think a few steps ahead, rather than building a new system from scratch every time things scale up. It's not always easy to upgrade things incrementally: just look at IPv4 vs IPv6
replies(8): >>45072124 #>>45072267 #>>45072373 #>>45072515 #>>45072559 #>>45072870 #>>45074205 #>>45078662 #
14. oivey ◴[] No.45072124{4}[source]
It can be hard enough to fix things when some surprise happens. Unwinding complicated “future proof” things on top of that is even worse. The simpler something is, the less you will hopefully have to throw away when the inevitable rework comes.
15. fruitplants ◴[] No.45072267{4}[source]
I agree with thinking a few steps ahead. It is particularly useful in case of complex problems or foundational systems.

Also maybe simplicity is sometimes achieved AFTER complexity, anyway. I think the article means a solution that works now... target good enough rather than perfect. And the C2 wiki (1) has a subtitle '(if you're not sure what to do yet)'. In a related C2 wiki entry (2) Ward Cunningham says: Do the easiest thing that could possibly work, and then pound it into the simplest thing that could possibly work.

IME a lot of complexity is due to integration (in addition to things like scalability, availability, ease of operations, etc.). If I can keep interfaces and data exchange formats simple (independent, minimal, etc.), then I can refactor individual systems separately.

1. https://wiki.c2.com/?DoTheSimplestThingThatCouldPossiblyWork

2. https://wiki.c2.com/?SimplestOrEasiest

16. baxtr ◴[] No.45072373{4}[source]
Yes sometimes. But how can you know beforehand? It’s clear in hindsight, for sure.

The most fundamental issue I have witnessed with these things is that people have a very hard time taking a balanced view.

For this specific problem, should we invest in a more robust solution which takes longer to build or should we just build a scrappy version and then scale later?

There is no right or wrong. It depends heavily on the context.

But some people, especially developers I'm afraid, have only one answer for every situation.

17. motorest ◴[] No.45072375[source]
> We can even just look at the title here: Do the simplest thing POSSIBLE.

I think you're focusing on weasel words to avoid addressing the actual problem raised by the OP, which is the elephant in the room.

Your limited understanding of the problem domain doesn't mean the problem has a simple or even simpler solution. It just means you failed to understand the needs and tradeoffs that led to the complexity. Unwittingly, this misunderstanding breeds even more complexity.

Listen, there are many types of complexity. Among which there is complexity intrinsic to the problem domain, but there is also accidental complexity that's needlessly created by tradeoffs and failures in analysis and even execution.

If you replace an existing solution with a solution which you believe is simpler, odds are you will have to scramble to address the impacts of all tradeoffs and oversights in your analysis. Addressing those represents complexity as well, complexity created by your solution.

Imagine a web service that has autoscaling rules based on request rates and computational limits. You might look at request patterns and say that this is far too complex: you can just manually scale the system with enough room to handle your average load, and when required you can just click a button and rescale it to meet demand. Awesome work, you simplified your system. Except your system, like all web services, experiences seasonal request patterns. Now you have schedules and meetings and even incidents that wake your team up in the middle of the night. Your pager fires because a feature was released and you didn't quite scale the service to accommodate the new peak load. So now your simple system requires a fair degree of hand-holding to work with any semblance of reliability.

Is this not a form of complexity as well? Yes, yes it is. You didn't eliminate complexity; you only shifted it to another place. You saw complexity in autoscaling rules and believed you had eliminated it by replacing it with manual scaling, but you only moved it somewhere else. Why? Because it's intrinsic to the problem domain, and tackling it with manual work introduces more accidental complexity than what is actually required to address the issue.
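To make that concrete, here is a minimal sketch of the kind of rule that gets deleted when you "simplify" down to manual scaling (a hypothetical policy; names and thresholds are illustrative, not from any particular platform):

    import math

    # Hypothetical autoscaling policy: scale on whichever signal
    # (CPU or request rate) currently demands more capacity.
    def desired_replicas(current, avg_cpu_pct, req_per_sec,
                         target_cpu_pct=60.0, rps_per_replica=500.0,
                         lo=2, hi=50):
        by_cpu = current * (avg_cpu_pct / target_cpu_pct)
        by_rps = req_per_sec / rps_per_replica
        return max(lo, min(hi, math.ceil(max(by_cpu, by_rps))))

Deleting that function does not delete the seasonal traffic; it only moves the ceil() into someone's on-call rotation.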

replies(1): >>45074818 #
18. twbarr ◴[] No.45072515{4}[source]
IPv6 is arguably a good example of what happens when you don't do the simplest thing possible. What we really needed was a bigger IP address space. What we got was a whole bunch of other crap. If we had literally expanded IPv4 by a couple of octets at the end (with compatible routing), would we be there now?
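Back-of-the-envelope arithmetic for what "a couple of octets" buys:

    # Address space at each width:
    print(2**32)     # IPv4:            4,294,967,296 addresses
    print(2**48)     # IPv4 + 2 octets: ~2.8 * 10**14
    print(2**128)    # IPv6:            ~3.4 * 10**38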
replies(2): >>45073247 #>>45073948 #
19. tonyedgecombe ◴[] No.45072559{4}[source]
>But sometimes it IS better to think a few steps ahead

The trouble is by the time you get there you will discover the problem isn't what you expected and it will all have been wasted effort.

https://en.wikipedia.org/wiki/You_aren't_gonna_need_it

20. lelanthran ◴[] No.45072870{4}[source]
> But sometimes it IS better to think a few steps ahead, rather than building a new system from scratch every time things scale up.

The problem is knowing when to do it and when not to do it.

If you're even the slightest bit unsure, err on the side of not thinking a few steps ahead because it is highly unlikely that you can see what complexities and hurdles lie in the future.

In short, it's easier to unfuck an under engineered system than an over engineered one.

replies(2): >>45076667 #>>45078684 #
21. fauigerzigerk ◴[] No.45072897{3}[source]
The slogan is unhelpful because the cost of failure cannot be factored into the meaning of "working".

"could possibly work" is clearly hyperbole as it would only exclude solutions that are guaranteed to fail.

But even under a more plausible interpretation, this slogan ignores the cost of failure as an independent justification for adding complexity.

It's bad advice.

22. mattlutze ◴[] No.45072898[source]
No pop psychology maxim is universally true. However, in your example we're presented with an outdated understanding of "tech debt."

> As an example, it is possible to solve very many business or operational problems with a simple service sitting in front of a database.

If this is the simplest approach within the problem space or business's constraints, and meets the understood needs, it may indeed be the right choice.

> At scale, you can continue to operate, but the amount of man-hours going into keeping the lights on can grow exponentially. Is the simplest thing possible still the DB?

No problem in a dynamic human system can be solved statically and left alone. If the demands on a solution grow, and the problem space or the business's needs change, then the solution should be reassessed and the new conditions solved for.

Think of it alternatively as resource-constrained work allocation, or agile problem solving. If we don't have enough labor available (and we rarely do) to solve everything "best," then we need to draw a line. Decades of practice have shown that it's a crap shoot to guess at the shape and level of complexity down the road.

Best case, you spend time that could have gone into something valuable today on solving a problem for a year from now; worst case, you get the assumptions wrong and fail to solve that future problem too, on top of still needing to spend future time on refactoring.

23. imgabe ◴[] No.45072947[source]
A complex system that works is always found to have evolved from a simple system that worked.

You can keep on doing the simplest thing possible and arrive at something very complex, but the key is that each step should be simple. Then you are solving a real problem that you are currently experiencing, not introducing unnecessary complexity to solve a hypothetical problem you imagine you might experience.

replies(1): >>45089116 #
24. saghm ◴[] No.45072993{3}[source]
To me, the benefit of simplicity is that it can help avoid the need to try to guess what future requirements will be by leaving room for iteration. Making something complex up front often increases the burden when trying to change things in the future. Crucially, this requires being flexible to making those changes in the future though rather than letting the status quo remain indefinitely.

The main argument I've seen against this strategy of design is concern over potentially needing to make breaking changes. But in my experience, it's a lot easier to come up with a simple design that solves most of the common cases and leaves design space for future work on the more niche cases, without breaking existing functionality, than to anticipate every possible case up front. After a certain point, confidence in our predictions dips low enough that it's smarter to bet on your ability to avoid locking yourself into choices that would be breaking to change later than on making the correct choice based on those predictions.
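A minimal sketch of what "leaving design space" can look like in practice (all names are hypothetical): keep the required surface small, and give later, more niche requirements a place to land without breaking existing callers.

    from dataclasses import dataclass, field

    @dataclass
    class ExportOptions:
        fmt: str = "csv"        # the common case gets a default
        compress: bool = False
        # Escape hatch: niche options can be added here later
        # without changing the export() signature below.
        extra: dict = field(default_factory=dict)

    def export(records, options=None):
        options = options or ExportOptions()
        ...  # existing callers keep working as options grow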

25. jbreckmckye ◴[] No.45073130[source]
I think you're accidentally committing a motte-and-bailey fallacy here.

It makes an ambitious, risky claim (make things simpler than you think they need to be), then retreats on pushback to a much safer claim (the all-encompassing "simplest thing possible").

The statement ultimately becomes meaningless because any interrogation can get waved away with "well I didn't mean as simple as that."

But nobody ever thinks their solution is more complex than necessary. The hard part is deciding what is necessary, not whether we should be complex.

replies(1): >>45073666 #
26. Sesse__ ◴[] No.45073247{5}[source]
That “with compatible routing” thing pulls a lot of weight… I mean, if you have literal magic, then sure.

Apart from that, IPv6 _is_ IPv4 with a bigger address space. It's so similar it's remarkable.

27. mpweiher ◴[] No.45073305{3}[source]
And in the context of XP, which is where DTSTTCPW comes from:

The one you can use right now in order to get feedback from real world use, which will be much better at guiding you in improving the solution than what you thought was "best" before you had that feedback.

Real world feedback is the key. Get there as quickly as feasible, then iterate with that.

28. fmbb ◴[] No.45073504{3}[source]
A giant ball of copypasted Terraform is not the simplest thing that could possibly work.

Adding the runtime complexity and maintenance work for a new database server is not a small decision.

replies(1): >>45074359 #
29. PickledJesus ◴[] No.45073666[source]
Thank you, I was trying to put a finer point on what I disagreed with in that comment but that's better than I'd have done. It's like saying "just pick the best option"
30. strogonoff ◴[] No.45073722[source]
The apparent discrepancy between “the simplest thing possible” and “the best solution” only exists if we forget that a product exists in time. If the goal is not just an app that works today but something that won’t break tomorrow, that changes what the simplest thing is. If what seems like the simplest thing makes the product difficult to maintain, pulls in many poorly vetted dependencies, etc., then it is not really the simplest thing anymore.

When this is accounted for, “the simplest thing” approaches “the best solution”.

31. devnullbrain ◴[] No.45073767{3}[source]
Then the title just means 'do the right thing' and has no value.
replies(1): >>45075349 #
32. xorcist ◴[] No.45073948{5}[source]
In a place with even less IPv6 adoption, probably. It's not like similar proposals weren't discussed, and there's no need to rehash the exact same discussion again.

The problem quickly becomes "how do you route it," and that's where we end up with something like today's IPv6. Route aggregation and PI addresses are impractical with IPv4 + extra bits.

The main change from v4 to v6, besides the extra bits, is that some unnecessary complexity was dropped, which in the end is a net positive for adoption.

33. kmacdough ◴[] No.45074205{4}[source]
IPv4 vs IPv6 seems like a great example of why to keep it simple. Even given decades to learn from the success of IPv4 and almost a decade of design and refinement, IPv6 has flopped hard, not so much because of limitations of IPv4, but because IPv6 isn't backwards compatible and created excessive hardware requirements that basically require an entirely parallel IPv6 routing infrastructure to be maintained alongside the IPv4 infrastructure, which isn't going away soon. It solved problems too far ahead, problems we aren't having.

As is, IPv4's simplicity got us incredibly far, and it turns out NAT and CIDR have been quite effective at alleviating address exhaustion. With some address reallocation and future protocol extensions, it's looking entirely possible that a successor was never needed.

34. ◴[] No.45074359{4}[source]
35. sigseg1v ◴[] No.45074818[source]
Well said.

An example I encountered was someone taking the "KISS" approach to enterprise reporting and ETL requirements. No need to make a layer between their data model and the data given to customers, and no need to make a separate replica of the server or DB to serve these requests, as those would be complex.

This failed in more ways than I can count. The system instantly became deeply ingrained in all customer workflows, but consumers connected via PowerBI: hundreds of non-technical users with bespoke reports. If an internal column name or the structure of the data model changed so that devs could evolve the platform, users just got a generic "Query Failed" error and lit up the support team. Technical explanations about needing to modify their queries were totally lost on the end users; they just wanted the dev team to fix it. Pagination, request-complexity limiting, indexes, and request rate limiting got no consideration either, because those were not deemed simple. But they cannot be added without breaking changes, because a non-technical user will not understand what to do when their Excel report gets rate-limited on 29 of the 70 queries it launches per second. And no one worried about taking prod OLTP databases down with OLAP workloads overloading them.

All in all, that system was simple, took about 2 weeks to build, and was rapidly adopted into critical processes, and then the team responsible left. It took the remaining team members a bit over 2 years to fix it by redesigning it and hand-holding non-technical users all the way down to fixing their own Excel sheets. It was a total nightmare caused by wanting to keep things simple, when really this needed: heavy abstraction models, database replicas, infrastructure scaling, caching, rewriting lots of application logic to make data presentable where needed, index tuning, automated generation of large datasets for testing, automated load tests, release process management, versioning strategies, documentation and communication processes, and deprecation policies. They thought they could avoid months of work and keep it simple, and instead caused years of mess, because making breaking changes is extremely difficult once you have wide adoption.
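For contrast, a minimal sketch of the layer that was skipped (all names hypothetical): a versioned public contract that maps stable column names onto the internal schema, so an internal rename becomes a one-line change here instead of a broken PowerBI report.

    # Public report columns are frozen per version; internal names may change.
    REPORT_V1 = {
        "customer_id": "cust_uuid",        # public name -> internal column
        "order_total": "ord_total_cents",
        "placed_at":   "created_ts_utc",
    }

    def report_v1_sql(table="orders"):
        cols = ", ".join(f"{src} AS {pub}" for pub, src in REPORT_V1.items())
        # Serve this from a read replica, never the prod OLTP database.
        return f"SELECT {cols} FROM {table}"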

replies(1): >>45075434 #
36. MetaWhirledPeas ◴[] No.45074955[source]
Exactly.

And to address something the GP said:

> I am still shocked by the required complexity

Some of this complexity becomes required through earlier bad decisions, where the simplest thing that could possibly work wasn't chosen. Simplicity up front can reduce complexity down the line.

37. quietbritishjim ◴[] No.45075349{4}[source]
No, it means don't (usually) over-engineer a solution for a larger scale than you can be sure you'll need. If you don't see the value in that, then you haven't worked with enough junior developers!
replies(1): >>45078665 #
38. gr4vityWall ◴[] No.45075434{3}[source]
While I tend to agree with your position, it sounds like they built a system in less than 2 weeks that was immediately useful to the organization. That sounds like a win to me, and makes me wonder if there were other ways in hindsight that such a system could evolve.

>They thought that we could avoid months of work and keep it simple and instead caused years of mess because making breaking changes is extremely difficult once you have wide adoption.

Right. Do you think a middle ground was possible? Say, a system that took 1 month to build instead of two weeks, but with a few more abstractions to help with breaking changes in the future.

Thanks for sharing your experience btw, always good to read about real world cases like this from other people.

replies(1): >>45076245 #
39. motorest ◴[] No.45076245{4}[source]
> While I tend to agree with your position, it sounds like they built a system in less than 2 weeks that was immediately useful to the organization. That sounds like a win to me, and makes me wonder if there were other ways in hindsight that such a system could evolve.

I don't think this is an adequate interpretation. Quick time to market doesn't mean the half-baked MVP is the end result.

An adequate approach would be to include work on introducing the missing abstraction layer as technical debt to be paid right after launch. You deliver something that works in 2 weeks and then execute the remaining design as follow-up work. This is what technical debt represents, and why the "debt" analogy fits so well. Quick time to market doesn't force anyone to put together half-assed designs.

40. mcny ◴[] No.45076449{4}[source]
So effectively the company was stealing people's pay
replies(1): >>45087012 #
41. sevensor ◴[] No.45076667{5}[source]
The best way to think a few steps ahead is to make as much of your solution disposable as possible. I optimize for ease of replacement over performance or scalability. This means that my operating assumption is that everything I’m doing is a mistake, so it’s best to work from a position of being able to throw it out and start over. The result is that I spend a lot of time thinking about where the seams are and making them as simple as possible to cut.
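A minimal sketch of such a seam (hypothetical names): callers depend on a tiny interface, so everything behind it stays disposable.

    import json
    from typing import Protocol

    class EventStore(Protocol):
        # The seam: this is all that callers are allowed to know.
        def append(self, event: dict) -> None: ...
        def read_all(self) -> list[dict]: ...

    class FileEventStore:
        # Today's throwaway implementation; replacing it with a real
        # database later cuts along the seam, not through the callers.
        def __init__(self, path: str):
            self.path = path

        def append(self, event: dict) -> None:
            with open(self.path, "a") as f:
                f.write(json.dumps(event) + "\n")

        def read_all(self) -> list[dict]:
            with open(self.path) as f:
                return [json.loads(line) for line in f]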
42. scarface_74 ◴[] No.45076856{4}[source]
Wouldn’t the simplest thing possible in that case be to just use one of the many SaaS payroll services? If the second largest employer in the US can use ADP, I’m almost sure your company could.
replies(1): >>45086972 #
43. yazantapuz ◴[] No.45078662{4}[source]
But maybe a 128-bit identifier was not the best choice when the IPv4 successor was in the works... maybe 64?
44. devnullbrain ◴[] No.45078665{5}[source]
Well hold on, we're going in circles here

>In the scenario you've described, the technology is not working, in the complete sense including business requirements of reasonable operating costs.

In the parent comment's reasonable premise, they wouldn't be sure of what they would need.

45. devnullbrain ◴[] No.45078684{5}[source]
Intel followed this strategy with the mobile market to what is apparently terminal fucking.
replies(1): >>45080862 #
46. sgjohnson ◴[] No.45079503[source]
The title is not “Do the simplest thing POSSIBLE”. It’s “Do the simplest thing that could POSSIBLY work”.

There’s a HUGE difference between the simplest thing possible, and the simplest thing that could possibly work.

The simplest thing that could possibly work conveniently lets you forget about the scale. The simplest thing possible does not.

47. lelanthran ◴[] No.45080862{6}[source]
> Intel followed this strategy with the mobile market to what is apparently terminal fucking.

And they followed the alternative with Itanium, and look how that turned out.

48. tonyarkles ◴[] No.45086972{5}[source]
I left out some details; it wasn’t only payroll, there were some other staff-management aspects to it. But overall the answer about using ADP for this particular situation is: no.

Not strictly for technical reasons, but definitely for political ones. The client was potentially the largest organization in my province (state-run healthcare). Outsourcing payroll and scheduling, with the potential of breaking the rules in the contracts with the multiple union stakeholders, was a complete non-starter. Plus the idea of needing to do layoffs within the payroll department was pretty unpalatable.

49. tonyarkles ◴[] No.45087012{5}[source]
Heh, to make it more fun… it wasn’t actually clear if they were overpaying or underpaying. Underpaying is actually a lot easier to deal with than overpaying. If you underpay someone, the easy solution is to write a cheque and include interest and/or some other form of compensation for the error.

If you overpay someone… getting that money back is a challenge.

To make it more complicated still, there was an element of “we’re not sure if we overpaid or underpaid” but there was also an element of “we gave person X an overtime shift but person Y was entitled to accept or deny that shift before person X would have even had an opportunity to take it”. That’s even harder to compensate for.

replies(1): >>45087401 #
50. mcny ◴[] No.45087401{6}[source]
Thank you for the reply. I was only commenting that wage theft is still wage theft even when there is no malicious intent. Clearly, reality is much more nuanced.
51. codethief ◴[] No.45089116[source]
> the key is that each step should be simple

In other words, every time you optimize only locally and in a single dimension, and potentially walk very far away from a global optimum. I have worked on such systems before. Every single step in and of itself was simpler (and also faster, less work) than doing a refactoring (to keep the overall resulting system simple), so we never dared do the latter. Unfortunately, over time this meant that every new step incurred additional costs due to all the accidental complexity we had accumulated. Time to finally refactor and do things the right way, right? No. Because the cost of refactoring had also kept increasing with every additional step we took and every feature we patched on. At some point no one really understood the whole system anymore. So we just kept piling things on top of each other and prayed they would never come crashing down on us.

Then one day, business decided the database layer needed to be replaced for licensing reasons. Guess which component had permeated our entire code base, because we never got around to doing that refactoring and never implemented proper boundaries and interfaces between the database, business, and view layers. So what could have been a couple of months of migration work ended up being more than four years of rewriting the entire application from scratch.
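For what it's worth, the missing boundary can be sketched in a few lines (hypothetical names, assuming a DB-API-style connection): business and view code depend on a small repository interface, and only one adapter knows the vendor's driver, so a forced database swap stays a months-scale migration instead of a rewrite.

    from typing import Protocol

    class OrderRepository(Protocol):
        # The boundary that was never built: nothing outside the
        # adapter below may touch the database driver directly.
        def get(self, order_id: int) -> dict: ...
        def save(self, order: dict) -> None: ...

    class LicensedDbOrderRepository:
        # The only module that knows the vendor's driver and SQL dialect.
        def __init__(self, conn):
            self.conn = conn

        def get(self, order_id: int) -> dict:
            row = self.conn.execute(
                "SELECT id, total FROM orders WHERE id = ?",
                (order_id,)).fetchone()
            return {"id": row[0], "total": row[1]}

        def save(self, order: dict) -> None:
            self.conn.execute(
                "INSERT OR REPLACE INTO orders (id, total) VALUES (?, ?)",
                (order["id"], order["total"]))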