1070 points dondraper36 | 3 comments
spectraldrift No.45069471
I agree with the spirit of the article, but I think the definition of "simple" has been inverted by modern cloud infrastructure. The examples create a false choice between a "simple but unscalable" system and a "complex but scalable" one. That is rarely the trade-off today.

The in-memory rate-limiting example is a perfect case study. An in-memory solution is only simple for a single server. The moment you scale to a second instance, the logic breaks: each instance keeps its own counters, so your effective rate limit becomes N × the configured limit. You've accidentally created a distributed-state problem, which is a much harder issue to solve. That isn't simple.
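To make the failure mode concrete, here is a minimal sketch of one way such an in-memory limiter might look (a fixed-window counter; the class name and key scheme are illustrative, not from the article):

```python
import time

class InMemoryRateLimiter:
    """Fixed-window counter held in process memory (illustrative sketch)."""

    def __init__(self, limit: int, window_seconds: float = 1.0):
        self.limit = limit
        self.window = window_seconds
        self.counts = {}  # key -> (window start, request count)

    def allow(self, key: str) -> bool:
        now = time.monotonic()
        start, count = self.counts.get(key, (now, 0))
        if now - start >= self.window:
            start, count = now, 0  # window rolled over; reset the counter
        if count >= self.limit:
            return False
        self.counts[key] = (start, count + 1)
        return True
```

The state lives in one process's memory. Run N replicas behind a load balancer and each one happily admits `limit` requests per window, so clients see N × limit in aggregate.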

Compare that to using a managed service like DynamoDB or ElastiCache. It provides a single source of truth that works correctly for one node or a thousand. By the author's own definition that "simple systems are stable" and require less ongoing work, the managed service is the fundamentally simpler choice. It eliminates problems like data loss on restart and the need to reason about distributed state.
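For contrast, a sketch of the centralized version, assuming the redis-py client and a Redis-compatible endpoint such as ElastiCache (the hostname and key prefix are placeholders):

```python
import time

import redis  # redis-py client, assumed installed

r = redis.Redis(host="my-cache.example.com", port=6379)  # placeholder endpoint

def allow(key: str, limit: int, window_seconds: int = 1) -> bool:
    # One atomic INCR per request: every app server increments the same
    # counter, so the limit holds for one node or a thousand.
    bucket = f"ratelimit:{key}:{int(time.time() // window_seconds)}"
    count = r.incr(bucket)
    if count == 1:
        r.expire(bucket, window_seconds * 2)  # stale buckets expire on their own
    return count <= limit
```

The same pattern works with an atomic counter update in DynamoDB; the point is that the source of truth is external and shared.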

Perhaps the definition of "the simplest thing" has just evolved. In 2025, it's often not about avoiding external dependencies; you will frequently save time by leveraging battle-tested managed services that handle complexity and scale on your behalf.

replies(1): >>45069755 #
1. dasil003 No.45069755
I don't think this is particular to cloud infrastructure. Even on a single server you could make the same argument about using a flat file vs sqlite vs postgres for storage. Yes, there is a lot of powerful and reusable software, both managed and unmanaged, with good abstractions and great power-to-weight ratios, where you pay a very small complexity cost for an incredible amount of capability. Such is the nature of software.
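To illustrate that power-to-weight point with the sqlite option: Python's stdlib sqlite3 buys a full transactional SQL engine for a few lines (the table and data here are made up for the example):

```python
import sqlite3

# A complete, durable SQL database in one file, no server process to run.
conn = sqlite3.connect("events.db")
conn.execute("CREATE TABLE IF NOT EXISTS events (name TEXT, ts REAL)")
with conn:  # implicit transaction: commits on success, rolls back on error
    conn.execute("INSERT INTO events VALUES (?, ?)", ("signup", 1725000000.0))
print(conn.execute("SELECT COUNT(*) FROM events").fetchone()[0])
```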

But all of it comes with tradeoffs and you have to apply judgement. Just as it would be foolish to write almost anything these days in assembly, I think it would be almost as foolish to just default to a managed Amazon service because it scales, without considering whether A) you actually need that scale and B) there are other considerations as to why that service might not be the best technical fit (in particular, I've heard regrets due to overzealous adoption of DynamoDB on more than one occasion).

replies(1): >>45070077 #
2. spectraldrift No.45070077
You make a good point about experience. I've noticed an interesting paradox there.

The engineers who most aggressively advocate for bespoke solutions in the name of "simplicity" often have the least experience with their managed equivalents, which can lead to the regrets you mentioned. Conversely, many engineers who only know how to use managed services would struggle to build the simple, self-contained solution the author describes. True judgment requires experience with both worlds.

This is also why I think asking "do we actually need this scale?" is often the wrong question; it requires predicting the future. Since most solutions work at a small scale, a better framework for making a trade-off is:

* Scalability: Will this work at a higher scale if we need it to?

* Operations: What is the on-call and maintenance load?

* Implementation: How much new code and configuration is needed?

For these questions, managed services frequently have a clear advantage. The main caveat is cost-at-scale, but that’s a moot point in the context of the article's argument.

replies(1): >>45073825 #
3. sethammons No.45073825
I boil it down to:

How will this scale? How will this fail?

I like to be able to answer these questions from designs down to code reviews. If you hit a bottleneck or issue, how will you know?