602 points hhutw | 6 comments
1. snvzz ◴[] No.45640659[source]
Fools admire complexity.
replies(2): >>45640922 #>>45644448 #
2. 9dev ◴[] No.45640922[source]
There’s complication, and there’s complexity. Fools admire complication, engineers design solutions to complex problems. This is a diagram explaining the latter.
replies(1): >>45641170 #
3. KronisLV ◴[] No.45641170[source]
I think it was put pretty well by the distinction between accidental complexity (of which you want as little as possible) and essential complexity, which is inherent to the problem domain you're working in and which there is no way around.

The same thing can sometimes fall into either category - like going for a microservices architecture when you only need to serve about 10'000 clients in total, versus actually needing to serve a lot of concurrent requests at any given time.

replies(1): >>45642531 #
4. matu3ba ◴[] No.45642531{3}[source]
> inherent to the problem domain that you're working with and which there is no way around for

I'd phrase it as the reasonable trade-offs taken to support customers/users and/or sell products.

> going for a microservices architecture when you need to serve about 10'000 clients

So far the only use case for microservices I'm aware of is shipping/releasing fast at the cost of technical debt (a non-synchronized master). As I understand it, that is to a large degree due to git's shortcomings, with no more efficient/scalable replacement in sight. Can you explain further use cases where they are a technical necessity?

replies(1): >>45648563 #
5. yjftsjthsd-h ◴[] No.45644448[source]
Where/how would you simplify it without losing features?
6. KronisLV ◴[] No.45648563{4}[source]
1. When you have a system where each component has significant complexity, to the point where as a monolith you'd have a 1M SLoC codebase that is slow to work with - slow to compile, slow to deploy, slow to launch - and eats a lot of resources. At that point chances are the business domain is so large that splitting it up into smaller pieces wouldn't be out of the question.

2. Sometimes you have vastly different workloads: a bunch of parts of the system (maybe even the majority) that are just CRUD, and a small part that needs to deal with message queues, digitally sign documents, generate reports, or do PDF/Word/Excel/whatever processing. You could do this with a modular monolith, but sometimes it's better to keep your project's dependencies clean - especially when there are subpar libraries you have to use - so you can at least put all of the bullshit in one place. The same applies when the load or stability of that one eccentric part is in question.

3. The tech stack might also differ quite a bit for some of those use cases. For example, if you need to process all sorts of binary data formats (e.g. satellite data), do specific kinds of number crunching, or interact with LLMs, a lot of the time Python will be a really good fit, while it might not be what you use throughout the rest of your stack. So you pick the right tool for the job and keep it as a separate service (a rough sketch of what that split can look like follows after this list).

4. The good old org chart: sometimes you'll just have different teams with different approaches working on different parts of the business domain - you already said that, but Conway's law is very much a thing, and it'd be silly to fight against it all that much, because then you'd end up with an awkward monorepo, and the tooling to make working with it easy might just not be available to you.
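
To make point 3 concrete, here's a minimal sketch (standard-library Python only, all names hypothetical) of keeping the eccentric workload behind its own tiny HTTP service, so the main CRUD application never has to pull in the heavy or subpar libraries:

    # Hypothetical report service, isolated from the main CRUD application.
    # Only the standard library is used here; in practice this is where the
    # heavyweight PDF/Excel/number-crunching dependencies would live.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    def build_report(payload: dict) -> bytes:
        # Placeholder for the actual heavy lifting (PDF rendering, etc.).
        lines = [f"{key}: {value}" for key, value in sorted(payload.items())]
        return ("REPORT\n" + "\n".join(lines)).encode("utf-8")

    class ReportHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            payload = json.loads(self.rfile.read(length) or b"{}")
            body = build_report(payload)
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; charset=utf-8")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8081), ReportHandler).serve_forever()

The main application just POSTs JSON to this endpoint and gets a rendered report back; if the report service misbehaves, the CRUD part stays up, and the report dependencies never leak into the rest of the codebase.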