52 points by birdculture | 4 comments
1. sunrunner
I've been writing up a similar piece for my own blog (as much to collect my own thoughts as anything) that touches on this idea, particularly as it applies to shared code/modules and re-usable components in general, as well as any kind of templater or builder-type tool: the costs of over-eager abstraction, sharing, and re-use, and when (if ever) to pivot to them for a net positive result.

As it's only a draft piece at the moment I'll lay out some of the talking points:

- All software design and structure decisions have trade-offs (there's no value without some kind of cost; really we're just shifting what or where the cost is to a place we find acceptable)

- 'Don't Repeat Yourself' is taught as good engineering practice, but you should think about when repeating yourself is the better choice; don't accept social-proof or appeal-to-authority arguments without solid experience behind them

- There is a difference between things that are actually the same (or should be, for consistency, such as domain facts and knowledge) and things that happen to be the same at the time of creation but are only that way by coincidence (see the sketch after this list)

- Effective change almost always (if not always) comes from actual, specific use-cases; a reusable component not derived from those cases cannot demonstrate them

- Re-usable components themselves are not necessarily deployed or actually used, so by definition can't drive their own growth

- If they are deployed, it's N+1 things to maintain, and if you can't maintain N how are you going to maintain N+1?

- The costs of creation and ongoing maintenance: quite simply, there's a cost to doing something and doing it well, and if it costs more to develop than the value gained, it's a net loss

- Components/modules that live alongside their use-cases get tested naturally and stay grounded in specific needs; extracting them removes the opportunity for organic use-cases to shape them

- What happens when we re-use components to allow easy upgrades but then pin versions for stability? You still have to update N places. The best-case scenario is that you still update N places but the work for each of the N is minimised

- Creating an abstraction without enough variety of uses, in both location and kind (a single use-case is essentially a layer that adds no value)

- Inherent contradictions in software design principles: you're taught to 'avoid coupling', but any shared component is by definition coupled to everything that uses it. The value of duplication is that it supports independent growth or change

- The cost of service templates and/or builders (simple templated text or entire builder-type tools that need to be maintained and used just to bootstrap something); these almost never keep working for you after creation to support updates

- The cost of optimising for fast up-front creation (if you're doing this a lot, maybe you have a different problem) over supporting long-term maintenance

- The value of friction: friction that makes you question whether a 'new thing' is even needed is arguably a useful screening step in design analysis; having to do real work to make something shared helps you judge whether it's worth doing, because the costs become apparent up front; this frames friction as a way of avoiding things that look easy or cost-free but aren't in the long term

- As a project lives longer, any fixed up-front creation time diminishes to a minuscule fraction of the overall time spent

- Continuous, long-term drift detection (and update assistance) is more powerful and useful than a one-time up-front bootstrap saving for any project with a long enough lifetime
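
Since the 'actually the same versus the same by coincidence' point is the one the rest of the draft hangs off, here's a rough sketch of what I mean (all names hypothetical, not from any real codebase):

```python
# Actually the same: a domain fact. Every caller must agree on it, so
# duplicating it would let the copies drift apart and break correctness.
CENTS_PER_DOLLAR = 100  # domain knowledge; shared on purpose

def to_cents(dollars: float) -> int:
    return round(dollars * CENTS_PER_DOLLAR)

def to_dollars(cents: int) -> float:
    return cents / CENTS_PER_DOLLAR

# Coincidentally the same: two limits that merely happen to share a
# value today. Each can change for unrelated reasons, so hoisting the
# 20 into one shared constant would couple two independent decisions.
def validate_username(name: str) -> bool:
    return 0 < len(name) <= 20  # a product decision about usernames

def validate_team_name(name: str) -> bool:
    return 0 < len(name) <= 20  # a separate decision, same value by chance
```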

2. ajanuary
> - There is a difference between things that are actually the same (or should be, for consistency, such as domain facts and knowledge) and things that happen to be the same at the time of creation but are only that way by coincidence

For my money, this is the key point that people miss.

A test I like to use for whether two things are actually or just incidentally related is to think about “if I repeat this, and then change one but not the other, what breaks?”

Often the answer is that something will break. If I repeat how a compound id “<foo>-<bar>” is constructed at both the insert and the lookup, and then change the insert to “<foo>::<bar>” but not the lookup, I’m not going to be able to find anything. If I duplicate some complicated domain logic and fix a bug in one place but not the other, I’ve still got the bug, but now it’s probably harder to track down. In these cases the duplication has introduced risk, and I need to weigh that risk against the cost of introducing an abstraction.
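
To make the compound-id case concrete, here's a hypothetical sketch with a toy in-memory store (the names and the id format are invented for illustration):

```python
store: dict[str, str] = {}

def insert(foo: str, bar: str, value: str) -> None:
    store[f"{foo}-{bar}"] = value       # copy 1 of the id format

def lookup(foo: str, bar: str) -> str | None:
    return store.get(f"{foo}-{bar}")    # copy 2 of the id format

insert("user", "42", "Alice")
assert lookup("user", "42") == "Alice"  # fine while the copies agree

# Change copy 1 to f"{foo}::{bar}" without changing copy 2 and every
# lookup silently returns None: the copies were actually related, so
# this duplication carries real risk, and a shared helper that builds
# the id in one place earns its keep.
```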

If I have a unit test `insert(id=1234); item = fetch(id=1234); assert item is not nil` and I change one id but not the other, the test will fail.

But if I have two separate unit tests that both happen to use the same id 1234, and I change one but not the other, absolutely nothing breaks. They aren’t actually related; they’re just incidentally the same.
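
Spelled out as pytest-style tests (hypothetical again, with a toy insert/fetch so the sketch runs standalone):

```python
store: dict[int, str] = {}

def insert(id: int, value: str = "x") -> None:
    store[id] = value

def fetch(id: int) -> str | None:
    return store.get(id)

def test_insert_then_fetch():
    # These two 1234s are actually related: change one but not the
    # other and this test fails, so sharing a variable is justified.
    insert(id=1234)
    item = fetch(id=1234)
    assert item is not None

def test_fetch_unknown_id():
    store.clear()  # isolate from the other test's state
    # This 1234 only happens to match the ones above. Changing it to
    # 9999 breaks nothing, so pulling them all into a shared constant
    # would manufacture a relationship that doesn't exist.
    assert fetch(id=1234) is None
```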

3. sunrunner
> A test I like to use for whether two things are actually or just incidentally related is to think about “if I repeat this, and then change one but not the other, what breaks?”

I really like this question as a way of figuring out whether things merely happen to look the same or actually have to be the same for correctness, and it feels like an easy question to answer concretely without being led down the path of 'well, we might need this as a common component in the future'.

I also think you can frame it as a question of same value versus same identity.
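
By which I mean something like this (hypothetical names, just to illustrate the framing):

```python
# Same identity: both uses refer to the one underlying business rule,
# so they should share a single definition and change together.
FREE_SHIPPING_THRESHOLD = 50.00

def qualifies_for_free_shipping(total: float) -> bool:
    return total >= FREE_SHIPPING_THRESHOLD

def shipping_banner() -> str:
    return f"Free shipping on orders over ${FREE_SHIPPING_THRESHOLD:.2f}"

# Same value: these merely coincide at 50 today. Each can change
# independently, so sharing one name would assert an identity that
# isn't there.
MAX_SEARCH_RESULTS = 50   # a pagination choice
MAX_USERNAME_LENGTH = 50  # an unrelated validation choice
```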

4. jwarden
This reminds me of the philosophical distinction between "sense" and "reference" introduced by Frege.

https://www2.lawrence.edu/fast/ryckmant/On%20Sense%20and%20R...