466 points 0x63_Problems | 16 comments
1. vander_elst ◴[] No.42138032[source]
"Companies with relatively young, high-quality codebases"

I thought that at the beginning the code might be a bit messy because of the need to iterate fast, and that quality comes with time. What's the experience of the crowd on this?

replies(9): >>42138075 #>>42138094 #>>42138186 #>>42138274 #>>42138314 #>>42138387 #>>42138735 #>>42139575 #>>42144797 #
2. dkdbejwi383 ◴[] No.42138075[source]
I don't think there's such a thing as a single metric for quality - the code should do what is required at the time and scale. At the early stages you can get away with inefficient things that are faster to develop and iterate on. Then, when you reach the scale where you have thousands of customers and find that your problem is data throughput or whatever, and not speed of iteration, you can break that apart and make a more complex beast of it.
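
A toy sketch of that trade-off (the data shapes and helper functions here are hypothetical, just to illustrate the point):

```python
# Early on: the simplest thing that works - one lookup per item, easy to
# write and easy to change, fine while you have a handful of customers.
def enrich_orders_naive(orders, fetch_customer):
    enriched = []
    for order in orders:
        customer = fetch_customer(order["customer_id"])  # one round trip per order
        enriched.append({**order, "customer": customer})
    return enriched

# Later, when data throughput is the real problem: batch the lookups and
# accept the extra complexity, because iteration speed is no longer the
# constraint.
def enrich_orders_batched(orders, fetch_customers_bulk):
    ids = {order["customer_id"] for order in orders}
    customers = fetch_customers_bulk(ids)  # single bulk lookup, e.g. one query
    return [{**order, "customer": customers[order["customer_id"]]} for order in orders]
```

The first version is the one you want while you're still figuring out the product; the second only earns its complexity once the per-item round trips are actually the bottleneck.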

You gotta make the right trade-off at the right time.

replies(1): >>42138224 #
3. skydhash ◴[] No.42138094[source]
Some frameworks like Laravel can bring you far in terms of features. You're mostly gluing stuff together on top of a high-quality codebase. It gets ugly when you need to add all the edge cases that every real-world use case entails. And suddenly you have hundreds of lines of if statements in one method.
4. nyrikki ◴[] No.42138186[source]
It depends purely on whether or not a culture that values leaving options open in the future develops.

Young companies tend to have systems that are small enough, or enough institutional knowledge, to pivot when needed, and tend to have small teams with good lines of communication that allow for a shared purpose and values.

Architectural erosion is typically a long-tailed problem.

Large legacy companies that can avoid architectural erosion do better than some startups that don't actively target maintainability, but it tends to require a stronger commitment from leadership than most orgs can sustain.

In my experience most large companies confuse the need to maintain adaptability with a need to impose silly policies that are applied irrespective of the long-term impacts.

Integration and disintegration drivers are too fluid, context-sensitive, and long-term to prescribe at a central layer.

The possibly mythical Amazon API edict is an example where separation and product focus could work, at a high cost if you never reach the scale where it pays off.

The runways-and-guardrails concept has seemed to be a good thing at the clients I have worked for.

5. nyrikki ◴[] No.42138224[source]
This!

Active tradeoff analysis and a structure that allows for honest reflection on current needs is the holy grail.

Choices are rarely about what is best and are rather about finding the least worst option.

6. AnotherGoodName ◴[] No.42138274[source]
I find messiness often comes from capturing every possible edge case, which a young codebase probably doesn't do yet, tbh.

A user deleted their account and there's now a request to register that username again? We didn't think of that (concerns from UX about impersonation and abuse need to be handled). Better code in a check to catch and handle this. Do this 100 times and your code has 100 pieces of custom branching logic that potentially interact in n^2 ways, since each exceptional event could probably occur in conjunction with other exceptional events.
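
A minimal sketch of what I mean (the checks and names here are made up for illustration, not from any particular codebase):

```python
# Hypothetical sketch of how edge-case handling piles up in one code path.
# Each branch gets added after a real incident; together they start to
# interact in ways the original happy path never anticipated.
def register_username(username, reserved, banned_words, recently_deleted):
    if username in reserved:
        raise ValueError("reserved username")
    if any(word in username.lower() for word in banned_words):
        raise ValueError("abusive username")
    if username in recently_deleted:
        # Added after the "deleted account re-registered by an imposter" case:
        # hold the name for a cooling-off period instead of releasing it.
        raise ValueError("username was recently deleted; try again later")
    # ...dozens more branches like these, each reasonable on its own,
    # each potentially interacting with all the others.
    return {"username": username, "status": "created"}
```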

It's why I caution strongly against rewrites. It's easy to look at code and say it's too complex for what it does, but is the complexity actually needless? Can you think of a way to refactor the complexity out? If so, do that refactor; if not, a rewrite won't solve it.

replies(1): >>42141613 #
7. happytoexplain ◴[] No.42138314[source]
A startup with talent theoretically follows that pattern. If you're not a startup, you don't need to go fast in the beginning. If you don't have talent in both your dev team and your management, the codebase will get worse over time. Every company can differ on those two variables, and their codebases will reflect that. Probably most companies are large and talent-starved, so they go slow, start out with good code, then get bad over time.
8. RangerScience ◴[] No.42138387[source]
IME, “young” correlates with health b/c less time has been spent making it a mess… but, what’s really going on is the company’s culture and how it relates to quality work, aka, whether engineers are given the time to perform deep maintenance as the iteration concludes.

Maybe… to put it another way, it’s that time spent on quality isn’t time spent on discovery, but it’s only time spent on quality that gets you quality. So while a company is heavily focused on discovery - iteration, p/m fit, engineers figuring it out, etc - it’s not making a good codebase, and if they never carve out time to focus on quality, that won’t change.

That’s not entirely true - IMO, there’s a synergistic, not exclusionary relationship between the two - but it gets the idea across, I think.

9. JohnFen ◴[] No.42138735[source]
> what's the experience of the crowd on this?

It's very hard to retrofit quality into existing code. It really should be there from the very start.

10. randomdata ◴[] No.42139575[source]
In my experience you need a high quality codebase to be able to iterate at maximum speed. Any time someone, myself included, thought they could cut corners to speed up iteration, it ended up slowing things down dramatically in the end.

Coding haphazardly can be a lot more thrilling, though! I certainly don't enjoy the process of maintaining high quality code. It is lovely in hindsight, but an awful slog in the moment. I suspect that is why startups often need to sacrifice quality: The aforementioned thrill is the motivation to build something that has a high probability of being a complete waste of time. It doesn't matter how fast you can theoretically iterate if you can't compel yourself to work on it.

replies(1): >>42140206 #
11. RangerScience ◴[] No.42140206[source]
> thought they could cut corners to speed up iteration

Anecdotally, I find you can get about 3 days of speed from cutting corners - after that, as you say, you get slowed down more than you got sped up. First day, you get massive speed from going haphazard; second day, you're running out of corners to cut, and on the third day you start running into problems you created for yourself on the first day.

replies(1): >>42140494 #
12. stahorn ◴[] No.42140494{3}[source]
A piece of advice I heard many years ago was to not be afraid to throw away code. I've actually used that advice from time to time. It's not really a waste of time to do a `git reset --hard master` if you wrote shit code but, while writing it, figured out how you should have written it.
replies(1): >>42143040 #
13. unregistereddev ◴[] No.42141613[source]
I agree. New codebases are clean because they don't have all the warts of accumulated edge cases.

If the new codebase is messy because the team is moving fast as the parent describes, that means the dev team is doing sloppy work in order to move fast. That type of speed is very short-lived, because it's a lot harder to add 100 bugfixes to an already-messy codebase.

14. Groxx ◴[] No.42143040{4}[source]
Very much yes.

There's little reason to try to go straight for the final product when you don't know exactly how to get there, and that's frequently the case. Build toys to learn what you need efficiently, toss them, and then build the real thing. Trying to shoot for the final product while also changing direction multiple times along the way tends to create code with multiple conflicting goals subtly encoded in it, and it'll just confuse you and others later.

replies(1): >>42150295 #
15. torginus ◴[] No.42144797[source]
My experience is that once success comes, the business decides to quickly scale up the company - tons of people are hired, most of them without any relevant experience (or indeed giving a hoot). Rigid management structures are created, inhabited by social climbers. A lot of the original devs leave, etc.

That's the point when a ton of uninterested, inexperienced, and less carefully handpicked people start pushing code in - driven not by the need to build good software, but by the need to close Jira tickets.

This invariably results in stagnating productivity at best, with upper management wondering why they often aren't even delivering at the pre-expansion level, let alone at the level that would be expected of 3x the headcount.

16. RangerScience ◴[] No.42150295{5}[source]
Came across the idea of a "probe" (a while ago) as a name for this.