
1401 points by alankay | 6 comments

This request originated via recent discussions on HN, and the forming of HARC! at YC Research. I'll be around for most of the day today (through the early evening).
di (No.11940134)
Hi Alan,

In "The Power of the Context" (2004) you wrote:

  ...In programming there is a wide-spread 1st order
  theory that one shouldn’t build one’s own tools,
  languages, and especially operating systems. This is
  true—an incredible amount of time and energy has gone
  down these ratholes. On the 2nd hand, if you can build
  your own tools, languages and operating systems, then
  you absolutely should because the leverage that can be
  obtained (and often the time not wasted in trying to
  fix other people’s not quite right tools) can be
  incredible.
I love this quote because it justifies a DIY attitude of experimentation, reverse engineering, and so on that I think we could generally use more of.

However, more often than not, I find the sentiment paralyzing. There's so much one could learn to build oneself, but as things become more and more complex, one has to make a rational tradeoff between spending the time and energy in the rathole or not. I can't spend all day rebuilding everything simply because I can.

My question is: how does one decide when to DIY, and when to use what's already been built?

replies(6): >>11940184 #>>11940254 #>>11940350 #>>11940433 #>>11940618 #>>11943999 #
1. jjnoakes (No.11940254)
I tend to do both in parallel and the first one done wins.

That is, if I have a problem that requires a library or program and I don't know of one, I semi-simultaneously search for one that already exists (scanning forums, googling around, reading Stack Overflow, searching GitHub, checking the package repositories for the languages I care about, etc.) and, in parallel, try to formulate in my mind what the ideal solution to my particular problem would look like.

As time goes by, I get closer to finding a good enough library/program and closer to being able to picture what a solution would look like if I wrote it.

At some point I either find what I need (it's good enough or it's perfect) or I get to the point where I understand enough about the solution I'm envisioning that I write it up myself.
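The race between the two tracks can be sketched as a small decision function. This is purely illustrative: the fitness scores, the clarity measure, and the `good_enough` threshold are hypothetical stand-ins for the judgment calls described above, not anything quantifiable in practice.

```python
# Illustrative sketch of the "search and design in parallel" heuristic.
# Scores, threshold, and candidate data are hypothetical.

def pick_solution(candidates, design_clarity, good_enough=0.8):
    """Return the best existing library if one clears the bar; otherwise
    fall back to building it yourself once your own design is clear enough.

    candidates     -- list of (name, fitness) pairs, fitness in [0, 1]
    design_clarity -- how well you can picture your own solution, in [0, 1]
    """
    best_name, best_fit = max(candidates, key=lambda c: c[1],
                              default=("", 0.0))
    if best_fit >= good_enough:
        return best_name           # found something good enough (or perfect)
    if design_clarity >= good_enough:
        return "build-your-own"    # you now understand the solution well enough
    return "keep-looking"          # neither track has finished yet
```

For example, `pick_solution([("libfoo", 0.9), ("libbar", 0.5)], 0.3)` picks `"libfoo"`, while the same search with no good candidate and a clear mental design falls through to `"build-your-own"`.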

replies(2): >>11940409 #>>11940420 #
2. igorgue (No.11940409)
If you don't have the time or energy for such projects, then you CAN'T do them. The answer is there.
3. quantumhobbit (No.11940420)
Yes. If it takes me longer to figure out how to use your library or framework than to just implement the functionality myself, there is no point in using the library.

Some people claim you should still use the 3rd party solution because of the cost of supporting the extra code you have written. But bugs can exist in both my code and the 3rd party code and I understand how to fix bugs in my code much more easily.

replies(1): >>11944585 #
4. MaulingMonkey (No.11944585)
Other points of consideration: my coworkers might not already know some library, but they definitely won't know my library. A coworker's code is about as "3rd party" as any library - as is code I wrote as little as six months ago. Also, my employer owns that code, so rolling my own means writing another clone every time I switch jobs - assuming there are no patents or overly litigious lawyers to worry about.

But you're of course correct that there is, eventually, a point where it no longer makes sense to use the library.

> Some people claim you should still use the 3rd party solution because of the cost of supporting the extra code you have written. But bugs can exist in both my code and the 3rd party code and I understand how to fix bugs in my code much more easily.

The problem is I got so tired of fixing bugs in coworker / former coworker code that I eventually replaced their stuff with off the shelf libraries, just so the bugs would go away. And in practice, they did go away. And it caught several usage bugs because the library had better sanity checks. And to this day, those former coworkers would use the same justifications, in total earnestness.

I've never said "gee, I wish we used some custom bespoke implementation for this". I'll wish a good implementation had been made commonly available as a reusable library, perhaps. But bespoke just means fewer eyes and fewer bugfixes.

replies(1): >>11945081 #
5. jjnoakes (No.11945081)
It's all trade-offs.

If there happens to be a well-tested third party library that does what you want, doesn't increase your attack surface more than necessary, is supported by the community, is easy to get up and running with, and has a compatible license with what you are using it in, then by all means go for it.

For me and my work, I tend to find that something from the above list is lacking enough that it makes more sense to write it in-house. Not always, and not as a rule, but it works out that way quite a bit.

I would also argue that if coworkers couldn't write a library without a prohibitive number of bugs, then they won't be able to write application or glue code either. So maybe your issue wasn't in-house vs third party libraries, but the quality control and/or developer aptitude around you.

replies(1): >>11948352 #
6. MaulingMonkey (No.11948352)
You're not wrong. The fundamental issue wasn't in-house vs third party libraries.

The developers around me tend to be inept at time estimation. They completely lack that aptitude. To be fair, so do I. I slap a 5x multiplier onto my worst case estimates for feature work... and I'm proud to end up with a good average estimate, because I'm still doing better than many of my coworkers at that point. Thank goodness we're employed for our programming skills, not our time estimation ones, or we'd all be unemployable.

They think "this will only take a day". If I'm lucky, they're wrong, and they'll spend a week on it. If I'm unlucky, they're right, and they'll spend a day on it - unlucky because that day comes with at least a week's worth of technical debt, bugs, and other QC issues to fix at some point. In a high-pressure environment - too many things to do, too little time to do it all in even when you're optimistic - it's understandable that the latter is frequently chosen. It may even be the right choice in the short term. But this only reinforces poor time estimation skills.

The end result? They vastly underestimate the cost of supporting the extra code they'll write. They make the "right" choice based on their understanding of the tradeoffs, and roll their own library instead of using a 3rd-party solution. But as we've just established, their understanding was vastly off base. Something must give as a result, no matter how good a programmer they are otherwise: schedule, or quality. Or both.