redleggedfrog No.42188611
I've gone through times when management would treat estimates as deadlines, and were deaf to any sort of reasoning about why things could turn out otherwise, like the usual thing of them changing the specification repeatedly.

So when those times have occurred I've (we've, more accurately) adopted what I refer to as the "deer in the headlights" response to just about anything non-trivial. "Hoo boy, that could be a doozy. I think someone on the team needs to take an hour or so and figure out what this is really going to take." Then you'll get asked to "ballpark it," because that's what managers do, and they get a number that makes them rise up in their chair, and yes, that is the number they remember. And then you do your hour of due diligence, try your best not to give any number other than the ballpark at any time, and then you get it done "ahead of time" and look good.

Now, I've had good managers who totally didn't need this strategy, and I loved 'em to death. But the other numbnuts, who can't be bothered to learn their career skills, get the whites of my eyes.

Also, it just made meetings a lot more fun.

andai No.42189674
Reminds me of Hofstadter's Law: It always takes longer than you think, even when you take into account Hofstadter's Law.

We could say, always say it will take longer than you think?

Though by this principle, it seems that "overestimates" are likely to be actually accurate?

Joel Spolsky wrote about his time-estimation software, which recorded the actual time each task took to complete and then calculated, for each person, a factor by which their estimates were off. That factor was consistent enough that it could reliably be used as a multiplier to improve estimation accuracy.

> Most estimators get the scale wrong but the relative estimates right. Everything takes longer than expected, because the estimate didn’t account for bug fixing, committee meetings, coffee breaks, and that crazy boss who interrupts all the time. This common estimator has very consistent velocities, but they’re below 1.0. For example, {0.6, 0.5, 0.6, 0.6, 0.5, 0.6, 0.7, 0.6}

https://www.joelonsoftware.com/2007/10/26/evidence-based-sch...
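
For what it's worth, here's a minimal sketch of the single-multiplier version of that idea (my own illustration, not Joel's actual implementation):

    # Velocity = estimated / actual, so a chronic underestimator sits below 1.0,
    # like the {0.6, 0.5, 0.6, ...} series quoted above.

    def velocity(estimated: float, actual: float) -> float:
        return estimated / actual

    def average_velocity(history: list[tuple[float, float]]) -> float:
        """history holds (estimated_hours, actual_hours) pairs for one person."""
        vs = [velocity(est, act) for est, act in history]
        return sum(vs) / len(vs)

    def corrected_estimate(raw_estimate: float, history: list[tuple[float, float]]) -> float:
        """Divide a new raw estimate by the person's average velocity;
        a 0.6 velocity turns a 6-hour guess into a ~10-hour prediction."""
        return raw_estimate / average_velocity(history)

    # Past tasks whose velocities match the quoted series.
    history = [(6, 10), (5, 10), (6, 10), (6, 10), (5, 10), (6, 10), (7, 10), (6, 10)]
    print(corrected_estimate(6, history))  # ~10.2 hours

As I remember the article, it actually goes a step further than a single multiplier: it keeps the whole history of velocities and Monte Carlo simulates ship dates from it, which preserves the uncertainty instead of averaging it away.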

ethbr1 No.42189863
Doesn't the article say that for experienced developers, the scaling factor tended to converge on an average for each individual, even if it varied for any particular task?

And Joel sidesteps the unknown-unknowns problem in that piece by recommending that tasks be broken down into <1 day chunks.

But what if you need to build a prototype before you sufficiently understand the project and options to decide on an approach? Where does that time get estimated?

The more projects I work on, the bigger a fan of spiral development [0] I become.

Because, at root, there are 2 independent variables that drive project scheduling -- remaining work and remaining risk.

This estimation problem would get drastically simpler if estimates like "high confidence, 30 days" and "low confidence, 5 days" were allowed.

And critically, that could drive different development behavior! E.g. prototype out that unknown feature where most of the remaining technical risk is.

Trying instead to boil that down to a single number with no risk attached produces all the weird behaviors discussed elsewhere in the comments.
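
As a toy illustration of what I mean (my own sketch, not anything from the article): if an estimate carried its confidence with it, the planning step could route low-confidence items to a prototype/spike first instead of scheduling them as if the number were solid.

    from dataclasses import dataclass
    from enum import Enum

    class Confidence(Enum):
        HIGH = "high"
        LOW = "low"

    @dataclass
    class Estimate:
        task: str
        days: float
        confidence: Confidence

    def plan(estimates: list[Estimate]) -> None:
        # Low-confidence items are where the remaining risk lives, so they get a
        # spike/prototype first to buy information; the rest get scheduled as usual.
        for e in sorted(estimates, key=lambda e: e.confidence is Confidence.HIGH):
            if e.confidence is Confidence.LOW:
                print(f"SPIKE first: {e.task} (~{e.days}d guess, low confidence)")
            else:
                print(f"Schedule:    {e.task} ({e.days}d, high confidence)")

    plan([
        Estimate("CRUD screens", 30, Confidence.HIGH),
        Estimate("unknown third-party integration", 5, Confidence.LOW),
    ])

Even at this crude level, "low confidence" becomes an actionable signal rather than something that quietly disappears into a single number.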

[0] https://en.m.wikipedia.org/wiki/Spiral_model