
225 points by todsacerdoti | 1 comment | source
rendall ◴[] No.46184800[source]
There’s a well-established Agile technique that in my experience actually succeeds at producing usable estimates.

The PM and team lead write a description of a task; the whole team reads it together, thinks about it privately, and then everyone votes on its complexity simultaneously using a unitless Fibonacci scale: 1, 2, 3, 5, 8, 13, 21... There's also a 0.5, reserved for the complexity of literally just fixing a typo.

Because nobody reveals their number until everyone is ready, there is little of the anchoring, adjustment, or conformity bias that would otherwise undermine the estimates.

If the votes cluster tightly, the team settles on the convergent value. If there’s a large spread, the people at the extremes explain their thinking. That’s the real value of the exercise: the outliers surface hidden assumptions, unknowns, and risks. The junior dev might be seeing something the rest of the team missed. That's great. The team revisits the task with that new information and votes again. The cycle repeats until there’s genuine agreement.

This process works because it forces independent judgment, exposes the model-gap between team members, and prevents anchoring. It’s the only estimation approach I’ve seen that reliably produces numbers the team can stand behind.
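The vote-and-converge cycle above can be sketched as a simple decision rule. The specific clustering threshold used here (all votes within one adjacent step on the scale) and the choice to settle on the higher clustered value are my own illustrative assumptions, not part of the technique as described:

```python
# Illustrative sketch of one round of blind estimation.
# ASSUMPTION: "tight cluster" means all votes within one adjacent
# step on the scale; real teams set their own convergence rule.

FIB_SCALE = [0.5, 1, 2, 3, 5, 8, 13, 21]

def round_result(votes):
    """Given simultaneously revealed votes, either settle on a value
    or flag the outliers who should explain their reasoning."""
    steps = sorted(FIB_SCALE.index(v) for v in votes)
    if steps[-1] - steps[0] <= 1:
        # Tight cluster: settle (here, conservatively, on the higher value).
        return ("converged", FIB_SCALE[steps[-1]])
    # Wide spread: the low and high voters explain, then the team revotes.
    return ("discuss", {"low_outlier": FIB_SCALE[steps[0]],
                        "high_outlier": FIB_SCALE[steps[-1]]})
```

For example, votes of 3, 5, 5, 5 converge on 5, while votes of 2, 5, 5, 8 trigger a discussion between the 2 and the 8 before revoting.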

It's important that the scores be unitless estimates of complexity, not time: "How complex is this task?" rather than "How long will this task take?"

One team had a rule that any task estimated at 21 should be broken down into smaller tasks, and that an 8 corresponded roughly to the complexity of implementing a REST API endpoint.

A PM can combine these complexity estimates with historical team performance to estimate time. The team is happy because they are not held responsible for the PM's time estimates, and the PM is happy because the numbers are more accurate.
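The points-to-time conversion works through the team's historical throughput (velocity). The numbers below are made up purely for illustration:

```python
# Hypothetical conversion of story points to a time forecast,
# using the team's historical velocity. All numbers are invented.

completed_points_per_sprint = [34, 29, 38, 31]   # last four sprints
velocity = sum(completed_points_per_sprint) / len(completed_points_per_sprint)

backlog_points = 99                  # sum of the team's complexity estimates
sprints_needed = backlog_points / velocity
```

With an average velocity of 33 points per sprint, a 99-point backlog forecasts to about three sprints. The team only ever committed to complexity scores; the time forecast is derived by the PM.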

A clear description with background appears in Mike Cohn’s original writeup on Planning Poker: https://www.mountaingoatsoftware.com/agile/planning-poker

replies(1): >>46185246 #
twerka-stonk ◴[] No.46185246[source]
I do like the blind estimation aspect, but I don’t like:

* the arbitrary usage of the Fibonacci sequence

* an unclear conversion from complexity to time. Complexity and time aren’t always correlated: some things are easy but take a long time. Should that be a 1 or a 5?

Let’s just cut out the extra layer of terminology and estimate directly in units of time.

replies(1): >>46185287 #
rendall ◴[] No.46185287[source]
The Fibonacci scale isn’t sacred; it’s just a coarse, non-linear scale that keeps people from pretending to have more precision than they actually have. Any non-linear sequence would work as long as it forces you to acknowledge uncertainty as the numbers grow.

As for “just estimate in time,” the problem is that teams are consistently bad at doing that directly. Mixing “how hard is this” with “how long will it take” collapses two separate variables: intrinsic complexity and local throughput. Story points deliberately avoid that conflation. The team’s velocity is what translates points into time at the sprint level, and that translation only stabilizes if the underlying unit is complexity rather than hours.

The whole point of the method is to strip away the illusion of precision. Time estimates look concrete, but they degrade immediately under uncertainty and project pressure. Relative complexity estimates survive discussion, converge reliably, and don’t invite the fallacy that a complex task with a high risk of surprises somehow has an exact hour count.

That’s why the technique exists. Estimating time directly sounds simpler, but in practice it produces worse forecasts because it hides uncertainty instead of exposing it.