268 points behnamoh | 12 comments
atoav No.28667999
For predicting the daily schedules on a film set I always "ran a simulation" of what would be done that day and just summed the predicted minutes. The simulation ran in my head of course, but it included things like: actors drinking coffee and chatting, costumes getting ready, the camera department forgetting a memory card in the car, lunch breaks, someone arriving late, etc.

Obviously the major chunk was always the scenes themselves, and they were usually also the main source of uncertainty in the prediction. E.g. working with people you don't know, weather, technical problems (broken or missing equipment), things that just won't cooperate (animals, certain scenes with actors).

But in the end what always mattered was that there was a time plan for each day, and at the end of a day we would know whether we were A) faster than predicted, B) on time, or C) slower than predicted. The next day would then be restructured accordingly by production, and usually you'd be back on time by the end of it.

I was usually spot on with my predictions and we never had any issue with getting the planned stuff done.

With programming the whole thing is harder, because it can be even more unpredictable. But what definitely always helps is having a feeling for whether you are too slow, on time, or have managed to build up a time buffer against future doom.
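
That mental model translates almost directly into code. Here's a toy sketch, in Python, of the "sum the predicted minutes" idea; the activities and numbers are invented:

  # Each entry: (activity, predicted minutes). All numbers invented.
  day_plan = [
      ("actors drinking coffee and chatting", 20),
      ("costume and makeup", 45),
      ("fetching the memory card from the car", 10),
      ("scene 12, weather-dependent", 180),
      ("lunch break", 60),
      ("scene 13", 120),
  ]

  predicted = sum(minutes for _, minutes in day_plan)
  print(f"Predicted day: {predicted} min ({predicted / 60:.1f} h)")

  # At wrap, compare with the actual total to decide how to restructure
  # tomorrow: A) faster than predicted, B) on time, C) slower.
  actual = 470
  status = "A" if actual < predicted else "B" if actual == predicted else "C"
  print(f"Actual day: {actual} min -> {status}")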

replies(2): >>28669348 >>28672442
1. regularfry No.28669348
Tom DeMarco talks about modelling-based estimates in Waltzing With Bears, mainly to break people out of the common error of treating the soonest possible time something could be done as a realistic estimate of when it will actually be finished. There are also approaches like Function Point Analysis which provide an explicit model you can calibrate your team against.

It's doable, but what people tend to forget is that it's work. If you want an estimate, I need to be able to expend effort producing it. It's an engineering activity that needs organisational support to be done well at all, yet you often find an expectation that people can pull an estimate out of a hat after hearing only the faintest description of the problem, along with a tacit belief (usually but not exclusively among non-technical folks) that not being able to do so makes one incompetent.

replies(1): >>28669973
2. kqr No.28669973
This is where range-based estimates really shine. If you want an estimate right now, I will tell you on the spot: "I'm 95 % certain it will be done no later than nine months from now, but probably sooner. However, I know it won't be done this week."

You want a narrower range than 0.25–9 months? You'll have to let me think about it. Maybe I can be just as certain that it will be done 1–5 months from now, if I get time to mentally run through the simulation, to borrow the terminology from upthread.

You want a narrower range than 1–5 months? I don't have the information I need to give you that. If you give me a couple of weeks to talk to the right people and start designing/implementing it, then the next time we talk I may have gotten the range down to 1–3 months.

I can always give you an honest range, but the more you let me work on it, the narrower it gets.

----

This is of course what's suggested in How To Measure Anything, Rapid Development, and any other text that treats estimation sensibly. An estimation has two components: location and uncertainty. You won't ever get around that, and by quoting a single number you're just setting yourself up for failure.

replies(5): >>28670533 >>28672402 >>28674059 >>28674824 >>28680172
3. jaymzcampbell No.28670533
I've been finding this approach incredibly useful too when working with teams and trying to balance the needs and concerns of business vs product vs engineering. I quite liked how it's described here: https://spin.atomicobject.com/2009/01/14/making-better-estim...
replies(1): >>28670639
4. kqr No.28670639
My objection to fuzzy labels like "aggressive but possible" or "highly probable" is that they're still unverifiable and, frankly, just as meaningless as point estimates.

This is where actual probabilities come in: if you give me 90 % probability ranges (i.e. you think there's a 90 % chance the actual time taken will fall inside the range you give me), that provides me with three extremely powerful tools:

1. First of all, I can use Monte Carlo techniques to combine multiple such estimates in a way that makes sense, e.g. to reduce the uncertainty of an ensemble of estimates (a sketch follows this list). You can't do that with fuzzy labels, because one person's range will be a 50 % estimate and someone else's will be an 80 % one.

2. I can now work these ranges into economic calculations. The expected value of something is its probability times its consequence, and that calculation takes a probability.

3. Third, but perhaps most important: I can now verify whether you're full of shit or not (okay, the nicer technical term is "whether you're well-calibrated or not"). If you keep giving me 90 % ranges, you can be sure I'm going to collect them and check that, historically, the actual time taken falls into your range nine times out of ten. If it doesn't, you are overconfident and can be trained to be less confident (see the second sketch below).
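
To make points 1 and 2 concrete, here's a minimal Monte Carlo sketch in Python. The tasks and ranges are invented, and modelling each 90 % range as a lognormal is one common choice (it's the one How To Measure Anything suggests), not the only option:

  import math
  import random

  Z90 = 1.645  # z-score bounding the central 90 % of a normal distribution

  def sample_months(lo, hi):
      # One draw from a lognormal whose 5th/95th percentiles are lo/hi.
      mu = (math.log(lo) + math.log(hi)) / 2
      sigma = (math.log(hi) - math.log(lo)) / (2 * Z90)
      return random.lognormvariate(mu, sigma)

  # Hypothetical 90 % ranges, in months, for three pieces of work.
  tasks = [(1, 5), (0.5, 2), (0.25, 3)]

  totals = sorted(sum(sample_months(lo, hi) for lo, hi in tasks)
                  for _ in range(100_000))

  # Point 1: the combined 90 % range is the 5th/95th percentile of the sums.
  print(f"90 % range for the whole: {totals[5_000]:.1f} to {totals[95_000]:.1f} months")

  # Point 2: the same samples feed expected-value calculations, e.g. the
  # probability of blowing a 6-month deadline (times its cost = expected loss).
  print(f"P(total > 6 months): {sum(t > 6 for t in totals) / len(totals):.2f}")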

The last point is the real game changer. A point estimate, or an estimate based on fuzzy labels, cannot ever be verified.

Proper 90 % ranges (or whatever probability range you prefer) can be verified. Suddenly, you can start applying the scientific method to estimation. That's where you'll really take off.
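
The calibration check in point 3 is even simpler. A sketch, with an invented history of (range, actual) records:

  # Each record: (low, high, actual), e.g. in months.
  history = [
      (1, 5, 4.0),
      (0.5, 2, 2.5),  # a miss: the actual fell outside the stated range
      (2, 6, 3.0),
      (1, 3, 2.0),
  ]

  hits = sum(lo <= actual <= hi for lo, hi, actual in history)
  rate = hits / len(history)
  print(f"Claimed 90 %, observed {rate:.0%} over {len(history)} estimates")
  # In practice you need a few dozen estimates before the hit rate means much;
  # a rate persistently below 0.9 says the ranges need to be wider.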

replies(2): >>28671334 >>28672487
5. jaymzcampbell No.28671334
I understood the fuzzy labels to still refer to a specific probability range, e.g. the meaning of "aggressive but possible" to correspond to the likes of your "I'm 95 % certain it will be done no later than nine months from now..." example. Those labels seemed to at least help explain "this isn't just a highball figure".

To be honest, I still don't really think any of this stuff can be truly verified, beyond actually doing it or having a very well-understood set of requirements that have been worked against plenty of times before.

replies(1): >>28671818
6. kqr No.28671818
Sure, but it's important to spell out exactly which probability range they refer to -- unless you ground people in concrete numbers, they have a tendency to think they mean the same thing while actually meaning very different things. (For reference, this is known as the illusion of agreement.)

About verification, I think you're right in a very specific sense: you clearly cannot verify that any single estimate is correct, range or not. However, meteorologists and other people dealing with inherent, impenetrable uncertainty have found that a historical record of accuracy is as good as verification.

7. regularfry No.28672402
Absolutely, yes: if you're in an organisation mature enough to handle ranges responsibly and not treat the lower number as a prediction, that's the best way to do it.
replies(1): >>28673737
8. frazbin No.28672487
mind blown
9. kqr No.28673737
Whenever I speak to people who would do that, I leave the lower end of the range unspecified. (I.e. instead of 90 % between x and y, I phrase it as 95 % less than y.)
10. jacobolus No.28674059
I wish more people were willing to provide quick wide-interval estimates.

For instance, we have been working with a general contractor on a house remodel, and he refuses to give ballpark estimates (time or money) for anything, I think out of fear that we’ll later hold his guesses against him. If I want an estimate, he’ll only reply with something fairly narrow after several days or a week, after putting in unnecessarily rigorous effort.

Since we don’t know the field and he doesn’t perfectly understand our priorities and preferences, this slow feedback loop is very frustrating: it prevents us from iterating and exploring the space of possibilities, wastes his time precisely estimating things we could decide on with a rough estimate, and wastes our time trying to get estimates from other sources with less knowledge of the context.

11. composer No.28674824
> An estimation has two components: location and uncertainty

ReRe's Law of Repetition and Redundancy [5] could benefit from a refinement that accounts for the inverse relationship between width-of-delivery-window and certainty-of-delivery-date... maybe:

  A programmer can accurately estimate the schedule for only the repeated and the redundant. Yet,
  A programmer's job is to automate the repeated and the redundant. Thus,
  A programmer delivering to an estimated or predictable schedule is...
  Not doing their job (or is redundant).
[5] https://news.ycombinator.com/item?id=25826476
12. dwd No.28680172
I still don't understand why Spolsky's Evidence Based Scheduling didn't get much traction. Treating the completion date as a probability distribution makes the most sense.

https://www.joelonsoftware.com/2007/10/26/evidence-based-sch...

As implemented in FogBugz:

https://fogbugz.com/evidence-based-scheduling/
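
For anyone who doesn't want to click through: the core of EBS is a Monte Carlo loop over a developer's historical velocities (estimate divided by actual). A rough sketch of that idea in Python, with invented numbers; this is not FogBugz's actual implementation:

  import random

  # estimate / actual for this developer's finished tasks
  past_velocities = [0.6, 0.9, 1.1, 0.7, 1.0, 0.5]
  remaining_estimates = [8, 16, 4, 12]  # hours per open task

  def simulate_total_hours():
      # Divide each estimate by a velocity sampled from the developer's
      # own history, then sum over the remaining tasks.
      return sum(est / random.choice(past_velocities)
                 for est in remaining_estimates)

  totals = sorted(simulate_total_hours() for _ in range(10_000))
  for pct in (50, 75, 95):
      print(f"{pct} % chance of finishing within {totals[pct * 100]:.0f} hours")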