You always see rookies make the mistake of just adding up the individual times of all the tasks - and engineers are famous for making horribly under-scoped estimates - so projects estimated this way have typically gone over time by a factor of three or so, as I've come to learn...
A better approach would be to break it down as much as possible. This has the following benefits:
* You will identify tasks you haven't thought of
* You will get a better picture of the total effort needed
* It will be easier to provide grounds for your estimates to stakeholders
* Developers will see progress as tasks get closed
* Commits can be traced to specific tasks
* Better estimates for remaining work during the sprint/iteration/whatever-you-call-it
Unfortunately, not funny enough for me to want to use it, and the attempt to rationalize it only makes it worse. (iow, that's not a particularly funny joke either)
What you are suggesting is basically micromanagement-by-scrum.
* Identification of tasks you didn't think of is a fallacy in software engineering. There are very few actual tasks when it comes to writing the software. Yes, sometimes you can break a change up, for instance across components or features, but most often that has already been done by the time a developer gets a task. Most of the additional tasks you can come up with are either trivial, come after the end of development (e.g., deployment), or are completely arbitrary and not self-contained (write this class or that function). Very seldom can a software development task be broken down into several roughly equally complex subtasks.
* It follows that you don't improve your estimate by writing down all these side tasks in your task tracker. You increase the busywork and friction and become less agile. These subtasks tend to become stale quite quickly when requirements change.
* Your stakeholders don't understand anything about software development. Unless your task is trivial to begin with, there is little ground for any estimate.
* Developers either see progress through code reviews, technical discussions, or the source code itself. No need to maintain a list of tasks in some tracker for them.
* Commits should be self-contained and have a meaningful message anyway. It should be trivial to trace them to the main task.
* Your estimates won't improve at all if you break down your task arbitrarily.
If you create subtasks like "write the tests" it might help a junior structure their work, but in a professional environment it is pointless. Worse, if time is of the essence, some product owner might ask you to deploy "just an MVP" before that subtask is done, postponing it forever.
The only reason to break down a task is when there's a time-like dependency in the process. E.g., I can do X after Y gets deployed. Or I need to remove Z once Q is done. Mostly, tasks like Y or Q will involve operational aspects (data migration, experiments) so the subtask will have a different owner or at least a different context.
Better still is to do that, and then multiply the end result by π.
(as explained in the article, it's mostly to compensate for scope creep, and that isn't in your list of benefits)
Obviously the major chunk was always the scenes, and they are usually also the major contributor to the uncertainty of the prediction. E.g. working with people you don't know, weather, technical problems (broken, missing stuff), stuff that just won't work (animals, certain scenes with actors).
But in the end what always mattered was that there was a time plan for each day, and at the end of a day we would know whether we were A) faster than predicted, B) on time or C) slower than predicted. The next day would then be restructured accordingly by the production, and usually you'd be back on time by the end of that.
I was usually spot on with my predictions and we never had any issue with getting the planned stuff done.
With programming the whole thing is harder, because it can be even more unpredictable. But what definitely always helps is having a feeling for whether you are too slow, on time, or have managed to build a time buffer for future doom.
I was new to the field (but not new to software development) and there was this small software team doing programming tasks for the game. The lead developer was concerned about my performance after I had been there a few months.
I remember him drawing an image exactly like the second picture in this article (an arrow going from A to B). He said that my performance was very poor, and then he drew another picture that was like the circle in the article.
The way I worked was searching for a solution, going in the wrong direction a few times, asking designers for more information and then eventually landing on a solution (that worked, and users liked it).
But I was told this was the wrong way of doing software. I was not supposed to ask advice from the users (because the team "knew better").
He also told me that a good software developer takes a task, solves it (goes from A to B), and then takes another task.
After a few weeks I was fired from that job.
To this day I'm still baffled by this. The company was really successful and everyone knew how to make software. It seemed like a very harsh environment. Is it like this in the top software companies everywhere? Do the super-pro developers really just take a task and solve it without issues?
In this case hour-to-day is 8x, day-to-week is 5x, week-to-month is 4x and month-to-year is 12x.
This is, after all, just unimportant pedantry, but if clear and simple communication is the goal, it probably makes more sense to describe it as the next calendar unit up.
Surprised by the downvotes. I'm just sharing what I've learned the hard way over the last 6 years.
We can easily tell that the sprints where we managed to break something down further have been better estimated and more productive.
Call it micromanagement if you'd like. What is working for one team might not work for another.
What's weirdest about your story is that you were laid off a few weeks into your job. People usually get more time to get the hang of it even if they are senior, so I would mostly assume it was about cultural fit rather than performance.
Some outsourcing/service (billed by the hour — which explains it for the most part) companies would look for very strict delivery cadence with focus on exactly the process you describe, but you'd be unlikely to have contact with the user in that case (just the customer).
Most importantly, if you ever get fired, be sure to ask for the explanation so you don't end up being baffled.
I can also tell you that maximum code quality is not always the priority. This is especially true in games, where you ship and then move on to the next project. Even online games nowadays often shut down before long, so they don't need quite as much maintenance as a successful SaaS.
Again in my experience, the super-pro devs I know are particularly good at knowing when to be fast and reckless, and when to be slow and calculating.
"everyone knew how to make software" -> if that was the case, you should have found help and support around you organically.
I hope you're in a better situation now. Don't let this kind of experience hurt your self-esteem. You deserved a better place.
If your former team lead went from A -> B with no problem, maybe he is part of the circle. After all, when you zoom in on the circle closely enough, it looks like a straight line. Or he could be a visionary, I can't tell.
An organization is like a shark, it needs to move forward. That organization can be a one-man team, where thinking overlaps with doing. In a larger org, specialization is inevitable. The "no questions asked, get things done" way that your former team lead adopted is desirable to a certain degree, some of the time. But it is important to be aware of the duality. "Disagree but commit" is a virtue.
A fish can teach you to swim, but it probably doesn't know it is in water.
The strange thing is that both approaches can be successful!
However, they are not compatible. At most, it's possible to have an out-of-the-box thinker above a team of robots, but that's about the maximum extent of mixing that works.
For example, in the robot-based team, managers generally don't explain any of the inputs and constraints that went into a decision to their underlings. This saves time, allowing them to manage more people to get more done. However, the creative types question everything, which irritates and frustrates them. In an all-creative team, managers will explain the backstory to the junior staff, giving them the information they require to seek alternative but compatible solutions. This takes longer, but then they can offload a lot of the decision making to the juniors.
Don't feel bad that you didn't fit in, it just means that you need to find a place with a corporate culture that suits you better.
> 29. (von Tiesenhausen's Law of Program Management) To get an accurate estimate of final program requirements, multiply the initial time estimates by pi, and slide the decimal point on the cost estimates one place to the right.
https://spacecraft.ssl.umd.edu/old_site/academics/akins_laws...
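To make the "slide the decimal point" part concrete, here's a trivial sketch of the law applied to an estimate (the numbers are made up for illustration):

    import math

    initial_time_months = 2.0   # initial schedule estimate
    initial_cost = 100_000      # initial cost estimate

    final_time = initial_time_months * math.pi   # multiply time estimates by pi -> ~6.3 months
    final_cost = initial_cost * 10               # slide the decimal one place right -> 1,000,000

    print(f"{final_time:.1f} months, {final_cost:,} in cost")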
Eisenhower's quote "Plans are useless, planning is essential" matters here. Having a clear checklist of things to do before release (same as a surgeon or pilot) is absolutely useful in realising just how much f######g work is ahead of you.
This helps the team slow down and be less optimistic (the developer's natural state).
Yes, I agree with the comments below that the code is the source of truth, and that you need to find ways to sync from the code to other management systems (like tickets). The worse this sync is, the less software-orientated a company is.
But being able to link your commits to some "ID number" is totally helpful in this regard. No matter what, we need to co-ordinate with others in the org, and that co-ordination had better be automated 95% of the time or we are in a world of meetings.
But yes, the essence of the article is "you can never predict the unknown, so multiply by X as a buffer". [#]
The distinction between the article and the commenter is the article hopes to compensate for "unknown future events" and the commenter believes he can reduce his own amount of accidental estimation mistakes. Both can co-exist.
It is reasonable to take into account all our own mental failures when making estimates (and the commenter's checklist style can help improve that) - but you cannot take into account the inherent problems (hence using pi).
I would even go so far as to say if you just do quick estimates, multiply by 9, if you take care in your estimates, you can just multiply by 3.
[#] The organisational problem with this is that it is fine for the estimators to multiply up, but the more humans in the reporting chain, the more they add buffers (every project manager knows to multiply estimates by 3), so very soon buffers get wildly out of hand. Having things like JIRA where the estimators directly report their estimates (and senior mgmt gets the sum) has drastically changed the reliability of reporting through the organisation and is leading to the death of project management as a profession.
[my honest best estimate, thinking of all I can think of] * X
+1 on the factor for any of these:
- always +1 right away (so it's x2)
- tech is new for you
- other teams involved
- you don't have a clear idea of what tasks are involved (it's not yet another X)
- you see additional risks
So if it's making yet another micro marketing website based on a pixel-perfect Photoshop design: x2. If the design is not pixel perfect or is missing responsive versions, the customer is involved: x3.
If you refactor stuff into new tech and it changes APIs for other teams: new tech, other people, risks, no clear tasks: x5.
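Read literally, that heuristic is just a base of x2 plus one for each risk flag; a rough sketch of how I'd write it down (the helper name and day counts are mine, purely illustrative):

    def padded_estimate(base_days, risk_flags=0):
        # always +1 right away (so it's x2), then +1 per additional risk flag
        return base_days * (2 + risk_flags)

    print(padded_estimate(5))                 # plain pixel-perfect site: x2 -> 10 days
    print(padded_estimate(5, risk_flags=1))   # customer involved: x3 -> 15 days
    print(padded_estimate(5, risk_flags=3))   # new tech, other teams, unclear tasks: x5 -> 25 days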
There's some confounding involved, of course - for example, a good rapport with teammates can mitigate the effects of private issues. But my point is that you can't just pair people up and expect productivity to improve; it'll likely decrease, and it'll likely swing wildly from day to day.
--
[0] - With respect to division of work on the task. Like e.g. a long time ago I was in a gamedev competition with a team of artists. I was nowhere near as good a programmer as they were artists, but we decided to make an art-heavy game, so our workloads balanced out and we proceeded almost in lock step.
Much more accurate are estimates of difficulty or complexity (small/medium/large, weighted numbers, whatever). The manager can then apply an average conversion rate to time (with error bars), and use that for medium/long-term forecasting. This has been demonstrated to be extremely accurate compared to time estimation (but it is useless and breaks if you apply time/points pressure to your team, or even let them think in terms of a time conversion rate).
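A minimal sketch of what that conversion can look like, assuming you keep a record of (points, actual days) per finished task (the numbers here are invented for illustration):

    import statistics

    history = [(2, 3.0), (5, 9.5), (3, 4.0), (8, 18.0), (5, 7.5), (2, 2.5)]  # (points, actual days)

    rates = [days / points for points, days in history]   # observed days per point
    rate = statistics.mean(rates)
    spread = statistics.stdev(rates)                       # crude error bar

    def forecast(points):
        # convert a complexity estimate into a time range (mean +/- 1 stdev per point)
        return points * (rate - spread), points * (rate + spread)

    low, high = forecast(13)
    print(f"13 points ~ {low:.0f} to {high:.0f} days")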
This is not new research. Kahneman's Nobel prize about decisions under uncertainty was in 2002. Put away the superstitions and stop estimating with time!
Maybe it is that you said break it down as much as possible rather than as much as necessary.
I hate administrative busy work and micromanagement, but planning is essential for almost any large project, and planning means breaking things down into smaller steps. Estimates are data that drives all planning, so if your estimates are really that far wrong you have to take steps to manage that issue, not just multiply by 3 and call that your estimate.
Breaking down tasks further absolutely can help estimates (within reason) and it can help identify other tasks or dependencies etc.
Sometimes a task is just really difficult to give an estimate for no matter how much it is broken down. Sometimes you don't even know how to break it down or what exactly the steps along the way would be, you may never have done something similar before, you may be relying on development of novel techniques.
Still, in those cases you still don't just multiply by 3 and call that your estimate. You have to call that out as a major risk in your plans, and the project may have to be changed accordingly. For example, it might be decided that the company won't start making commitments or wider plans based on the success of this project, but rather run a prototype project instead.
When they say 'Good, Fast, Cheap: pick 2', everybody always assumes the point is you have to pick Good and the choice is just between Fast and Cheap.
But Fast and Cheap has its place, ask any game developer ...
So I suppose the real answer depends on the environment you're in. If the project is meant to be agile then you probably need a bit of interaction with the user to get the job done. If the project is not agile and all the requirements could be known up-front then you're probably wasting effort having programmers determine what those requirements are. IMHO.
The wise scheme, as has been pointed out, is to adjust predictions during a project. If the task that was initially planned to take a day is routinely taking 2 days (or 2 hours), adjust future plans accordingly.
Higher-level managers are sometimes unhappy with this scheme, while middle-level ones value the accuracy of such predictor-corrector schemes.
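In its simplest form, that predictor-corrector is just rescaling the remaining plan by the slip ratio observed so far; a toy sketch with assumed numbers:

    planned_remaining = [1.0, 1.0, 2.0, 3.0, 1.0]   # planned days for the remaining tasks
    planned_so_far = 2.0                             # days originally planned for the work already done
    actual_so_far = 4.0                              # days actually spent on that work

    correction = actual_so_far / planned_so_far      # running 2x over plan so far
    forecast = sum(planned_remaining) * correction   # adjust future plans accordingly
    print(f"remaining work: ~{forecast:.1f} days instead of {sum(planned_remaining):.1f}")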
This gets us into a related topic of the depth of management structures, and I think the military scheme (of each person directing a roughly fixed number of persons, so the number of levels is proportional to the log of workforce size) might be worth considering.
I asked for the reason, and it was indeed the performance.
It was just the diagrams in the original article that reminded me of this. It just didn't make any sense to me that one would "just solve" a problem at hand without considering other options.
But in my (15 years of) experience the "pi factor" is indeed quite accurate as there is always something surprising that comes up along the way, be it specification changes or technical issues.
Firstly, programmers are notoriously and famously bad at estimation, so it's highly likely that you do need to find the multiplier for yourself. This illustration is something like fifteen years old by now: https://pbs.twimg.com/media/EyNRiYQXMAIshEj.jpg
Secondly, wise managers know to expect this, and do the multiplication themselves. But if you have managers who do the opposite and always scale the estimate back, expecting the results earlier than promised, then you need to use a second multiplier—which the manager will unknowingly annul, getting closer back to your actual figure. Probably something around 1.5 to 2.
However, if you need the second multiplier then you also gotta ask yourself why you're staying in that company.
I guess games could have much higher sales (especially when the thing is new and hot) if they didn't release utter garbage most of the time. Just go and ask people whether they're keen on buying a bunch of bugs for 60 to 80 bucks. Most people aren't. Only a very small group of die-hard fans does this.
The whole indie scene likely wouldn't stand a chance if the big players were able to deliver stuff that actually works before the first couple of patches. It's kind of a joke that one or a few people can build much better games than multi-billion-dollar companies. The problem is the mindset at the latter!
I'm not advocating for "maximum code quality", as you don't get that anywhere anyway for any reasonable price. But game releases are just far beyond any pain point. The usual advice is: don't touch, don't buy, until they've proven that they're willing to fix their mess!
The games industry would need to walk a really long distance before it could get rid of this public perception again. But they need to start somewhere. Otherwise their reputation will reach absolute zero real soon now. They're already almost there…
On the other hand, I once mentored a person working in another company. I've been giving advice that I would give to a junior Googler. They got fired :(
If the problem is not some trivial configuration or otherwise mundane change, I measure it in weekly units. With weekly units it's unlikely for you to have a blowout of more than 50% without clearly seeing it coming, whereas when you say it'll take a day or two you can so easily have a blowout of 100-200%.
I think the model works, but it didn’t totally match that agency. It’s a good theory, though.
I don't do that anymore. I try to push estimates as high as possible and then collaboratively cut down on requirements/promises/features to match an expected (time) budget.
This often leads to more pragmatic work items and sensible prioritization from the start. And it is an opportunity for general communication and understanding the value of things.
Maybe you went over somebody's head, or maybe rank played a part in this. Some people also want it to be their way or the highway; unfortunately I've seen this many times in different software companies.
Probably in these kinds of teams execution is what matters, not thinking for yourself.
- Order of magnitude differences
- Hard deadlines (like a public event you are supporting)
Usually estimations between two things don’t matter as much as prioritizing those two things. Work on the thing that matters the most. If it takes 2 days or 5 days, in most cases it won’t matter - it’s still what you would work on, so functionally estimation wasn’t needed.
If it is a difference of 2 days vs 2 weeks (or some meaningful magnitude to you) then you start having opportunity cost that factors into prioritizing.
Estimation is more often used as a tool for “control and prediction” - how many times have you had incredible success because you controlled or predicted an outcome?
Tell people a short number now = immediate praise, long term pain.
Tell people a higher estimate, immediate negative; finish in time = delayed gratification.
It's doable, but what people tend to forget is that it's work. If you want an estimate, I need to be able to expend effort providing it. It's an engineering activity that needs organisational support to do it at all well, but often you find an expectation that people will be able to pull an estimate out of a hat just having heard the faintest description of the problem, and there can often be a tacit belief (usually but not entirely from non-technical folks) that not being able to do so makes one incompetent.
Through retrospectives teams slowly but surely create a checklist of things to consider when estimating and my experience is that teams get better and better at anticipating the scope of a task and estimating it.
It seems that these toxic environments thrive when the top level of the management encourages this behaviour.
For example, when there is an abstract or undefined problem to be solved, the 'free thinking' people are super valuable because they can search for hidden requirements, think about edge cases and, most importantly, challenge established ideas to come up with a better solution.
On the other hand, sometimes the solution is clearly known and you just need to grind it out. Think about the code that you've written over and over again (for us it is market data feed handlers) and there's nothing novel about it. Just gotta get the work done. I've seen some people try to reinvent the wheel for these tasks and it's just not needed.
We had this joke at one company where we'd say "are you rewriting rsync?" because every once in a while someone would try to do something brand new and shiny when the tech was already defined and the parts needed to be assembled. Conversely, we also had some folks who did things that were incredibly creative and fresh. It's all about balance.
Problem is, execs would like 300M revenue now, with a buggy PoS, than 500M after 6-12 months. Because those 300M can let them make another crappy game and release it 6 months earlier (12 if both skip on quality). Then you are missing 400M revenue, but you get 600M 12 months earlier. That recoups costs and looks nicer on reports.
Or in other words, gamers hate buggy releases, but not enough to change the practice.
I've always been happiest working in places where I focused my efforts diving into the technology rather than interviewing salespeople to find out exactly what they promised the customer.
Tends to be a big signal for lack of personal accountability from the person saying it.
You want a narrower range than 0.25–9 months? You'll have to let me think about it. Maybe I can be just as certain that it will be done 1–5 months from now, if I get time to mentally run through the simulation, to borrow the terminology from upthread.
You want a narrower range than 1–5 months? I don't have the information I need to give you that. If you give me a couple of weeks to talk to the right people and start designing/implementing it, the next time we talk, maybe I have gotten the range down to 1–3 months.
I can always give you an honest range, but the more you let me work on it, the narrower it gets.
----
This is of course what's suggested in How To Measure Anything, Rapid Development, and any other text that treats estimation sensibly. An estimation has two components: location and uncertainty. You won't ever get around that, and by quoting a single number you're just setting yourself up for failure.
This is true, but it's not the tradeoff most people think. It's not "this way is better but takes more time", it's "spend time now vs. spend time later."
In my experience, you will always spend the time. Spending it earlier can be difficult when you're on a tight schedule which is driving the process, but you'll always spend at least as much time later.
not in mine, but I've seen a fair amount of very one-off, time-limited projects (the shortest being a client calling one morning for a very simple customized video playback app needed for that very evening, and a lot of it being only needed for a few months)
My best estimates were more along the lines of 'how long did this take last time' and then building a schedule by subdividing that.
This is where actual probabilities come in: if you give me 90 % probability ranges (i.e. you think there's a 90 % chance the actual time taken will fall inside the range you give me) that provides me with three extremely powerful tools:
1. First of all, I can use Monte Carlo techniques to combine multiple such estimations in a way that makes sense. This can be useful e.g. to reduce uncertainty of an ensemble of estimations. You can't do that with fuzzy labels because one person's range will be a 50 % estimation and someone else's will be an 80 % one.
2. I can now work these ranges into economic calculations. The expected value of something is the probability times consequence. But it takes a probability.
3. Third, but perhaps even more important: I can now verify whether you're full of shit or not (okay, the nicer technical term is "whether you're well-calibrated or not".) If you keep giving me 90 % ranges, then you can be sure I'm going to collect these and make sure that historically, the actual time taken falls into that range nine out of ten times. If it's not, you are overconfident and can be trained to be less confident.
The last point is the real game changer. A point estimate, or an estimate based on fuzzy labels, cannot ever be verified.
Proper 90 % ranges (or whatever probability range you prefer) can be verified. Suddenly, you can start applying the scientific method to estimation. That's where you'll really take off.
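As a minimal sketch of the first and third points (I'm treating each 90 % range as the 5th-95th percentile of a lognormal distribution, which is my own modelling assumption, not a given):

    import math
    import random

    tasks = [(2, 8), (1, 4), (5, 20)]   # each task's 90 % range in days

    def sample_duration(low, high):
        # interpret (low, high) as the 5th and 95th percentiles of a lognormal
        mu = (math.log(low) + math.log(high)) / 2
        sigma = (math.log(high) - math.log(low)) / (2 * 1.645)   # 1.645 ~ z at the 95th percentile
        return random.lognormvariate(mu, sigma)

    totals = sorted(sum(sample_duration(lo, hi) for lo, hi in tasks) for _ in range(10_000))
    print(f"project 90 % range: {totals[500]:.1f} to {totals[9499]:.1f} days")

    # calibration check: do historical actuals land inside their quoted ranges ~9 times out of 10?
    past = [((2, 8), 6.5), ((1, 4), 5.0), ((3, 10), 7.0)]   # (quoted range, actual days)
    hits = sum(lo <= actual <= hi for (lo, hi), actual in past)
    print(f"calibration: {hits}/{len(past)} actuals inside their ranges")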
A lot of the time when you are working the delays aren't just the uncertainty of an individual task; it's that you're working on several projects at once, you're attending meetings, etc, so that you might only average 2 hours / day working on a particular task or you might not be able to start a task for several weeks (depending on whether you work sequentially or in parallel).
You still need a realistic, achievable estimate for that first time, in case management calls your bluff, but distinguishing between "the amount of effort this will take" and "how long it will be before it is complete" can help set realistic expectations while making it harder for management to mistakenly think you need two months to complete a task that could be done in two weeks.
To be honest I still don't really think any of this stuff can be truly verified beyond actually doing it or having a very well understood set of requirements that have been worked against plenty of times before.
About verification I think you're right in a very specific sense: you clearly cannot verify that any single estimation is correct, range or not. However, meteorologists and other people dealing with inherent and impenetrable uncertainty have found out that a historic record of accuracy is as good as verification.
https://www.purplemath.com/modules/bibleval.htm
https://answersingenesis.org/contradictions-in-the-bible/as-...
Why does software have such extreme orders-of-magnitude variability? Anyone's guess. I like the perspective that software is made out of many pieces of little software, which are in turn made of even more smaller pieces of software. That fractal nature is a qualitative difference to people, which are not made of many tiny people. (As far as I know.)
For instance, we have been working with a general contractor on a house remodel, and he refuses to give ballpark estimates (time or money) for anything, I think out of fear that we’ll later hold his guesses against him; if I want an estimate he’ll only reply with something fairly narrow after several days or a week, after putting in unnecessarily rigorous effort.
Since we don’t know the field and he doesn’t perfectly understand our priorities and preferences, this slow feedback loop is very frustrating: it prevents us from iterating and exploring the space of possibilities, wastes his time precisely estimating stuff that we could make decisions about using a rough estimate, and wastes our time trying to get estimates from other sources which have less knowledge of the context.
ReRe's Law of Repetition and Redundancy [5] could benefit from a refinement that accounts for the inverse relationship between width-of-delivery-window and certainty-of-delivery-date... maybe:
A programmer can accurately estimate the schedule for only the repeated and the redundant. Yet,
A programmer's job is to automate the repeated and the redundant. Thus,
A programmer delivering to an estimated or predictable schedule is...
Not doing their job (or is redundant).
[5] https://news.ycombinator.com/item?id=25826476

You can know really well how long a lighting person will take to do a thing, or how long the camera will take to find an angle. Knowing when your client will move their arse to send you the test_data.csv is very much unknowable.
[1] https://www.quora.com/Why-are-software-development-task-esti...
https://www.joelonsoftware.com/2007/10/26/evidence-based-sch...
As implemented in Fogbugz:
80% of the feature will take 20% of the time. The last 20% will take 80% of the time once you factor in all the other things like:
- getting the feature working correctly to all specifications.
- thinking about larger scale impact of your code and how your team might react to it, and deciding on which implementation to take.
- resolving missing edge cases.
- previously undescribed features the design team overlooked, whose absence is now obviously wrong once they can see the feature in action.
- time it takes to roundtrip the review process.
- resolving unrelated bugs that code changes may have created.
- writing tests.
- manual testing.
- fixing bugs found in regression tests.
- deployment and deployment related errors (happens rarely, but you have to average it out).
Just getting to "I got this feature working!" seems to be only a small part of the whole process.

Good for game development might be "a good game", not "good code", for example.
(I know you likely know this, but it's an interesting discussion point I think!)
And I often use coastlines as an example to explain why software development is fractal. Will use your link in the future, thanks!