Most active commenters
  • janto(6)
  • kqr(5)
  • (4)
  • entrep(3)
  • necovek(3)
  • kornakar(3)
  • jcutrell(3)
  • dgb23(3)
  • girvo(3)

268 points behnamoh | 124 comments
1. airoftime ◴[] No.28667199[source]
That's a good one. I really like the sin graph representation.
replies(2): >>28667693 #>>28667738 #
2. mattbillenstein ◴[] No.28667416[source]
Ha, I always said 3, I never knew this existed...

You always see rookies make the mistake of just adding up the individual time of all the tasks - and engineers are famously bad at making horribly under-scoped estimates - so projects estimated this way have typically gone over time by a factor of three or so, as I've come to learn...

replies(1): >>28668041 #
3. th3iedkid ◴[] No.28667693[source]
you cannot unless co-sin'd
4. Miiko ◴[] No.28667738[source]
That's not a sin graph, though, but 2 semi-circles.
replies(1): >>28667826 #
5. OskarS ◴[] No.28667826{3}[source]
The arc-length of a sin curve is an elliptic integral, and project time estimation is hard enough without throwing elliptic integrals into the mix!
6. entrep ◴[] No.28667859[source]
Is this doable?

A better approach would be to break it down as much as possible. This has the following benefits:

  * You will identify tasks you haven't thought of

  * You will get a better picture of the total effort needed

  * It will be easier to provide grounds for your estimates to stakeholders

  * Developers will see progress as tasks get closed

  * Commits can be traced to specific tasks

  * Better estimates for remaining work during sprint/iteration/what-ever-you-call-it
replies(4): >>28667980 #>>28667993 #>>28668420 #>>28668669 #
7. janto ◴[] No.28667862[source]
In my head I usually do "an order of magnitude" increase to do things properly. 1h to 1d to 1w to 1M to 1Y.

That's kind of like a multiplier of τ=2π

replies(1): >>28667900 #
8. necovek ◴[] No.28667871[source]
For something with such a large margin for error, it can only be funny to use something with multiple digits of precision (like Pi is).

Unfortunately, not funny enough for me to want to use it, and the attempt to rationalize it only makes it worse. (iow, that's not a particularly funny joke either)

9. pantulis ◴[] No.28667897[source]
That's interesting because x3 is one of the axes of the famous "from code to software product" chart from Fred Brooks' "Mythical Man Month". But there are two of them, so really it's x9, almost an order of magnitude.
replies(1): >>28668349 #
10. porb121 ◴[] No.28667900[source]
....that's not an order of magnitude each time
replies(3): >>28667928 #>>28667983 #>>28669242 #
11. janto ◴[] No.28667928{3}[source]
a calendar "order of magnitude". I can't think of a better term.
replies(2): >>28667962 #>>28668449 #
12. powersnail ◴[] No.28667962{4}[source]
Perhaps “up a unit of measurement”?
replies(2): >>28667996 #>>28668802 #
13. choeger ◴[] No.28667980[source]
Let me guess, you work as a low-level manager?

What you are suggesting is basically micromanagement-by-scrum.

* Identification of tasks you didn't think of is a fallacy in software engineering. There are very few actual tasks when it comes to writing the software. Yes, sometimes you can break up a change, for instance across components or features, but most often that has already been done by the time a developer gets a task. Most of the additional tasks you can come up with are either trivial, come after the end of development (e.g., deployment), or are completely arbitrary and not self-contained (write this class or that function). Very seldom can a software development task be broken down into several roughly equally complex subtasks.

* It follows that you don't improve your estimate by writing down all these side tasks in your task tracker. You increase the busywork and friction and become less agile. These subtasks tend to become stale quite quickly when requirements change.

* Your stakeholders don't understand anything about software development. Unless your task is trivial to begin with, there is little ground in any estimate.

* Developers either see progress through code reviews, technical discussions, or the source code itself. No need to maintain a list of tasks in some tracker for them.

* Commits should be self-contained and have a meaningful message anyways. It should be trivial to trace them to the main task.

* Your estimates won't improve at all if you break down your task arbitrarily.

If you create subtasks like "write the tests" it might help a junior to structure their work approach, but in a professional environment it is pointless. Worse, if time is of the essence, some product owner might ask you to deploy "just an MVP" before that subtask is done, postponing it forever.

The only reason to break down a task is when there's a time-like dependency in the process. E.g., I can do X after Y gets deployed. Or I need to remove Z once Q is done. Mostly, tasks like Y or Q will involve operational aspects (data migration, experiments) so the subtask will have a different owner or at least a different context.

replies(1): >>28668121 #
14. umanwizard ◴[] No.28667983{3}[source]
“Order of magnitude” is a colloquial, imprecise term that doesn’t actually mean “exactly ten” (otherwise, people would just say “ten”).
replies(1): >>28668093 #
15. Scarblac ◴[] No.28667993[source]
> A better approach would be to break it down as much as possible.

Better still is to do that, and then multiply the end result by π.

(as explained in the article, it's mostly to compensate for scope creep, and that isn't in your list of benefits)

16. janto ◴[] No.28667996{5}[source]
sounds about right
17. atoav ◴[] No.28667999[source]
For predicting the daily schedules on a film set I always "ran a simulation" of what would be done that day and just summed the predicted minutes. The simulation ran in my head of course, but it included things like: Actors drinking coffee and chatting, costumes getting ready, Camera department forgot memory card in the car, lunch breaks, someone arrives late, etc.

Obviously the major chunk was always the scenes, and they are usually also the major contributor to the uncertainty of the prediction. E.g. working with people you don't know, weather, technical problems (broken, missing stuff), stuff that just won't work (animals, certain scenes with actors).

But in the end what always mattered was that there was a time plan for each day, and at the end of a day we would know whether we were A) faster than predicted, B) on time, or C) slower than predicted. The next day would then be restructured accordingly by the production, and usually you'd be back on time by the end of that.

I was usually spot on with my predictions and we never had any issue with getting the planned stuff done.

With programming the whole thing is harder, because it can be even more unpredictable. But what definitely always helps is having a feeling for whether you are too slow, on time, or have managed to build a time buffer for future doom.

replies(2): >>28669348 #>>28672442 #
18. telotortium ◴[] No.28668041[source]
3 is the Biblical value for pi, so that's okay.
replies(1): >>28672614 #
19. kornakar ◴[] No.28668066[source]
This reminds me of my game development job I had years back.

I was new to the field (but not new to software development) and there was this small software team doing programming tasks for the game. The lead developer was concerned about my performance after the few months I'd been there.

I remember him drawing an image exactly like the second picture in this article (an arrow going from A to B). He said that my performance was very poor, and then he drew another picture that was like the circle in the article.

The way I worked was searching for a solution, going in the wrong direction a few times, asking designers for more information, and then eventually landing on a solution (that worked, and users liked it).

But I was told this is the wrong way of doing software. I was not supposed to ask advice from the users (because the team "knew better").

He also told me that a good software developer takes a task, solves it (goes from A to B), and then takes another task.

After a few weeks I was fired from that job.

To this day I'm still baffled by this. The company was really successful and everyone knew how to make software. It seemed like a very harsh environment. Is it like this in the top software companies everywhere? Do the super-pro developers really just take a task and solve it without issues?

replies(12): >>28668113 #>>28668137 #>>28668138 #>>28668139 #>>28668191 #>>28668254 #>>28668778 #>>28669170 #>>28669289 #>>28669499 #>>28669533 #>>28669735 #
20. okamiueru ◴[] No.28668093{4}[source]
In an engineering context, I would interpret it to be "roughly ten times", because that is what it means. And the same argument applies: if they meant something else, they would just say "three", or what have you.

In this case h-d is 8x, d-w is 5x, w-m is 4x and m-y is 12x.

This is, after all, just unimportant pedantry, but if clear and simple communication is the goal, it probably makes more sense to describe it as the next calendar unit up.

replies(2): >>28668334 #>>28668731 #
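As an aside, the per-unit multipliers above can be sketched in a few lines (a toy illustration, assuming an 8-hour day, 5-day week, 4-week month, and 12-month year; the helper names are mine):

```python
# Working hours per calendar unit, assuming 8h days, 5d weeks, 4w months, 12m years.
UNITS = [("hour", 1), ("day", 8), ("week", 40), ("month", 160), ("year", 1920)]

def bump_to_next_unit(amount, unit):
    """Turn an estimate like (1, 'hour') into the next unit up: (1, 'day')."""
    names = [name for name, _ in UNITS]
    i = names.index(unit)
    if i + 1 == len(UNITS):
        raise ValueError("already at the largest unit")
    return amount, names[i + 1]

def multiplier(unit):
    """How much bigger the next unit is: 8x, 5x, 4x, or 12x, as computed above."""
    names = [name for name, _ in UNITS]
    i = names.index(unit)
    return UNITS[i + 1][1] / UNITS[i][1]
```

So `bump_to_next_unit(2, "day")` gives `(2, "week")`, a hidden 5x, which is exactly why the "order of magnitude" label draws pedantic replies.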
21. janto ◴[] No.28668098[source]
That's the kind of shortest path you'd get if two tasks appear on the straight line and you want to keep a minimum distance away from them :P
22. xupybd ◴[] No.28668113[source]
I've heard game development is pretty ruthless.
23. entrep ◴[] No.28668121{3}[source]
Developer (and team lead)

Surprised by the downvotes. I'm just sharing what I've learned the hard way over the last 6 years.

We can easily tell that the sprints where we managed to break something down further have been better estimated and more productive.

Call it micromanagement if you'd like. What is working for one team might not work for another.

replies(1): >>28672460 #
24. necovek ◴[] No.28668137[source]
There are tasks you are experienced with that you can pretty much jump on and just do them, but there's also the other 90%.

What's weirdest about your story is that you were let go a few weeks into your job. People usually get more time to get the hang of it even if they are senior, so I would mostly assume that it was about cultural fit rather than performance.

Some outsourcing/service (billed by the hour — which explains it for the most part) companies would look for very strict delivery cadence with focus on exactly the process you describe, but you'd be unlikely to have contact with the user in that case (just the customer).

Most importantly, if you ever get fired, be sure to ask for the explanation so you don't end up being baffled.

replies(1): >>28668823 #
25. resonious ◴[] No.28668138[source]
In my own experience (and opinion), your style of development often results in better quality code, fewer bugs, and cleaner UX. The tradeoff, as you experienced, is time.

I can also tell you that maximum code quality is not always the priority. This is especially true in games, where you ship and then move on to the next project. Even online games nowadays often shut down before long, so they don't need quite as much maintenance as a successful SaaS.

Again in my experience, the super-pro devs I know are particularly good at knowing when to be fast and reckless, and when to be slow and calculating.

replies(3): >>28668743 #>>28669135 #>>28670186 #
26. oakfr ◴[] No.28668139[source]
You most likely ran into a bad lead. His/her role was to coach & mentor you to help you become a better engineer, not just to draw diagrams on a white board.

"everyone knew how to make software" -> if that was the case, you should have found help and support around you organically.

I hope you're in a better situation now. Don't let this kind of experience hurt your self-esteem. You deserved a better place.

27. a_c ◴[] No.28668191[source]
The straight line from point A -> B can only be drawn backwards, i.e. in hindsight. When you zoom out, we all go in circles/iterations. It is true that in some domains some engineers might have better insight than their users, but not in all domains. It is important to keep the humility, or be conscious that there exists a world unknown to us.

If your former team lead went from A -> B with no problem, maybe he is part of the circle. After all, when you zoom in on the circle close enough, it appears to be a straight line. Or he could be a visionary, I can't tell.

An organization is like a shark, it needs to move forward. That organization can be a one-man team, where thinking overlaps with doing. In larger orgs, specialization is inevitable. The "no questions asked, get things done" way that your former team lead adopted is desirable to a certain degree, some of the time. But it is important to be aware of the duality. "Disagree but commit" is a virtue.

A fish can teach you to swim, but it probably doesn't know it is in water.

28. jiggawatts ◴[] No.28668254[source]
I've noticed that there are some managers who want their underlings to be just "meat robots" that do as they're told without question. Other managers value independent thinkers who can be given an abstract task and find a solution, even if that solution ends up being very different from what was originally envisaged.

The strange thing is that both approaches can be successful!

However, they are not compatible. At most, it's possible to have an out-of-the-box thinker above a team of robots, but that's about the maximum extent of mixing that works.

For example, in the robot-based team, managers generally don't explain any of the inputs and constraints that went into a decision to their underlings. This saves time, allowing them to manage more people to get more done. However, the creative types question everything, which irritates and frustrates them. In an all-creative team, managers will explain the backstory to the junior staff, giving them the information they require to seek alternative but compatible solutions. This takes longer, but then they can offload a lot of the decision making to the juniors.

Don't feel bad that you didn't fit in, it just means that you need to find a place with a corporate culture that suits you better.

replies(2): >>28669278 #>>28669656 #
29. capableweb ◴[] No.28668272[source]
This is good advice but forgets that people are part of how long something will take as well. If it's one engineer, it'll probably take $estimation * π, but if it's two engineers, you probably need to double that. If it's three, triple it. The new calculation should be something like "($estimation * π) * $peopleWorkingOnSameThing". Communication overhead and mistakes should not be underestimated.
replies(4): >>28668376 #>>28668394 #>>28668452 #>>28688773 #
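The comment's rule of thumb is easy to write down (a sketch of the commenter's heuristic, not an established model; the function name is mine):

```python
import math

def padded_estimate(honest_estimate, people=1):
    """(estimate * pi) * people-working-on-the-same-thing, per the comment above.
    The linear people term stands in for communication overhead and mistakes."""
    return honest_estimate * math.pi * people
```

Under this rule a 10-day task done by one engineer pads out to about 31 days, and adding a second engineer doubles it again, which is the point the replies below push back on.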
30. ◴[] No.28668329[source]
31. janto ◴[] No.28668334{5}[source]
For interest's sake, it looks like non-decimal reference values have been used: https://en.m.wikipedia.org/wiki/Order_of_magnitude
replies(1): >>28668913 #
32. TeMPOraL ◴[] No.28668349[source]
Aerospace says π, but they do separate normalization of time and costs:

> 29. (von Tiesenhausen's Law of Program Management) To get an accurate estimate of final program requirements, multiply the initial time estimates by pi, and slide the decimal point on the cost estimates one place to the right.

https://spacecraft.ssl.umd.edu/old_site/academics/akins_laws...

33. notimetorelax ◴[] No.28668376[source]
I agree that communication overhead exists, but I think you're ignoring the benefit of having 2 people working through a problem vs one person. I've seen much faster results because people tend not to get stuck when pairing with someone.
replies(1): >>28668475 #
34. Aeolun ◴[] No.28668394[source]
Wait, so you are saying that the net positive effect of adding an extra engineer is zero?
replies(3): >>28668455 #>>28668719 #>>28669416 #
35. lifeisstillgood ◴[] No.28668420[source]
I am going to jump in and defend this. Mostly.

Eisenhower's quote "Plans are useless, planning is essential" matters here. Having a clear checklist of things to do before release (same as a surgeon or pilot) is absolutely useful in realising just how much f######g work is ahead of you.

This helps the team slow down and be less optimistic (the developer's natural state).

Yes, I agree with the comments below that the code is the source of truth (and you need to find ways to sync from the code to other management systems, like tickets). The worse this sync is, the less software-orientated a company is.

But being able to link your commits to some "ID number" is totally helpful in this regard. No matter what we need to co-ordinate with others in the org, and that co-ordination had better be automated 95% of the time or we are in a world of meetings.

But yes, the essence of the article is "you can never predict the unknown, so multiply by X as a buffer". [#]

The distinction between the article and the commenter is the article hopes to compensate for "unknown future events" and the commenter believes he can reduce his own amount of accidental estimation mistakes. Both can co-exist.

It is reasonable to take into account all our own mental failures when making estimates (and the commenter's checklist style can help improve that) - but you cannot take into account the inherent problems (hence using pi).

I would even go so far as to say if you just do quick estimates, multiply by 9, if you take care in your estimates, you can just multiply by 3.

[#] The organisational problem with this is that it is fine for the estimators to multiply up, but the more humans in the reporting chain, the more they add buffers (every project manager knows to multiply estimates by 3), so very soon buffers get wildly out of hand. Having things like JIRA, where the estimators directly report their estimates (and senior mgmt gets the sum), has drastically changed the reliability of reporting through the organisation and is leading to the death of project management as a profession.

replies(1): >>28669475 #
36. Borrible ◴[] No.28668448[source]
Yes, but that's only half of the software development circle.
37. mdp2021 ◴[] No.28668449{4}[source]
I would say it is a "superset timeframe".
38. ◴[] No.28668452[source]
39. capableweb ◴[] No.28668455{3}[source]
Yes, more engineers working on the same problem = slower time to actually releasing anything. Unless you can find a way to divide the task into parts that don't impede the others, adding more people will make things take longer.
40. anotheryou ◴[] No.28668457[source]
Things I include that are easy to identify:

[my honest best estimate thinking of all I can think of] * X

   +1 on the factor for any of these:
   - always +1 right away (so it's x2)
   - tech is new for you
   - other teams involved
   - you don't have a clear idea of what tasks are involved (it's not yet another X)
   - you see additional risks
So if it's making yet another micro marketing website based on a pixel-perfect Photoshop design: x2

If the design is not pixel perfect or is missing responsive versions, so the customer gets involved: x3

If you refactor stuff into new tech and it changes APIs other teams use: new tech, other people, risks, no clear tasks: x5
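The checklist above reads as a small risk-scoring function; a sketch (the flag names are mine, and the commenter's own x5 example suggests the flags get judged loosely in practice):

```python
def estimate_factor(new_tech=False, other_teams=False,
                    unclear_tasks=False, extra_risks=False):
    """Start at x2 (honest estimate x1 plus the automatic +1 'right away'),
    then +1 per risk flag from the checklist above."""
    return 2 + sum([new_tech, other_teams, unclear_tasks, extra_risks])
```

So the routine pixel-perfect site scores x2, and adding one risk (say, the customer is involved) bumps it to x3, matching the first two examples.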

41. einpoklum ◴[] No.28668468[source]
My problem with multiplying my estimates by Pi, or any n > 1+epsilon, is that one's boss often looks at such estimates and asks "How can it take you so long to do just XYZ?" They would rather, in effect, be given a shorter estimate initially and settle for delays than be given a more conservative/inflated estimate, even if in the latter case there is rarely a delay.
42. TeMPOraL ◴[] No.28668475{3}[source]
In my experience, that's pretty high-variance. The perfect result - one where you get anywhere from 1.5x to 10x boost of productivity - requires pairing people who communicate well, are on relatively similar skill level[0], can focus, and who both have a good day. No distracting personal issues, no emotional problems, or other kinds of things that make you unable to focus in presence of other people. If any of those conditions isn't met, pairing up will waste time.

There's some confounding involved, of course - for example, a good rapport with teammates can mitigate the effects of private issues. But my point is that you can't just pair people up and expect productivity to improve; it'll likely decrease, and it'll likely swing wildly from day to day.

--

[0] - With respect to division of work on the task. Like e.g. a long time ago I was in a gamedev competition with a team of artists. I was nowhere near as good a programmer as they were artists, but we decided to make an art-heavy game, so our workloads balanced out and we proceeded almost in lock step.

43. jyriand ◴[] No.28668503[source]
Hofstadter's Law: It always takes longer than you expect, even when you take into account Hofstadter's Law
replies(1): >>28668918 #
44. ohthehugemanate ◴[] No.28668571[source]
The research on this stuff says that time estimates are extremely inaccurate and to be avoided. If you manage to be inaccurate by ONLY a factor of 3.14, that is unusually high accuracy.

Much more accurate are estimates of difficulty or complexity (small/medium/large, weighted numbers, whatever). The manager can then derive an average conversion rate to time (with error bars), and use that for medium/long-term forecasting. This has been demonstrated to be extremely accurate compared to time estimation (but it is useless and breaks if you apply time/points pressure to your team, or even let them think in terms of a time conversion rate).

This is not new research. Kahneman's Nobel prize about decisions under uncertainty was in 2002. Put away the superstitions and stop estimating with time!

replies(1): >>28673774 #
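The points-to-time conversion described above might look something like this (a sketch of the idea only; the function names and sample history are mine, not from any cited study):

```python
from statistics import mean, stdev

def hours_per_point(history):
    """history: (complexity_points, actual_hours) pairs from past iterations.
    Returns the average conversion rate and its spread (the 'error bars')."""
    rates = [hours / points for points, hours in history]
    return mean(rates), stdev(rates)

def forecast(points, history):
    """Forecast a (low, expected, high) time range for a new batch of work."""
    rate, spread = hours_per_point(history)
    return points * (rate - spread), points * rate, points * (rate + spread)
```

The team only ever emits complexity points; the time conversion lives with the manager, which is exactly what keeps the pressure dynamic described above from corrupting the estimates.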
45. throwawaylinux ◴[] No.28668669[source]
I also don't know why this is getting downvotes or accusations of micro managing.

Maybe it is that you said break it down as much as possible rather than as much as necessary.

I hate administrative busy work and micromanagement, but planning is essential for almost any large project, and planning means breaking things down into smaller steps. Estimates are data that drives all planning, so if your estimates are really that far wrong you have to take steps to manage that issue, not just multiply by 3 and call that your estimate.

Breaking down tasks further absolutely can help estimates (within reason) and it can help identify other tasks or dependencies etc.

Sometimes a task is just really difficult to give an estimate for no matter how much it is broken down. Sometimes you don't even know how to break it down or what exactly the steps along the way would be, you may never have done something similar before, you may be relying on development of novel techniques.

Still, in those cases you don't just multiply by 3 and call that your estimate. You have to call it out as a major risk in your plans, and the project may have to be changed accordingly. For example, it might be decided that the company won't make commitments or wider plans contingent on the success of this project, but rather run a prototype project instead.

46. wruza ◴[] No.28668719{3}[source]
It is usually a net negative until 5-7 people and a leader who can think everything over and write/schedule/dispatch crystal-clear requirements. Teams of identical skills only reduce (or increase, depending on the definition) bus factors and smooth out refocusing time and stress, but they don't add real productivity. Software development is a tetromino game which turns into a pentomino game and so on with extra members added.
47. evanb ◴[] No.28668731{5}[source]
> I would interpret it to be "roughly ten times", because that is what it means

Only if you work base-10!

48. kthejoker2 ◴[] No.28668743{3}[source]
My boss once told me ..

When they say 'Good, Fast, Cheap: pick 2', everybody always assumes the point is you have to pick Good and the choice is just between Fast and Cheap.

But Fast and Cheap has its place, ask any game developer ...

replies(1): >>28702124 #
49. mrzool ◴[] No.28668761[source]
> Oh, and that to-do list you made last weekend? It’s no coincidence you only got about a third of the things on the list done. ;-)

He nailed that one for sure.

50. stkni ◴[] No.28668778[source]
This is a great question! When I was a developer I used to develop software the way you were told not to. I did it that way because the requirements were always very vague and a lot of gaps needed to be filled. But when I was a project manager I thought that way of development was wasteful and would rather developers didn't do what I had done.

So I suppose the real answer depends on the environment you're in. If the project is meant to be agile then you probably need a bit of interaction with the user to get the job done. If the project is not agile and all the requirements could be known up-front then you're probably wasting effort having programmers determine what those requirements are. IMHO.

51. bluenose69 ◴[] No.28668802{5}[source]
This is what I was taught: double the estimate, and increase to the next unit. It was mostly as a joke, though.

The wise scheme, as has been pointed out, is to adjust predictions during a project. If the task that was initially planned to take a day is routinely taking 2 days (or 2 hours), adjust future plans accordingly.

Higher-level managers are sometimes unhappy with this scheme, while middle-level ones value the accuracy of such predictor-corrector schemes.

This gets us into a related topic of the depth of management structures, and I think the military scheme (of each person directing a roughly fixed number of persons, so the number of levels is proportional to the log of workforce size) might be worth considering.

52. kornakar ◴[] No.28668823{3}[source]
To clarify, I was laid off a few weeks after the feedback. I had been on the job for a few months.

I asked for the reason, and it was indeed the performance.

It was just the diagrams in the original article that reminded me of this. It just didn't make any sense to me that one would "just solve" a problem at hand without considering other options.

But in my (15 years of) experience the "pi factor" is indeed quite accurate as there is always something surprising that comes up along the way, be it specification changes or technical issues.

replies(1): >>28669489 #
53. flavius29663 ◴[] No.28668913{6}[source]
That is beside the point, since the OP changed the magnitude with every size. If it had been 8x at each iteration, we would've had fewer pedantic comments here.
replies(1): >>28669226 #
54. Ahan515 ◴[] No.28668918[source]
So true!
55. ekianjo ◴[] No.28669036[source]
Rather than spending time on not-so-accurate planning, it's much better to break down long tasks into small ones, then measure continuously how fast you go through them and revise your assumptions every day or every week.
56. aasasd ◴[] No.28669078[source]
Two points to elaborate, already mentioned in the neighbour thread on estimation.

Firstly, programmers are notoriously and famously bad at estimation, so it's highly likely that you do need to find the multiplier for yourself. This illustration is something like fifteen years old by now: https://pbs.twimg.com/media/EyNRiYQXMAIshEj.jpg

Secondly, wise managers know to expect this, and do the multiplication themselves. But if you have managers who do the opposite and always scale the estimate back, expecting the results earlier than promised, then you need to use a second multiplier—which the manager will unknowingly annul, getting closer back to your actual figure. Probably something around 1.5 to 2.

However, if you need the second multiplier then you also gotta ask yourself why you're staying in that company.

57. still_grokking ◴[] No.28669135{3}[source]
The result of this "we don't need quality" line of thinking is that almost every AAA game release nowadays is a gigantic fiasco. Nobody actually wants to buy alpha-quality shit any more!

I guess games could have much higher sales (especially when the thing is new and hot) if they didn't release utter garbage most of the time. Just go and ask people whether they're keen on buying a bunch of bugs for 60 to 80 bucks. Most people aren't. Only a very small group of die-hard fans does this.

The whole indie scene likely wouldn't stand a chance if the big players were able to deliver stuff that actually works before the first couple of patches. It's a kind of joke that one or a few people can build much better games than multi-billion companies. The problem is the mindset at the latter!

I'm not advocating for "maximum code quality", as you don't get that anywhere anyway for any reasonable price. But game releases are just far beyond any pain point. The usual advice is: don't touch, don't buy, until they've proved that they're willing to fix their mess!

The games industry would need to walk a really long distance before it could rid itself of this public perception. But they need to start somewhere. Otherwise their reputation will reach absolute zero real soon now. They're already almost there…

replies(2): >>28669658 #>>28702128 #
58. lrem ◴[] No.28669170[source]
I work at Google and it's nothing like this (as long as by "the users" you mean the two-four teams you interface with, not the one-two billion humans using the end product). Your approach to problems would fit right in, and the described manager wouldn't keep managing for long.

On the other hand, I once mentored a person working in another company. I've been giving advice that I would give to a junior Googler. They got fired :(

59. KingOfCoders ◴[] No.28669181[source]
Software estimation is fractal. The closer you look, the more details you see.
replies(1): >>28679190 #
60. dghughes ◴[] No.28669188[source]
The Scotty principle. On Star Trek, Scotty told Geordi you never tell the Captain how long a job will take; you always pad the time significantly.
61. ◴[] No.28669200[source]
62. janto ◴[] No.28669226{7}[source]
Having quotation marks around "order of magnitude" sure didn't stop pedantic comments. I'm not sure what can.
63. Guthur ◴[] No.28669236[source]
I have broader issues with estimates and deadlines, but when pushed I follow a general rule.

If the problem is not some trivial configuration or otherwise mundane change, I measure it in weekly units. With weekly units it's unlikely for you to have a blowout of more than 50% without clearly seeing it coming, whereas when you say it'll take a day or two you can easily have a blowout of 100-200%.

replies(1): >>28669274 #
64. Stratoscope ◴[] No.28669242{3}[source]
If you're looking for precision when we're estimating, you may be in for a rough time.
65. jcutrell ◴[] No.28669274[source]
At a former job where I had some organizational influence, we started doing this and called them "team weeks." My goal was to sell the week x team model to avoid a constant stream of tiny straggling tasks. (It was an agency.)

I think the model works, but it didn't totally match that agency. It's a good theory though.

66. Msw242 ◴[] No.28669278{3}[source]
I like how you didn't say "x is bad"

Though I feel that delegating decision-making scales way better

67. andreareina ◴[] No.28669282[source]
There was a claim I saw somewhere that the actual time required fits an exponential distribution with the estimate as the scale factor. Does anybody know the actual source, or a pointer thereto? This is assuming that the study actually exists and it isn't a "just so" story...
replies(1): >>28670673 #
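Assuming the claimed model holds (actual duration exponentially distributed, with the estimate as the scale), a quick simulation shows its most counterintuitive property: the mean matches the estimate while the median is only about 69% of it. The simulation below is my own illustration of the model, not the study being asked about.

```python
import random

def simulate(estimate, n=100_000, seed=0):
    """Sample 'actual' durations from an exponential distribution whose
    mean equals the estimate, and return (sample mean, sample median)."""
    rng = random.Random(seed)
    samples = sorted(rng.expovariate(1 / estimate) for _ in range(n))
    return sum(samples) / n, samples[n // 2]

# Under this model most tasks finish "early" (median ~= estimate * ln 2),
# but a heavy tail of blowouts drags the mean up to the full estimate.
```

That shape would match the common experience that individual tasks often beat their estimates while projects as a whole still run late.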
68. dgb23 ◴[] No.28669285[source]
People tend to want to hear or give the shortest estimate that seemingly sounds feasible.

I don't do that anymore. I try to push estimates as high as possible and then collaboratively cut down on requirements/promises/features to match an expected (time) budget.

This often leads to more pragmatic work items and sensible prioritization from the start. And it is an opportunity for general communication and understanding the value of things.

replies(3): >>28669314 #>>28669412 #>>28671220 #
69. inDigiNeous ◴[] No.28669289[source]
I've met similar thinking. At one company where I worked for some time in a senior role, while being new there, I got scoffed at for asking the non-senior developers on the team for information on how some parts of the software worked.

Maybe you went over somebody's head, or maybe rank played a part in this. Some people also want it to be their way or the highway; unfortunately I've seen this many times in different software companies.

Probably in this kind of teams execution is what matters, not thinking for yourself.

70. jcutrell ◴[] No.28669303[source]
The most important thing I’ve learned about estimation is that, especially in larger/more established companies, it really only matters in two basic scenarios:

- Order of magnitude differences - hard deadlines (like a public event you are supporting)

Usually estimations between two things don’t matter as much as prioritizing those two things. Work on the thing that matters the most. If it takes 2 days or 5 days, in most cases it won’t matter - it’s still what you would work on, so functionally estimation wasn’t needed.

If it is a difference of 2 days vs 2 weeks (or some meaningful magnitude to you) then you start having opportunity cost that factors into prioritizing.

Estimation is more often used as a tool for “control and prediction” - how many times have you had incredible success because you controlled or predicted an outcome?

71. jcutrell ◴[] No.28669314[source]
My theory is that this has to do with our difficulty dealing with delayed gratification.

Tell people a short number now = immediate praise, long term pain.

Tell people a higher estimate, immediate negative; finish in time = delayed gratification.

72. regularfry ◴[] No.28669348[source]
Tom DeMarco talks about modelling-based estimates in Waltzing With Bears, mainly to break people out of the error they fall into of treating the soonest possible time something could be done as a realistic estimate of when something will actually be finished. There are also approaches like Function Point Analysis which provide an explicit model that you can calibrate your team against.

It's doable, but what people tend to forget is that it's work. If you want an estimate, I need to be able to expend effort providing it. It's an engineering activity that needs organisational support to do it at all well, but often you find an expectation that people will be able to pull an estimate out of a hat just having heard the faintest description of the problem, and there can often be a tacit belief (usually but not entirely from non-technical folks) that not being able to do so makes one incompetent.

replies(1): >>28669973 #
73. kilroy123 ◴[] No.28669412[source]
I agree but you can often miss out on work if you do this as a freelancer.

Last potential customer I spoke with I did this. They were insulted by my estimate and practically hung up the phone on me.

replies(1): >>28670965 #
74. hemric ◴[] No.28669416{3}[source]
In my experience it may take longer to get stuff done most of the time, BUT what is implemented will be better engineered and more robust. As long as you do some pair programming and pair code review from time to time, the fact that you must develop a system that is understandable by more people leads to a better architecture.
75. hvidgaard ◴[] No.28669475{3}[source]
Most programming can be made modular in a way that many, sometimes even all, functional requirements map to distinct areas of the codebase. It makes sense to break them down into singular subtasks to make the various requirements explicit. Add general things such as documentation, deployment, etc., and you're able to provide far better estimates.

Through retrospectives teams slowly but surely create a checklist of things to consider when estimating and my experience is that teams get better and better at anticipating the scope of a task and estimating it.

76. necovek ◴[] No.28669489{4}[source]
To me, 3x is not "accurate" at all: it might be the approximate average, but I've worked on tasks that take anywhere from 0.1x to 10x the original estimate (or rather, a guess). Some were even infinite, in that they were scrapped when the real cost was uncovered.
replies(1): >>28669710 #
77. lloydatkinson ◴[] No.28669499[source]
I worked at an "agency" that could be described exactly like that, though they were not as successful and additionally have a bad reputation amongst recruiters and the local tech scene as a result of the gaslighting, unreasonable expectations, and deliberately moving goal posts in order to fluster people and have "reason" to fire them.

It seems that these toxic environments thrive when the top level of the management encourages this behaviour.

78. Igelau ◴[] No.28669533[source]
You showed them you're paying attention. That usually scares the crap out of people who have something to hide. The lesson here is that you were working in a den of vipers, and you're better off elsewhere.
79. larrydag ◴[] No.28669610[source]
One method of estimating projects is using the old method Activity Based Costing. This can be used to estimate time-related activities as well. The key is figuring out your total utilization of time cost estimates and per unit of time cost drivers.

https://hbr.org/2004/11/time-driven-activity-based-costing

80. SkipperCat ◴[] No.28669656{3}[source]
You're right, both approaches are important, but sometimes the task at hand requires one or the other - but not both.

For example, when there is an abstract or undefined problem to be solved, the 'free thinking' people are super valuable because they can search for hidden requirements, think about edge cases and, most importantly, challenge established ideas to come up with a better solution.

On the other hand, sometimes the solution is clearly known and you just need to grind it out. Think about the code that you've written over and over again (for us it is market data feed handlers) and there's nothing novel about it. Just gotta get the work done. I've seen some people try to reinvent the wheel for these tasks and it's just not needed.

We had this joke at one company where we'd say "are you rewriting rsync?" because every once in a while someone would try to do something brand new and shiny when the tech was already defined and the parts needed to be assembled. Conversely, we also had some folks who did things that were incredibly creative and fresh. It's all about balance.

81. e3bc54b2 ◴[] No.28669658{4}[source]
> games could have much higher sales

Problem is, execs would like 300M revenue now, with a buggy PoS, than 500M after 6-12 months. Because those 300M can let them make another crappy game and release it 6 months earlier (12 if both skip on quality). Then you are missing 400M revenue, but you get 600M 12 months earlier. That recoups costs and looks nicer on reports.

Or in other words, gamers hate buggy releases, but not enough to change the practice.

82. SkipperCat ◴[] No.28669683[source]
A good project manager helps flatten the curve. The most important thing they can do is prevent engineers from having to be 'specification detectives', where the engineer has to search other business groups for the actual details of the deliverables.

I've always been happiest working in places where I focused my efforts diving into the technology rather than interviewing salespeople to find out exactly what they promised the customer.

83. kornakar ◴[] No.28669710{5}[source]
That's true, and 3.14 is a good average to start with.
84. siva7 ◴[] No.28669735[source]
Nope, this ticks all the signs for me of a bad/incompetent team lead. I’ve seen lots of teams in my career. I certainly wouldn’t promote or hire someone as team lead if they answered the question about what makes a good software developer like this guy did. This is a sign of management by metrics, which works short-term but ultimately leads to the demise of the product.
85. booleandilemma ◴[] No.28669827[source]
Yeah, I could see my team cargo culting this.
86. brightball ◴[] No.28669858[source]
Every time I’ve heard somebody say this, usually in management, it has been because they don’t take into account their multiple disruptions of the developers throughout the process.

Tends to be a big signal for lack of personal accountability from the person saying it.

87. kqr ◴[] No.28669973{3}[source]
This is where range-based estimations really shine. If you want an estimation right now, I will tell you on the spot that, "I'm 95 % certain it will be done no later than nine months from now, but probably sooner. However, I know it won't be done this week."

You want a narrower range than 0.25–9 months? You'll have to let me think about it. Maybe I can be just as certain that it will be done 1–5 months from now, if I get time to mentally run through the simulation, to borrow the terminology from upthread.

You want a narrower range than 1–5 months? I don't have the information I need to give you that. If you give me a couple of weeks to talk to the right people and start designing/implementing it, the next time we talk, maybe I have gotten the range down to 1–3 months.

I can always give you an honest range, but the more you let me work on it, the narrower it gets.

----

This is of course what's suggested in How To Measure Anything, Rapid Development, and any other text that treats estimation sensibly. An estimation has two components: location and uncertainty. You won't ever get around that, and by quoting a single number you're just setting yourself up for failure.

replies(5): >>28670533 #>>28672402 #>>28674059 #>>28674824 #>>28680172 #
88. jmnicolas ◴[] No.28670070[source]
I'm absolutely unable to give a reliable estimation so I use the "Duke Nukem Forever" method: when it's done.
89. taneq ◴[] No.28670186{3}[source]
> The tradeoff, as you experienced, is time.

This is true, but it's not the tradeoff most people think. It's not "this way is better but takes more time", it's "spend time now vs. spend time later."

In my experience, you will always spend the time. Spending it earlier can be difficult when you're on a tight schedule which is driving the process, but you'll always spend at least as much time later.

replies(2): >>28670532 #>>28671877 #
90. jcelerier ◴[] No.28670532{4}[source]
> In my experience, you will always spend the time.

not in mine, but I've seen a fair amount of very one-off, time-limited projects (the shortest being a client calling a morning for a very simple customized video playback app needed for that very evening, and a lot of it being only needed for a few months)

replies(1): >>28680140 #
91. jaymzcampbell ◴[] No.28670533{4}[source]
I've been finding this approach incredibly useful too with teams and trying to manage the needs and concerns of business vs product vs engineering. I quite liked how it's described here: https://spin.atomicobject.com/2009/01/14/making-better-estim...
replies(1): >>28670639 #
92. PerkinWarwick ◴[] No.28670535[source]
I can understand the need for a multiplier as a lot of estimates are based on (a)working continuously and (b)nothing going wrong.

My best estimates were more along the lines of 'how long did this take last time' and then building a schedule by subdividing that.

93. kqr ◴[] No.28670639{5}[source]
What I disagree with when it comes to fuzzy labels like "aggressive but possible" or "highly probable" is that they're still unverifiable and, frankly, just as meaningless as point estimates.

This is where actual probabilities come in: if you give me 90 % probability ranges (i.e. you think there's a 90 % chance the actual time taken will fall inside the range you give me) that provides me with three extremely powerful tools:

1. First of all, I can use Monte Carlo techniques to combine multiple such estimations in a way that makes sense. This can be useful e.g. to reduce uncertainty of an ensemble of estimations. You can't do that with fuzzy labels because one person's range will be a 50 % estimation and someone else's will be an 80 % one.

2. I can now work these ranges into economic calculations. The expected value of something is the probability times consequence. But it takes a probability.

3. Third, but perhaps even more important: I can now verify whether you're full of shit or not (okay, the nicer technical term is "whether you're well-calibrated or not".) If you keep giving me 90 % ranges, then you can be sure I'm going to collect these and make sure that historically, the actual time taken falls into that range nine out of ten times. If it doesn't, you are overconfident and can be trained to be less confident.

The last point is the real game changer. A point estimate, or an estimate based on fuzzy labels, cannot ever be verified.

Proper 90 % ranges (or whatever probability range you prefer) can be verified. Suddenly, you can start applying the scientific method to estimation. That's where you'll really take off.
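A minimal sketch of point 1, assuming (purely for illustration) that each 90 % range comes from a lognormal distribution, with made-up task ranges:

```python
import math
import random

random.seed(0)

def lognormal_from_range(low, high):
    """Fit a lognormal so that [low, high] is the central 90% interval."""
    z = 1.645  # z-score bounding a central 90% interval
    mu = (math.log(low) + math.log(high)) / 2
    sigma = (math.log(high) - math.log(low)) / (2 * z)
    return mu, sigma

# Hypothetical 90% range estimates for three tasks, in weeks
tasks = [(1, 5), (2, 8), (0.5, 3)]
params = [lognormal_from_range(lo, hi) for lo, hi in tasks]

# Monte Carlo: sample each task, sum, repeat
totals = sorted(
    sum(random.lognormvariate(mu, sigma) for mu, sigma in params)
    for _ in range(50_000)
)

p5 = totals[int(0.05 * len(totals))]
p95 = totals[int(0.95 * len(totals))]
# The combined 90% range is relatively narrower than naively adding the
# individual range endpoints (which would give 3.5 to 16 weeks).
print(f"90% range for the whole project: {p5:.1f} to {p95:.1f} weeks")
```

The point is that uncertainties partially cancel when you sum distributions rather than endpoints, which you simply cannot do with fuzzy labels.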

replies(2): >>28671334 #>>28672487 #
94. jrh206 ◴[] No.28670673[source]
I think this is the article that you’re recalling: https://erikbern.com/2019/04/15/why-software-projects-take-l...
95. dgb23 ◴[] No.28670965{3}[source]
We can't read the mind of that potential customer. Did they want cheap, fast labor? Probably. Do you optimize for that? I assume not. You might have dodged a bullet or maybe there was a communication issue.
96. saalweachter ◴[] No.28671220[source]
I will sometimes give an estimate in multiple ways -- "This takes about four weeks for the engineering work, but unless we remove all other priorities it will take about 3 months to complete."

A lot of the time when you are working the delays aren't just the uncertainty of an individual task; it's that you're working on several projects at once, you're attending meetings, etc, so that you might only average 2 hours / day working on a particular task or you might not be able to start a task for several weeks (depending on whether you work sequentially or in parallel).

You still need a realistic, achievable estimate for that first number, in case management calls your bluff, but distinguishing between "the amount of effort this will take" and "how long it will be before it is complete" can help set realistic expectations while making it harder for management to mistakenly think you need two months to complete a task that could be done in two weeks.

replies(1): >>28671257 #
97. dgb23 ◴[] No.28671257{3}[source]
Oh yes, the ETA should always be significantly higher than the work estimate. I sometimes say I work in my sleep.
98. jaymzcampbell ◴[] No.28671334{6}[source]
I understood the fuzzy labels to still only refer to a specific probability range, e.g. the meaning of "aggressive but possible" relating to the likes of your "I'm 95 % certain it will be done no later than nine months from now..." example. Those labels seemed to at least help explain "this isn't just a high-ball figure".

To be honest I still don't really think any of this stuff can be truly verified beyond actually doing it or having a very well understood set of requirements that have been worked against plenty of times before.

replies(1): >>28671818 #
99. kqr ◴[] No.28671818{7}[source]
Sure, but it's important to spell out exactly which probability range they refer to -- unless you ground people in concrete numbers, they have a tendency to think they mean the same thing but actually mean very different things. (This is known as the illusion of agreement, for further reference.)

About verification I think you're right in a very specific sense: you clearly cannot verify that any single estimation is correct, range or not. However, meteorologists and other people dealing with inherent and impenetrable uncertainty have found out that a historic record of accuracy is as good as verification.

100. aidenn0 ◴[] No.28671877{4}[source]
You only spend the time later if you're still working on that project later. It could have failed, done its job, or you could have moved on to the next project, leaving some other poor schmuck to spend the time later.
101. regularfry ◴[] No.28672402{4}[source]
Absolutely, yes, and if you're in an organisation that's mature enough to handle ranges responsibly and not treat the lower number as a prediction, that's absolutely the best way to do it.
replies(1): >>28673737 #
102. frazbin ◴[] No.28672442[source]
dang, software planning is harder than herding hundreds of entertainment people? Not what I would have expected! I always assumed the 'unknown unknowns' were much larger in real life enterprises than in software ones, and that'd make planning harder. A lot of advantages come from software being made out of formal systems.
replies(2): >>28673825 #>>28676557 #
103. SpicyLemonZest ◴[] No.28672460{4}[source]
I've had that experience too, but it's become increasingly clear to me over time that the causation flows the other direction. The sprints that feel more productive are the ones that happened to have a lot of break-down-able tasks; the sprints that feel less productive are the ones where you have an important but hard problem with no clear solutions or break points. That all works out fine for teams that have to do a lot of greenfield development, sure, but does it reflect anything other than simple ease of planning?
replies(1): >>28672646 #
104. frazbin ◴[] No.28672487{6}[source]
mind blown
105. adamrezich ◴[] No.28672614{3}[source]
this is not exactly true; here's some additional context for the curious:

https://www.purplemath.com/modules/bibleval.htm

https://answersingenesis.org/contradictions-in-the-bible/as-...

replies(1): >>28672898 #
106. entrep ◴[] No.28672646{5}[source]
> The sprints that feel more productive are the ones that happened to have a lot of break-down-able tasks

That’s a great point which I will take with me. Might be somewhat biased since we’ve only done greenfield.

107. ◴[] No.28672724[source]
108. telotortium ◴[] No.28672898{4}[source]
That's very interesting - thanks!
109. kqr ◴[] No.28673737{5}[source]
Whenever I speak to people who would do that, I leave the lower end of the range unspecified. (I.e. instead of 90 % between x and y, I phrase it as 95 % less than y.)
110. spacemark ◴[] No.28673774[source]
So basically, engineers estimate complexity and the manager converts that to time? Still sounds like estimating with time to me.
replies(1): >>28702158 #
111. kqr ◴[] No.28673825{3}[source]
Software has variability spanning multiple orders of magnitude. In entertainment, you might get one or two extras fewer or more than you needed, but you won't suddenly stand there with a hundred or thousand times more extras than you needed. Similarly, equipment will be hours or days away from where it's supposed to be, but you won't suddenly find out it got dumped on another planet.

Why does software have such extreme orders-of-magnitude variability? Anyone's guess. I like the perspective that software is made out of many pieces of little software, which are in turn made of even smaller pieces of software. That fractal nature is a qualitative difference from people, who are not made of many tiny people. (As far as I know.)

112. jacobolus ◴[] No.28674059{4}[source]
I wish more people were willing to provide quick wide-interval estimates.

For instance, we have been working with a general contractor on a house remodel, and he refuses to give ballpark estimates (time or money) for anything, I think out of fear that we’ll later hold his guesses against him; if I want an estimate he’ll only reply with something fairly narrow after several days or a week, after putting in unnecessarily rigorous effort.

Since we don’t know the field and he doesn’t perfectly understand our priorities and preferences, this slow feedback loop is very frustrating: it prevents us from iterating and exploring the space of possibilities, wastes his time precisely estimating stuff that we could make decisions about using a rough estimate, and wastes our time trying to get estimates from other sources which have less knowledge of the context.

113. composer ◴[] No.28674824{4}[source]
> An estimation has two components: location and uncertainty

ReRe's Law of Repetition and Redundancy [5] could benefit from a refinement that accounts for the inverse relationship between width-of-delivery-window and certainty-of-delivery-date... maybe:

  A programmer can accurately estimate the schedule for only the repeated and the redundant. Yet,
  A programmer's job is to automate the repeated and the redundant. Thus,
  A programmer delivering to an estimated or predictable schedule is...
  Not doing their job (or is redundant).
[5] https://news.ycombinator.com/item?id=25826476
114. atoav ◴[] No.28676557{3}[source]
Shooting a film is very much an act of discipline to a nearly militaristic degree if you are on a good set and the film isn't trying crazy new experimental things.

You can know really well how long a light person will take to do a thing, or how long the camera will take to find an angle. Knowing when your client will move their arse to send you the test_data.csv is very much unknowable.

115. redler ◴[] No.28679190[source]
I've found this "story" by Michael Wolfe [1] (warning: Quora link that you may need to open in an incognito window) to be a fun way to explain why accurate software estimation is so difficult.

[1] https://www.quora.com/Why-are-software-development-task-esti...

replies(1): >>28703315 #
116. taneq ◴[] No.28680140{5}[source]
True, that’s an exception. It reminds me of the way some missile control software never bothers to free memory because by the time it runs out of RAM it’s already exploded…
117. dwd ◴[] No.28680172{4}[source]
I still don't understand why Spolsky's Evidence Based Scheduling didn't get much traction. Defining the completion date as a range of probabilities makes the most sense.

https://www.joelonsoftware.com/2007/10/26/evidence-based-sch...

As implemented in Fogbugz:

https://fogbugz.com/evidence-based-scheduling/
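The core of EBS, as I understand it, is a short Monte Carlo loop over historical velocities (estimate divided by actual); a minimal sketch with made-up numbers:

```python
import random

random.seed(1)

# Historical velocity = estimated / actual hours, per completed task.
# A velocity below 1.0 means that task ran over its estimate.
history = [1.0, 0.8, 1.2, 0.5, 0.9, 0.6, 1.1, 0.4]  # hypothetical record

remaining_estimates = [8, 16, 4, 12]  # hours of estimated backlog

# Each simulation: divide every remaining estimate by a randomly drawn
# historical velocity, then sum to get one possible completion time.
totals = sorted(
    sum(est / random.choice(history) for est in remaining_estimates)
    for _ in range(10_000)
)

# The output is a distribution of ship dates, not a single number.
for pct in (50, 75, 95):
    hours = totals[int(pct / 100 * len(totals)) - 1]
    print(f"{pct}% chance of finishing within {hours:.0f} hours")
```

Because the velocities are measured rather than guessed, the developer's systematic optimism is baked into the prediction automatically.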

118. RedBeetDeadpool ◴[] No.28688773[source]
Don't forget politics. Depending on the team, rework can be demanded by some alpha male positioning for power, just so he can assert his dominance.
119. RedBeetDeadpool ◴[] No.28688883[source]
This seems to follow my experience, although I attributed it to the Pareto principle.

80% of the feature will take 20% of the time. The last 20% will take 80% of the time once you factor in all the other things like:

  - getting the feature working correctly to all specifications.
  - thinking about larger scale impact of your code and how your team might react to it, and deciding on which implementation to take.
  - resolving missing edge cases.
  - previously undescribed features the design team overlooked, but is now obviously wrong now that they can see it in action.
  - time it takes to roundtrip the review process.
  - resolving unrelated bugs that code changes may have created.
  - writing tests.
  - manual testing.
  - fixing bugs found in regression tests.
  - deployment and deployment related errors (happens rarely, but you have to average it out).
Just getting to "I got this feature working!", seems to only be a small part of the whole process.
120. girvo ◴[] No.28702124{4}[source]
Thats because "Good" is a subjective ideal, and one that can be applied to different axes.

Good for game-development might be "a good game", not "good code", for example.

(I know you likely know this, but it's an interesting discussion point I think!)

121. girvo ◴[] No.28702128{4}[source]
> Nobody wants actually to buy alpha quality shit any more!

Sadly, I do not think that is true. Plenty of those games continue to sell in huge numbers.

122. girvo ◴[] No.28702158{3}[source]
> manager converts that to time

Based on past data, I think. But yeah it's still not great IMO, even if it might be slightly more accurate in some circumstances.

123. KingOfCoders ◴[] No.28703315{3}[source]
"My" idea was also inspired by coast lines, in the initial examples for fractals with the English coastline.

And I often use coastlines as an example to explain why software development is fractal. Will use your link in the future, thanks!