
Focus on decisions, not tasks

(technicalwriting.dev)
293 points kaycebasques | 12 comments
1. ChrisMarshallNY ◴[] No.41883501[source]
This is what I generally mean by taking a "heuristic approach."

I feel that we need to have a "fuzzy logic" approach to our work.

However, that works best when the engineer is somewhat experienced.

If they are inexperienced (even if very skilled and intelligent), we need to be a lot more dictatorial.

replies(2): >>41883516 #>>41883959 #
2. Swizec ◴[] No.41883516[source]
Thinking in Bets has been one of the most useful books for how I approach software engineering. It's not even about code, just how to make decisions effectively in limited-information environments.
replies(1): >>41883593 #
3. kaycebasques ◴[] No.41883593[source]
Love that book. Such a powerful idea to phrase your predictions as percentages rather than absolutes. Apparently the Super Bowl anecdote is controversial, though; the conclusions to draw from it are very debatable.
replies(1): >>41883677 #
4. Swizec ◴[] No.41883677{3}[source]
I don’t remember the specific anecdotes too well, but the lessons make intuitive sense and feel useful.

The one that sticks in my mind most is that a good decision can have a bad outcome, and a good outcome doesn’t always mean the decision was good.

5. RossBencina ◴[] No.41883959[source]
I have no idea what you mean by taking a "fuzzy logic" approach to work. Could you expand on that a bit, please?
replies(1): >>41885109 #
6. ChrisMarshallNY ◴[] No.41885109[source]
Well, “fuzzy logic” is kind of a dated term. I don’t think it has been used in software development for twenty years.

TL;DR: it basically means not having “hard and fast” boundaries, and instead having ranges of target values and “rules” for determining target states, as opposed to “milestones,” so targets are determined one at a time.
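A minimal sketch of what "ranges of target values plus rules" could look like in code. Everything here is made up for illustration (the membership shape, the candidate values, the range bounds); the point is only that a target is scored on a 0..1 scale instead of pass/fail, and a rule picks the next target from the candidates.

```python
def membership(x, lo, hi, ramp):
    """Degree (0..1) to which x satisfies a soft target range [lo, hi].

    Inside the range the score is 1.0; outside, it decays linearly
    over a ramp of the given width instead of dropping straight to 0.
    """
    if lo <= x <= hi:
        return 1.0
    if x < lo:
        return max(0.0, 1.0 - (lo - x) / ramp)
    return max(0.0, 1.0 - (x - hi) / ramp)

# A "rule" for determining the next target state: of the candidate
# values, take the one with the highest membership in the soft range.
candidates = [80, 95, 110]
best = max(candidates, key=lambda x: membership(x, 90, 100, 20))
```

With these numbers, 95 scores 1.0 while 80 and 110 each score 0.5, so the rule picks 95 without any hard boundary ever being stated.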

replies(1): >>41885340 #
7. jjude ◴[] No.41885340{3}[source]
When we think of the future, we mostly think of a single deterministic point. I would rather think of the future as a set of "possible states" than as a single point.

This helps me prepare for different scenarios and then build on top of whatever opportunity comes along.

Reading "target states" reminded me of it, so I thought I'd share.

I wrote about how I think about the future here: https://jjude.com/shape-the-future/

replies(1): >>41885946 #
8. 082349872349872 ◴[] No.41885946{4}[source]
> "The key to strategy, little Vor," she explained kindly, "is not to choose a path to victory, but to choose so that all paths lead to a victory." —LMB

The term the ancients had for this was paying attention to the "weakest precondition".

replies(1): >>41888374 #
9. marcosdumay ◴[] No.41888374{5}[source]
When you are playing against an opponent, yes, you need a min-max strategy.

But when you are not, optimization is exactly what the name sounds like. You usually need to maximize some goal while minimizing some weakly correlated one, which sounds similar, but you can pick exactly which "preconditions" you optimize against. You don't need to cover them all.

replies(1): >>41889175 #
10. 082349872349872 ◴[] No.41889175{6}[source]
"Weakest precondition" is a term of art, specifically referring to predicate transformer semantics[0].

I think we must be interpreting that phrase differently?

Otherwise[1] I'd claim the opposite: when playing against an opponent, one ought merely retain an advantage, which is a weaker predicate than even the weakest liberal precondition, but when playing against entropy (the sheer bloody-mindedness, or at least sufficiently advanced ineptitude, of one's users; or the yolo-tude of whatever provided their data; etc.), especially at several GHz on multiple cores, one should ensure the strict WP.

[0] https://en.wikipedia.org/wiki/Predicate_transformer_semantic... (I'm probably missing some subtlety, but for practical purposes I find reading "set" for "predicate" and "relation" for "predicate transformer" suffices)

[1] unless you're one of those (hopefully rare) devs who always produce fault-tolerant systems — under the principle that ultimately users can be relied upon to tolerate the faults.
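For the term of art being referenced: Dijkstra's weakest precondition for an assignment `x := E` with postcondition `Q` is `Q` with `E` substituted for `x`. A toy sketch of that one case, with states as dicts and predicates as boolean functions on states (the variable names and example numbers are mine, not from the thread):

```python
def wp_assign(var, expr, post):
    """Weakest precondition of `var := expr` w.r.t. postcondition `post`.

    A state satisfies the precondition iff running the assignment from
    that state lands in a state satisfying `post` (substitution semantics).
    """
    def pre(state):
        new_state = dict(state)
        new_state[var] = expr(state)  # evaluate E in the current state
        return post(new_state)
    return pre

# wp(x := x + 1, x > 0) is x + 1 > 0, i.e. x > -1
pre = wp_assign("x", lambda s: s["x"] + 1, lambda s: s["x"] > 0)
```

So `pre` holds for x = 0 but not for x = -1, matching `x > -1`: the weakest condition under which the assignment is guaranteed to establish the postcondition.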

replies(1): >>41889352 #
11. marcosdumay ◴[] No.41889352{7}[source]
Ok. I don't think we are talking about different things, and I still think you got it the other way around.

In fuzzy terms (because boolean logic gets crazy with those concepts):

We have success "S", with preconditions "P0, P1, ...", so that S = P0 & P1 & ...

We can map those concepts into their probability, where the probability of success would be "s = p0 * p1 * ...". AFAIK, your rule is that the best place to optimize is the lowest pN.

That would only be true if optimizing any of those preconditions had similar costs and values. But in business, both tend to vary wildly, and the entire thing quickly gets dominated by preconditions you can't control (infinite costs) once you achieve a minimum of competence.

Also, the formalism doesn't accommodate changes to the definition of "success." You will get absolutely nowhere in life if you don't constantly revise your definition of success, so the formalism is irredeemably wrong by construction.
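The cost argument above can be made concrete with a toy model. All numbers here are invented: success is the product s = p0 * p1 * p2 as defined earlier, and each precondition is assigned a hypothetical cost to raise its probability by 0.05.

```python
# Made-up precondition probabilities; p0 is the weakest link.
probs = {"p0": 0.5, "p1": 0.9, "p2": 0.95}
# Made-up cost (effort) to raise each probability by 0.05.
costs = {"p0": 100.0, "p1": 1.0, "p2": 1.0}

def success(ps):
    """s = product of the precondition probabilities."""
    s = 1.0
    for p in ps.values():
        s *= p
    return s

delta = 0.05
base = success(probs)
gain_per_cost = {}
for name in probs:
    bumped = dict(probs)
    bumped[name] = min(1.0, bumped[name] + delta)
    gain_per_cost[name] = (success(bumped) - base) / costs[name]

best = max(gain_per_cost, key=gain_per_cost.get)
```

Here p0 is the lowest probability, yet `best` comes out as p1: once costs differ, the cheapest precondition to improve beats the weakest one, which is the point being made against the "optimize the lowest pN" rule.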

replies(1): >>41889740 #
12. 082349872349872 ◴[] No.41889740{8}[source]
> AFAIK, your rule is that the best place to optimize is the lowest pN.

We are talking about different things.