I feel that we need to have a "fuzzy logic" approach to our work.
However, that works best when the engineer is somewhat experienced.
If they are inexperienced (even if very skilled and intelligent), we need to be a lot more dictatorial.
TL;DR: it basically means not having “hard and fast” boundaries, and instead having ranges of target values and “rules” for determining target states, as opposed to “milestones,” so targets are determined one at a time.
This helps me prepare for different scenarios and then build on top of whatever opportunity comes along.
Reading "target states" reminded me of it, so I thought I'd share.
I wrote about how I think about the future here: https://jjude.com/shape-the-future/
The term the ancients had for this was paying attention to the "weakest precondition".
But then you are not paying attention to the weakest precondition; optimization is exactly what the name sounds like. You usually need to maximize some goal while minimizing some weakly correlated one, which sounds similar, but you can pick exactly which "preconditions" you will optimize against. You don't need to cover them all.
I think we must be interpreting that phrase differently?
Otherwise[1] I'd claim the opposite: when playing against an opponent, one ought merely retain an advantage, which is a weaker predicate than even the weakest liberal precondition; but when playing against entropy (the sheer bloody-mindedness, or at least sufficiently advanced ineptitude, of one's users; or the yolo-tude of whatever provided their data; etc.), especially at several GHz on multiple cores, one should ensure the strict WP.
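For concreteness, the textbook contrast (roughly, in the notation of [0]):

    wp (while true do skip, Q) = false    (total correctness: must also terminate)
    wlp(while true do skip, Q) = true     (partial correctness: Q need only hold if the loop ever finishes)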
[0] https://en.wikipedia.org/wiki/Predicate_transformer_semantic... (I'm probably missing some subtlety, but for practical purposes I find reading "set" for "predicate" and "relation" for "predicate transformer" suffices)
[1] unless you're one of those (hopefully rare) devs who always produce fault-tolerant systems — under the principle that ultimately users can be relied upon to tolerate the faults.
In fuzzy terms (because boolean logic gets crazy with those concepts):
We have success "S", with preconditions "P0, P1, ...", so that S = P0 & P1 & ...
We can map those concepts onto their probabilities, so the probability of success would be "s = p0 * p1 * ..." (assuming the preconditions are independent). AFAIK, your rule is that the best place to optimize is the lowest pN.
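Spelled out as a toy sketch (Python; the precondition names and numbers are made up, purely for illustration):

    # Illustrative precondition probabilities (made-up numbers), assumed independent.
    p = {"product_works": 0.9, "market_exists": 0.6, "team_ships_on_time": 0.8}

    s = 1.0
    for prob in p.values():
        s *= prob  # s = p0 * p1 * ... : the probability that every precondition holds

    # The rule as stated: target the weakest link, because adding a fixed amount
    # to the smallest factor moves the product the most.
    weakest = min(p, key=p.get)
    print(f"s = {s:.3f}, weakest precondition: {weakest} ({p[weakest]:.2f})")

For a fixed additive bump, the gain in s is the bump times the product of the remaining factors, which is largest exactly when the factor you bump is the smallest.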
That would only be true if optimizing for any of those preconditions had similar costs and values. But in business, both of those tend to vary wildly, and the whole thing tends to get dominated by preconditions you can't control (infinite cost) very quickly once you achieve a minimum of competence.
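To make that concrete with the same toy numbers, plus made-up costs per unit of improvement:

    # Same sketch, now with (made-up) costs attached: the best target is no longer
    # automatically the lowest pN once improvements cost different amounts.
    p    = {"product_works": 0.9, "market_exists": 0.6, "team_ships_on_time": 0.8}
    cost = {"product_works": 1.0, "market_exists": 50.0, "team_ships_on_time": 2.0}
    bump = 0.05  # the same absolute improvement considered for each precondition

    def gain_per_cost(name):
        others = 1.0
        for k, v in p.items():
            if k != name:
                others *= v
        # raising p[name] by `bump` raises s by bump * (product of the other factors)
        return bump * others / cost[name]

    print(max(p, key=gain_per_cost))  # -> product_works, not the weakest precondition

And once a precondition is effectively out of your control, its cost is infinite, its gain per cost goes to zero, and the "optimize the lowest pN" rule stops saying anything useful.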
Also, the formalism doesn't accept changes to the definition of "success". You will get absolutely nowhere in life if you don't constantly change your definition of success, so the formalism is irredeemably wrong by construction.
We are talking about different things.