If you are in an even more "approximate" mindset (as opposed to propagating by simulation to get real-world re-sampled skewed distributions, as often happens in experimental physics labs, or at least their undergraduate courses), there is a propagation-of-uncertainty (https://en.wikipedia.org/wiki/Propagation_of_uncertainty) simplification you can use for "small" errors. Then translating "root" errors into "downstream" errors is just simple chain-rule calculus. (There is a Nim library for that at https://github.com/SciNim/Measuremancer that I use at least every week or two - whenever I'm timing anything.)
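As a minimal hand-rolled sketch of that first-order rule (not the Measuremancer API; the timing numbers and names here are made up for illustration), for f(a, b) you add the squared partial derivatives times the squared input uncertainties:

    import std/math

    # First-order ("small error") propagation:
    #   sigma_f^2 ~= (df/da)^2 * sigma_a^2 + (df/db)^2 * sigma_b^2
    # Example: speedup = tOld / tNew from two timing measurements.
    let
      tOld = 12.4      # seconds
      sOld = 0.3       # its uncertainty
      tNew = 9.1
      sNew = 0.2

    let speedup = tOld / tNew
    let dOld = 1.0 / tNew             # d(speedup)/d(tOld)
    let dNew = -tOld / (tNew * tNew)  # d(speedup)/d(tNew)
    let sSpeedup = sqrt((dOld * sOld)^2 + (dNew * sNew)^2)
    echo speedup, " +- ", sSpeedup

Measuremancer wraps exactly this bookkeeping up for you so you never write the derivatives by hand.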
It usually takes some "finesse" to get your data / measurements into territory where the errors are even small in the first place. So I think it is probably better to use something like this Uncertain<T> for the kinds of long-/fat-/heavy-tailed and oddly shaped distributions that occur in real-world data { IF the expense doesn't get in your way some other way, that is, as per the Senior Engineer in the article }.
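To show the contrast, here is a hypothetical resampling sketch (the names `UncertainF`, `lognormalish`, etc. are mine, not the article's Uncertain<T> implementation): because propagation happens by pairing samples, the skew of a fat-tailed input survives into the result instead of being flattened into a symmetric +- band.

    import std/[random, math, algorithm]

    # Hypothetical sketch: an "uncertain float" is just a bag of samples here.
    type UncertainF = seq[float]

    proc gauss01(rng: var Rand): float =
      ## Box-Muller standard normal from two uniforms.
      let u1 = 1.0 - rng.rand(0.999_999)   # keep strictly away from 0
      let u2 = rng.rand(1.0)
      sqrt(-2.0 * ln(u1)) * cos(2.0 * PI * u2)

    proc lognormalish(rng: var Rand; mu, sigma: float; n = 10_000): UncertainF =
      ## Skewed, fat-right-tailed samples, e.g. wall-clock times.
      result = newSeqOfCap[float](n)
      for _ in 0 ..< n:
        result.add exp(mu + sigma * rng.gauss01())

    proc `/`(a, b: UncertainF): UncertainF =
      ## Propagate by simulation: divide sample-by-sample, no derivatives.
      result = newSeq[float](a.len)
      for i in 0 ..< a.len:
        result[i] = a[i] / b[i]

    proc quantile(u: UncertainF; q: float): float =
      let s = u.sorted
      s[int(q * float(s.len - 1))]

    var rng = initRand(42)
    let totalTime = rng.lognormalish(4.0, 0.5)
    let nItems    = rng.lognormalish(2.0, 0.3)
    let perItem   = totalTime / nItems
    # Asymmetric interval, as you'd expect from a skewed input.
    echo "median ", perItem.quantile(0.5), "  p90 ", perItem.quantile(0.9)

The cost is exactly the expense mentioned above: every operation touches thousands of samples instead of two numbers.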