https://github.com/torvalds/linux/blob/master/include/math-e...
After digging, I think this is the kind of thing I'm referring to:
https://people.eecs.berkeley.edu/~wkahan/JAVAhurt.pdf
https://news.ycombinator.com/item?id=37028310
I've seen other course notes, I think also from Kahan, extolling 80-bit hardware.
Personally, I'm starting to think that if I really care about precision I'd be better off just using fixed point, but that's a lean that could prove wrong over time. Somehow we use floats everywhere and it works pretty well, almost unreasonably so.
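To make the fixed-point idea concrete, here's a minimal sketch (the scale and the `fx` helper are my own invented illustration, not any real library): values are stored as plain integers scaled by a power of ten, so decimal quantities that binary floats can't represent exactly stay exact.

```python
SCALE = 10_000  # fixed-point convention: integers scaled by 1e4 (4 fractional digits)

def fx(s: str) -> int:
    # Parse a non-negative decimal string exactly into its scaled integer.
    whole, _, frac = s.partition(".")
    frac = (frac + "0000")[:4]          # pad/truncate to 4 fractional digits
    return int(whole or "0") * SCALE + int(frac or "0")

# Summing 0.1 ten times: binary floats drift, fixed point stays exact.
f = 0.0
x = 0
for _ in range(10):
    f += 0.1
    x += fx("0.1")

print(f == 1.0)        # False: accumulated rounding error in binary floats
print(x == fx("1.0"))  # True: 10 * 1000 == 10000 exactly
```

The catch, of course, is that you give up dynamic range and have to manage overflow and rescaling after multiplication yourself, which is exactly the bookkeeping floating point automates.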
Modern floating-point is much more reproducible than fixed-point, FWIW: it has an actual standard (IEEE 754) that is widely adopted, and these excess-precision issues don't apply to SSE or ARM FPUs, which compute directly in single or double precision rather than in 80-bit registers.
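As a small illustration of that reproducibility: on any platform that does IEEE 754 arithmetic in plain double precision (SSE2, ARM), a given expression rounds to the same 64-bit bit pattern, including the "wrong-looking" results.

```python
import struct

# 0.1 + 0.2 rounds to one specific double, one ulp above the nearest
# double to 0.3; on any plain-double-precision IEEE 754 machine the
# bit pattern below is identical.
s = 0.1 + 0.2
print(s)                           # 0.30000000000000004
print(s == 0.3)                    # False: off by exactly one ulp
print(struct.pack("<d", s).hex())  # same 64-bit pattern everywhere
```

It's precisely the x87's habit of computing intermediates at 80 bits, then rounding again on the store to memory, that broke this determinism and prompted the complaints in the Kahan paper above.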