    182 points Twirrim | 19 comments

    favorited ◴[] No.41875023[source]
    Previously, in JF's "Can we acknowledge that every real computer works this way?" series: "Signed Integers are Two’s Complement" <https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p09...>
    replies(1): >>41875200 #
    1. jsheard ◴[] No.41875200[source]
    Maybe specifying that floats are always IEEE floats should be next? Though that would obsolete this Linux kernel classic, so maybe not.

    https://github.com/torvalds/linux/blob/master/include/math-e...

    replies(6): >>41875213 #>>41875351 #>>41875749 #>>41875859 #>>41876173 #>>41876461 #
    2. NL807 ◴[] No.41875213[source]
    Love it
    3. AnimalMuppet ◴[] No.41875351[source]
    That line is actually from a famous Dilbert cartoon.

    I found this snapshot of it, though it's not on the real Dilbert site: https://www.reddit.com/r/linux/comments/73in9/computer_holy_...

    replies(1): >>41875688 #
    4. Jerrrrrrry ◴[] No.41875688[source]
    This is the epitome, the climax, the crux, the ultimate, the holy grail, the crème de la crème of nerd sniping.

    fuckin bravo

    5. FooBarBizBazz ◴[] No.41875749[source]
    Whether doubles can silently use 80-bit accumulators is controversial. Numerical analysis people like it. Computer science types seem not to, because it's unpredictable. I lean towards "we should have it, but it should be explicit", but this is not my most considered opinion. I think there's a legitimate reason why Intel included it in x87, and why DSPs include it.
    replies(2): >>41875950 #>>41876023 #
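    A minimal sketch of the explicit flavor (my own illustration, not from the thread): keep the wide accumulator visible in the source as a long double, rather than hoping the compiler keeps an x87 register wide for you.

        #include <cstdio>

        int main() {
            double d = 0.0;
            long double ld = 0.0L;  // explicit wide accumulator
            for (int i = 0; i < 10000000; ++i) {
                d  += 0.1;   // each step rounds to a 53-bit significand
                ld += 0.1L;  // each step rounds to a 64-bit significand on x87
            }
            printf("double accumulator:      %.17g\n", d);
            printf("long double accumulator: %.17Lg\n", ld);
            // The results differ; "silent" 80-bit accumulation means plain
            // double code may produce either one on x87 builds.
        }
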
    6. jfbastien ◴[] No.41875859[source]
    Hi! I'm JF. I half-jokingly threatened to do IEEE float in 2018 https://youtu.be/JhUxIVf1qok?si=QxZN_fIU2Th8vhxv&t=3250

    I wouldn't want to lose the Linux humor tho!

    7. ◴[] No.41875950[source]
    8. stephencanon ◴[] No.41876023[source]
    Numerical analysis people do not like it. Having _explicitly controlled_ wider accumulation available is great. Having compilers decide to do it for you (or not) in unpredictable ways is anathema.
    replies(1): >>41876108 #
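    For what it's worth, C and C++ do expose the compiler's choice through FLT_EVAL_METHOD in <cfloat>; a quick check (a sketch, not from the thread):

        #include <cfloat>
        #include <cstdio>

        int main() {
            // 0: evaluate in the nominal type; 1: evaluate float/double in
            // double; 2: evaluate everything in long double (typical of
            // 32-bit x87 builds without SSE).
            printf("FLT_EVAL_METHOD = %d\n", (int)FLT_EVAL_METHOD);
        }
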
    9. bee_rider ◴[] No.41876108{3}[source]
    It isn't harmful, right? It's just like getting a little extra accuracy from a fused multiply-add. It just isn't useful if you can't depend on it.
    replies(3): >>41876218 #>>41876269 #>>41876272 #
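    The fused multiply-add comparison, as a sketch (the values are mine, chosen to make the single rounding visible):

        #include <cmath>
        #include <cstdio>

        int main() {
            double a = 1e16;
            double b = 1.0000000000000002;  // nearest double is 1 + 2^-52
            double c = -1e16;
            double separate = a * b + c;       // product rounds, then the add
            double fused = std::fma(a, b, c);  // one rounding at the very end
            printf("separate: %.17g\n", separate);  // 2
            printf("fused:    %.17g\n", fused);     // 2.2204460492503131
        }
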
    10. conradev ◴[] No.41876173[source]
    I was curious about float16, and TIL that the 2008 revision of the standard includes it as an interchange format:

    https://en.wikipedia.org/wiki/IEEE_754-2008_revision
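    For reference, binary16 is 1 sign bit, 5 exponent bits, and 10 fraction bits; a small decoder sketch (my own illustration, not from the standard's text):

        #include <cmath>
        #include <cstdint>
        #include <cstdio>

        float half_to_float(uint16_t h) {
            int sign = (h >> 15) & 1;
            int exp  = (h >> 10) & 0x1F;
            int frac = h & 0x3FF;
            float s = sign ? -1.0f : 1.0f;
            if (exp == 0)   // zero or subnormal: s * frac * 2^-24
                return s * std::ldexp((float)frac, -24);
            if (exp == 31)  // infinity or NaN
                return frac ? NAN : s * INFINITY;
            // normal: s * (1 + frac/1024) * 2^(exp - 15)
            return s * std::ldexp(1.0f + frac / 1024.0f, exp - 15);
        }

        int main() {
            printf("%g\n", half_to_float(0x3C00));  // 1
            printf("%g\n", half_to_float(0xC000));  // -2
            printf("%g\n", half_to_float(0x7BFF));  // 65504, largest finite half
        }
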

    11. eternityforest ◴[] No.41876218{4}[source]
    I suppose it could be harmful if you write code that depends on it without realizing it, and then something changes so it stops doing that.
    12. Negitivefrags ◴[] No.41876269{4}[source]
    It can be harmful. In GCC, when compiling a 32-bit executable, creating a std::map<float, T> can cause infinite loops or crashes in your program.

    This is because when you insert a value into the map, it is held at 80-bit precision, and that many bits are used when comparing the value you are inserting as the tree is traversed.

    Once the float is stored in the tree, it's rounded to 32 bits.

    This can cause the element to be inserted in the wrong position in the tree, which breaks the invariants of the algorithm, leading to the crash or infinite loop.

    Compiling for 64 bits or explicitly disabling x87 float math makes this problem go away.

    I have actually had this bug in production and it was very hard to track down.

    replies(3): >>41876310 #>>41876377 #>>41876406 #
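    A sketch of the mechanism described above (not the production code; long double stands in for the 80-bit x87 register):

        #include <cstdio>

        int main() {
            long double wide = 0.1L;     // key as it sits in an x87 register
            float stored = (float)wide;  // key after being written to a tree node

            // During insertion the comparator can see `wide`; later lookups
            // see `stored`. If the two order differently against the same key,
            // the tree's strict weak ordering is violated.
            printf("wide <  stored: %d\n", (int)(wide < (long double)stored));  // 1
            printf("wide == stored: %d\n", (int)(wide == (long double)stored)); // 0
        }
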
    13. lf37300 ◴[] No.41876272{4}[source]
    If not done carefully, double rounding (rounding to extended precision, then rounding to working precision) can introduce a larger approximation error than rounding to the nearest working-precision value directly. So it can actually make some numerical algorithms perform worse.
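    A concrete instance (my example; it assumes long double has a 64-bit significand, as on x87, so that x below is held exactly):

        #include <cmath>
        #include <cstdio>

        int main() {
            // x = 1 + 2^-24 + 2^-60, exact in a 64-bit significand.
            long double x = 1.0L + ldexpl(1.0L, -24) + ldexpl(1.0L, -60);

            float direct = (float)x;          // one rounding: 1 + 2^-23
            float twice  = (float)(double)x;  // to double: 1 + 2^-24 (drops the 2^-60),
                                              // then a tie rounds to even: exactly 1.0f

            printf("direct: %.9g  error: %.20Lg\n", direct, fabsl((long double)direct - x));
            printf("twice:  %.9g  error: %.20Lg\n", twice,  fabsl((long double)twice - x));
            // The doubly-rounded result lands farther from x than the directly
            // rounded one.
        }
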
    14. blt ◴[] No.41876310{5}[source]
    dang that's a good war story.
    15. ndesaulniers ◴[] No.41876377{5}[source]
    Are you mixing up long double with float?
    16. jfbastien ◴[] No.41876406{5}[source]
    10 years ago, a coworker had a really hard time root-causing a bug. I shoulder-debugged it by noticing the bit patterns: it was a miscompile of LLVM itself by GCC, where GCC was using an x87 fldl/fstpl move for a union { double; int64; }. The active member was actually the int64, and GCC chose an FP move based on which member came first in the union... but the int64 happened to hold the bit pattern of an sNaN, so the instructions quietly turned it into a qNaN as part of the move. The "fix" was to change the order of the union's members in LLVM. The bug is still open, though it's had recent activity: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58416
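    The bit-level effect described above, sketched (the OR below mimics what an x87 fldl/fstpl round trip does to a signalling NaN pattern):

        #include <cstdint>
        #include <cstdio>

        int main() {
            // sNaN: exponent all ones, quiet bit (bit 51) clear, payload nonzero.
            uint64_t snan = 0x7FF0000000000001ULL;
            // An x87 load/store of this pattern as a double comes back with
            // the quiet bit set, silently corrupting the int64 payload.
            uint64_t quieted = snan | (1ULL << 51);
            printf("before x87 move: %016llx (sNaN)\n", (unsigned long long)snan);
            printf("after  x87 move: %016llx (qNaN)\n", (unsigned long long)quieted);
        }
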
    17. jcranmer ◴[] No.41876461[source]
    I'm literally giving a talk next week whose first slide is essentially "Why IEEE 754 is not a sufficient description of floating-point semantics", and I'm sitting here trying to figure out what needs to be thrown out of the talk to make it fit the time slot.

    One of the most surprising things about floating-point is that very little of it is actually IEEE 754; most things are merely IEEE 754-ish, and there's a long tail of fiddly things that differ, which makes them only -ish.

    replies(2): >>41876497 #>>41876510 #
    18. Terr_ ◴[] No.41876497[source]
    > there's a long tail of fiddly things that are different that make it only -ish.

    Perhaps a way to fill some time would be to gradually reveal parts of a convoluted Venn diagram or mind-map of the fiddly things. (That is, assuming there's any sane categorization.)

    19. speedgoose ◴[] No.41876510[source]
    I'm interested in your upcoming talk; do you plan to publish a video or a transcript?