
288 points by Twirrim | 1 comment | HN request time: 0.274s | source
favorited ◴[] No.41875023[source]
Previously, in JF's "Can we acknowledge that every real computer works this way?" series: "Signed Integers are Two’s Complement" <https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p09...>
replies(1): >>41875200 #
jsheard ◴[] No.41875200[source]
Maybe specifying that floats are always IEEE floats should be next? Though that would obsolete this Linux kernel classic, so maybe not.

https://github.com/torvalds/linux/blob/master/include/math-e...

replies(9): >>41875213 #>>41875351 #>>41875749 #>>41875859 #>>41876173 #>>41876461 #>>41876831 #>>41877394 #>>41877730 #
conradev ◴[] No.41876173[source]
I was curious about float16, and TIL that the 2008 revision of the standard includes it as an interchange format:

https://en.wikipedia.org/wiki/IEEE_754-2008_revision

replies(1): >>41877684 #
tialaramex ◴[] No.41877684[source]
Note that this type (which Rust calls, or will call, "f16" in nightly, and which a C-like language would probably name "half") is not the only popular 16-bit floating-point type; some people want https://en.wikipedia.org/wiki/Bfloat16_floating-point_format instead
replies(1): >>41886241 #
adrian_b ◴[] No.41886241[source]
The IEEE FP16 format is what is useful in graphics applications, e.g. for storing color values.

The Google BF16 format is useful only for machine learning/AI applications, because its precision is too low for anything else. BF16 trades mantissa bits for an exponent range equal to FP32's, which makes overflows and underflows much less likely.
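The trade-off can be sketched in Python using only the standard library. Note this BF16 conversion is plain truncation of the FP32 bit pattern (real hardware typically rounds to nearest even), and the FP16 round-trip uses `struct`'s `e` (IEEE binary16) format:

```python
import struct

def to_bf16(x: float) -> float:
    """Truncate an FP32 value to bfloat16 (sign + 8 exponent + 7 mantissa bits).

    Hardware usually rounds to nearest even; truncation keeps the sketch short.
    """
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return struct.unpack(">f", struct.pack(">I", bits & 0xFFFF0000))[0]

def to_fp16(x: float) -> float:
    """Round-trip through IEEE binary16 (sign + 5 exponent + 10 mantissa bits)."""
    return struct.unpack("<e", struct.pack("<e", x))[0]

# Exponent range: BF16 shares FP32's 8 exponent bits, so 1e30 stays finite...
print(to_bf16(1e30))      # a large finite number, within ~1% of 1e30
# ...while FP16's 5 exponent bits top out around 65504.
try:
    to_fp16(1e30)
except OverflowError:
    print("1e30 does not fit in FP16")

# Precision: FP16's 10 mantissa bits resolve 1.001; BF16's 7 round it to 1.0.
print(to_fp16(1.001))     # ~1.0009765625
print(to_bf16(1.001))     # 1.0
```

This is why BF16 can often stand in for FP32 in training without rescaling, whereas FP16 usually needs loss scaling to keep gradients from overflowing or underflowing.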