352 points ferriswil | 5 comments
remexre No.41889747
Isn't this just taking advantage of "log(x) + log(y) = log(xy)"? The IEEE 754 floating-point representation stores floats as sign, mantissa, and exponent -- ignore the first two (you quantized anyway, right?), and the exponent is just an integer storing the (base-2) log of the float.
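
To make that concrete, here's a minimal sketch of the idea in Python (my own helpers, not the paper's code; it assumes positive, normal IEEE 754 single-precision values and does no sign or overflow handling):

    import struct

    def f32_bits(x: float) -> int:
        # Reinterpret the IEEE 754 single-precision bit pattern as an unsigned int.
        return struct.unpack("<I", struct.pack("<f", x))[0]

    def bits_f32(b: int) -> float:
        # Inverse: reinterpret an unsigned int as a float.
        return struct.unpack("<f", struct.pack("<I", b & 0xFFFFFFFF))[0]

    # For a positive normal float x = (1 + a) * 2**e, the bit pattern is
    # (up to rounding) (e + 127 + a) * 2**23, i.e. a fixed-point log2(x) plus a bias.
    # So adding bit patterns (and removing one bias) approximates multiplying:
    BIAS = 127 << 23

    def approx_mul(x: float, y: float) -> float:
        # Mitchell-style approximate multiply: exponents add exactly, mantissa
        # fractions add instead of multiplying (the a*b cross term is dropped).
        return bits_f32(f32_bits(x) + f32_bits(y) - BIAS)

    print(approx_mul(3.0, 5.0), 3.0 * 5.0)  # 14.0 vs 15.0: close, not equal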
replies(2): >>41889800 #>>41890236 #
1. mota7 No.41890236
Not quite: it's taking advantage of (1+a)(1+b) = 1 + a + b + ab. When a and b are both small-ish, ab is really small and can just be ignored.

So it turns the (1+a)(1+b) into 1+a+b. Which is definitely not the same! But it turns out, machine guessing apparently doesn't care much about the difference.
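
A quick numeric check with made-up small values:

    a, b = 0.07, 0.12
    exact  = (1 + a) * (1 + b)   # 1.1984
    approx = 1 + a + b           # 1.19
    # The gap is exactly the dropped cross term a*b = 0.0084, under 1% here.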

replies(3): >>41890382 #>>41890513 #>>41892121 #
2. amelius No.41890382
Then you might as well just replace the multiplication with addition in the original network. In that case you're not even approximating anything.

Am I missing something?

replies(1): >>41893129 #
3. tommiegannert No.41890513
Plus the 2^-l(m) correction term.

Feels like multiplication shouldn't be needed for convergence, just monotonicity? I wonder how well it would perform if the model was actually trained the same way.
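
Roughly how such an offset slots in, for reference (the constant l = 4 below is my own illustrative choice, not the paper's l(m)):

    def approx_mul_mantissa(a: float, b: float, l: int = 4) -> float:
        # (1 + a)(1 + b) ~ 1 + a + b + 2**-l: the constant 2**-l stands in for
        # the dropped a*b term, fixed per format rather than computed per value.
        return 1.0 + a + b + 2.0 ** -l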

4. dsv3099i No.41892121
This trick is used a ton when doing hand calculation in engineering as well. It can save a lot of work.

You're going to have tolerance on the result anyway, so what's a little more error? :)
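
For instance, 1.02 × 1.03 = 1.0506, while 1 + 0.02 + 0.03 = 1.05; the dropped cross term is 0.0006, roughly 0.06%, which is usually well inside the tolerances you're carrying anyway.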

5. dotnet00 No.41893129
They're applying that simplification to the mantissa bits of an 8-bit float. The range is so small that the approximation to multiplication is going to be pretty close.
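
To put a rough number on "pretty close" (assuming a 3-bit mantissa as in an E4M3-style fp8; the relative error of the mantissa approximation doesn't depend on the exponent):

    # Enumerate every pair of 3-bit mantissa fractions a, b in {0/8, ..., 7/8}
    # and measure the relative error of replacing (1+a)(1+b) with 1+a+b.
    fracs = [k / 8 for k in range(8)]
    errs = [((1 + a) * (1 + b) - (1 + a + b)) / ((1 + a) * (1 + b))
            for a in fracs for b in fracs]
    print(f"max {max(errs):.1%}, mean {sum(errs) / len(errs):.1%}")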