
85 points | homarp | 1 comment
Lerc ◴[] No.44610720[source]
Has there been much research into slightly flawed matrix multiplications?

If you have a measure of correctness and a measure of performance, is there a maximum value of correctness per unit of processing that exists below a full matrix multiply?

Obviously it can be done with precision, since that is what floating point is. But is there anything where you can save x% of computation and have fewer than x% incorrect values in a matrix multiplication?

Gradient descent wouldn't really care about a few (reliably) dud values.

replies(4): >>44610899 #>>44614746 #>>44614820 #>>44617249 #
1. MayeulC ◴[] No.44614820[source]
Well, approximate computing seems to be a superset of the field you describe here, with many different approaches, including analog computation. As you say, some algorithms care a bit less about precision, especially in the least-significant bits (LSBs).