Lerc No.44610720
Has there been much research into slightly flawed matrix multiplications?

If you have a measure of correctness and a measure of performance, is there a maximum value of correctness per unit of processing that sits somewhere below a full matrix multiply?

Obviously it can be done with precision, since that is what floating point is. But is there anything where you can save x% of the computation and end up with fewer than x% incorrect values in a matrix multiplication?

Gradient descent wouldn't really care about a few (reliably) dud values.
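
To make the precision version of that trade-off concrete, a rough NumPy sketch (the speed benefit needs hardware with native float16 units; plain NumPy just makes the error visible):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((512, 512))
    B = rng.standard_normal((512, 512))

    exact = A @ B                                         # full float64 multiply
    cheap = A.astype(np.float16) @ B.astype(np.float16)   # same multiply at 16 bits
    err = np.linalg.norm(cheap - exact) / np.linalg.norm(exact)
    print(f"float16 relative error: {err:.1e}")           # small, and deterministic ("reliably dud")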

replies(4): >>44610899 >>44614746 >>44614820 >>44617249
1. wuubuu No.44610899
Randomized matrix sketching is one way to get at this (see https://arxiv.org/abs/2302.11474). The problem is that hardware is heavily optimized for dense multiplies, so what you save in FLOPs doesn't translate into real runtime speedups.
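
Not from that paper specifically, but the simplest version is importance-sampling k of the n rank-1 terms in A @ B = sum_i outer(A[:, i], B[i, :]) (the classic Drineas-Kannan-Mahoney scheme), rescaled so the estimate is unbiased. A minimal NumPy sketch, assuming dense random inputs:

    import numpy as np

    def sketched_matmul(A, B, k, rng):
        # Sample k of the n rank-1 terms in A @ B with probability
        # proportional to ||A[:, i]|| * ||B[i, :]||, then rescale so
        # that E[C @ R] = A @ B (the estimate is unbiased).
        n = A.shape[1]
        w = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
        p = w / w.sum()
        idx = rng.choice(n, size=k, p=p)
        scale = 1.0 / np.sqrt(k * p[idx])   # split 1/(k p_i) across both factors
        C = A[:, idx] * scale               # shape (m, k)
        R = B[idx, :] * scale[:, None]      # shape (k, p)
        return C @ R                        # O(m*k*p) work instead of O(m*n*p)

    rng = np.random.default_rng(0)
    A = rng.standard_normal((256, 1024))
    B = rng.standard_normal((1024, 256))
    exact = A @ B
    approx = sketched_matmul(A, B, k=256, rng=rng)   # ~25% of the inner-dimension flops
    print(np.linalg.norm(approx - exact) / np.linalg.norm(exact))

With k = 256 of n = 1024 terms you do roughly a quarter of the multiply-adds, and the variance of the estimate falls as k grows. But as above: C @ R is still a dense multiply on smaller matrices, so the win only materializes if the flop savings beat the hardware's preference for big dense tiles.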