So basically the idea is to pack 3 ternary weights (-1, 0, 1) into 5 bits instead of 6 (since 3^3 = 27 states fit in 2^5 = 32), but they compare the results against an fp16 model, which would use 48 bits for those 3 weights…
And the speed-up comes from reduced memory I/O, offset a bit by the need to unpack those weights before using them…
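As a minimal sketch of that packing (function names are just illustrative, not from the paper): encode each weight as a trit in {0, 1, 2} and combine three of them into a single base-3 value, which is at most 26 and so fits in 5 bits.

```python
def pack3(w):
    # Map each weight in {-1, 0, 1} to a trit in {0, 1, 2},
    # then combine the three trits into one base-3 value (0..26).
    a, b, c = (x + 1 for x in w)
    return a * 9 + b * 3 + c  # fits in 5 bits, since 26 < 32

def unpack3(v):
    # Invert the base-3 encoding back to three weights in {-1, 0, 1}.
    return (v // 9 - 1, v // 3 % 3 - 1, v % 3 - 1)
```

Round-tripping all 27 combinations (e.g. `unpack3(pack3((-1, 0, 1)))` gives `(-1, 0, 1)`) shows the encoding is lossless, unlike the 6-bit scheme it only saves one bit per triple.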
Did I get this right?