
111 points | galeos | 1 comment
Havoc | No.43715393
Is there a reason why the 1.58-bit models are always aimed at quite small sizes? I think I've seen an 8B one, but that's about it.

Is there a technical reason for it, or is it just research convenience?

londons_explore | No.43715453
I suspect it's because current GPU hardware can't efficiently train such low-bit-depth models. You still end up needing the activations to use 8 or 16 bits in all the data paths, and you don't get any more throughput per cycle on the multiplications than you would have with FP32.
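
To make the data-path point concrete, here's a minimal PyTorch-style sketch (my own illustration, not anyone's actual training code) of a ternary "fake-quantized" linear layer roughly following the absmean scheme described for BitNet b1.58. The weights get rounded to {-1, 0, +1}, but the matmul the GPU actually executes is still an ordinary floating-point GEMM, so the low bit depth buys no extra multiplies per cycle:

    # Minimal sketch of ternary "fake quantization" during training:
    # weights are rounded to {-1, 0, +1}, but the GEMM still runs in
    # full precision, so current GPUs do the same work per cycle as FP32.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TernaryLinear(nn.Module):
        """Linear layer with ternary weights, roughly BitNet-b1.58 style."""

        def __init__(self, in_features, out_features):
            super().__init__()
            # Full-precision master weights are kept for the optimizer.
            self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02)

        def forward(self, x):
            w = self.weight
            # Absmean scale, then round/clamp to {-1, 0, +1}.
            scale = w.abs().mean().clamp(min=1e-5)
            w_q = (w / scale).round().clamp(-1, 1)
            # Straight-through estimator: gradients flow to the FP weights.
            w_q = w + (w_q - w).detach()
            # The actual multiply is still an ordinary FP32 GEMM -- the
            # ternary values just live inside float tensors, so there is
            # no per-cycle throughput win from the 1.58-bit format.
            return F.linear(x, w_q * scale)

    layer = TernaryLinear(1024, 1024)
    out = layer(torch.randn(8, 1024))
    out.sum().backward()   # gradients reach layer.weight via the STE

The exact scale/round choices are illustrative; the point is only that nothing in the forward or backward pass runs at 2-bit precision on today's hardware.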

Custom silicon would solve that, but nobody wants to build custom silicon for a data format that will go out of fashion before the production run is done.

zamadatix | No.43715705
The custom CUDA kernel for 4-in-8 packing seems to have come out better than a naive approach (such as just treating each weight as an fp8/int8), and it cuts memory bandwidth requirements as well. Custom hardware would certainly make that improvement even bigger, but I don't think that's what's limiting training to 2-8 billion parameters so much as research convenience while the groundwork for this type of model is still being figured out.
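
For intuition on the memory-bandwidth half of that, here's a small numpy sketch of the packing idea, reading "4-in-8" as four 2-bit ternary weights per byte (that reading, and the helper names, are my assumption, not the actual kernel code). The packed layout moves about a quarter of the bytes a naive one-int8-per-weight layout would:

    # Sketch of "4-in-8" packing as I read it: four 2-bit ternary weights
    # per byte, so the weight matrix moves ~4x fewer bytes than a naive
    # one-int8-per-weight layout. Not the actual kernel code.
    import numpy as np

    def pack_ternary(w):
        """Pack int8 ternary weights in {-1, 0, +1}, four per byte."""
        assert w.size % 4 == 0
        codes = (w + 1).astype(np.uint8).reshape(-1, 4)   # {-1,0,1} -> {0,1,2}
        return (codes[:, 0]
                | (codes[:, 1] << 2)
                | (codes[:, 2] << 4)
                | (codes[:, 3] << 6)).astype(np.uint8)

    def unpack_ternary(packed):
        """Recover the int8 ternary weights from the packed bytes."""
        codes = np.stack([(packed >> s) & 0b11 for s in (0, 2, 4, 6)], axis=1)
        return codes.astype(np.int8).reshape(-1) - 1      # {0,1,2} -> {-1,0,1}

    w = np.random.choice([-1, 0, 1], size=4096).astype(np.int8)
    packed = pack_ternary(w)
    assert np.array_equal(unpack_ternary(packed), w)
    print(w.nbytes, "bytes naive vs", packed.nbytes, "bytes packed")  # 4096 vs 1024

A real kernel would unpack inside the matmul rather than materializing the int8 array, but the roughly 4x reduction in bytes read from memory is the same.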