
343 points by sillysaurusx
v64 | No.35028738
If anyone is interested in running this at home, please follow the llama-int8 project [1]. LLM.int8() is a recent development allowing LLMs to run in half the memory without loss of performance [2]. Note that at the end of [2]'s abstract, the authors state: "This result makes such models much more accessible, for example making it possible to use OPT-175B/BLOOM on a single server with consumer GPUs. We open-source our software." I'm very thankful we have researchers like this further democratizing access to these models and prying them out of the hands of the gatekeepers who wish to monetize them.

[1] https://github.com/tloen/llama-int8

[2] https://arxiv.org/abs/2208.07339
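
For a sense of what this looks like in practice, here's a minimal sketch using the bitsandbytes LLM.int8() integration in Hugging Face transformers (assumes bitsandbytes and accelerate are installed; the checkpoint name is just a placeholder for whatever model you have):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "facebook/opt-13b"  # placeholder: any causal LM checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",   # shard layers across available GPUs (and CPU)
    load_in_8bit=True,   # store weights as int8, roughly half the fp16 memory
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```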

downvotetruth | No.35029068
Eagerly awaiting the int8 vs. int4 benchmarks. Also, it can run on CPU: https://github.com/markasoftware/llama-cpu. So an int8 patch could allow the 65B model to run on a standard 128 GB setup, assuming its cache bursts fit. If I were to speculate, that's why the released models stop at 65B, and Meta likely already has larger unreleased internal ones.
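
Rough arithmetic behind the 128 GB claim, counting the weights alone (activations, KV cache, and framework overhead come on top):

```python
# Weights-only memory math for a 65B-parameter model.
params = 65e9
for fmt, bytes_per_param in [("fp16", 2), ("int8", 1), ("int4", 0.5)]:
    print(f"{fmt}: ~{params * bytes_per_param / 1e9:.0f} GB")
# fp16: ~130 GB  -> does not fit in 128 GB of RAM
# int8: ~65 GB   -> fits
# int4: ~33 GB   -> fits easily
```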
v64 | No.35029092
Early int4 experiments seem to indicate it's possible, but you do lose some performance; see this thread: https://www.reddit.com/r/MachineLearning/comments/11i4olx/d_...

Edit: to clarify, it may be possible to recover this loss, and there is reason to be optimistic.
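
For intuition on where the loss comes from, here's a toy sketch of naive round-to-nearest quantization with one scale per tensor (the simplest possible scheme; real int4 implementations use per-group scales and smarter rounding, so this overstates the damage):

```python
import torch

def quantize_rtn(w: torch.Tensor, bits: int) -> torch.Tensor:
    """Naive symmetric round-to-nearest quantization, one scale per tensor."""
    qmax = 2 ** (bits - 1) - 1              # 7 for int4, 127 for int8
    scale = w.abs().max() / qmax
    q = torch.clamp(torch.round(w / scale), -qmax - 1, qmax)
    return q * scale                        # dequantize back to float

w = torch.randn(4096, 4096) * 0.02         # toy weight matrix
for bits in (8, 4):
    rel_err = ((w - quantize_rtn(w, bits)).abs().mean() / w.abs().mean()).item()
    print(f"int{bits}: mean relative error ~ {rel_err:.1%}")
```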

CuriouslyC | No.35029917
Probably the best method is to just train it in int4 in the first place. Fine-tuning after quantization would definitely help, though.
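
The standard way to "train at int4" is quantization-aware training: keep fp32 master weights, quantize them on the forward pass, and let gradients flow through the rounding as if it were the identity (a straight-through estimator). A minimal sketch of that generic trick, not anything Meta has said they do:

```python
import torch

class FakeQuant(torch.autograd.Function):
    """Quantize on the forward pass; pass gradients straight through,
    so the fp32 master weights keep learning."""

    @staticmethod
    def forward(ctx, w, bits=4):
        qmax = 2 ** (bits - 1) - 1
        scale = w.abs().max() / qmax
        return torch.clamp(torch.round(w / scale), -qmax - 1, qmax) * scale

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output, None  # treat quantization as identity for gradients

class QuantLinear(torch.nn.Linear):
    def forward(self, x):
        return torch.nn.functional.linear(x, FakeQuant.apply(self.weight), self.bias)

layer = QuantLinear(16, 16)
layer(torch.randn(2, 16)).sum().backward()  # gradients land on the fp32 weights
```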
sp332 | No.35030116
Isn't that backwards? You need fairly good resolution during training or your gradients will be pointing all over the place. Once you've found a good minimum point, moving a little away from it with reduced precision is probably OK.
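
A toy numeric illustration of the first half: if the weights themselves lived on an int4 grid, a typical gradient step would be smaller than one grid step and simply vanish (all values here are made up for illustration):

```python
# An int4 grid with scale |w|max / 7 and |w|max ~ 0.05 has steps of ~0.007.
step = 0.05 / 7                     # int4 grid spacing
snap = lambda x: round(x / step) * step

w = 0.0123                          # current weight value
update = 1e-3 * 0.5                 # lr * gradient = 0.0005, far below one step

print(snap(w), snap(w - update))    # both snap to the same grid point: update lost
```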
brookst | No.35030220
I have no idea what the right answer is, but I think the argument for int4 training is that the loss measurements would take the lower resolution of the model as a whole into account.

Is it better to have billions of high-resolution parameters and quantize them at the end, or to train low-resolution parameters where the training algorithms see the lower resolution? It's beyond me, but I'd love to know.

bick_nyers | No.35035673
I think the answer is it depends, and further, a dynamic approach may be best. Imagine you are going on a hike, and you have different maps at various resolutions (levels of detail). When planning the hike, you will want to see the zoomed out picture to get general directions, elevations and landmarks identified. Then you can zoom in to see the actual trails themselves, to identify your route, and then you zoom in even further when you are on the ground actually walking, avoiding obstacles along the way.

Different resolutions draw your attention to different types of features.