
544 points | tosh | 3 comments
jauntywundrkind No.43464180
Wish I knew better how to estimate what sized video card one needs. HuggingFace link says this is bfloat16, so at least 64GB?

I guess the -7B might run on my 16GB AMD card?
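[A rough way to answer this yourself, assuming the simple weights-only formula (parameters × bytes per weight); real usage adds KV cache, activations, and framework overhead on top:]

```python
# Back-of-envelope VRAM estimate for model weights alone.
# Assumed formula: params * bits-per-weight / 8; actual usage is
# higher once you add KV cache, activations, and runtime overhead.

def weight_vram_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate decimal GB needed just to hold the weights."""
    total_bytes = params_billions * 1e9 * bits_per_weight / 8
    return total_bytes / 1e9

# 32B model at bfloat16 (16 bits/weight) -> 64 GB of weights alone
print(weight_vram_gb(32, 16))  # 64.0

# The 7B model at a 4-bit quant -> ~3.5 GB, comfortably within 16 GB
print(weight_vram_gb(7, 4))    # 3.5
```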

replies(4): >>43464207 #>>43464240 #>>43464303 #>>43464853 #
zamadatix No.43464853
https://huggingface.co/spaces/NyxKrage/LLM-Model-VRAM-Calcul...

That will help you quickly calculate the model's VRAM usage, as well as the VRAM usage of the context length you want to run. You can put "Qwen/Qwen2.5-VL-32B-Instruct" in the "Model (unquantized)" field. Funnily enough, the calculator lacks an option for leaving the model unquantized, presumably because nobody worried about VRAM bothers running >8-bit quants.

replies(1): >>43465510 #
1. azinman2 No.43465510
Except when it comes to deepseek
replies(1): >>43466518 #
2. zamadatix No.43466518
For others not as familiar, this is pointing out that DeepSeek-V3/DeepSeek-R1 are natively FP8, so selecting "Q8_0" aligns with not quantizing that model (though you'll need ~1 TB of memory to use these models unquantized at full context). Importantly, this does not apply to the "DeepSeek" distills of other models, which retain the native precision of the base model they were distilled from.

I expect more and more worthwhile models to ship with <16-bit native weights as time goes on, but for the moment it's pretty much "8-bit DeepSeek and some research/testing models of various weight widths".
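[A quick sanity check on the ~1 TB figure above, assuming the full DeepSeek-V3/R1 parameter count of ~671B stored natively at FP8 (1 byte per weight); the gap up to 1 TB is KV cache at full context plus overhead:]

```python
# Weights-only footprint of DeepSeek-V3/R1 at native FP8.
# Assumptions: ~671B total parameters, 1 byte per weight.
params = 671e9
weights_gb = params * 1 / 1e9  # bytes per weight = 1 at FP8
print(weights_gb)  # ~671 GB for weights alone

# KV cache at long context lengths plus runtime overhead is what
# pushes the total toward the ~1 TB mentioned above.
```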

replies(1): >>43472502 #
3. azinman2 No.43472502
I wish the DeepSeek distills were somehow branded differently. The amount of confusion I've come across from otherwise technical folks, or the outright mislabeling ("I'm running R1 on my MacBook!"), is shocking. It's my new pet peeve.