
577 points | simonw | 1 comment
pulkitsh1234:
Is there any website to see the minimum/recommended hardware required for running local LLMs? Much like 'system requirements' mentioned for games.
svachalek:
In addition to the tools other people responded with, a good rule of thumb is that most local models work best* at q4 quants, meaning the memory for the model weights is a little over half the number of parameters, e.g. a 14b model may be around 8gb. Add some more for context and maybe you want 10gb VRAM for a 14b model. That will at least put you in the right ballpark for which models to consider for your hardware.

(*best performance/size ratio: generally, if a model easily fits at q4 you're better off going to a higher parameter count than to a larger quant, and vice versa)
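The rule of thumb above can be sketched as a quick calculation. The constants here are rough assumptions of my own, not exact figures: ~0.55 bytes per parameter for a q4 quant ("a little over half"), plus ~2 GB of headroom for context/KV cache:

```python
# Ballpark VRAM estimate for running a local model at a q4 quant.
# Assumed constants (rough, not exact):
#   ~0.55 bytes per parameter at q4, plus ~2 GB for context/KV cache.

def estimate_vram_gb(params_billions: float,
                     bytes_per_param: float = 0.55,
                     context_overhead_gb: float = 2.0) -> float:
    """Rough VRAM (in GB) needed for a q4-quantized model."""
    return params_billions * bytes_per_param + context_overhead_gb

if __name__ == "__main__":
    for n in (7, 14, 32, 70):
        print(f"{n}b model: ~{estimate_vram_gb(n):.1f} GB VRAM")
```

For a 14b model this comes out to roughly 9.7 GB, which matches the "maybe you want 10gb VRAM" ballpark; actual usage varies with quant flavor and context length.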

nottorp:
> maybe you want 10gb VRAM for a 14b model

... or if you have Apple hardware with its unified memory, whatever they soldered in is your limit.