
Devstral

(mistral.ai)
701 points | mfiguiere | 1 comment
simonw No.44053886
The first number I look at these days is the file size via Ollama, which for this model is 14GB https://ollama.com/library/devstral/tags

I find that on my M2 Mac the file size is a rough approximation of how much memory the model needs (usually the file size plus about 10%) - which matters because I want to know how much RAM I will have left for running other applications.

Anything below 20GB tends not to interfere too much with the other stuff I'm running. This model looks promising!
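The rule of thumb above can be sketched as a few lines of Python. The 10% overhead and the 20GB comfort budget are the commenter's observations on an M2 Mac, not guarantees; actual usage varies with context length and the inference runtime.

```python
# Heuristic from the comment: a local model's RAM footprint is roughly
# its on-disk file size plus ~10% overhead (KV cache, runtime buffers).
# Both constants are the commenter's rough figures, not measured limits.

def estimated_ram_gb(file_size_gb: float, overhead: float = 0.10) -> float:
    """Approximate resident memory needed to run a local model."""
    return file_size_gb * (1 + overhead)

def fits_comfortably(file_size_gb: float, budget_gb: float = 20.0) -> bool:
    """Rule of thumb: under ~20GB leaves RAM free for other applications."""
    return estimated_ram_gb(file_size_gb) < budget_gb

# Devstral's default Ollama tag is ~14GB on disk.
print(f"{estimated_ram_gb(14.0):.1f} GB")  # ~15.4 GB expected in RAM
print(fits_comfortably(14.0))              # True
```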

rahimnathwani No.44059216
Almost all models listed in the Ollama model library have a version that's under 20GB. But whether that's a 4-bit quantization (as in this case) or uses more or fewer bits varies.

AFAICT they usually set the default tag to a version around 15GB.
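The relationship between parameter count, quantization bits, and file size can be estimated back-of-the-envelope. A minimal sketch, assuming Devstral's roughly 24B parameters and a ~10% allowance for metadata and higher-precision tensors (both illustrative figures, not from the thread):

```python
# Back-of-the-envelope: quantized model file size is roughly
# params * bits / 8 bytes, plus a small overhead for embeddings and
# metadata that are often stored at higher precision. The 24B parameter
# count and 10% overhead are assumptions for illustration.

def quantized_size_gb(params_billion: float, bits: int,
                      overhead: float = 0.10) -> float:
    """Estimate on-disk size (GB) of a model quantized to `bits` per weight."""
    bytes_total = params_billion * 1e9 * bits / 8
    return bytes_total * (1 + overhead) / 1e9

for bits in (4, 8, 16):
    print(f"Q{bits}: ~{quantized_size_gb(24, bits):.1f} GB")
```

At 4 bits this lands near 13GB, which lines up with the ~14GB default tag; an 8-bit version of the same model would roughly double that, explaining why the default tags cluster in the mid-teens.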