Devstral

(mistral.ai)
701 points by mfiguiere | 7 comments
1. ics No.44054028
Maybe someone here can suggest tools, or at least where to look: what are the state-of-the-art models to run locally on relatively low-power machines like a MacBook Air? Is anyone tracking what is feasible for a given machine spec?

"Apple Intelligence" isn't it, but it would be nice to know, without churning through tests, whether I should bother keeping 2-3 models around for specific tasks in ollama, or whether, if their performance is marginal, there's a more capable all-rounder model.

replies(3): >>44054653, >>44056458, >>44058187
2. thatcherc No.44054653
I would recommend just trying it out (as long as you have the disk space for a few models). llama.cpp[0] is easy to download and build, and has good support for M-series MacBook Airs. I usually just use LM Studio[1], though - it has a nice, easy-to-use interface that looks like the ChatGPT or Claude webpage, and you can search for and download models from within the program. LM Studio is the easiest way to get started and probably all you need. I use it a lot on my M2 MacBook Air and it's really handy.

[0] - https://github.com/ggml-org/llama.cpp

[1] - https://lmstudio.ai/
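For the llama.cpp route, a rough sketch of getting started on an Apple Silicon Mac (assumes Homebrew; the model repo named here is just an example - pick any GGUF repo that fits your RAM):

```shell
# Install llama.cpp (Homebrew ships a prebuilt binary with Metal support)
brew install llama.cpp

# Download a GGUF model from Hugging Face and chat with it
# (repo name is an example, not a recommendation)
llama-cli -hf ggml-org/gemma-3-4b-it-GGUF -p "Hello, what can you do?"
```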

replies(1): >>44055956
3. Etheryte No.44055956
This doesn't do anything to answer the main question of what models they can actually run.
replies(1): >>44057233
4. Miraste No.44056458
The best general model you can run locally is probably some version of Gemma 3 or the latest Mistral Small. On a Windows machine, this is limited by VRAM, since system RAM is too low-bandwidth to run models at usable speeds. On an M-series Mac, the unified memory is on-package and fast enough to use, so what you can run is bounded by total RAM, minus whatever macOS uses and the space you want for other programs.

To determine how much space a model needs, look at the size of the quantized (lower-precision) file on Hugging Face or wherever it's hosted; Q4_K_M is a good default. As a rough rule of thumb, a Q4_K_M file is a little over half the parameter count, read in gigabytes. For Devstral (24B parameters), that's 14.3GB. You will also need another 1-8GB on top of that to store the context.

For example: a 32GB MacBook Air could run Devstral at 14.3+4GB, leaving ~14GB for the system and applications. A 16GB MacBook Air could run Gemma 3 12B at 7.3+2GB, leaving ~7GB for everything else. An 8GB MacBook could run Gemma 3 4B at 2.5+1GB, but this is probably not worth doing.
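That rule of thumb is easy to turn into a quick fit check. A minimal sketch, assuming Q4_K_M averages roughly 4.8 bits per weight (an approximation I'm using here; always check the actual file size for a given model):

```python
def q4_k_m_gb(params_billions, bits_per_weight=4.8):
    """Approximate size of a Q4_K_M quant in GB.

    ~4.8 bits/weight is an assumed average; real GGUF files vary,
    so check the hosted file size for the model you actually want.
    """
    return params_billions * bits_per_weight / 8


def fits(params_billions, total_ram_gb, context_gb=4, reserve_gb=10):
    """Does model + context leave `reserve_gb` for the OS and other apps?"""
    needed = q4_k_m_gb(params_billions) + context_gb
    return needed + reserve_gb <= total_ram_gb


print(q4_k_m_gb(24))  # Devstral 24B -> 14.4, close to the 14.3GB quoted above
print(fits(24, 32))   # 32GB machine: 14.4 + 4 + 10 = 28.4 <= 32 -> True
print(fits(24, 16))   # 16GB machine -> False
```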

replies(1): >>44059545
5. tuesdaynight No.44057233
LM Studio will tell you if a specific model is small enough for your available RAM/VRAM.
6. jwr No.44058187
I use qwen3:30b-a3b-q4_K_M for coding support and spam filtering, qwen2.5vl:32b-q4_K_M for image recognition/tagging/describing, and sometimes gemma3:27b-it-qat for writing. All through Ollama, as that provides a unified interface, and then accessed from Emacs, the command-line llm tool, or my Clojure programs.

There is no single "best" model yet, it seems.

That's on an M4 Max with 64GB of RAM. I wish I had gotten the 128GB model, though — given that I run large docker containers that consume ~24GB of my RAM, things can get tight.
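Ollama's "unified interface" here is just a local HTTP API (default port 11434), which is why the same models are reachable from an editor, the command line, or a program. A minimal standard-library sketch; it assumes an Ollama server is already running locally, and the model name is one of those mentioned above:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint


def build_request(model, prompt):
    """Build a non-streaming generate request for Ollama's HTTP API."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )


def ask(model, prompt):
    """Send the prompt and return the model's full response text."""
    with urllib.request.urlopen(build_request(model, prompt)) as resp:
        return json.loads(resp.read())["response"]


# With a server running:
#   print(ask("qwen3:30b-a3b-q4_K_M", "Is this email spam? ..."))
```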

7. visarga No.44059545
> An 8GB Macbook could use Gemma 3 4B at 2.5GB+1GB, but this is probably not worth doing.

I am currently using this model on a MacBook with 16GB of RAM. It is hooked up to a Chrome extension that extracts text from webpages and logs it to a file, then summarizes each page. I want to develop an episodic memory system, like MS Recall, but local, so it does not leak my data to anyone else and costs me nothing.

Gemma 3 4B runs under Ollama and is light enough that I don't feel it while browsing. Summarization happens in the background; the page I am on is already logged and summarized.
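The logging half of such a pipeline can be quite small. A hypothetical sketch (file name and record fields are made up for illustration; the `summarize` callable would wrap the local Gemma 3 4B call through Ollama, and is injected here so the logging logic stays testable without a model running):

```python
import json
import time
from pathlib import Path


def log_page(url, text, summarize, log_path=Path("memory.jsonl")):
    """Append one episodic-memory record as a JSON line.

    `summarize` is any callable taking the page text and returning a
    summary string - e.g. a wrapper around a local model via Ollama.
    """
    entry = {
        "url": url,
        "ts": time.time(),
        "text": text,
        "summary": summarize(text),
    }
    with log_path.open("a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

Appending JSON lines keeps each page's record independent, so a background summarizer can process new entries without rewriting the file.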