
Devstral

(mistral.ai)
701 points by mfiguiere | 1 comment
ics No.44054028
Maybe someone here can suggest tools or at least where to look; what are the state-of-the-art models to run locally on relatively low power machines like a MacBook Air? Is there anyone tracking what is feasible given a machine spec?

"Apple Intelligence" isn't it but it would be nice to know without churning through tests whether I should bother keeping around 2-3 models for specific tasks in ollama or if their performance is marginal there's a more stable all-rounder model.

replies(3): >>44054653 #>>44056458 #>>44058187 #
1. jwr No.44058187
I use qwen3:30b-a3b-q4_K_M for coding support and spam filtering, qwen2.5vl:32b-q4_K_M for image recognition/tagging/describing, and sometimes gemma3:27b-it-qat for writing. All through Ollama, as that provides a unified interface, and then accessed from Emacs, the command-line llm tool, or my Clojure programs.
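The "unified interface" here is Ollama's local HTTP API (it listens on port 11434 by default), which is why the same models can be reached from Emacs, a CLI, or Clojure alike. A minimal sketch in Python, using only the standard library; the model name is taken from the comment, and the helper names are my own:

```python
import json
from urllib import request

# Ollama's default local endpoint for one-shot completions.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> bytes:
    # Minimal request body for /api/generate; stream=False asks
    # for a single JSON object instead of a stream of chunks.
    return json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode("utf-8")

def ask(model: str, prompt: str) -> str:
    # POST the payload and return the model's text completion.
    req = request.Request(
        OLLAMA_URL,
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Requires a running Ollama instance with the model pulled.
    print(ask("qwen3:30b-a3b-q4_K_M", "Summarize this commit message: ..."))
```

Any client that can speak HTTP gets the same interface, which is what makes swapping between task-specific models cheap.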

There is no single "best" model yet, it seems.

That's on an M4 Max with 64GB of RAM. I wish I had gotten the 128GB model, though — given that I run large docker containers that consume ~24GB of my RAM, things can get tight.
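To see why 64GB gets tight, a rough back-of-the-envelope helps: a 4-bit quantization like q4_K_M stores roughly 4.5-5 bits per weight once quantization scales are included, call it ~0.57 bytes per weight as a rule of thumb (an assumption for illustration, not an exact figure, and it ignores KV cache and runtime overhead):

```python
def approx_q4_gb(n_params_billions: float, bytes_per_weight: float = 0.57) -> float:
    # Rough estimate of on-disk/in-memory size of a q4_K_M-style
    # quantized model: parameters (in billions) times an assumed
    # ~0.57 bytes/weight (4-bit weights plus quantization metadata).
    return n_params_billions * bytes_per_weight

# The models from the comment, by parameter count:
for name, b in [("qwen3 30B", 30), ("qwen2.5vl 32B", 32), ("gemma3 27B", 27)]:
    print(f"{name}: ~{approx_q4_gb(b):.0f} GB")
```

By this estimate each model needs roughly 15-18GB of unified memory; with ~24GB already claimed by Docker containers, only one such model fits comfortably at a time on a 64GB machine, which is consistent with wishing for the 128GB configuration.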