simonw No.43743896
I think gemma-3-27b-it-qat-4bit is my new favorite local model - or at least it's right up there with Mistral Small 3.1 24B.

I've been trying it on an M2 64GB via both Ollama and MLX. It's very, very good, and it only uses ~22GB (via Ollama) or ~15GB (MLX), leaving plenty of memory for running other apps.

Some notes here: https://simonwillison.net/2025/Apr/19/gemma-3-qat-models/

Last night I had it write me a complete plugin for my LLM tool like this:

  llm install llm-mlx
  llm mlx download-model mlx-community/gemma-3-27b-it-qat-4bit

  llm -m mlx-community/gemma-3-27b-it-qat-4bit \
    -f https://raw.githubusercontent.com/simonw/llm-hacker-news/refs/heads/main/llm_hacker_news.py \
    -f https://raw.githubusercontent.com/simonw/tools/refs/heads/main/github-issue-to-markdown.html \
    -s 'Write a new fragments plugin in Python that registers
    issue:org/repo/123 which fetches that issue
        number from the specified github repo and uses the same
        markdown logic as the HTML page to turn that into a
        fragment'
It gave a solid response! https://gist.github.com/simonw/feccff6ce3254556b848c27333f52... - more notes here: https://simonwillison.net/2025/Apr/20/llm-fragments-github/
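For anyone who hasn't written one of these: fragment plugins hook into llm's register_fragment_loaders hook and return llm.Fragment objects. A rough sketch of the shape (illustrative only - this is not the generated code from the gist, and it skips issue comments, auth and error handling):

  # llm_issue_fragments.py - minimal sketch of a fragments plugin
  import json
  import urllib.request

  import llm


  @llm.hookimpl
  def register_fragment_loaders(register):
      # Register the "issue" prefix so -f issue:org/repo/123 resolves here
      register("issue", github_issue_loader)


  def github_issue_loader(argument):
      # argument is the part after the prefix, e.g. "org/repo/123"
      org, repo, number = argument.split("/")
      url = f"https://api.github.com/repos/{org}/{repo}/issues/{number}"
      with urllib.request.urlopen(url) as response:
          issue = json.load(response)
      markdown = f"# {issue['title']}\n\n{issue['body'] or ''}"
      return llm.Fragment(markdown, url)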
littlestymaar No.43745256
> and it only uses ~22GB (via Ollama) or ~15GB (MLX)

Why is the memory use different? Are you using a different context size in the two setups?

simonw No.43745420
No idea. MLX is its own thing, optimized for Apple Silicon. Ollama uses GGUFs.

https://ollama.com/library/gemma3:27b-it-qat says it's Q4_0. https://huggingface.co/mlx-community/gemma-3-27b-it-qat-4bit says it's 4bit. I think those are the same quantization?
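For reference, Q4_0 stores weights in blocks of 32 that share a single fp16 scale, which works out to roughly 4.5 bits per weight. A simplified illustration of the idea (this is not llama.cpp's exact rounding, and MLX's 4-bit scheme groups and scales slightly differently):

  import numpy as np

  def q4_0_quantize_block(weights):
      # One Q4_0-style block: 32 floats -> one fp16 scale + 32 4-bit values
      assert weights.shape == (32,)
      max_abs = np.max(np.abs(weights))
      scale = np.float16(max_abs / 8.0) if max_abs > 0 else np.float16(1.0)
      q = np.clip(np.round(weights / scale) + 8, 0, 15).astype(np.uint8)
      return scale, q  # 2 bytes + 16 packed bytes = 18 bytes per 32 weights

  def q4_0_dequantize_block(scale, q):
      return np.float32(scale) * (q.astype(np.float32) - 8)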

jychang No.43749341
Those are the same quant, but this is a good example of why you shouldn't use Ollama. Either use llama.cpp directly, or use something like LM Studio if you want a GUI and an easier user experience.

The Gemma 3 27B QAT GGUF should be taking up ~15GB, not 22GB.
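Back-of-envelope, assuming every weight is stored at Q4_0's ~4.5 bits (in practice some tensors may stay at higher precision, and the runtime allocates KV cache and context buffers on top of the weights):

  params = 27e9
  bits_per_weight = 4.5  # 32 4-bit values plus one fp16 scale per block
  weight_bytes = params * bits_per_weight / 8
  print(f"~{weight_bytes / 1e9:.1f} GB of weights")  # ~15.2 GB

The gap up to ~22GB is presumably context/KV-cache allocation and/or tensors kept at higher precision, rather than the Q4_0 weights themselves.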