
602 points emrah | 2 comments
mark_l_watson No.43745755
Indeed!! I have swapped out qwen2.5 for gemma3:27b-it-qat using Ollama for routine work on my 32GB Mac.

gemma3:27b-it-qat with open-codex, running locally, is just amazingly useful, not only for Python dev but also for Haskell and Common Lisp.

I still like Gemini 2.5 Pro and o3 for brainstorming or working on difficult problems, but for routine work it simply feels good to have everything open source/open weights running on my own system.

When I bought my 32GB Mac a year ago, I didn't expect to be this happy running gemma3:27b-it-qat with open-codex locally.
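
For concreteness, here is a minimal sketch of what a local call to this model looks like, assuming a default Ollama install listening on localhost:11434 and that the gemma3:27b-it-qat tag has already been pulled; the prompt is only an illustration.

    import json
    import urllib.request

    # Ollama's generate endpoint; default host/port for a local install.
    url = "http://localhost:11434/api/generate"
    payload = {
        "model": "gemma3:27b-it-qat",  # tag mentioned in the comment above
        "prompt": "Write a Haskell function that reverses a list.",
        "stream": False,               # one JSON object instead of a stream
    }
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])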

replies(3): >>43750006 >>43750021 >>43750815
1. pantulis No.43750815
How did you manage to run open-codex against a local Ollama? I keep getting 400 errors no matter what I try with the --provider and --model options.
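
One way to narrow a 400 like this down, assuming open-codex reaches its providers through an OpenAI-style API (which the --provider flag suggests), is to hit Ollama's OpenAI-compatible /v1 endpoint directly and see whether the error reproduces outside open-codex. A minimal sketch with the openai Python package; the api_key value is a dummy, since Ollama ignores it.

    from openai import OpenAI  # pip install openai

    # Point the client at Ollama's OpenAI-compatible endpoint.
    client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

    resp = client.chat.completions.create(
        model="gemma3:27b-it-qat",  # must match a tag Ollama has pulled
        messages=[{"role": "user", "content": "Say hello."}],
    )
    print(resp.choices[0].message.content)

If this call succeeds, the model tag is fine and the 400 is coming from the open-codex provider configuration; if it also fails, the tag itself is likely the problem.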
replies(1): >>43751321
2. pantulis No.43751321
Never mind: I found your Leanpub book, followed the instructions, and at least have it running with qwen-2.5. I'll investigate what happens with Gemma.
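
For anyone hitting the same wall, one quick check is whether the exact Gemma tag is present locally, since a request for a tag Ollama doesn't have will fail. A small sketch against Ollama's /api/tags listing endpoint:

    import json
    import urllib.request

    # /api/tags lists the models available to the local Ollama server.
    with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
        tags = [m["name"] for m in json.loads(resp.read())["models"]]

    print("\n".join(tags))
    print("gemma3:27b-it-qat" in tags)  # False means pull the tag first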