
MCP in LM Studio

(lmstudio.ai)
225 points | yags | 6 comments
1. patates No.44380448
What models are you using on LM Studio for what task and with how much memory?

I have a 48 GB MacBook Pro, and Gemma 3 (one of the abliterated ones) fits my non-code use case perfectly: generating crime stories in which the reader tries to guess the killer.

For code, I still call Google to use Gemini.
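For anyone wiring a local model into their own tooling instead of calling a hosted API: LM Studio exposes an OpenAI-compatible local server (by default on port 1234). A minimal sketch, assuming the server is running with a model such as Gemma 3 loaded (the prompt and parameter values here are illustrative):

```python
import json
import urllib.request

# Assumption: LM Studio's local server is enabled and listening on the
# default port, with a model already loaded in the UI.
BASE_URL = "http://localhost:1234/v1/chat/completions"

def build_request(prompt, model="local-model", temperature=0.8):
    """Build an OpenAI-style chat-completion payload."""
    return {
        "model": model,  # LM Studio serves whichever model is loaded
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

def complete(prompt):
    """POST the payload to the local server and return the reply text."""
    payload = json.dumps(build_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        BASE_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(complete("Write a two-sentence whodunit teaser."))
```

Because the endpoint follows the OpenAI chat-completions shape, the same client code works unchanged against a hosted API by swapping the base URL and adding an auth header.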

replies(4): >>44380643 >>44380718 >>44381684 >>44382987
2. No.44380643
3. No.44380718
4. robbru No.44381684
I've been using the Google Gemma QAT models in 4B, 12B, and 27B with LM Studio on my M1 Max. https://huggingface.co/lmstudio-community/gemma-3-12B-it-qat...
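As a rough sanity check on which of those sizes fits in a given amount of unified memory: a Q4 (4-bit) quant stores about half a byte per weight, plus runtime overhead for the KV cache and buffers. A back-of-the-envelope sketch (the overhead factor is an assumption, not a measured number):

```python
def approx_quant_size_gb(params_b, bits_per_weight=4.0, overhead=1.25):
    """Rough in-memory size of a quantized model, in GB.

    params_b: parameter count in billions (e.g. 27 for Gemma 3 27B).
    overhead: assumed multiplier for KV cache and runtime buffers.
    """
    weights_gb = params_b * bits_per_weight / 8  # 1e9 params ~ 1 GB at 8 bits
    return weights_gb * overhead

for n in (4, 12, 27):
    print(f"Gemma 3 {n}B @ Q4 ~ {approx_quant_size_gb(n):.1f} GB")
```

By this estimate even the 27B Q4 quant (weights alone ~13.5 GB) leaves plenty of headroom on a 48 GB machine, while long contexts push the real footprint higher than the weights-only number.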
5. t1amat No.44382987
I would recommend Qwen3 30B A3B for you. The MLX 4-bit DWQ quants are fantastic.
replies(1): >>44387235
6. redman25 No.44387235
Qwen is great, but for creative writing I think Gemma is a good choice. It has better EQ than Qwen, IMO.