
MCP in LM Studio

(lmstudio.ai)
226 points by yags | 4 comments
minimaxir No.44380112
LM Studio has quickly become the best way to run local LLMs on an Apple Silicon Mac: no offense to vllm/ollama and other terminal-based approaches, but LLMs have many levers for tweaking output and sometimes you need a UI to manage them. Now that LM Studio supports MLX models, it's one of the most efficient options, too.

I'm not bullish on MCP, but at the least this approach gives a good way to experiment with it for free.

replies(4): >>44380220 #>>44380533 #>>44380699 #>>44381188 #
1. zackify No.44381188
Ollama doesn't even have a way to customize the context size per model and persist it. LM Studio does :)
replies(1): >>44382206 #
2. Anaphylaxis No.44382206
This isn't true: you can `ollama run {model}`, `/set parameter num_ctx {ctx}`, and then `/save`. It's recommended to `/save {model}:{ctx}` so the setting persists across model updates.
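A minimal transcript of that flow, with {model} and {ctx} filled in as placeholders:

    $ ollama run llama3.1             # any locally installed model
    >>> /set parameter num_ctx 8192   # raise the context window for this session
    >>> /save llama3.1:8192           # saving under a new tag survives `ollama pull` updates
    >>> /bye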
replies(2): >>44385978 #>>44386362 #
3. truemotive No.44385978
This can be done with custom Modelfiles as well. I was pretty bent when I found out that 2048 was the default context length.

https://ollama.readthedocs.io/en/modelfile/
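For reference, a minimal Modelfile along those lines (the base model, tag, and context size here are just illustrative):

    # Modelfile: bake a larger context window into a derived model
    FROM llama3.1
    PARAMETER num_ctx 8192

Then build and run the derived model:

    $ ollama create llama3.1-8k -f Modelfile
    $ ollama run llama3.1-8k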

4. zackify No.44386362
As of two weeks back, if I did this it would reset the moment Cline made an API call, but LM Studio would work correctly. I'll have to try again. I even confirmed Cline was not overriding num_ctx.
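One plausible explanation (an assumption, not confirmed here): Ollama's generate API accepts per-request options that take precedence over anything saved with `/save`, so a client quietly sending its own value would undo the persisted one. A sketch of such an override against the default endpoint, using the placeholder tag from above:

    # Per-request options win over saved parameters; a client sending its own
    # num_ctx (2048 here) would effectively reset the value saved earlier.
    $ curl http://localhost:11434/api/generate -d '{
      "model": "llama3.1:8192",
      "prompt": "hello",
      "options": {"num_ctx": 2048}
    }'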