
MCP in LM Studio

(lmstudio.ai)
226 points | yags | 1 comment
api No.44380675
I wish LM Studio had a pure daemon mode. It's better than ollama in a lot of ways, but I'd rather be able to use BoltAI as the UI, as well as use it from Zed, VSCode, and aider.

What I like about ollama is that it provides a self-hosted AI provider that can be used by a variety of things. LM Studio has that too, but you have to have the whole big chonky Electron UI running. Its UI is powerful but a lot less nice than e.g. BoltAI for casual use.
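
Both expose an OpenAI-compatible HTTP API, so in principle any client that speaks that protocol can sit on top. A minimal sketch against LM Studio's default local server (port 1234; the model name here is illustrative):

    # Minimal sketch: assumes LM Studio's local server on its default
    # port 1234; the model name is illustrative.
    curl http://localhost:1234/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{
        "model": "qwen2.5-7b-instruct",
        "messages": [{"role": "user", "content": "Hello"}]
      }'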

replies(2): >>44380769 >>44382302
SparkyMcUnicorn No.44380769
There's a "headless" checkbox in Settings -> Developer.
replies(1): >>44382096
diggan No.44382096
Still, you need to install and run the AppImage at least once to enable the "lms" CLI, which you can use from then on. It would be nice to have a completely GUI-less installation/use method too.
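
Once that one-time GUI launch is done, the headless flow looks roughly like this (the bootstrap path and subcommands are from memory, so treat them as approximate):

    # Rough sketch; the bootstrap path varies by OS and version.
    ~/.lmstudio/bin/lms bootstrap   # put `lms` on PATH (needs one prior GUI launch)
    lms server start                # start the local OpenAI-compatible server
    lms ls                          # list downloaded models
    lms load <model-key>            # load a model for serving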
replies(1): >>44383010
t1amat No.44383010
The UI is the product. If you just want the engine, use mlx-omni-server (for MLX) or llama-swap (for GGUF) and huggingface-cli (for model downloads).
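
Roughly, the GGUF route looks like this (repo and file names are illustrative; llama-swap is a proxy that spawns llama.cpp's llama-server per a YAML config, and the last line is the kind of command it manages):

    # Rough sketch of the GGUF route; repo/file names are illustrative.
    pip install -U "huggingface_hub[cli]"   # provides huggingface-cli
    huggingface-cli download Qwen/Qwen2.5-7B-Instruct-GGUF \
      qwen2.5-7b-instruct-q4_k_m.gguf --local-dir ./models
    # llama-swap would wrap a llama.cpp server command like this one:
    llama-server -m ./models/qwen2.5-7b-instruct-q4_k_m.gguf --port 8080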
replies(1): >>44386253
diggan No.44386253
Those don't offer the same features as LM Studio itself does, even setting the UI aside. If there were an "LM Engine" CLI I could install, then sure, but there isn't, hence the need to run the UI once to get "the engine".