
MCP in LM Studio

(lmstudio.ai)
227 points by yags | 6 comments
chisleu ◴[] No.44380098[source]
Just ordered a $12k Mac Studio with 512GB of unified memory.

Can't wait for it to arrive and crank up LM Studio. It's literally the first install. I'm going to download it with Safari.

LM Studio is newish, and the interface isn't perfect yet, but it's fantastic at what it does, which is bringing local LLMs to the masses without them having to know much.

There is another project that people should be aware of: https://github.com/exo-explore/exo

Exo is this radically cool tool that automatically clusters all the hosts on your network that are running Exo and uses their combined GPUs for increased throughput.

As in HPC environments, you're going to want ultra-fast interconnects, but it's all just IP-based.
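
To give a feel for how you'd talk to an Exo cluster once it's up, here's a minimal Python sketch against Exo's ChatGPT-compatible HTTP endpoint. The port (52415) and model id are assumptions taken from the Exo README at the time of writing, so check your own install.

    # Minimal sketch: query an exo cluster through its ChatGPT-compatible endpoint.
    # Port 52415 and the model id are assumptions from the exo README; adjust to
    # whatever your local cluster actually exposes.
    import requests

    resp = requests.post(
        "http://localhost:52415/v1/chat/completions",
        json={
            "model": "llama-3.2-3b",  # placeholder model id
            "messages": [{"role": "user", "content": "Hello from the cluster"}],
            "temperature": 0.7,
        },
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])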

1. prettyblocks ◴[] No.44380956[source]
I've been using Open WebUI and am pretty happy with it. Why do you like LM Studio more?
2. truemotive ◴[] No.44381042[source]
Open WebUI can leverage the built-in web server in LM Studio, just FYI in case you thought LM Studio was primarily a chat interface.
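
For anyone wondering what that looks like in practice, here's a minimal Python sketch pointing a standard OpenAI client at LM Studio's built-in server, which listens on localhost:1234 by default. The model name is a placeholder for whatever you've loaded.

    # Minimal sketch: any OpenAI-compatible client (Open WebUI included) can point
    # at LM Studio's local server. Default port is 1234; the API key is ignored.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

    completion = client.chat.completions.create(
        model="local-model",  # placeholder; substitute the model loaded in LM Studio
        messages=[{"role": "user", "content": "Say hi in five words."}],
    )
    print(completion.choices[0].message.content)
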
3. prophesi ◴[] No.44381073[source]
Not OP, but with LM Studio I get a chat interface out of the box for local models, while with Open WebUI I'd need to configure it to point to an OpenAI-compatible API server (like LM Studio). LM Studio can also help determine which models will run well on your hardware.

LM Studio isn't FOSS though.

I did enjoy hooking up OpenWebUI to Firefox's experimental AI chatbot (set browser.ml.chat.hideLocalhost to false and browser.ml.chat.provider to localhost:${openwebui-port}).

4. s1mplicissimus ◴[] No.44381909[source]
I recently tried Open WebUI, but it was so painful to get it running with a local model. The "first run experience" of LM Studio is pretty fire in comparison. Can't really speak to actually working with it yet, though; I'm still waiting on the 8GB download.
5. prettyblocks ◴[] No.44382953[source]
Interesting. I run my local LLMs through Ollama, and it's zero trouble to get that working in Open WebUI as long as the Ollama server is running.
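
For comparison, the same kind of round trip against a running Ollama server is just one HTTP call. A minimal Python sketch, assuming Ollama's default port (11434) and a model tag you've already pulled:

    # Minimal sketch: chat with a local model through Ollama's REST API.
    # Assumes the server is on its default port and "llama3.1" has been pulled.
    import requests

    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": "llama3.1",  # assumed model tag
            "messages": [{"role": "user", "content": "Why is the sky blue?"}],
            "stream": False,  # return one JSON object instead of a stream
        },
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json()["message"]["content"])
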
6. diggan ◴[] No.44386320{3}[source]
I think that's the thing. Just running Ollama (fiddling around with terminals) is already more involved than the full end-to-end experience of chatting in LM Studio.

Of course, for folks used to terminals, daemons, and so on, it makes sense from the get-go, but for others it seemingly doesn't, and it doesn't help that Ollama refuses to communicate what people should understand before trying to use it.