
MCP in LM Studio (lmstudio.ai)
225 points by yags | 2 comments
minimaxir ◴[] No.44380112[source]
LM Studio has quickly become the best way to run local LLMs on an Apple Silicon Mac: no offense to vllm/ollama and other terminal-based approaches, but LLMs have many levers for tweaking output and sometimes you need a UI to manage it. Now that LM Studio supports MLX models, it's one of the most efficient too.

I'm not bullish on MCP, but at the least this approach gives a good way to experiment with it for free.

replies(4): >>44380220 #>>44380533 #>>44380699 #>>44381188 #
chisleu ◴[] No.44380699[source]
> I'm not bullish on MCP

You gotta help me out. What do you see holding it back?

replies(1): >>44381024 #
minimaxir ◴[] No.44381024[source]
tl;dr: the current hype around it is a solution looking for a problem, and at a high level it's just a rebrand of the Tools paradigm.
replies(1): >>44381099 #
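
For reference, the plain Tools paradigm being referred to looks roughly like the sketch below: a JSON tool schema passed to an OpenAI-compatible chat completions endpoint, here pointed at LM Studio's local server (the port, model name, and "add" tool are assumptions for illustration).

    import OpenAI from "openai";

    // Assumption: LM Studio's OpenAI-compatible server on its default local port.
    const llm = new OpenAI({ baseURL: "http://localhost:1234/v1", apiKey: "lm-studio" });

    const response = await llm.chat.completions.create({
      model: "qwen2.5-7b-instruct", // assumed local model name
      messages: [{ role: "user", content: "What is 2 + 3?" }],
      tools: [{
        type: "function",
        function: {
          name: "add",
          description: "Add two numbers",
          parameters: {
            type: "object",
            properties: { a: { type: "number" }, b: { type: "number" } },
            required: ["a", "b"],
          },
        },
      }],
    });

    // The model answers with a tool_call; the caller runs the function itself
    // and sends the result back in a follow-up message.
    console.log(response.choices[0].message.tool_calls);
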
mhast ◴[] No.44381099[source]
It's "Tools as a service", so it's really trying to make tool calling easier to use.
replies(1): >>44382200 #
ijk ◴[] No.44382200[source]
Near as I can tell it's supposed to make calling other people's tools easier. But I don't want to spin up an entire server to invoke a calculator. So far it seems to make building my own local tools harder, unless there's some guidebook I'm missing.
replies(2): >>44382667 #>>44384088 #
xyc ◴[] No.44382667[source]
It's a protocol that doesn't dictate how you call the tool. You can use an in-memory transport without needing to spin up a server: your tool can just be a function, while keeping the flexibility to serve other clients.
replies(1): >>44385236 #
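
A minimal sketch of the in-memory setup described here, assuming the MCP TypeScript SDK's InMemoryTransport (the "add" tool, names, and versions are illustrative, and exact constructor signatures may vary across SDK versions): the server and client are plain objects in one process, and the tool is just a local function.

    import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
    import { Client } from "@modelcontextprotocol/sdk/client/index.js";
    import { InMemoryTransport } from "@modelcontextprotocol/sdk/inMemory.js";
    import { z } from "zod";

    // The "server" is just an object in this process; the tool is a plain function.
    const server = new McpServer({ name: "calc", version: "1.0.0" });
    server.tool("add", { a: z.number(), b: z.number() }, async ({ a, b }) => ({
      content: [{ type: "text", text: String(a + b) }],
    }));

    // A linked pair of transports: messages pass through memory, no sockets or stdio.
    const [clientTransport, serverTransport] = InMemoryTransport.createLinkedPair();
    const client = new Client({ name: "host-app", version: "1.0.0" });
    await server.connect(serverTransport);
    await client.connect(clientTransport);

    const result = await client.callTool({ name: "add", arguments: { a: 2, b: 3 } });
    console.log(result.content); // [{ type: "text", text: "5" }]
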
ijk ◴[] No.44385236[source]
Are there any examples of that? All the documentation I saw seemed to be about building an MCP server, with very little about connecting an existing inference infrastructure to local functions.
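
One way to bridge that gap, sketched under the same assumptions as the in-memory example above (not taken from the official docs): ask the MCP client for its tool list, translate it into the tools array an OpenAI-style inference stack already accepts, and route the model's tool calls back through callTool.

    // Continuing from the in-memory client above.
    const { tools } = await client.listTools();

    // MCP tool descriptors already carry JSON Schema, so they map directly
    // onto the chat-completions tools format.
    const chatTools = tools.map((t) => ({
      type: "function" as const,
      function: {
        name: t.name,
        description: t.description ?? "",
        parameters: t.inputSchema,
      },
    }));

    // When the model emits a tool call, forward it to the local function via MCP.
    async function runToolCall(name: string, args: Record<string, unknown>) {
      const result = await client.callTool({ name, arguments: args });
      return result.content;
    }
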