
237 points by jdkee | 1 comment | source
nestorD ◴[] No.45949335[source]
So far I have seen two genuinely good arguments for the use of MCPs:

* They can encapsulate (API) credentials, keeping those out of reach of the model (see the sketch after this list),

* Unlike APIs, they can change their interface whenever they want, with little consequence.
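To make the first argument concrete, here is a minimal sketch of credential encapsulation, assuming the Python MCP SDK's FastMCP helper and a made-up weather API; the key lives in the server's environment, so only the fetched data ever reaches the model:

    # weather_server.py - MCP server that keeps the API key out of the model's context
    import os
    import httpx
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("weather")

    @mcp.tool()
    def get_forecast(city: str) -> str:
        """Return the raw forecast for a city; the credential never leaves the server."""
        api_key = os.environ["WEATHER_API_KEY"]  # held server-side, invisible to the model
        resp = httpx.get("https://api.example.com/forecast",
                         params={"q": city, "key": api_key})
        resp.raise_for_status()
        return resp.text

    if __name__ == "__main__":
        mcp.run()  # stdio transport by default

The model only ever sees the tool's name, schema, and return value, never the key itself.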

replies(6): >>45949460 #>>45949551 #>>45949927 #>>45949984 #>>45950485 #>>45957820 #
the_mitsuhiko ◴[] No.45949551[source]
> * Unlike APIs, they can change their interface whenever they want, with little consequence.

I have made this argument before, but that's not entirely right. I understand that this is how everybody is doing it right now, but that in itself causes issues for more advanced harnesses. I have one that exposes MCP tools as function calls in code, and it encourages the agent to materialize composed MCP calls into scripts on the file system.
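For illustration, a materialized script might look roughly like the sketch below. The tool wrappers, their signatures, and the workflow are hypothetical stand-ins for whatever the harness would generate (stubbed here so the sketch runs on its own):

    # issue_triage.py - a workflow the agent "materialized" from composed MCP calls.
    # The wrappers stand in for code the harness generated around two MCP tools;
    # all names and signatures are hypothetical.

    def mcp_github_search_issues(repo: str, state: str) -> list[dict]:
        """Generated wrapper around a hypothetical `search_issues` MCP tool (stubbed)."""
        return [{"number": 42, "title": "Crash on startup"}]

    def mcp_github_add_label(repo: str, number: int, label: str) -> None:
        """Generated wrapper around a hypothetical `add_label` MCP tool (stubbed)."""
        print(f"label {label!r} added to {repo}#{number}")

    def triage(repo: str) -> None:
        # The composed workflow is frozen against today's tool interface.
        for issue in mcp_github_search_issues(repo=repo, state="open"):
            if "crash" in issue["title"].lower():
                mcp_github_add_label(repo=repo, number=issue["number"], label="bug")

    if __name__ == "__main__":
        triage("acme/widgets")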

If the MCP server decides to change its tools, those scripts break. The same issue applies to approaches like the one Vercel is advocating for [1].

[1]: https://vercel.com/blog/generate-static-ai-sdk-tools-from-mc...

replies(1): >>45954874 #
1. lsaferite ◴[] No.45954874[source]
Wouldn't the answer to this be to have the agent generate a new materialized workflow, though? You presumably already have automated the agent's ability to create these workflows based on some prompting and a set of MCP servers.
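One way to sketch that (assuming a hypothetical regeneration hook in the harness, not any real SDK): fingerprint the tool schemas the workflow was generated against, and when the MCP server's currently advertised schemas no longer match, re-prompt the agent to rebuild the script.

    # Staleness check for materialized workflows; ask_agent_to_regenerate is a
    # hypothetical harness hook.
    import hashlib
    import json
    from pathlib import Path

    def ask_agent_to_regenerate(script: Path, schemas: list[dict]) -> None:
        # Hypothetical: re-prompt the agent with the live tool schemas and have
        # it rewrite the materialized workflow.
        print(f"regenerating {script} against {len(schemas)} tool schemas")

    def fingerprint(schemas: list[dict]) -> str:
        # Stable hash of the tool schemas the workflow was generated against.
        return hashlib.sha256(json.dumps(schemas, sort_keys=True).encode()).hexdigest()

    def ensure_fresh(script: Path, current_schemas: list[dict]) -> None:
        stamp = script.with_suffix(".toolhash")
        current = fingerprint(current_schemas)
        if not stamp.exists() or stamp.read_text() != current:
            ask_agent_to_regenerate(script, current_schemas)
            stamp.write_text(current)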