* They can encapsulate (API) credentials, keeping those out of reach of the model (see the sketch after this list),
* Unlike APIs, they can change their interface whenever they want, with little consequence.
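For concreteness, the first point looks roughly like this on the server side: the credential lives in the MCP server's environment and gets attached inside the tool handler, so the model only ever sees the tool's arguments and results. (A minimal sketch, not tied to any particular SDK; the env var name and endpoint are made up.)

```python
import os

import requests

# The key lives in the server's environment; it is never part of the prompt
# or the tool description the model sees.
API_KEY = os.environ["WEATHER_API_KEY"]  # hypothetical variable name

def get_weather(city: str) -> dict:
    """Tool handler: the model supplies `city`; the server attaches the credential."""
    resp = requests.get(
        "https://api.example.com/v1/weather",  # hypothetical endpoint
        params={"q": city},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # only this result goes back into the model's context
```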
> They can encapsulate (API) credentials, keeping those out of reach of the model
An alternative to MCP that would still provide this: code (as suggested in https://www.anthropic.com/engineering/code-execution-with-mc... and https://blog.cloudflare.com/code-mode/).
Put the creds in a file, or a secret manager of some sort, and let the LLM write code to read and use them. The downside is that you'd need to review the code to make sure it isn't printing (or otherwise leaking) the credentials, but then again you should probably be reviewing what the LLM is doing anyway.
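Roughly what that generated code would look like, assuming the creds sit in a JSON file (a secret manager would work the same way); the path, filename, and endpoint here are made up:

```python
import json
import pathlib

import requests

# The generated code reads the credential at runtime; the secret never has to
# be pasted into the prompt or the conversation.
creds = json.loads(
    pathlib.Path("~/.config/myservice/creds.json").expanduser().read_text()
)

resp = requests.get(
    "https://api.example.com/v1/items",  # hypothetical endpoint
    headers={"Authorization": f"Bearer {creds['api_key']}"},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # the review point: only the response is printed, never the key
```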
> Unlike APIs, they can change their interface whenever they want, with little consequence.
The upside is as stated, but the downside is that you're always polluting the context window with MCP tool descriptions.