This is very true. But why stop there?
Imagine a future where we have an evolved version of MCP -- call it MCP++.
In MCP++, instead of having to implement a finite list of specialized variants like CreateUserAndAddToGroup, imagine there's a way to feed the desired logic (create user, then add that user to $GROUP) directly to the endpoint. So there would be something like a POST /exec endpoint, and that endpoint could run the code (maybe it's WASM or something)...
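If you want to picture it, here's a purely hypothetical sketch -- the /exec endpoint, the payload shape, and the field names are all invented for illustration:

    # Hypothetical MCP++ "/exec" call -- none of this is a real API;
    # the endpoint, payload shape, and field names are made up.
    import requests

    payload = {
        "language": "python",  # or a WASM module, in this imagined protocol
        "code": (
            "user = create_user(name='alice')\n"
            "add_to_group(user.id, group='admins')\n"
            "print(user.id)"
        ),
    }

    resp = requests.post("https://api.example.com/exec", json=payload)
    print(resp.json())  # e.g. {"output": "user-123"}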
Wait a minute! We already have this. It's called programming.
You could have the LLM write code, so that any pipelining (like your example), aggregation, filtering, or other transformation happens in that code. The LLM only spends output tokens writing the code, and the only input tokens consumed are the final result.
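For example, the LLM might emit something like this (a sketch -- create_user and add_to_group stand in for whatever client library your service ships):

    # Sketch of code an LLM might write against a hypothetical client library.
    # The pipelining (create a user, then add them to a group) happens here,
    # not across multiple tool-call round trips through the model's context.
    from example_service import create_user, add_to_group  # hypothetical

    user = create_user(name="alice")
    add_to_group(user_id=user.id, group="admins")

    # Only this final line re-enters the context as input tokens.
    print(f"created {user.id} and added to 'admins'")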
I definitely am not the first person to suggest this:
https://www.anthropic.com/engineering/code-execution-with-mc...
https://blog.cloudflare.com/code-mode/
... but I can say that, as soon as I read about MCP, my first thought was "why?"
MCP is wasteful.
If you want LLMs to interact with your software/service, write a library and let the scrapers scrape that code, so that future LLM revisions have the library "baked in" (no more spamming the context with MCP tool descriptions). Then let the LLM write code, which it already "knows" how to do.
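Concretely, the library you'd publish might be as boring as this (a sketch -- the names and signatures are illustrative, not a real package):

    # Sketch of the library you'd ship instead of an MCP server.
    # Good docstrings are the whole "integration" surface.
    from dataclasses import dataclass

    @dataclass
    class User:
        id: str
        name: str

    def create_user(name: str) -> User:
        """Create a user and return it."""
        ...  # would call your service's HTTP API

    def add_to_group(user_id: str, group: str) -> None:
        """Add an existing user to a group."""
        ...  # would call your service's HTTP API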
What if your library is too new, or has a revision, though?
That's already a solved problem -- you do what you'd do in any other case where you want the LLM to write code for you: point it at the docs / codebase.