1. Some LLMs support function calling: the model is given a list of tools along with descriptions of what each tool does.
2. Rather than answering your question in one go, the LLM can say it wants to call a function.
3. Your client (developer tool etc) will call that function and pass the results to the LLM.
4. The LLM then continues, and either completes the conversation or calls more tools (functions).
5. MCP is gaining traction as a standard way of adding tools/functions to LLMs.
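The loop in steps 1–4 can be sketched in a few lines. This is a toy, self-contained version: `fake_llm` stands in for a real model API, and `get_weather` is a made-up tool, but the client-side dispatch loop is the same shape real integrations use.

```python
import json

# Made-up tool the client exposes to the model.
def get_weather(city):
    return {"city": city, "temp_c": 21}

TOOLS = {"get_weather": get_weather}

# Stubbed model: first turn it requests a tool call, second turn it answers.
# A real LLM API returns structured messages much like these.
def fake_llm(messages):
    if not any(m["role"] == "tool" for m in messages):
        return {"role": "assistant",
                "tool_call": {"name": "get_weather",
                              "arguments": {"city": "Paris"}}}
    result = json.loads(
        [m for m in messages if m["role"] == "tool"][-1]["content"])
    return {"role": "assistant",
            "content": f"It's {result['temp_c']}C in {result['city']}."}

def run(question):
    messages = [{"role": "user", "content": question}]
    while True:
        reply = fake_llm(messages)
        call = reply.get("tool_call")
        if call is None:
            return reply["content"]  # model finished the conversation
        # Client executes the requested function and feeds the result back.
        result = TOOLS[call["name"]](**call["arguments"])
        messages.append(reply)
        messages.append({"role": "tool", "content": json.dumps(result)})

print(run("What's the weather in Paris?"))
```

The key point: the client, not the model, actually runs the function; the model only asks for it and reads the result.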
GitMCP
I haven't looked too deeply but I can guess.
1. It will expose a set of API endpoints the LLM can call to look at your code, probably things like get_file, get_folder, etc.
2. When you ask the LLM something like "Tell me how to add observability to the code", it can make calls to fetch the code and start looking at it.
3. The LLM can keep on making calls to GitMCP until it has enough context to answer the question.
Hope this helps.