> Llm is a cool cli tool, but IMO litellm is a better Python library.
The problem with LiteLLM's approach is that every model provider needs to be added to the core library - in https://github.com/BerriAI/litellm/tree/main/litellm/llms - and then shipped as a new release.
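For context, a typical LiteLLM call looks something like the sketch below (the model string is just an assumed example, and it presumes the relevant API key is set in the environment). The provider prefix in the model string is resolved by code that lives inside the library itself, which is why supporting a new provider means a change to litellm and a new release.

```python
# Sketch of LiteLLM usage: one completion() entry point, with provider
# routing handled by code baked into the litellm package itself.
from litellm import completion

response = completion(
    model="gemini/gemini-1.5-pro",  # assumed example; the "gemini/" prefix routes to the built-in Gemini support
    messages=[{"role": "user", "content": "Say hello"}],
)
print(response.choices[0].message.content)
```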
LLM uses plugins so there's no need to sync new providers with the core tool. When a new Gemini feature comes out I ship a new release of https://github.com/simonw/llm-gemini - no release of the core tool required.
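As a rough illustration, here's a minimal sketch of a model plugin built on LLM's register_models hook. The EchoModel class, its behaviour, and the llm-echo package name are made up for the example; real plugins like llm-gemini follow the same general shape.

```python
# Hypothetical plugin module (e.g. shipped as a package called llm-echo).
# It registers a new model with LLM via a plugin hook, so the core tool
# never has to know about this model ahead of time.
import llm


class EchoModel(llm.Model):
    model_id = "echo"  # made-up model that just repeats the prompt back

    def execute(self, prompt, stream, response, conversation):
        # A real plugin would call a provider's API here and yield chunks of text.
        yield prompt.prompt


@llm.hookimpl
def register_models(register):
    register(EchoModel())
```

Once something like that is packaged and installed (llm install llm-echo, in this hypothetical), the model shows up in llm models with no change to LLM itself.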
I can wake up one morning and find that LLM has grown support for a bunch of new models overnight because someone else released a plugin.
I'm not saying "LLM is better than LiteLLM" here. LiteLLM is a great library with a whole lot more contributors than LLM, and it has stayed fully focused on being a great Python library, while LLM has so far had more effort invested in its CLI than in its Python library side.
I am confident, though, that a plugin system is generally a better way to solve this problem.