Introducing any-llm: A unified API to access any LLM provider
When it comes to using LLMs, it’s not always a question of which model to use: it’s also a matter of choosing who provides the LLM and where it is deployed. Today, we announce the release of any-llm, a Python library that provides a simple unified interface to access the most popular providers.

When it comes to using Large Language Models (LLMs), it’s not always a question of which model to use: it’s also a matter of choosing who provides the LLM and where it is deployed. As we’ve written about previously, there are many options available for how to access an LLM. The provider you choose has implications for cost, latency, and security. Most AI labs offer their own provider platform (OpenAI, Google, Mistral, etc.), and other platforms (Azure, AWS, Cerebras, etc.) offer access to a wide variety of LLMs. But what if you want to build your LLM application without worrying about being locked into a particular provider?
Today, we’re happy to announce the release of our new Python library: any-llm! any-llm provides a simple, unified interface to the most popular LLM providers. By changing a single configuration parameter, you can switch between providers and models.
from any_llm import completion
import os

# Make sure the appropriate environment variable is set for your provider
assert os.environ.get("MISTRAL_API_KEY"), "MISTRAL_API_KEY is not set"

# Basic completion
response = completion(
    model="mistral/mistral-small-latest",  # <provider_id>/<model_id>
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
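Because the provider is encoded in the model string, switching providers is a one-line change. The sketch below assumes you also have an OpenAI key set; the "openai/gpt-4o-mini" identifier is illustrative, and any <provider_id>/<model_id> pair from the supported list works the same way.

import os
from any_llm import completion

messages = [{"role": "user", "content": "Hello!"}]

# Same call, different provider: only the model string changes.
# "openai/gpt-4o-mini" is an illustrative <provider_id>/<model_id>.
assert os.environ.get("OPENAI_API_KEY"), "OPENAI_API_KEY is not set"
response = completion(model="openai/gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)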
any-llm fills a gap in the LLM provider interface landscape through several key design principles:
- Use of provider SDKs: any-llm leverages official provider SDKs when available, reducing the maintenance burden and ensuring compatibility.
- Committed to active maintenance: any-llm is integrated with any-agent, one of our most community-engaged projects, so we're motivated to maintain it.
- No proxy or gateway server required: there’s no need to stand up another service to act as a gateway. Install the any-llm SDK and you can talk to all supported providers directly, without sending your data through a third party.
You can view the list of our supported providers here.
OpenAI API Standard
The OpenAI API has become the standard for LLM provider interfaces. While some providers are fully compatible with the OpenAI API standard, others (like Mistral, Anthropic, etc.) diverge slightly from it in their expected input parameters and output values.
Switching between these providers therefore calls for lightweight wrappers that gracefully handle these differences while keeping the interface as consistent as possible.
any-llm solves this problem by normalizing outputs to return OpenAI ChatCompletion objects, regardless of which provider is used under the hood. The objects are returned as OpenAI Pydantic models, so you can access them just as you would if you were using the official OpenAI API SDK.
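As a quick illustration, here is a minimal sketch of what that normalization means in practice. The isinstance check against the openai package’s ChatCompletion type is our own way of illustrating the guarantee, not an example from the any-llm docs, and it assumes a MISTRAL_API_KEY is set as in the earlier example.

import os
from any_llm import completion
from openai.types.chat import ChatCompletion

assert os.environ.get("MISTRAL_API_KEY"), "MISTRAL_API_KEY is not set"

response = completion(
    model="mistral/mistral-small-latest",
    messages=[{"role": "user", "content": "Hello!"}],
)

# Regardless of the provider used under the hood, the result is an
# OpenAI ChatCompletion Pydantic model, so the familiar accessors and
# standard Pydantic helpers apply.
assert isinstance(response, ChatCompletion)
print(response.choices[0].message.content)
print(response.model_dump_json(indent=2))  # standard Pydantic serialization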
You can view the any-llm API documentation here. And to see how it relates to existing solutions, read on.
Shortcomings of Existing Solutions
Several excellent open-source Python libraries already exist to address the challenge of interacting with various LLM providers, but each comes with its own set of limitations.
One popular solution, LiteLLM, is highly valued for its wide support of different providers and modalities, making it a great choice for many developers. However, it re-implements provider interfaces rather than building on the SDKs that the providers themselves maintain and release. This approach can lead to compatibility issues and unexpected behavior changes, and makes it hard to keep pace with changes across all the providers.
Another option, AISuite, was created by Andrew Ng and offers a clean and modular design. However, it is not actively maintained (its last release was in December 2024) and lacks consistent Python-typed interfaces.
Furthermore, many framework-specific solutions, such as those found in Agno, either depend on LiteLLM or implement their own provider integrations. These implementations may be robust, but they are difficult to reuse in other software because they are tightly coupled to the frameworks they were built for.
Lastly, proxy/gateway solutions like OpenRouter and Portkey route your requests through a hosted proxy server that acts as an intermediary between your code and the LLM provider. Although this can effectively abstract away the complicated logic from the developer, it adds an extra layer of complexity and a dependency on external services, which might not be ideal for all use cases.