any-llm in the Wild: Three Integrations as We Grow Our Ecosystem
any-llm now integrates with JupyterLite AI, LangChain, and Headroom: a single provider-agnostic layer powering notebooks, agents, and context optimization across OpenAI, Anthropic, Mistral, and local models.
A core part of building any-llm is making sure it is present where developers already are. Over the past few months, we've integrated any-llm into the broader ecosystem by contributing to open source projects, publishing new packages, and collaborating with the community. Here are three recent integrations that each demonstrate a different aspect of what a provider-agnostic LLM layer makes possible.
JupyterLite AI + any-llm-gateway
JupyterLite AI gives data scientists access to AI code completions and chat. It already supports a range of providers, but each requires its own configuration and API keys.
Because the any-llm-gateway exposes an OpenAI-compatible API, JupyterLite AI connects to it without any code changes. Users point their notebook at the gateway endpoint and get access to every provider any-llm supports — OpenAI, Anthropic, Mistral, local models — through a single configuration.
The Local Edge: You can now point JupyterLite AI at a local Llama 3 or Mistral instance running via llamafile or Ollama. Your code, your data, and your prompts never leave your machine.
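Because the gateway speaks the OpenAI chat-completions wire format, any HTTP client can reach it without a provider SDK. A minimal sketch using only the standard library, assuming a gateway running on localhost:8000 (the URL, port, and API-key handling here are illustrative, not the gateway's documented defaults):

```python
import json
import urllib.request

# Illustrative endpoint: an any-llm-gateway instance running locally.
GATEWAY_URL = "http://localhost:8000/v1/chat/completions"

def build_chat_request(model: str, prompt: str, api_key: str = "local-key") -> urllib.request.Request:
    """Build an OpenAI-format chat request. The 'provider:model' string
    selects the backend provider; nothing provider-specific is needed client-side."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        GATEWAY_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

# Same request shape, regardless of which provider ends up serving it.
req = build_chat_request("anthropic:claude-sonnet-4-20250514", "Explain this stack trace")
# with urllib.request.urlopen(req) as resp:  # requires a running gateway
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Swapping the `anthropic:` prefix for `mistral:` or a local model changes nothing else in the notebook's configuration.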
langchain-anyllm
LangChain is the industry standard for building AI agents, but swapping between providers often requires installing and configuring disparate packages.
We built langchain-anyllm to collapse that complexity into a single integration. Install one package and switch models with a string:
```python
from langchain_anyllm import ChatAnyLLM

llm = ChatAnyLLM(model="openai:gpt-4")
llm = ChatAnyLLM(model="anthropic:claude-sonnet-4-20250514")
llm = ChatAnyLLM(model="mistral:mistral-small-latest")
```
The package supports streaming (sync and async), tool calling, and JSON mode. It's available on PyPI and, after review by the LangChain team, is listed in LangChain's official docs.
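The string-switching above works because the provider prefix picks the backend at call time. As a rough illustration of that dispatch pattern (toy stubs for demonstration only, not langchain-anyllm's actual internals):

```python
from dataclasses import dataclass

# Toy backends keyed by provider prefix. A real integration would wrap
# each vendor's API; these stubs just tag the response.
def _openai_stub(model: str, prompt: str) -> str:
    return f"[openai/{model}] {prompt}"

def _anthropic_stub(model: str, prompt: str) -> str:
    return f"[anthropic/{model}] {prompt}"

PROVIDERS = {"openai": _openai_stub, "anthropic": _anthropic_stub}

@dataclass
class ToyChatModel:
    model: str  # "provider:model" identifier, as in the snippet above

    def invoke(self, prompt: str) -> str:
        provider, _, name = self.model.partition(":")
        if provider not in PROVIDERS:
            raise ValueError(f"unknown provider: {provider!r}")
        return PROVIDERS[provider](name, prompt)

print(ToyChatModel(model="openai:gpt-4").invoke("hello"))  # [openai/gpt-4] hello
```

The point is that the model identifier, not the installed package set, determines which provider handles the call.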
Headroom + any-llm as a backend
Headroom is a context optimization layer that compresses LLM context by 50-90% using statistical analysis, thereby reducing token costs without sacrificing accuracy. It operates as a proxy and previously supported backends like AWS Bedrock, Vertex AI, Azure OpenAI, and OpenRouter.
We contributed any-llm as a new backend, giving Headroom users access to any supported provider:
headroom proxy --backend anyllm --anyllm-provider openai
The integration supports streaming, non-streaming, and OpenAI-format requests. The composability here is the interesting part: Headroom handles context optimization, any-llm handles provider abstraction, and the developer gets both without coupling to a specific vendor. For on-prem users, this allows running larger models with more context on more modest local hardware.
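To make the token-cost claim concrete, here is back-of-the-envelope arithmetic with illustrative numbers (the request volume and $3-per-million-token price are assumptions; 70% compression sits inside the 50-90% range quoted above):

```python
def monthly_token_cost(tokens_per_request: int, requests_per_month: int,
                       price_per_million_tokens: float, compression: float = 0.0) -> float:
    """Estimated input-token spend after compressing each request's context."""
    effective_tokens = tokens_per_request * (1 - compression)
    return effective_tokens * requests_per_month * price_per_million_tokens / 1_000_000

baseline   = monthly_token_cost(8_000, 100_000, 3.00)                   # $2,400.00
compressed = monthly_token_cost(8_000, 100_000, 3.00, compression=0.7)  # $720.00
print(f"${baseline:,.2f} -> ${compressed:,.2f}")
```

The same arithmetic applies on-prem, except the saved budget is context-window headroom rather than dollars.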
Join the Ecosystem
Come build the future of open source AI with us.
- Explore the code: Check out the any-llm repository to see how we're abstracting the provider layer.
- Try the integrations: Grab langchain-anyllm on PyPI or spin up the any-llm-gateway to use with Jupyter.
- Build with us: Have a tool you want to see integrated? Open an issue or a PR — we’re meeting developers where they are, and we’d love to meet you there, too.