Hardening Your LLM Dependency Supply Chain
When source code and distributed packages don’t match, risks increase. This breakdown of the LiteLLM incident shares what to watch for and how to reduce exposure.
On March 24, 2026, LiteLLM, a Python package with over 95 million monthly downloads, was compromised. Versions 1.82.7 and 1.82.8 on PyPI contained a credential-stealing payload that exfiltrated SSH keys, cloud provider credentials, Kubernetes secrets, API keys, crypto wallets, and database passwords to an attacker-controlled server.
By compromising a single package, the attacker got the keys to everything: they targeted the one dependency that, by definition, sits on every LLM credential in the organization. The source code on GitHub was clean the entire time. If you only audited the repo, you'd have seen nothing.
LLM gateway libraries are uniquely high-value targets. By design, they hold API keys for all the providers you use: OpenAI, Anthropic, Google, Azure, Cohere, and others.
What happened
A threat actor group known as TeamPCP gained access to the LiteLLM maintainer's PyPI publishing credentials. Using those credentials, they uploaded malicious versions of the package directly to PyPI, completely bypassing the GitHub repository.
The payload used a .pth file: a little-known Python mechanism that auto-executes code on interpreter startup. You don’t need to import litellm for it to run. Just having the package installed is enough for the malware to harvest credentials, establish persistence via systemd, and attempt lateral movement through Kubernetes clusters.
As Andrej Karpathy noted, the compromised version was live for less than an hour and was only discovered because a bug in the malware caused a machine to crash. Without that bug, this could have gone undetected for days or weeks.
The critical detail: this was a divergence between the source repository and the distributed artifact. The GitHub source was clean. The PyPI package was not. Anyone who reviewed the code on GitHub and assumed the published package matched it was wrong.
Five things you can do today
Here are a few things you can do right now. Some of these are band-aids: they address this specific exploit but don't scale across hundreds of dependencies. Trusted publishers (item 3) are the exception: they eliminate this particular attack vector, stolen publishing credentials, entirely.
1. Pin exact versions and verify hashes
Stop using loose version specifiers for infrastructure dependencies. Pin to exact versions and use hash verification:
```shell
pip install --require-hashes -r requirements.txt
```

Your requirements.txt should look like:
```shell
litellm==1.82.6 --hash=sha256:<known-good-hash>
```

You can grab the hash for any package version directly from PyPI at https://pypi.org/project/<package>/<version>/#files: click 'view details' next to the wheel file.
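If you'd rather compute the hash locally than trust what a web page shows you, pip can do it for you. A minimal sketch (the package pin and wheel filename are illustrative):

```shell
# Download the artifact without installing it, then hash it with pip.
# Compare the printed sha256 against the value listed on PyPI.
pip download litellm==1.82.6 --no-deps -d /tmp/wheels
python3 -m pip hash /tmp/wheels/litellm-1.82.6-py3-none-any.whl
```

The second command prints a ready-to-paste `--hash=sha256:...` line for your requirements file.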
2. Audit .pth files in your environments
Most developers don’t realize .pth files can execute code every time the Python interpreter starts. They are intended only for adding directories to sys.path, but any line that begins with "import" is executed verbatim by the site module, which makes them an easy vehicle for arbitrary code.
Run this command to find any .pth files in your Python site-packages directory that contain import or exec statements:
```shell
find $(python -c "import site; print(site.getsitepackages()[0])") -name "*.pth" -exec grep -El "import|exec" {} \;
```

What to look for: any file that contains more than a simple directory path is a potential security or performance risk.
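To see the mechanism for yourself, here is a benign proof of concept (assumes a disposable virtualenv or container where site-packages is writable; the file name is arbitrary):

```shell
# Any .pth line that starts with "import" is executed by the site module
# on every interpreter startup -- the same hook the malware abused.
SITE=$(python3 -c "import site; print(site.getsitepackages()[0])")
echo 'import sys; sys.stderr.write("pth hook ran\n")' > "$SITE/poc.pth"
python3 -c "pass"        # starting the interpreter alone triggers the hook
rm "$SITE/poc.pth"       # clean up the proof of concept
```

Note that the triggering command imports nothing: interpreter startup is enough, which is exactly why "we never import that package in production" is not a defense.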
3. Use PyPI trusted publishers for your own packages
If you maintain a Python package, stop using stored API tokens or passwords to publish to PyPI. Use trusted publishers instead. This is an OIDC-based mechanism that ties your PyPI releases to a specific GitHub Actions workflow.
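A minimal publish job using trusted publishing might look like this (the workflow name, trigger, and build step are illustrative; the essential parts are the `id-token: write` permission and the pypa/gh-action-pypi-publish action configured with no token):

```yaml
name: Release
on:
  release:
    types: [published]

jobs:
  pypi-publish:
    runs-on: ubuntu-latest
    environment: pypi          # optional: pair with a protected GitHub environment
    permissions:
      id-token: write          # required for OIDC-based trusted publishing
    steps:
      - uses: actions/checkout@v4
      - run: python -m pip install build && python -m build
      - uses: pypa/gh-action-pypi-publish@release/v1   # no password/token input
```

With this in place, PyPI only accepts uploads carrying a short-lived OIDC token minted for that exact repository and workflow, so there is no long-lived secret to steal.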
4. Compare distributed artifacts against source
Don't assume the package on PyPI matches the code on GitHub. For critical infra dependencies, compare them:
```shell
pip download <package>==<version> --no-deps -d /tmp/check
# Wheels are zip archives: unpack one, then diff against the tagged source
unzip -q /tmp/check/*.whl -d /tmp/check/unpacked
diff -r /tmp/check/unpacked/<package> <path-to-tagged-source>/<package>
```
5. Run a private package mirror with an allowlist
For production deployments, pull packages through a private mirror or proxy (like devpi or Artifactory) that only serves vetted versions so you can block compromised versions before they reach your infrastructure.
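Client-side, the enforcement is a one-line pip configuration. A sketch, assuming a hypothetical internal devpi index (substitute your mirror's simple-index URL):

```shell
# Point pip at the vetted internal index instead of public PyPI.
# The host below is a made-up example.
python3 -m pip config set global.index-url \
    https://devpi.internal.example/root/vetted/+simple/
```

Pair this with blocking direct egress to pypi.org at the network level, so the mirror cannot be bypassed by an ad-hoc `pip install`.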
How we do it at Mozilla.ai
At any-llm, releases are published to PyPI exclusively through GitHub Actions using PyPI trusted publishers. None of our maintainers holds a PyPI API token. The only path to PyPI is through our CI workflow, which uses OIDC-based authentication, meaning a compromised developer account cannot be used to publish a malicious package.
Migration is easy
If you are currently looking to move off LiteLLM, we’ve made the transition simple. any-llm is a drop-in replacement for OpenAI-compatible proxies.
Check out our 2-step Migration Guide here.
Your LLM gateway is your blast radius. Treat it with the same rigor you’d treat your database or your secrets manager—because, in 2026, that’s exactly what it is.
Mozilla.ai is a public benefit startup and wholly-owned subsidiary of the Mozilla Foundation, operating with its own independent team. Our work focuses on AI technologies built around agency, access, and transparency. We share the Mozilla name and values, but we're a separate organization from Firefox, Thunderbird, and other Mozilla products.
Curious about the Mozilla family? Mozilla Foundation · Mozilla Corporation · Mozilla Ventures · Mozilla Data Collective · Firefox · Thunderbird