Sovereign AI: Control, Choice, and Why It Goes Beyond Geopolitics

Sovereign AI shows up across nations, companies, communities, and individuals. This piece, based on a conversation with John Dickerson, CEO at Mozilla.ai, looks at control over AI systems, avoiding single points of failure, and building with modular, swappable components.

Sovereign AI: Control, Choice, and Why It Goes Beyond Geopolitics
On Your Terms

On Your Terms is a series of conversations with the builders at Mozilla.ai, going deep on the ideas, trade-offs, and beliefs behind open and trustworthy AI.

John Dickerson is the CEO of Mozilla.ai. This post is based on a conversation about sovereign AI, open source, and the future of AI infrastructure.


Anthropic recently released a model described as incredibly powerful. Only eleven companies can currently access it. If your entire system is built around a single API call to a single frontier lab, you're one policy decision away from a serious problem. That’s the sovereign AI conversation most people are having. John Dickerson thinks that’s only scratching the surface of the sovereign AI problem. 

As CEO of Mozilla.ai, John has  attended the India AI Impact Summit, spoken at global tech forums, and thought deeply about where AI power is heading and who gets to hold it. His view is that most of the public conversation around sovereignty is too narrow. It should cover  control at every level, from nations down to individuals. And that the people who need to care about it most are probably not paying attention yet.

It's Bigger Than Geopolitics

When most people talk about sovereign AI today, they're talking about nation-state AI independence. 

The story goes something like this: the world is splitting into three tech blocks: US-based tech, China-based tech, and a growing coalition of "middle powers". Countries that don't want to depend on either, pooling their resources to build a third alternative.

You're seeing world leaders like Canada's prime minister Mark Carney, voices from France, and the UK step up around this idea. They have slightly different views, but they're all singing the same tune: there's a real risk to being deeply tied to any single country's tech stack.

John acknowledges this conversation. But he doesn't love it as a starting point.

"I personally don't like to focus on that definition of sovereign because I think it is overly politically charged, and it's a little bit too tight," he says.

His preferred framework is broader. Sovereignty, the way he sees it, operates at four levels.

Nation-state. The geopolitical framing above. Real, but limited.

Enterprise and corporate. A company - US-based, international, European-only, Chinese, wherever - that wants to own its AI processes, audit its models, and not be at the mercy of a vendor's roadmap. This was the dominant conversation among corporate leaders at both Davos and the India AI Impact Summit.

Community. Cities, states, religious organizations, and hobby groups. The ability for a community to control the AI it has access to, and to not be manipulated through the information it interacts with.

Individuals. Your personal agency over how you access information, social networks, and commerce.  Think old school Internet cyberpunk libertarianism, in the best way.

"It all comes down to control, agency, resilience," John says. "That's not just at the highest levels in geopolitics."

The Internet Already Taught Us This Lesson

To understand where we are with AI, John reaches back to something older: the original Internet.

ARPANET, the precursor to the web we use today, was built by the US military. When its creators ranked what they cared about most in that early network, decentralization and robustness came out near the top. The logic was practical. If a camp or node went dark, the network still needed to pass information through other nodes.

Security, interestingly, was not a top priority. Because the network was built for trusted military allies, early protocols were completely unencrypted and easily spoofed. That assumption baked fragility into the Internet for decades. HTTPS, Let's Encrypt, DNS security improvements — these were all patches applied long after the fact.

But the core design principle, decentralized control, gave the Internet something powerful. Anyone could run a node. Anyone could own a piece of it.

Over time, the Internet centralized. A small number of platforms, cloud providers, and infrastructure companies now hold enormous power over how information flows.

"All this discussion around sovereignty really sounds a lot like that initial Internet," John says. 

"It's all about control. It's all about robustness. It's all about the resiliency of the software and AI supply chain."

The question is whether we make the same mistakes.

What Does Owning Your AI Stack Actually Mean?

An AI stack can extend all the way down to power generation and chip design. Data centers are measured in gigawatts. Companies like TSMC and NVIDIA sit at the foundation of the entire AI industry. Most companies are not going to compete at that level, and they shouldn't try.

What most companies can own is the software stack above that hardware layer. John draws a useful comparison to the LAMP stack, the combination of Linux, Apache, MySQL, and PHP that quietly powered the rise of the modern Internet. These were open source tools that competed against closed, proprietary alternatives and won.

"They are battle tested, they are free, they have a good community around them, they move very quickly," John says. "They run the modern Internet and they've run the Internet for a long time."

In the AI world, you can map that same approach onto a modern stack. Linux and Apache still have a role. But now you need to add more layers: data collection, potentially a fine-tuning or model training (although this may not be necessary for the model consumer), inference, agentic-interaction and tool-use, the agentic application itself, and an evaluation layer to sit on top of the application and environment.

Above all of that sits your application.

It's not so different from a traditional software stack. You've just inserted a probabilistic system called an AI model into the middle of it.

The practical recommendation: use those open source components where you can, and build in fallbacks. At minimum, you should have the ability to fall back to an on-prem or open-weight model, even if you're not running it as your default. You should also be able to switch between cloud providers rather than being locked into one.

"They can turn off access to things. And they do," John says. "You should not rely on that single point of failure."

This is why Mozilla.ai built any-llm, a unified interface across LLM providers. One config change swaps the provider underneath. Your application code stays the same. It's the kind of fallback John is describing: you're running Anthropic's latest today, but if access gets pulled or pricing changes overnight, you're one line of config away from routing to an open-weight alternative.

The Case for a Choice-First Stack

The multi-cloud argument has existed in infrastructure for years. You don't want everything running on one provider because those providers go down, change pricing, and shift their offerings. The same logic now applies to AI, and at every layer of the stack.

"Now you have so many models coming out that if you want to get the best performance out of whatever system you're using, you need to be able to switch between different model providers, between different tools, between different guardrails," John says.

There are two practical reasons to build this way.

First, models change behind the scenes when you're making API calls. If there's a performance drop, you need to be able to move quickly. Second, when a new model comes out, you want to be able to A/B test it against what you're currently using without rebuilding your system from scratch.

This is exactly the problem Mozilla.ai built the Choice-first Stack to solve: a unified, open-source set of tools designed so you can build and swap every layer of your AI system without rewriting your entire codebase. At the model layer, any-llm provides a unified interface across LLM providers:  one config change swaps what's running underneath, no application code rewritten.

But that logic extends beyond the model layer. If you're building agents, you're probably choosing between frameworks: CrewAI, LangGraph, AG2, others. Committing to one means rewriting if it falls behind. any-agent gives you a single interface across frameworks, so the switch is a config change rather than a rebuild.

The same applies to safety. Guardrail models vary wildly in what they catch and what they miss depending on your use case. any-guardrail lets you benchmark multiple guardrail providers against each other and swap them without touching your application logic.

And if you're working with MCP servers,  tool connections that let agents interact with external services, mcpd handles the management layer. One config file, one binary, consistent between your dev laptop and production.

What About Smaller Teams and Communities?

Here's a fair challenge: sovereign AI sounds great for large companies and wealthy nations. 

What about everyone else? Does this just become another form of exclusion?

John takes this seriously.

"It's a real worry. This is yet another pitch for being as open as possible about things."

The answer lies in decentralization and coalition building. The concept of internet-scale compute for AI already has proof of concept. Projects like SETI@home and Folding@home demonstrated years ago that you could pool distributed compute across thousands of machines for serious scientific work.

In March 2026, a company called Covenant trained a 72 billion parameter model in a fully decentralized fashion. The model itself isn't state of the art, but it proved the concept at a scale that was previously thought impossible outside of a major lab.

"Decentralization goes a long way when it comes to combating centralized power," John says. "And so does coalition building."

But you don't have to wait for decentralized training to mature. The tools for running AI locally already exist.

llamafile lets you run LLMs locally as a single, dependency-free binary executable. You can hand it to anyone and they can run it instantly, no setup, no installation chain, no technical overhead. It works on-prem and was built with ease of use as the first priority.

encoderfile follows the same philosophy but for encoder-only models. These are the models behind classification tasks, embeddings, and many guardrail systems. If you want to run that kind of workload locally and privately, encoderfile is the practical starting point.

Both tools reflect the same core idea: owning your AI stack should not require a dedicated infrastructure team. 

Your Data and the Tools You Use Every Day

Most people using Claude, ChatGPT, or Gemini right now have no real sense of what those systems are learning about them.

John's suggestion: find out.

Ask the AI tools you use regularly what they know about you. See what kind of profile has been built. You may be surprised by how much a general-purpose chat application learns from casual daily use.

"I'm not saying this to fearmonger," John is quick to add. "But it should be eye-opening."

That awareness is a starting point. From there, options exist. Trusted execution environments offer more private inference at higher cost. Private cloud compute adds some protection. On-prem models keep everything inside your own walls.

If something truly needs to stay unseen, it has to stay within your own environment. That includes the model itself.

John's personal concern about this goes beyond the enterprise level. Using a search-integrated AI tool the way most people use a general chat assistant means combining your search history, your questions, your browsing behavior, and your personal context into a single system.

"The level of detail those systems will learn about who I am as a person is frightening," he says.

Does Geography Still Matter?

The short answer: yes.

Open source helps. A world without open source AI would be far less equitable than the one we're in now. But open source alone doesn't solve everything.

Running a large frontier model still requires expertise, energy, hardware, chips, and data center capacity. Open-sourcing a model doesn't mean anyone can just run it on a laptop.

Geography also creates hard walls. Certain cloud providers are inaccessible across borders. It's a daily operational reality for anyone working internationally.

Open protocols help level the field. Access to infrastructure determines who can actually play.

What You Should Do Right Now

John returns to the Internet analogy for his vision of the future.

The Internet is shockingly robust. It runs on heterogeneous hardware and heterogeneous software that almost anyone can stand up. It has open protocols that can be extended, constrained, or built upon depending on your context.

A healthy AI future looks similar. Open protocols in the AI space. The ability to recover when something fails. The ability to cut something off when you no longer want to be involved. And, at the most basic level, the ability to just not use AI at all if that's your choice.

You don't need to be a nation-state or a large enterprise to start building towards that future. Think in layers. Sovereignty is a set of choices at the infrastructure layer, the model layer, the application layer, and the individual habit layer. And wherever you can, build the ability to swap your models, your guardrails, your retrieval system, from the start. 

Sovereignty in AI is a design principle before it's a policy debate. The open Internet showed us this is a solvable problem. The less good news is that it took decades to retrofit the security and decentralization the Internet needed after the fact.

"Choice goes a long way toward a healthy world," John says.