Anthropic, OpenAI and the Department of War
One AI Company Refused Mass Surveillance and Got Blacklisted. The Other Signed Up. Neither Scenario Protects Your Data.
1 March 2026 · Hubert · 7 min read
Yesterday, the President of the United States ordered every federal agency to immediately stop using Anthropic’s AI technology. The Pentagon designated the company a “supply chain risk to national security” — a classification previously reserved for adversaries like Huawei and Kaspersky.
Anthropic’s offence? Refusing to let the US military use its AI models for mass domestic surveillance and fully autonomous weapons.
Hours later, OpenAI announced it had signed a deal to replace Anthropic on the Pentagon’s classified networks. Its CEO claimed he’d secured the exact same safeguards Anthropic was punished for requesting.
If you run a European business that relies on US-based AI tools, what happened this week should terrify you. Not because of what it says about one company — but because of what it reveals about the structural risk of building on US AI infrastructure.
What actually happened
The timeline matters.
Anthropic had a contract worth up to $200 million with the US Department of Defence — rebranded the “Department of War” by the current administration. It was the first and only AI company operating inside the Pentagon’s classified networks. By all accounts, the military valued the technology highly; one defence official privately admitted removing it would be a “huge pain.”
But Anthropic maintained two red lines in its contract: no mass surveillance of American citizens, and no fully autonomous weapons. The Pentagon wanted those restrictions removed. Its position was simple: once the military buys a tool, the military decides how to use it, for “all lawful purposes.”
When Anthropic refused to budge, the situation escalated rapidly. The Defence Secretary called the CEO a “liar” with a “God complex.” The Pentagon explored invoking the Korean War-era Defence Production Act to compel compliance. When Anthropic still refused, the President stepped in with a Truth Social post calling them “Leftwing nut jobs” and ordering a government-wide blacklist.
The supply chain risk designation is the most consequential part. It doesn’t just end Anthropic’s Pentagon contract — it forces every company that does business with the US military to certify they don’t use Anthropic’s products in their work. Eight of the ten largest US companies currently use Anthropic’s Claude. Defence contractors, cloud providers, banks, consulting firms — all now face a question: is using Claude worth the legal uncertainty?
Then came the detail that should concern every European data protection officer. Axios reported that the Pentagon’s proposed compromise — offered by phone at the exact moment the blacklist was being posted — would have required allowing the collection and analysis of Americans’ geolocation data, web browsing history, and personal financial information purchased from data brokers.
Read that again. The US government’s “compromise” position with an AI company was: let us use your models to process mass-collected personal data from commercial brokers.
And the company that said no got treated like a hostile foreign entity.
OpenAI’s deal is not what it appears
Within hours of Anthropic’s blacklisting, OpenAI announced it had reached an agreement with the Pentagon for classified network access. CEO Sam Altman framed it as a principled stand: same red lines, same safeguards, everyone should be happy.
The framing is elegant. The substance is thin.
Anthropic’s position was that existing US law doesn’t adequately address what AI can do — that AI can “supercharge the legal collection of publicly available data, from social media posts to geolocation,” and that the law hasn’t caught up. They wanted hard contractual restrictions that couldn’t be reinterpreted.
OpenAI’s position is that the Pentagon “agrees with these principles, reflects them in law and policy.” In other words: we trust that existing law is sufficient. We trust the military to follow it.
The Pentagon accepted OpenAI’s framing. It rejected Anthropic’s. Both companies technically had the same red lines. The difference is enforceability — and OpenAI chose the version the Pentagon was willing to sign.
The pattern is instructive: the US government will work with companies that defer to its interpretation of the rules. It will destroy companies that insist on their own.
Why this matters for European businesses
If you’re a DPO at a German law firm, an IT manager at a Dutch consultancy, or anyone responsible for data protection at a European SMB, the Anthropic situation is not an American political drama you can ignore. It’s a case study in the structural risk you carry every day.
Here is what this week proved:
US AI companies are subject to political leverage that no European customer can predict or influence. Anthropic went from trusted classified-network partner to national security threat in 72 hours. The company’s CEO was denounced by a government official on social media. The President personally intervened. The mechanism used — supply chain risk designation — had never been applied to an American company before. There is no precedent, no playbook, no appeal process that any European buyer could have anticipated or prepared for.
“Zero retention” promises from US providers exist within a political context you don’t control. When OpenAI or any US AI provider tells you “we don’t store your data,” that promise sits inside a jurisdiction where the government just demonstrated it can compel an AI company to change its terms under threat of commercial destruction. The CLOUD Act already allows US law enforcement to demand data stored anywhere in the world by US companies. This week showed the coercion goes further than legal orders — it extends to political and economic pressure that no contract clause can protect against.
Your AI provider’s stability is a geopolitical variable. Anthropic’s expected IPO — reportedly targeting a $380 billion valuation — is now on hold. Companies like Palantir, which built their most sensitive military work on Anthropic’s models, are scrambling for alternatives. If your European business built workflows on Claude, you just watched your vendor get designated a national security threat overnight. That’s not a vendor risk you can manage with a DPA.
The EU-US Data Privacy Framework looks more fragile than ever. The framework that allows personal data to flow from the EU to the US rests on the assumption that US law provides adequate protection. This week, the US government demonstrated it will punish a company for insisting on privacy safeguards — and reward a competitor for accepting weaker ones. Legal experts had already given the framework a limited lifespan before a potential “Schrems III” invalidation. Events like this accelerate the timeline.
The uncomfortable conclusion
Both outcomes of the Anthropic situation are bad for European data.
If your US AI provider resists government pressure — as Anthropic did — they face blacklisting, commercial destabilisation, and potential economic destruction. Your vendor becomes unreliable not because of their technology, but because of their principles.
If your US AI provider accommodates government pressure — as OpenAI did — your data is processed by a company that has explicitly chosen to trust the US government’s self-restraint over hard contractual limits. “Trust your military to do the right thing” is not a GDPR compliance strategy.
Either way, the data your team enters into a US-based AI tool is subject to a jurisdictional reality that European law cannot override. Your DPA with OpenAI doesn’t outrank a presidential directive. Your data processing agreement doesn’t survive a supply chain risk designation.
The only protection that works regardless of which way the political wind blows is architectural: if the data doesn’t exist on a server, no government can compel its disclosure. If the data never crosses the Atlantic, no executive order can reach it.
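To make the architectural point concrete, here is a minimal sketch in TypeScript of what browser-local conversation storage looks like, using only the standard Web Storage API that every browser ships. The names (`Message`, `saveMessage`, `wipeHistory`) are illustrative assumptions for this post, not any vendor’s actual implementation; a real product would likely layer IndexedDB and client-side encryption on the same principle. The point is structural: there is no network call in the data path, so there is no server-side copy for any authority to compel.

```typescript
// Illustrative sketch: conversation history kept entirely client-side.
// Uses only the standard Web Storage API; no fetch(), no XHR, no server.

interface Message {
  role: "user" | "assistant";
  content: string;
  timestamp: number;
}

const STORAGE_KEY = "chat-history"; // hypothetical key name

// Persist a message in the browser's localStorage (per-origin, on-device).
function saveMessage(msg: Message): void {
  const history: Message[] = JSON.parse(
    localStorage.getItem(STORAGE_KEY) ?? "[]"
  );
  history.push(msg);
  localStorage.setItem(STORAGE_KEY, JSON.stringify(history));
}

// Read the conversation back; the data never leaves the device.
function loadHistory(): Message[] {
  return JSON.parse(localStorage.getItem(STORAGE_KEY) ?? "[]");
}

// Erasure is local and final: once removed, no copy exists anywhere.
function wipeHistory(): void {
  localStorage.removeItem(STORAGE_KEY);
}
```

A “zero retention” policy promises that a server-side copy is deleted. In the sketch above, there is nothing to delete, because nothing was ever sent. That is the difference between a contractual safeguard and an architectural one.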
What European organisations should do now
This isn’t a theoretical exercise. If you’re responsible for compliance at a European organisation, three concrete actions apply immediately.
First, audit your AI exposure. Know which tools your employees use, what data enters them, and where it’s processed. If the answer to “where” includes any US jurisdiction, you carry the risk demonstrated this week.
Second, revisit your risk assessment for US-based AI providers. The standard DPIA for tools like ChatGPT or Claude assumed a stable vendor relationship. That assumption broke this week. Update your assessment to include political and geopolitical vendor risk.
Third, evaluate alternatives where the architecture eliminates the risk rather than managing it. “Zero retention” policies are promises. Browser-local processing is physics. The difference matters when governments start making demands.
I built Limbo Chat because I believe EU businesses shouldn’t have to choose between AI productivity and data protection. Conversations stay in the browser. Nothing hits our servers. No jurisdiction can compel what doesn’t exist. Get early access →