On February 28, 2026, Pete Hegseth, Trump’s newly appointed Secretary of Defense, signed an order unprecedented in the history of US public procurement: Anthropic and its AI Claude were declared a “supply chain risk” to the Pentagon. This classification is usually reserved for Huawei or entities linked to the Kremlin. This time, it targets a California startup whose only wrongdoing is having said no.
This clash brings to light years of simmering tension: how far can a private company resist the federal government in the name of ethics? And at what cost?
Timeline: From Contract to Blacklist
Until this decision, Claude was the only AI model deployed on the Pentagon’s classified networks. It wasn’t just a lab prototype: it was an operational tool, used in sensitive missions including the capture of Nicolás Maduro. The initial contract was worth $200 million, deployed via AWS on the Joint Warfighting Cloud Capability infrastructure, with Amazon’s Trainium chips as the technical substrate.
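For context on the technical setup: outside classified environments, the documented way to reach Claude on AWS is Amazon Bedrock’s runtime API. The sketch below shows a minimal invocation; the region and model ID are illustrative assumptions, and the classified JWCC deployment would use its own endpoints and identifiers.

```python
import json
import boto3

# Minimal sketch: calling Claude through Amazon Bedrock's runtime API.
# Region and model ID are placeholders, not the DoD's actual configuration.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",  # required for Claude on Bedrock
    "max_tokens": 512,
    "messages": [
        {"role": "user", "content": "Summarize this logistics report: ..."},
    ],
})

response = client.invoke_model(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # placeholder model ID
    body=body,
)

# The response body is a JSON stream; the generated text sits in content[0].
result = json.loads(response["body"].read())
print(result["content"][0]["text"])
```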
Negotiations began to deteriorate in January 2026. Emil Michael, Under Secretary of Defense, pressed specific demands: remove the filters preventing Claude from assisting in mass domestic surveillance (geolocation, analysis of financial data bought from brokers, browsing history) and disable the safeguards against developing lethal autonomous weapons.
Anthropic refused. Dario Amodei presented the position as non-negotiable. Michael responded by describing the CEO as suffering from a “God complex.” On Truth Social, Trump posted an immediate stop order halting the use of Claude by all federal agencies. The formal designation came at the end of February.
The contract was terminated with a six-month phase-out. DoD teams have until August 2026 to migrate to other solutions. Officials privately describe it as an “enormous operational headache.”
What the Pentagon Really Wanted

The DoD’s requests deserve to be read unfiltered. The Pentagon wanted access to three specific capabilities that Claude systematically refuses by design:
- Surveillance of American citizens without a court order, via aggregation of commercial data
- Assistance in building autonomous weapon systems capable of selecting targets without human intervention
- Intelligence-gathering operations that circumvent existing legal safeguards, notably the Foreign Intelligence Surveillance Act (FISA)
These restrictions are written into Anthropic’s constitution, the document that defines Claude’s values and boundaries.
They are not adjustable settings in an admin interface: they are part of the model’s value architecture. Removing them would require retraining Claude from a late stage of its training process, with unpredictable implications for its other behaviors.
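To make the distinction concrete, here is a deliberately naive sketch of what a toggleable, bolt-on filter would look like. All names are hypothetical and this is not Anthropic’s actual architecture; the point is that constitutionally trained refusals have no equivalent switch.

```python
# Hypothetical illustration only: NOT how Claude's guardrails work.
# A bolt-on output filter like this could be disabled by flipping a flag.
BLOCKED_CAPABILITIES = {"warrantless_mass_surveillance", "autonomous_targeting"}

def bolt_on_filter(topic: str, response: str, filters_on: bool = True) -> str:
    """A post-hoc filter: trivially removable by whoever runs the server."""
    if filters_on and topic in BLOCKED_CAPABILITIES:
        return "I can't help with that."
    return response

# Constitutionally trained refusals, by contrast, emerge from training itself
# (constitutional AI / RLHF). There is no `filters_on` flag to flip: removing
# the behavior means retraining the model, with unpredictable side effects on
# everything else it learned.
```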
Emil Michael knew this. The DoD’s “best and final” offer wasn’t meant to be negotiated: it was an ultimatum, designed to force a public capitulation or create a legal precedent.
The OpenAI Agreement: What Press Releases Don’t Say
OpenAI signed an agreement with the Pentagon. The exact terms remain classified, but the broad outline is known: the deal permits expanded military use of GPT-4 and subsequent models, with lighter safeguards on defense applications.
Microsoft, OpenAI’s strategic partner, facilitates this integration via Azure Government Secret.
This decision has a documented human cost. Between 2023 and 2024, several high-profile figures left OpenAI, citing concerns about ethical alignment: Ilya Sutskever, co-founder and former Chief Scientist, and Jan Leike, head of the Alignment team, both pointed to a shift toward prioritizing commercial and defense contracts over safety research.
Leike was especially direct in his resignation statement, describing a company where “safety culture and processes” had been sidelined in favor of product growth.
These departures are not anecdotal: they signal that the debate Anthropic has taken public with the Pentagon also unfolded internally, behind closed doors, at its main competitor.
The difference between OpenAI and Anthropic is not strictly ideological: it’s a difference of governance.
Anthropic has legally enshrined its ethical principles, making them difficult to circumvent even under economic pressure.
Financial Impact: Over $1 Billion at Stake

The direct $200 million contract is just the tip of the iceberg. The real impact exceeds $1 billion when you add in cascading effects.
Palantir is the first to feel the shock. Peter Thiel’s company had developed operational analytics modules for military clients, relying directly on Claude as its reasoning engine.
The forced migration to other models requires costly reconfigurations and qualified validation on classified networks, estimated by DoD technical teams to take 3–6 months.
Beyond Palantir, defense contractors who had integrated Claude into their weapons systems development pipelines (Lockheed Martin, Raytheon, Booz Allen Hamilton) must now requalify their architectures.
Each requalification on a classified network costs between $2 million and $15 million, depending on system complexity.
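As a rough order-of-magnitude check, the sketch below adds the quoted requalification range on top of the terminated contract. The number of affected systems is a hypothetical assumption for illustration, not a reported figure.

```python
# Back-of-envelope using only the cost range quoted above (USD).
DIRECT_CONTRACT = 200e6              # terminated Anthropic contract
REQUAL_LOW, REQUAL_HIGH = 2e6, 15e6  # per-system requalification on classified networks

ASSUMED_SYSTEMS = 40  # hypothetical count across Palantir, Lockheed, Raytheon, Booz Allen

low = DIRECT_CONTRACT + ASSUMED_SYSTEMS * REQUAL_LOW
high = DIRECT_CONTRACT + ASSUMED_SYSTEMS * REQUAL_HIGH
print(f"${low / 1e6:.0f}M to ${high / 1e6:.0f}M before lost adjacent markets")
# -> $280M to $800M; the >$1B estimate also counts foregone future contracts.
```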
The FDA had been using Claude to accelerate the review of pharmacological safety files; because Trump’s stop order covers every federal agency, it loses that capability too.
This illustrates a paradox: civilian agencies are losing a public-safety tool because of a conflict between the DoD and Anthropic.
Fintech firms, for their part, had preemptively restricted their own use of Claude for certain sensitive financial applications, fearing regulatory spillover.
For Anthropic, the loss of federal revenue creates real pressure: despite a sky-high valuation ($18 billion in 2024), the company is still in a growth phase with massive infrastructure costs.
Its pivot toward the finance and legal sectors with Claude Opus now takes on new meaning: diversifying revenue away from dependence on the US federal government.
The App Store Paradox
Here’s a detail rarely mentioned: Claude remains freely available for download on Apple’s App Store and Google Play.
Any Pentagon employee can install it on their personal phone within thirty seconds.
The blacklist bans institutional and contractual use of Claude, not access to the model. It targets official integrations, not individual usage.
This paradox reveals the real nature of the decision: it’s not a security measure in the technical sense, it’s a commercial sanction disguised as a risk designation.
The parallel with technology companies pressed to build backdoors for intelligence services is illuminating. When the FBI asked Apple in 2016 to create a modified version of iOS to unlock the San Bernardino shooter’s iPhone, Apple refused.
The government did not brand Apple a “supply chain risk.”
The Anthropic precedent is therefore qualitatively different: it marks the first use of an industrial-policy tool (the supply chain designation) as a means of coercion against a private US company.
Legal Arguments: Uncharted Territory
The Defense Production Act is at the center of this legal standoff. The DoD invoked this law to designate Claude as “essential to national defense”, which in theory could force Anthropic to provide access.
Anthropic counters that you cannot simultaneously designate a company as a “supply chain risk” (status reserved for adversaries) and as an essential asset to compel—the two designations are legally contradictory.
The National Defense Authorization Act gives Hegseth broad authority over such classifications, but there is no precedent for applying one to an American AI company over its ethical safeguards.
Possible constitutional arguments revolve around freedom of contract and the First Amendment: can an AI whose “values” are an editorial product be forced to change its “speech”?
No court case had been filed at the time of writing.
But legal grounds for a challenge do exist, and the timing of any ruling (estimated for the end of 2026) would coincide with the US midterms, making the affair politically explosive.
Geopolitical Implications: The Real Calculus

China develops military AIs with none of the restrictions being debated here.
The Chinese model: direct PLA–tech industry integration, no codified ethical safeguards, mass deployment on surveillance systems (Xinjiang serving as a large-scale laboratory).
Where the US is divided between ethical AI and militarized AI, Beijing has no such dilemma.
This is the Pentagon’s core argument, and it deserves to be taken seriously. A Claude that refuses to identify a target in the middle of an operation creates a real operational risk.
An AI system with unpredictable guardrails is, from a military perspective, an unreliable system.
The EU AI Act of 2024 — which imposes compliance requirements for businesses before August 2026 — explicitly bans autonomous weapon systems and surveillance without judicial warrants, aligning Europe’s stance with Anthropic’s.
Macron’s France is pushing for a military exemption in the Act, though it would be supervised by an independent watchdog.
This intermediate stance—which recognizes military necessity without abandoning all oversight—is something the Pentagon categorically refuses.
History may remember this moment as the AI equivalent of the civil/military nuclear split of the 1950s: the point where AI diverged into two irreconcilable governance models.
The democratic governance of AI advocated by researchers and institutions is exactly what this conflict puts to the test: can an ethical framework be maintained when national security is invoked?
Possible Scenarios by 2027
Scenario 1: Anthropic holds firm and wins. A legal challenge succeeds, the supply chain designation is overturned, and Anthropic returns to federal markets with its safeguards intact.
This scenario strengthens ethics as a commercial differentiator and attracts European investors and institutional clients sensitive to GDPR compliance.
The US military market is lost, but the international civil market is consolidated.
Scenario 2: Anthropic partially yields. A negotiated agreement creates a “Claude Defense” version with lighter guardrails for specific military use cases, supervised by an independent committee.
This compromise resembles what Palantir has done for years: selling to governments while maintaining an ethical facade.
The risk: diluted value proposition and loss of trust among non-governmental users.
Scenario 3: The rift becomes permanent. The US operates with a military AI (OpenAI/Microsoft) and a civilian AI (Anthropic, the non-defense side of Google). The EU adopts its own standards.
China continues unconstrained. The result is a world where three incompatible AI regimes coexist, making international cooperation on system safety difficult.
The regulatory framework built around the AI Act and the CNIL will be decisive in determining whether Europe can forge a credible third path between the two American models.
This controversy will shape military AI regulation for the coming decade.
Ethical guardrails don’t disappear by fiat: either they are negotiated in a clear legal framework, or a parallel market is created where none exist. Is Anthropic right to hold this line?
FAQ
Why did the Pentagon label Anthropic a “supply chain risk”?
This designation, usually reserved for hostile foreign entities like Huawei, was applied to Anthropic after its refusal to lift Claude’s ethical guardrails. The DoD chose this legal tool because it allows for rapid administrative exclusion without going through a federal court.
What safeguards does Claude refuse to deactivate?
Claude refuses to assist in mass surveillance without court orders (aggregating geolocation, financial, and browsing data), in developing autonomous weapons selecting targets without human intervention, and in intelligence operations circumventing the Foreign Intelligence Surveillance Act.
Does OpenAI have the same restrictions as Claude?
OpenAI has signed an agreement with the Pentagon that allows for expanded military use. The exact terms are classified, but the resignations of people like Jan Leike and Ilya Sutskever in 2023–2024 indicate that internal guardrails were relaxed under commercial and government pressure.
What is the actual financial impact for Anthropic?
The lost direct contract represents $200 million. The total impact exceeds $1 billion when you include lost adjacent federal markets, defense contractors that had planned to use Claude, and civil agencies like the FDA that used Claude for public safety work.
How long will the DoD migration to other AIs take?
DoD technical teams estimate it will take between three and six months to migrate to alternatives, including revalidation phases on classified networks. The contractual phase-out runs through August 2026. Palantir, which depends on Claude for its military projects, is especially affected by these delays.
Can Pentagon employees still use Claude personally?
The blacklist bans institutional and contractual use of Claude but not personal access. The Claude app is still available on the App Store and Google Play. This paradox highlights that the designation is a commercial sanction, not a strict technical security measure.
Can the Defense Production Act force Anthropic to change Claude?
The DoD invoked it to designate Claude as a defense-essential asset. Anthropic points to the legal contradiction: you cannot simultaneously brand a company a “supply chain risk” (adversary status) and compel it to serve as an essential asset. Courts have never ruled on this question for an AI.
What is the European Union’s position regarding military AI?
The EU AI Act of 2024 bans autonomous weapons and surveillance without court orders, aligning Europe’s regulation with the principles defended by Anthropic. France is negotiating a military exemption, but with oversight by an independent authority—a stance the American Pentagon rejects.
What is the concrete risk to US national security if Anthropic maintains its guardrails?
The DoD argues that an AI system with unpredictable refusals in the middle of operations creates operational risk. If Claude refuses to identify a target or analyze communications during a mission, that could cost lives or sabotage the operation. This is the substantive argument, setting aside the domestic surveillance issue.
What historical precedents shed light on this conflict?
Apple’s refusal to create an iOS backdoor for the FBI in 2016 is the closest precedent, but without a supply chain designation. More broadly, the debate echoes the split between civil and military nuclear tech in the 1950s: the moment when a technology diverges between two governance regimes whose actors refuse convergence.