
It started as a business deal. It ended with a presidential order, national security threats, and Elon Musk tweeting that an AI company hates Western civilization. Welcome to Friday, February 27, 2026, one of the strangest days in the AI industry’s short but dramatic history.
Anthropic is the company behind the AI model Claude, one of the strongest competitors to ChatGPT. The company was founded in 2021 by Dario Amodei and his sister Daniela Amodei, both of whom previously worked at OpenAI. One of Anthropic's core principles since day one has been that AI safety is not a hindrance to business but a prerequisite for it.
In July 2025 Anthropic signed a contract worth 200 million dollars with the Pentagon. The company wanted guarantees that Claude would not be used for mass surveillance of American citizens or in fully autonomous weapons systems. That sounded reasonable. The Pentagon thought differently.
For months the parties negotiated in secret. The Pentagon demanded the right to use Anthropic’s AI models for all lawful purposes without the restrictions the company wanted in place. The negotiations were slow and eventually completely broke down.
On Thursday, February 26, Dario Amodei publicly stated that his company could not in good conscience agree to the Department of Defense’s demands. In response, Pentagon Deputy Secretary Emil Michael called Amodei a liar with a god complex and accused him of wanting to personally control the US military.
It is hard not to appreciate the diplomatic elegance on display there.
On Friday, February 27, Trump published a post on Truth Social announcing that he ordered every federal agency in the US to immediately stop all use of Anthropic’s technology. The Pentagon and other agencies already using the products were given six months to phase them out.
The label “left-wing crazy idiots” for Anthropic also came from Trump, which is an unusual way to end a business relationship worth hundreds of millions of dollars.
Soon after Trump’s post, Defense Secretary Pete Hegseth announced that the Pentagon is now designating Anthropic as a Supply-Chain Risk to National Security, a label normally reserved for foreign actors linked to hostile states. The effect is that no subcontractors or partners of the US military are allowed to conduct commercial business with Anthropic anymore.
Being equated with a company connected to Russia or China is, to say the least, an unusually serious status.
Anthropic responded with a statement saying that the company has not yet received direct communication from either the Pentagon or the White House, but intends to challenge the supply-chain designation in court. The company called the designation both legally untenable and a dangerous precedent for any American company negotiating with the government.
It is a bold move. Suing one’s own government is not how most companies choose to market themselves, but Anthropic appears to have decided that principles outweigh contracts.
Almost immediately after Trump’s announcement, OpenAI CEO Sam Altman announced that his company had reached an agreement with the Department of Defense to deploy its models on the department’s classified networks. Altman clearly thought the timing was perfect for a press release.
The move likely also benefits Elon Musk’s competing AI service Grok, which the Pentagon plans to grant access to classified military networks. Musk sided with Trump and claimed that Anthropic hates Western civilization. That Musk owns a direct competitor to Anthropic went unmentioned in this context, but it deserves to be noted.
More than one hundred employees at Google sent a letter to the company’s chief strategist demanding similar restrictions on how the Gemini models are used by the military. Employees at Microsoft and Amazon also spoke out demanding that their leadership stop the Pentagon’s unlimited access to their AI products.
So there is a silent revolt going on throughout the industry. Anthropic threw the first stone, and now everyone else knows what happens if you do it.
The conflict looks like a legal dispute over contract wording, but it concerns something fundamental: who should have the final say on how powerful AI is used in war. Should a private company be able to set limits on how the military uses its technology? Or is it a national security issue when business terms override operational decisions?
Senator Mark Warner, vice chairman of the Senate Intelligence Committee, criticized Trump’s actions and said the decision raises serious concerns about whether national security decisions are driven by careful analysis or political considerations.
This is a reasonable question. And the answer will likely take some time.
