
You have built the world’s safest AI and landed a $200 million contract with the US Department of Defense, and then the Secretary of Defense knocks on your door with an ultimatum: remove all the safety locks, or we pull out. That is the situation Anthropic finds itself in right now, and it is starting to look more like a bad action movie than a business negotiation.
Anthropic, the company behind the AI model Claude, has long wanted to be the serious player in the AI industry. While competitors like OpenAI, Google, and Elon Musk’s xAI have readily bowed to the military’s demands, Anthropic has held firm on two points: no weapons systems operating entirely without human control, and no mass surveillance of American citizens. Quite reasonable, really. Not according to the Pentagon.
On Tuesday, February 24, Secretary of Defense Pete Hegseth and Anthropic CEO Dario Amodei entered a room at the Pentagon. An insider had described the meeting beforehand as a shit or get off the pot meeting, which is probably the most straightforward diplomatic phrasing heard in a long time. Hegseth put an ultimatum on the table with a Friday deadline: sign a document giving the military free access to Claude, or pay the price.
For all the drama, the meeting was said to be strangely polite. Amodei even thanked Hegseth for his service. No voices were raised. But when they left the room, no one had changed their mind at all.
Here lies the entire irony of the story. Claude is currently the only AI model approved for the military’s most classified networks; not a single competitor is anywhere near that status yet. A Pentagon official put it with an honesty that is almost touching: “The only reason we are still talking to these people is we need them and we need them now. The problem for these guys is they are that good.”
The Pentagon is thus threatening to fire the only supplier it cannot do without. It is a bit like threatening to dismiss your heart surgeon in the middle of the operation.
Hegseth and the Trump administration have labeled Anthropic’s safety rules woke AI. AI experts argue that the term is essentially meaningless, used to describe everything from technical safety mechanisms to alleged political bias in a model’s responses. What the Pentagon actually wants is an AI the military can use for all lawful purposes, a phrase broad enough to cover most things one can think of.
The Pentagon has presented three options:
Friday is the deadline. Amodei has made it clear that Anthropic can adjust its policy but will never accept mass surveillance or weapons systems without human control. What the Pentagon calls unreasonable restrictions are precisely the principles that led Amodei to leave OpenAI and found Anthropic in the first place.
AI already plays a central role in military decisions; that discussion is over. What remains is the question of who sets the boundaries for how the technology is used, and whether elected politicians or tech companies should have the final say on autonomous weapons systems and the surveillance of citizens.
