AI Culture

Every Major AI Lab Said Yes to the Pentagon. Except Anthropic.

Riley Torres

You have to appreciate the symmetry here. Anthropic spent three years building a very specific brand. Not just another AI company. The careful one. The responsible one. The startup founded by people who left OpenAI specifically because they thought OpenAI wasn't taking safety seriously enough.

And now Anthropic is the only major AI lab reportedly in an active standoff with the Pentagon. Because they won't agree to let the US military use their chatbot for anything the government decides is lawful.

Three words. "Any lawful use." That's the clause. That's the whole thing.

According to reporting from The Verge and TechCrunch, the Department of Defense has been asking AI companies to accept terms allowing military use of their systems for anything the government deems lawful. OpenAI reportedly signed. xAI reportedly signed. Anthropic has reportedly refused.

Here's what "any lawful use" means in plain English: the US military can legally develop and deploy lethal autonomous weapons. It can legally conduct mass surveillance on civilian populations. It can legally do a lot of things that would make most people extremely uncomfortable if they found out their AI assistant was powering them. The clause would make it nearly impossible for Anthropic to push back on specific military applications of Claude even if those applications crossed every ethical line the company publicly claims to care about.

So Anthropic said no.

Defense Secretary Pete Hegseth apparently did not enjoy hearing that. He reportedly summoned CEO Dario Amodei to the Pentagon for a direct conversation. When that apparently didn't produce the desired result, the Department of Defense escalated: it threatened to designate Anthropic a "supply chain risk."

Let me sit with that for a second.

"Supply chain risk" is a designation the US government typically reserves for adversarial foreign vendors. Huawei. Companies suspected of being conduits for hostile intelligence services. Applying it to an American AI safety lab because that lab won't sign a clause that other AI companies signed without apparent hesitation is, to put it mildly, an unusual use of the designation.

It's also effective. That label would freeze Anthropic out of federal contracting and put pressure on every government-adjacent organization that partners with them. The threat is designed to make holding out feel existential, because for a company that needs to grow revenue and partnerships, it might be.

Now. Let me tell you why this story is more interesting than just another AI-versus-regulator standoff.

It's who blinked first. Or rather: who didn't blink at all.

OpenAI's trajectory has been well-documented. They launched as a nonprofit safety research lab. They became a capped-profit company. They've spent the last several years making a series of pragmatic compromises in pursuit of commercial scale. By the time they reportedly accepted the DoD's "any lawful use" terms, nobody was surprised. The shock would have been if they'd refused.

xAI was never safety-first to begin with. Grok's entire brand identity is fewer restrictions, not more. Their signing was expected.

But Anthropic is different. Or at least, that's the claim. Their founding story is literally that they walked away from OpenAI because it wasn't taking safety seriously enough. Their governance structure is specifically designed to make safety commitments harder to abandon under commercial pressure. They publish safety research. They have a Responsible Scaling Policy. Claude's usage policy is one of the more detailed public documents of its kind in the industry.

When Anthropic says no to the Pentagon, it means something. Or it tests whether it means something.

There are two ways to read this standoff, and both are plausible.

The cynical read: this is sophisticated positioning. "We're the only AI company that stood up to the Pentagon" is worth more in brand value than almost any federal contract. Anthropic is playing a long game, betting that a differentiated safety brand is worth defending in public, while knowing that a quiet resolution might come later on slightly different terms with much less press coverage.

The generous read: they actually mean it. The people who left OpenAI over safety are refusing to let the DoD define "safe" for them, because that's what they built this company to do.

Here's the uncomfortable part: both can be true simultaneously. You'd need to see the final resolution to know which reading dominates.

What doesn't change regardless of how this ends: the list of AI systems that have agreed to broad military use terms is already long. If you're wondering which AI tools will be embedded in the next generation of defense decision-making infrastructure, the answer is probably most of them. Anthropic holding out doesn't reverse that trend.

We've covered before who actually bears responsibility when AI does something the public didn't sanction. The answer from the industry, so far, has consistently been: not us, not really, here are our usage policies. The Pentagon standoff is a version of that same question, scaled up. If Claude ends up in a military system that causes harm, and Anthropic signed "any lawful use" to get there, the usage policy defense gets substantially harder to make.

Anthropic knows this. That's probably part of why they're holding out.

Whether that's courage or calculation, it's at least something different from what every other major AI lab has done.

The other companies turned in the test without reading the questions. Anthropic is the only one still sitting at the desk.

#anthropic #pentagon #military-ai #ai-safety #claude