
Annielytics.com

I make data sexy


Feb 27 2026

Let’s Dissect Anthropic’s Stand Against the Pentagon

Image credit: TechCrunch via Wikimedia

Two days before Anthropic CEO Dario Amodei published his statement about Anthropic’s relationship with the Department of War (DoW), he released something else that put me on edge: the third version of Anthropic’s Responsible Scaling Policy (RSP).

The RSP is the voluntary framework Anthropic uses to govern how it develops and deploys increasingly capable AI. The first two versions were notable for their specificity and their willingness to draw hard lines. Version 3.0 read differently. It opened with an accounting of what Amodei felt hadn’t worked: publishing the RSP hadn’t pushed the rest of the industry toward stronger safety standards the way Anthropic had hoped, government action on AI safety had stalled, and some of the more ambitious safeguards Anthropic had envisioned for higher capability levels were, in Amodei’s words, “outright impossible to implement without collective action.”

The new version peeled apart two things that had previously been bundled together: what Anthropic was committing to do on its own and what it was recommending the industry do. The more ambitious safeguards—the ones Anthropic acknowledged might be impossible for a single company to implement—got moved into the recommendation column.

That’s an honest thing to admit, but I wondered if Amodei was cushioning responsible AI enthusiasts such as yours truly for the moment the brinkmanship with the DoW finally collapsed into a $200 million concession.

That’s not what happened.

Instead, yesterday Amodei published a pretty fair and levelheaded refusal to comply with government demands that Anthropic considers a bridge too far. No saber rattling or insults, just a line drawn in the sand rather unceremoniously. The DoW wants unfettered ‘any lawful use’ access to Claude, including the removal of two safety-related safeguards Anthropic had built into its models. So Anthropic decided to walk away, knowing it faced the risk of being treated as an infidel. A pariah, if you will.

But I want to slow down on those two safeguards because I think some of the nuance gets a bit lost in the noise.

Amodei’s Bridge Worth Dying On

Mass Domestic Surveillance

His statement was careful to distinguish between what Anthropic supports (i.e., lawful foreign intelligence and counterintelligence) and what it won’t enable (i.e., AI-powered surveillance of American citizens at scale). From his statement:

AI-driven mass surveillance presents serious, novel risks to our fundamental liberties.

The key word in Amodei’s framing is ‘novel’. What he seems to be saying is that we don’t yet have good legal or institutional frameworks to handle the risks of AI-powered surveillance at scale. Surveillance has always existed, but AI changes its character so fundamentally that past experience doesn’t reliably predict what it may enable. The scale is different, the speed is different, and the ability to synthesize unrelated data points into a comprehensive picture is different. And we already know that facial recognition software has a famously difficult time identifying people of color. A study by the federal government found that African American and Asian faces were up to 100 times more likely to be misidentified than white faces. In some cases, traditional investigations are forgone altogether and arrests are made on misidentifications from facial recognition. In one case in my own backyard, the man arrested was reportedly eight inches shorter and 70 pounds lighter than the suspect. He was detained for two days before charges were dismissed. (I did my thesis on the dangers of facial recognition technology, which is partly why I monitored this development so closely.)

Here Amodei appears to be pointing to a capability gap between what the law currently anticipates and what AI can actually do. The government can already legally purchase location data, browsing records, and social graphs from commercial data brokers without a warrant. That practice is controversial, but it exists in a kind of gray zone because the data is scattered, imperfect, and expensive to synthesize at scale. AI that can automate inference across massive, disparate datasets closes that gap almost entirely. It can stitch those individually innocuous data points into a comprehensive portrait of any person’s life, automatically, at a scale no traditional investigation could replicate. He seems to be arguing that the law hasn’t caught up to that capability.

With that as a backdrop, Anthropic is declining to hand over a capability the law was never designed to anticipate.

Fully Autonomous Weapons

What’s interesting with this point is Amodei isn’t categorically opposed to autonomous weapons. The statement explicitly asserts that partially autonomous systems are vital to democratic defense and even concedes that fully autonomous weapons may ultimately prove critical to national security. What gives him pause is a concern over using frontier AI systems to select and engage targets without human oversight (i.e., without a ‘human in the loop’). He feels that they are not reliable enough to power fully autonomous weapons. Anthropic even offered to work with the Department on research and development to improve that reliability. The Department declined.

So what Anthropic is actually cordoning off is the use of its models to power fully autonomous weapons that could recklessly take lives that might have been spared if cooler heads had prevailed. That’s a product integrity argument as much as an ethical one. It’s the equivalent of a defense contractor refusing to ship a component they know hasn’t passed safety testing.

The Department’s response to all of this has been striking. The Trump administration has threatened to label the company a ‘supply chain risk’. It appears the intent is to blackball Anthropic from government contracts. It’s notable that this label has historically been reserved for adversarial nation-states, not American companies. The administration has also threatened to invoke the Defense Production Act to force the removal of the safeguards. Amodei noted, drily, that these two threats are inherently contradictory: you can’t simultaneously label a company a security risk and declare its product essential to national security.

Touché, Amodei.

None of this changed Anthropic’s position.

My Take on This

What I found interesting—in a passive-aggressive way—is that the same day Defense Secretary Pete Hegseth summoned Amodei to the Pentagon like a principal calling a student to his office over the school’s intercom, news leaked that xAI had signed an agreement allowing the military to use its model, Grok, in classified systems, a Defense official confirmed to Axios. According to the article, up to that point Anthropic’s Claude had been the only model available in the systems on which the military’s most sensitive intelligence work, weapons development, and battlefield operations take place.

Image credit: Official White House Photo by Molly Riley via Wikimedia

So while California and jurisdictions across the European Union, Asia, and the U.K. are launching investigations into xAI’s grotesque proliferation of sexual deepfakes and child sexual abuse material (CSAM)—the AI tool reportedly generated roughly 3 million images over an 11-day period—the Pentagon inks a deal that will give that model access to classified systems? Outrageous.

I don’t know how this will end. The Department may follow through and offboard Anthropic entirely. xAI may actually become Anthropic’s replacement, as Elon Musk appears to have no objection to the use of AI to carry out the wishes of the DoW. But for one news cycle at least, a major AI company looked down the barrel of a lucrative government contract and decided the price of compliance was too high. I’m not applying a halo effect here. Anthropic has had its share of issues and legal snares. But for now, I’m giving Anthropic a big 👍 for standing by its convictions. Given how rarely that happens, it seemed worth taking a few minutes to understand exactly what they were refusing to do and why.

Written by Annie Cushing · Categorized: AI

