
The federal government tried to brand Anthropic, an American AI company, a national security threat for refusing to build surveillance and weapons tools. A federal judge looked at that argument and called it what it is: retaliation.
On Friday, Judge Rita F. Lin of the Northern District of California granted Anthropic a preliminary injunction, temporarily blocking the government from banning its products for federal use and from maintaining the ‘supply chain risk’ designation the Defense Department slapped on the company. The ruling isn’t a final decision, but the language in it is striking and worth sitting with.
To recap how we got here, Anthropic declined to modify its contract to allow the government to use Claude for mass surveillance and the development of autonomous weapons. Those are exactly the kinds of applications an AI safety-focused company should push back on, and Anthropic did. The Trump administration responded by ordering federal agencies to drop all Anthropic services, and Pete Hegseth took it a step further, warning other companies to sever ties with Anthropic if they wanted to keep their own government contracts.
Then came the ‘supply chain risk’ label. That designation exists specifically to flag entities that pose genuine national security threats, usually foreign adversaries. China-linked companies. Hostile state actors. Not a San Francisco AI lab that said no to a contract clause.
Judge Lin wasn’t buying it. She wrote that the government’s measures “appear designed to punish Anthropic” and characterized the government’s motives in unusually blunt terms:
Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the US for expressing disagreement with the government.
She added that punishing Anthropic for drawing public attention to the government’s contracting position is “classic illegal First Amendment retaliation.” The Defense Department, for its part, had argued in a court filing that giving Anthropic continued access to war-fighting infrastructure would introduce “unacceptable risk” to its supply chains. The judge was not persuaded.
There’s also a broader industry dimension here. Hegseth’s warning to other companies to cut ties with Anthropic was designed to isolate the company commercially, making the cost of dissent high enough that no one else would risk it. That kind of economic pressure as a tool of political coercion is exactly what the First Amendment is meant to guard against, and it’s encouraging to see the courts treat it that way.
Anthropic said it’s “grateful to the court for moving swiftly” and is focused on working productively with the government. That’s a measured response, which makes sense given that the lawsuit is still live and the final ruling hasn’t come down.
What this case ultimately tests is whether federal agencies can weaponize national security language to punish companies for maintaining ethical limits on their technology. The early read from the bench is that they cannot. That’s a meaningful checkpoint, regardless of how the rest of the case unfolds.