Pentagon Fires Back at Anthropic: Refusal to Accept Contract Terms Is Not Protected Speech, Agency Tells Court

The Pentagon, through the US Department of Justice, has pushed back against a lawsuit from Anthropic over a failed defense contract. In a 40-page filing dated March 17, 2026, the government argues that the dispute is about business terms, not free speech. It asks a federal judge in San Francisco to reject Anthropic’s request for a preliminary injunction.

At the center of the case is Anthropic’s refusal to accept contract terms that would allow the government to use its Claude models for “any lawful use.” Anthropic had asked for limits on how its AI could be deployed. It wanted to block uses such as mass surveillance or autonomous weapons. The Pentagon rejected those limits and insisted on full access within the bounds of the law.

The disagreement led to the collapse of a $200 million contract. Soon after, Defense Secretary Pete Hegseth labeled Anthropic a “supply chain risk.” This label is often used for firms tied to foreign threats. Anthropic argues that the label is unfair and damaging. It filed suit in early March, claiming the move violates its First Amendment rights and punishes the company for its views on AI safety.

The High-Stakes Battle Over Anthropic’s “Risk” Label

Anthropic says the fallout could cost it hundreds of millions of dollars. It points to lost contracts and concern among partners. The company also says the label harms its reputation in a fast-growing market where trust matters.

The government sees the case in a different light. In its filing, the DoJ calls Anthropic an “unacceptable” national security risk. It argues that giving the company access to military systems could create new dangers, including sabotage, model tampering, and the risk that the AI may not perform as expected in combat if its internal limits are triggered.


The DoJ also stresses that federal agencies have wide freedom in procurement. They can choose which vendors to work with and on what terms. In this view, Anthropic’s claims of harm are uncertain and can be addressed through standard contract remedies. The government says there is no need for court action to pause the “supply chain risk” label.

The Precedent-Setting Clash in Federal AI Contracting

At the same time, the Pentagon has begun shifting its focus to other AI providers. Work is already underway with companies such as Google, OpenAI, and xAI. This move suggests the department does not plan to wait for the dispute to resolve before advancing its AI efforts.

Anthropic, for its part, says it remains committed to national security work. It argues that its proposed limits reflect responsible design, not resistance to government needs. The company says its lawsuit aims to protect its business and partners from what it sees as unfair treatment.

The legal process will move quickly. Anthropic must file its response to the government’s arguments by March 20, 2026. A hearing on the preliminary injunction is set for March 24. The judge will decide whether to pause the “supply chain risk” designation while the case moves forward.

The outcome could shape how AI companies and the government work together. It raises a key question: how much control should a vendor keep over the use of its technology once it enters a federal contract?
