Anthropic has taken its showdown with the Pentagon to court, arguing that the government’s supply-chain risk designation was an unlawful retaliation campaign that jeopardized its business.
The suit seeks to void the designation imposed after Anthropic refused to revise Claude’s usage policy to permit what it described as mass surveillance and fully autonomous weapons use. Anthropic said it does not believe Claude would function safely or reliably in those applications.
The Pentagon has denied that it planned to use Claude for those purposes and had previously insisted the company accept “any lawful use” of its tools to support the US military.
The legal filing follows the administration’s escalation. President Donald Trump and Defense Secretary Pete Hegseth directed federal agencies to cut ties with Anthropic and declared that, “effective immediately,” no contractor, supplier, or partner doing business with the US military could conduct commercial activity with the company.
Anthropic argues that the government is punishing it for First Amendment-protected speech and viewpoint rather than acting through lawful procurement authority.
The financial stakes extend well beyond the Pentagon award itself. Anthropic alleges that after the rift, government officials contacted some of its partners, and those companies delayed or paused national security contracts and business engagements already in active development. In its complaint, Anthropic says the fallout has already put “millions, possibly billions, of dollars at risk,” including a Pentagon contract worth up to $200 million.
Bloomberg reported that Anthropic’s annualized revenue has crossed $19 billion, up from $9 billion at the end of 2025 and above the roughly $14 billion recorded only weeks earlier. Claude Code reportedly drove much of that growth. Separate reporting also said Anthropic had more than 500 customers spending over $1 million annually.
The designation invoked by the government had historically been reserved for foreign adversaries such as Huawei and had never before been publicly applied to an American company. Anthropic’s case argues that no federal statute authorizes the government to use that power as a punishment mechanism against a domestic company that refused contract terms.
Industry groups and lawmakers had already questioned whether the required notice, response opportunity, and less intrusive alternatives were satisfied.
Operationally, the government’s own dependence on Claude complicates the case. Claude has so far been the only AI model approved and onboarded for use on classified military networks, and it remains deeply embedded there through Palantir. Defense officials reportedly acknowledged that removing it would be a significant undertaking.
Anthropic had also said it would support a smooth transition if offboarding occurred.
Hours after Anthropic was blacklisted, OpenAI struck a Pentagon deal that preserved the same prohibitions on autonomous weapons and domestic mass surveillance that Anthropic had fought to keep.
The latest filing also lands after signs of de-escalation: CEO Dario Amodei said Anthropic had resumed discussions with the Pentagon and was trying to reach an arrangement that worked for both sides.
Information for this story was found via The Epoch Times and the sources and companies mentioned. The author has no securities or affiliations related to the organizations discussed. Not a recommendation to buy or sell. Always do additional research and consult a professional before purchasing a security. The author holds no licenses.