Anthropic sues US government over supply chain risk label

Anthropic PBC sued the Defense Department for declaring that the artificial intelligence giant posed a risk to the US supply chain, further ramping up a high-stakes dispute with the Pentagon over safeguards on the company's technology.


San Francisco-based Anthropic is challenging a decision by the department and other federal agencies like the Federal Housing Finance Agency to shift their AI work to other providers, based on a risk designation typically reserved for companies from countries the US views as adversaries. 

Anthropic wants a judge to remove the supply-chain risk designation and require US agencies to withdraw directives related to it. The company claims it is being punished for disagreeing with the administration and argues the legal principles at stake affect every federal contractor whose views the government dislikes.


"These actions are unprecedented and unlawful," Anthropic said in a complaint filed Monday in San Francisco federal court, adding that the company's business is being threatened. "The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech."

Last week, the Pentagon formally notified Anthropic of its determination. Chief Executive Officer Dario Amodei then issued a statement saying the government's actions were not "legally sound" and had left the company with "no choice but to challenge it in court."  

According to the complaint, the government's actions "are harming Anthropic irreparably," putting the company's contracts with private firms "in doubt" and potentially "jeopardizing hundreds of millions of dollars in the near-term."

There are likely to be "enormous" consequences for others, including on those "whose speech will be chilled; on those benefiting from the economic value the company can continue to create; and on a global public that deserves robust dialogue and debate on what AI means for warfare and surveillance," Anthropic said.

The dispute erupted last month, after the Pentagon sought to use Claude for any purpose within legal limits, free of any usage restrictions from Anthropic. The firm had insisted that the chatbot not be used for mass surveillance of Americans or in fully autonomous weapons operations.

In response, Defense Secretary Pete Hegseth on Feb. 27 ordered the Pentagon to bar its contractors and their partners from any commercial activity with Anthropic. In a post on X, Hegseth set a six-month window for Anthropic's AI services to be handed over to another provider.

Trump blasted Anthropic the same day on his Truth Social network, saying "The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War." In his post, the president directed US government agencies to stop using Claude.

The company's lawsuit names as defendants the Department of War — which the Trump administration uses to describe the Department of Defense — as well as more than a dozen other federal agencies. 

The Department of Defense didn't respond to a request for comment on the lawsuit.

Anthropic said in the complaint that it imposed "usage restrictions" based on the company's "unique understanding of Claude's risks and limitations — including Claude's capacity to make mistakes and its unprecedented ability to accelerate and automate analysis of massive amounts of data, including data about American citizens."


In the days after the department first announced its risk designation, consumers drove "unprecedented demand" for Anthropic's chatbot Claude, in a show of support for the company's resistance to the government's push for unfettered use of its technology.

Meanwhile, rival OpenAI announced it had struck an agreement to let the Pentagon deploy its artificial intelligence models on the department's classified network. OpenAI chief Sam Altman later said he was working with the Defense Department to add more guardrails around surveillance.

Founded in 2021 by former OpenAI employees, Anthropic quickly established itself as a rival to the ChatGPT maker with Claude, which it billed as more safety- and business-focused. Today, the San Francisco-based company has more than 300,000 business customers who use its models to streamline workplace tasks, particularly in computer programming, where it has emerged as a market leader with its AI coding assistant, Claude Code.

Anthropic started the year on a winning streak, with surging sales, multiple viral products and a large funding round — all giving the startup a big advantage in the costly global AI race.

But its future has been uncertain since its relationship with the Pentagon imploded in late February, just before the US attacked Iran in a major Middle East military operation.

Some legal and policy experts warned that the fallout from the government's declaration would be dire.

Jennifer Huddleston, a senior fellow at the Cato Institute, said the case goes well beyond a contracting dispute and poses a risk to freedom of speech.

"The designation and attempts to blacklist the company from other aspects of the government go far beyond the scope of what would be considered least restrictive means even if there are security concerns about the further use of the product," Huddleston said in a statement. 

The case is Anthropic v. US Department of War, 26-cv-01996, US District Court, Northern District of California (San Francisco).


Bloomberg News