Anthropic Sues Pentagon Over Supply Chain Risk Label

Anthropic Takes Legal Action Against U.S. Defense Department

  • Anthropic sues the U.S. Defense Department after being labeled a “supply chain risk.”
  • Lawsuit claims the designation violates First Amendment rights and harms contracts.
  • AI industry employees back Anthropic over concerns about surveillance and autonomous weapons.

Artificial intelligence company Anthropic has filed two lawsuits against the U.S. Department of Defense, challenging a recent government decision that labeled the firm a “supply chain risk.” The legal action follows a formal designation issued by the Pentagon last week that effectively bars government contractors from using Anthropic’s technology.

According to reports, the company argues that the measure is unlawful, violates constitutional protections, and poses major risks to its existing and future business relationships.

Anthropic Challenges Government Blacklisting

Anthropic filed its lawsuits Monday in the U.S. District Court for the Northern District of California and the U.S. Court of Appeals for the D.C. Circuit. The legal filings came days after the Department of Defense formally designated the company as a supply chain risk, a classification that the firm says has never previously been applied to a U.S.-based company.

The designation followed an announcement by the Pentagon that organizations conducting business with the federal government must cease using Anthropic’s AI systems. According to the company, the directive threatens its commercial relationships with firms that also work with federal agencies.

In the California lawsuit, Anthropic contends that the government's actions constitute retaliation for the company's refusal to meet what it described as ideological demands. The filing states that the designation infringes on the company's First Amendment rights and exceeds the authority of the executive branch.

Anthropic said the government’s decision could jeopardize hundreds of millions of dollars in private contracts and create uncertainty around future partnerships.

AI Safeguards and Military Use at Center of Dispute

The conflict stems from disagreements over how Anthropic’s AI systems may be used by the U.S. military. The company has sought to implement safeguards intended to prevent its models from supporting domestic mass surveillance programs or operating fully autonomous lethal weapons systems.

Anthropic’s flagship AI model, Claude, has been integrated into Department of Defense systems during the past year and was previously the only AI model approved for use in classified environments. According to reports cited in the lawsuit, the technology has been used in certain military operations, including assisting with targeting decisions in the ongoing conflict involving Iran.

Despite the dispute, Anthropic stated that it remains committed to supporting national security initiatives and has previously worked with the Department of Defense to adapt its technology for specialized use cases.

The legal dispute has drawn attention from within the artificial intelligence sector. Nearly 40 employees from companies including Google and OpenAI submitted a court brief supporting Anthropic’s efforts to limit certain uses of advanced AI systems.

Related: US Military Used Anthropic’s Claude in Iran Strikes Hours After Trump Ban

Disclaimer: The information presented in this article is for informational and educational purposes only. The article does not constitute financial advice or advice of any kind. Coin Edition is not responsible for any losses incurred as a result of the utilization of content, products, or services mentioned. Readers are advised to exercise caution before taking any action related to the company.