Anthropic Takes Legal Action Against the Department of Defense
On Monday, Anthropic filed two lawsuits against the Department of Defense (DoD), arguing that the government's designation of the AI company as a "supply chain risk" is unlawful and violates its First Amendment rights.
The legal battle stems from a long-running dispute over Anthropic's efforts to place guardrails against the military using its AI models for mass domestic surveillance or fully autonomous lethal weapons.
Filed in the U.S. District Court for the Northern District of California and the U.S. Court of Appeals for the D.C. Circuit, the lawsuits follow the Pentagon's issuance of the supply chain risk designation last Thursday.
It is the first time such a blacklisting mechanism has been applied to a U.S. company. Anthropic had previously said it would challenge the designation, as well as the accompanying directive requiring any government contractor to cut ties with the company, which poses a substantial threat to its business.
The lawsuits argue that the Trump administration is retaliating against Anthropic for failing to conform to the government's ideological expectations, in violation of its right to protected speech.
"These actions are unprecedented and unlawful. The Constitution prohibits the government from leveraging its immense power to penalise a corporation for its protected speech," Anthropic argued in its California filing.
Anthropic's AI model, Claude, has been widely deployed across the Department of Defense over the past year. Until recently, it was the only AI model approved for use on classified systems.
Reports suggest the DoD has used Claude extensively in military operations, including in decisions about missile strike targets during the conflict with Iran.
In its lawsuit, Anthropic reaffirmed its commitment to supplying AI for national security purposes, noting that it had previously worked with the DoD to adapt its systems for specific use cases and that it hoped to continue negotiations with the government.
"Pursuing judicial review does not undermine our enduring pledge to leverage AI in safeguarding national security; however, it is an essential measure to protect our business, clients, and partners," a spokesperson for Anthropic said in a statement to the Guardian.
“We will relentlessly seek resolutions through various avenues, including dialogues with the government.”
The company alleges that the punitive measures taken by the Trump administration and the Pentagon are "irrevocably damaging Anthropic," a claim seemingly at odds with comments by Dario Amodei, Anthropic's CEO, who recently told CBS News that "the impact of this designation is fairly small" and that the company would "be fine."

“Defendants are striving to obliterate the economic value produced by one of the world’s most rapidly expanding private companies, a leader in the responsible development of this pivotal technology for our nation,” the lawsuit contends.
The Department of Defense did not respond to requests for comment.
Source: theguardian.com