On Friday, Defense Secretary Pete Hegseth called the artificial intelligence firm Anthropic a “supply chain risk to national security,” amid an escalating public dispute over the company’s efforts to restrict how the Pentagon uses its technology.
Hegseth announced on X that, effective immediately, “no contractor, supplier, or partner engaged with the United States military may engage in any commercial operations with Anthropic.” The directive could ripple across the many firms that work with the Pentagon.
“America’s warfighters shall never be subjected to the ideological caprices of Big Tech. This decision is irrevocable,” Hegseth wrote.
Earlier the same day, President Trump ordered all federal agencies to “immediately” end their dealings with Anthropic, though the Defense Department and select agencies may continue using its AI technology for a six-month transition period.
In a statement released Friday, Anthropic pledged to “contest any designation of supply chain risk in a court of law,” calling the action “legally untenable” and warning that it could set a “perilous precedent for any American enterprise engaging with the government.”
The company argued that Hegseth lacks the legal authority to bar military contractors from working with Anthropic, saying a risk designation applies only to contractors’ dealings with the Pentagon itself.
“Labeling Anthropic as a supply chain risk would signify an unprecedented move—one typically reserved for U.S. adversaries, never before publicly directed at an American corporation,” the company said.
The company also said it had “yet to receive direct communication from the Department of War or the White House regarding the status of our negotiations.”
On Friday evening, OpenAI CEO Sam Altman said on social media that his company had “finalized an agreement with the Department of War to integrate our models into their classified network.”
“Two of our core safety mandates are categorical bans on domestic mass surveillance and human oversight in the application of force, including autonomous weapon systems. The DoW concurs with these tenets, enshrined in law and policy, and we have incorporated them into our agreement,” Altman said, urging the Defense Department “to extend these same conditions to all AI companies, which we believe should be universally acceptable.”
The decision to cut ties with Anthropic followed a contentious dispute with the Pentagon, one that exposed fundamental disagreements over the role of AI in national security and the risks it presents.
Anthropic, the only AI firm whose model runs on the Pentagon’s classified networks, has pushed for safeguards preventing its technology from being used for domestic surveillance or for carrying out military actions without human authorization.
The Pentagon, for its part, has insisted that any agreement permit the use of Anthropic’s Claude model for “all lawful purposes.”
The Pentagon had given Anthropic until 5:01 p.m. Friday to finalize an agreement or risk losing lucrative military contracts.
The military maintains that it is already prohibited from conducting mass surveillance of American citizens and that internal regulations restrict the deployment of fully autonomous weapon systems.
As negotiations faltered this week, Pentagon officials publicly accused Anthropic of trying to impose its views on the military.
Hegseth called Anthropic “sanctimonious” and arrogant, accusing the firm of trying to “bully the United States military into capitulation.”
“Their underlying intent is unmistakable: to acquire veto authority over the operational decisions of the United States military. Such an outcome is intolerable,” Hegseth asserted.
Anthropic CEO Dario Amodei countered that the safeguards are essential, arguing that Claude is not reliable enough to be used in autonomous weapons and that a powerful AI system could raise serious privacy concerns.
He emphasized that the company understands military decisions are the Pentagon’s to make, saying it has never sought to restrict the use of its technology “arbitrarily.”
“Nonetheless, in certain contexts, we contend that AI may undermine, rather than uphold, democratic principles,” Amodei said in a statement Thursday. “Certain applications simply lie beyond the reach of what current technology can execute safely and reliably.”
Amodei has warned for years about the potential dangers of unregulated AI, advocating for safety and transparency measures.
Anthropic held firm late Friday, saying, “No form of intimidation or punitive measures from the Department of War will alter our position on domestic mass surveillance or entirely autonomous weapon systems.”
“We are profoundly distressed by these developments,” Anthropic stated. “As the pioneering AI enterprise to deploy models within the United States government’s classified networks, Anthropic has supported American warfighters since June 2024 and intends to continue doing so.”
On Thursday, a day before the ultimatum expired, the Pentagon’s chief technology officer, Emil Michael, told CBS News that the Department of War had made concessions, offering written assurances that it adheres to federal statutes and internal military policies restricting mass surveillance and autonomous weapons.

“At a fundamental level, one must trust the military to undertake appropriate actions,” Michael noted, adding, “We will never assert that we cannot defend ourselves in writing to a company.”
Anthropic said the offer was insufficient. A company spokesperson said the revised language was “paired with legal stipulations that would permit those safeguards to be disregarded at will.”
Source: CBSNews.com