U.S. Military Designates Anthropic as Supply Chain Risk
The U.S. military has officially categorized the artificial intelligence firm Anthropic as a supply chain risk, according to a senior official from the Pentagon and sources familiar with the matter. This significant move risks severing the company’s military contracts.
The Trump administration and Anthropic, the sole AI company deployed on the Pentagon’s classified networks, are at an impasse over the company’s insistence on stringent safeguards.
These safeguards would preclude the military from utilizing its Claude model to execute mass surveillance on American citizens or to facilitate fully autonomous weaponry.
The Pentagon maintains that it requires the latitude to employ Claude for “all lawful purposes,” asserting that such applications of AI are already prohibited.
In a recent announcement, Defense Secretary Pete Hegseth indicated that Anthropic’s government contracts would be terminated, designating the firm as a supply chain risk, yet official notification of this decision reached Anthropic only on Thursday.
Hegseth stated that the military intends to systematically phase Anthropic out over the next six months. However, a source disclosed to CBS News that no specific timeline for the decommissioning of Claude has been established.
Reports indicate that the U.S. military recently utilized Claude during its operations against Iran that commenced last weekend. Precisely how the AI model was deployed remains unclear.
Just two days before the risk designation, CEO Dario Amodei told investors that he was still engaged in discussions with the Pentagon aimed at “de-escalating the situation.”
During a Morgan Stanley conference, Amodei insisted that the two entities “have much more in common than we have differences,” according to audio obtained exclusively by CBS News.
Anthropic has previously signaled its intent to legally contest any classification as a supply chain risk, deeming the label “legally unsound” and cautioning that it would establish a “dangerous precedent” for any American corporation negotiating with the government.
In an interview last week, Amodei reaffirmed his company’s commitment to collaborating with the military to safeguard U.S. national security interests, while remaining resolute on the necessity for firm guardrails.
He articulated concerns that AI could provide the government with excessive surveillance capabilities that contradict “American values,” further arguing that AI lacks the necessary precision for fully autonomous systems that operate without human oversight. In his assessment, current legislation lags significantly behind technological advancements.
“We have laid down two fundamental red lines,” Amodei stated. “These have been our guiding principles from the outset, and we will not compromise on them.”
The Pentagon contends that the military is already legally barred from engaging in mass surveillance of Americans, and that internal Department of Defense constraints adequately restrict fully autonomous weaponry; thus, formal written restrictions on these AI applications are deemed unnecessary.
Emil Michael, the Pentagon’s chief technology officer, remarked in a CBS News interview last week, “At some level, you must place your trust in the military to act appropriately.”
However, he added that the Pentagon cannot guarantee in writing to a corporation that it will refrain from defending itself.
Michael indicated last week that the Pentagon proposed a compromise acknowledging existing laws and policies that limit mass surveillance and autonomous weaponry.
Anthropic dismissed these concessions as inadequate, asserting that such offers were laden with “legalese” that effectively allowed the military to sidestep the desired guardrails.
The dispute escalated markedly last week, with Trump administration officials accusing Anthropic of trying to impose its values on the military and curtail its operations.
Hegseth labeled Anthropic “sanctimonious,” while Michael criticized Amodei as possessing a “God-complex.” Additionally, Mr. Trump condemned the company as “radical left” and “woke.”
The Trump administration issued Anthropic a deadline last Friday to permit the military to utilize Claude for “all lawful purposes.”
With negotiations still stalled, Mr. Trump ordered federal agencies to cease using Claude immediately, providing the Defense Department with up to six months to phase out the technology.
In a notable counterpoint, Anthropic’s competitor, OpenAI, recently announced a partnership with the military.

“At its core, this issue revolves around one fundamental principle: ensuring the military can leverage technology for all lawful purposes,” stated a senior Pentagon official to CBS News on Thursday.
“The military will not permit a vendor to intrude upon the command structure by imposing restrictions on the lawful application of critical capabilities, thereby endangering our warfighters.”
Amodei has vocally condemned the Trump administration’s decision as “retaliatory and punitive.”
When asked last week by CBS News if he had a message for Mr. Trump, Amodei responded, “Every action we’ve undertaken has been for the betterment of this nation and to bolster U.S. national security.”
“Disagreement with the government is the essence of American democracy,” he remarked. “We are patriots committed to upholding the values that define this country.”
Source: CBSNews.com