WASHINGTON (AP) — The Trump administration on Friday ordered all U.S. agencies to stop using Anthropic’s artificial intelligence technology, imposing stiff penalties in an unprecedented public confrontation between the government and the company over AI safety.
President Donald Trump, Defense Secretary Pete Hegseth, and other officials took to social media to criticize Anthropic after the company missed a Friday deadline to allow unrestricted military use of its AI technologies.
They said CEO Dario Amodei was jeopardizing national security by resisting pressure to permit uses that could violate the company’s own safety protocols.
“We don’t need it, we don’t want it, and will not do business with them again!” Trump wrote on social media.
Hegseth labeled the company a “supply chain risk,” a classification typically reserved for foreign threats and one that could imperil the company’s partnerships with other businesses.
Anthropic responded, arguing that “designating Anthropic as a supply chain risk would be an unprecedented action—one historically reserved for U.S. adversaries, and never previously applied to an American company.”
The company further said such a designation would be legally dubious and would set a dangerous precedent for any American company doing business with the government.
Anthropic had sought specific assurances from the Pentagon that its AI chatbot, Claude, would not be used for mass surveillance of U.S. citizens or integrated into fully autonomous weapons. The Pentagon maintained it would only use the technology within lawful bounds, but insisted on unrestricted access.
The government’s attempt to dictate internal corporate decisions comes amid a broader debate over AI’s role in national security, as concerns grow about how increasingly capable machines could be deployed in life-and-death situations.
Trump and Others Criticize Anthropic
Trump criticized Anthropic for what he described as a mistake in trying to strong-arm the Pentagon. He declared on Truth Social that most agencies must immediately stop using Anthropic’s AI, while giving the Pentagon six months to transition away from technology already embedded in military systems.
“The United States of America will never allow a radical left, woke company to dictate how our great military fights and wins wars!” he wrote.
After months of private negotiations erupted into a public dispute, Anthropic argued that the government’s new contract language would allow “safeguards to be disregarded at will.” Amodei said his company “cannot in good conscience accede” to such demands.
While Anthropic can withstand the loss of the contract, the government’s actions could threaten the company’s meteoric rise from an obscure San Francisco research lab to one of the world’s most valuable startups.
The president’s decision followed hours of criticism from top Trump administration appointees, particularly at the Pentagon and State Department, though their complaints were riddled with inconsistencies.
Sean Parnell, a Pentagon spokesman, argued on social media that Anthropic’s refusal to comply was “jeopardizing critical military operations and potentially putting our warfighters at risk.”
Hegseth asserted Friday that the Pentagon “must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose in defense of the Republic.”
Trump’s post also instructed Anthropic to “better get their act together, and be helpful” during the six-month phase-out or face “major civil and criminal consequences to follow.”
Hegseth’s designation of Anthropic as a supply chain risk, however, relies on an administrative mechanism usually aimed at adversary-owned companies to safeguard American interests.
Sen. Mark Warner of Virginia, the top Democrat on the Senate Intelligence Committee, said the designation, “coupled with inflammatory rhetoric targeting the company, raises grave concerns about whether national security decisions stem from analytical rigor or political machinations.”
Anthropic did not immediately provide a comment concerning the Trump administration’s actions.
Dispute Rattles Silicon Valley
The confrontation stunned AI developers in Silicon Valley. Venture capitalists, influential AI scientists, and numerous employees of rival firms such as OpenAI and Google rallied behind Amodei’s position in open letters and other forums.
This situation may bolster Elon Musk’s competing chatbot, Grok, which the Pentagon intends to grant access to classified military networks.
It may also serve as a cautionary signal to Google and OpenAI, both of which maintain evolving contracts to furnish their AI tools to military applications.
Musk aligned himself with the Trump administration, asserting on his social media platform that “Anthropic hates Western Civilization.”
By contrast, one of Amodei’s fiercest competitors, OpenAI CEO Sam Altman, sided with Anthropic. He called the Pentagon’s move “threatening” in a CNBC interview and said OpenAI has similar red lines. Amodei worked at OpenAI before co-founding Anthropic in 2021.
“For all the differences I have with Anthropic, I mostly trust them as a company, and I believe they genuinely prioritize safety,” Altman said, shortly before gathering employees for an all-hands meeting.
Retired Air Force Gen. Jack Shanahan, a former leader of the Pentagon’s AI initiatives, said on social media that while targeting Anthropic may grab headlines, “everyone loses in the end.”
Shanahan noted that Claude is already widely used across the government, including in classified settings, and called Anthropic’s reservations “reasonable.”
He added that the AI large language models powering chatbots like Claude, Grok, and ChatGPT are “not ready for prime time in national security contexts,” particularly for fully autonomous weaponry.
Anthropic is “not trying to play cute here,” he wrote on LinkedIn. “You won’t find a system with wider & deeper reach across the military.”