The United States government has set new terms for civilian contracts with artificial intelligence firms, requiring them to permit “any lawful use” of their models by the military, the Financial Times reported.
The new guidelines follow the Pentagon’s high-profile dispute with Claude AI maker Anthropic over the potential use of its models in automated weapons systems and large-scale domestic surveillance programs.
The conflict culminated in the Department of Defense designating the company, led by CEO Dario Amodei, a Supply Chain Risk (SCR), effectively barring it from all defense-related contracts.
The rules are intended to streamline the federal government’s acquisition of AI services. A source told the publication that the Department of Defense is considering similar terms for its upcoming military contracts.
The U.S. General Services Administration (GSA), which procures software for the federal government, will be “soliciting additional feedback” from the industry before the new rules take effect, according to the source.
Trump administration enacts stringent regulations for civilian AI contracts
According to a draft of the new GSA guidelines, AI companies seeking to do business with the U.S. government must grant an irrevocable license permitting the government to use their models for all lawful purposes.
The updated rules will also require contractors to provide “a neutral, non-partisan tool that exhibits no bias towards ideological tenets such as diversity, equity, and inclusion,” echoing President Donald Trump’s executive order targeting so-called “woke” AI models.
Further, the guidelines require companies to disclose whether their models have been “modified or configured to comply with any non-U.S. federal government or commercial compliance or regulatory framework,” language that could complicate compliance with the EU Digital Services Act, the report noted.
Pentagon vs Anthropic: ‘No punitive measures.’
U.S. Undersecretary of Defense Emil Michael said on a podcast that Anthropic was designated a Supply Chain Risk (SCR) because the firm would not meet the Pentagon’s requirement permitting “all lawful use” of its technology.
Defending the decision, Michael said earlier negotiations with Anthropic had centered on applications tied to Trump’s Golden Dome initiative, including scenarios involving Chinese hypersonic missiles and drone swarm tactics.
According to Michael, Anthropic and Amodei’s ethical stance was sharply at odds with the government’s requirements.
“I require a dependable, consistent partner who can furnish technology conducive to autonomous applications—because one day that will become a reality, and we are already witnessing initial iterations of that. I need assurance that they will not falter in the process,” he said.
Michael also declined to characterize the decision as punitive: “I do not see it as punitive. If their model carries this policy bias, I cannot permit Lockheed Martin to utilize it for weapon design… I cannot operate under such conditions, for I do not trust the potential outputs due to their entrenchment in their ideological preferences.”
Anthropic to challenge SCR classification in court
The Department of Defense announced on Thursday its decision to designate Anthropic an SCR, a move Amodei confirmed in a blog post in which he argued the action has no legal basis.
“On March 4, Anthropic received notification from the Department of Defense confirming our status as a supply chain risk to national security. We contend that this designation is not legally justified; thus, we are left with no alternative but to contest it in court,” he stated.

Key Takeaways
- The U.S. government is tightening regulations on AI companies to ensure their models can be used for military operations.
- Firms like Anthropic may face significant hurdles if they fail to conform to government stipulations.
- The guidelines could complicate compliance with international frameworks such as the EU Digital Services Act.
Source link: Livemint.com.