Anthropic’s Contract Negotiations with Pentagon Encounter Obstacles
Negotiations between Anthropic PBC and the Pentagon over extending their contract have reportedly run into trouble over proposed safeguards for the company’s Claude system, according to a source familiar with the discussions.
Anthropic is pushing for safeguards that would prevent Claude from being used for mass surveillance of American citizens or for developing autonomous weapons, the source said, News.Az reports, citing Bloomberg.
The Pentagon, for its part, wants the flexibility to deploy Claude in any way that complies with the law.
The standoff underscores growing unease over the use of advanced AI systems in military and intelligence settings, particularly for weapons development and large-scale data collection.
Anthropic, which says it is committed to safety-focused AI development aimed at averting catastrophic outcomes, built Claude Gov specifically for U.S. national security agencies, intending to uphold its ethical principles while serving government clients.
Claude Gov offers enhanced capabilities for handling classified information, interpreting intelligence data, and analyzing cybersecurity threats.
“Anthropic is dedicated to leveraging advanced AI to bolster U.S. national security,” a company spokesperson said, adding that the firm remains in “constructive discussions, conducted in good faith” with the Defense Department over the issues at hand.
Pentagon spokesperson Sean Parnell confirmed the review, saying, “The relationship between the Department of Defense and Anthropic is currently under assessment.”
He added, “Our nation necessitates that partners are ready to assist our service members in any combat situation. This ultimately pertains to the welfare of our troops and the security of the American populace.”
Some defense officials reportedly view Anthropic as a supply-chain risk, and the Defense Department may require vendors to certify that they do not rely on Anthropic’s models, according to a senior defense official.
The dispute could open the door for rival AI companies. According to the same official, OpenAI (maker of ChatGPT), Alphabet’s Google (Gemini), and xAI (Grok) have worked with the Pentagon to ensure their platforms can be used lawfully.
Last year, Anthropic signed a two-year deal with the Pentagon covering a prototype of the Claude Gov models and Claude for Enterprise.
The outcome of the current negotiations could shape future deals with OpenAI, Google, and xAI, whose systems have not yet been used in classified Pentagon operations, Axios reported, citing unnamed sources.

An OpenAI representative declined to comment, instead pointing to a company blog post about its GenAI.mil tool, built for the Pentagon and other democratic governments.
Google did not respond to a request for comment, and a spokesperson for X, the parent company of xAI, said no announcement was immediately available.
Source link: News.az.