OpenAI Restructures Contract with Pentagon to Address Surveillance Concerns
On Monday night, OpenAI CEO Sam Altman announced a revised agreement with the Pentagon governing the military’s use of the company’s artificial intelligence services.
Altman emphasized that the new arrangement strengthens guarantees that OpenAI’s systems will not be used for domestic surveillance.
The updated contract explicitly states that “the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals,” as noted on OpenAI’s website.
The company previously drew sharp criticism when an initial contract draft came to light, leading many to conclude that it contained loopholes permitting government surveillance of American citizens.
The announcement follows extended negotiations between the Pentagon and rival AI firm Anthropic over how the military may use cutting-edge AI technology.
The Defense Department pressed Anthropic to allow its systems to be used for “any lawful purpose,” while Anthropic insisted that its technology not be used for domestic surveillance or to control lethal autonomous systems.
Until very recently, Anthropic was the only major AI company whose services were actively deployed on classified military networks.
Researchers continue to warn that, without strict restrictions, AI could enable authorities to monitor individuals with unprecedented precision and speed. That capability raises civil liberties concerns, particularly the ability to sift through vast amounts of digital data to trace individuals’ movements.
In announcing the revised contract provisions, Altman stressed the importance of safeguarding Americans’ civil liberties. He noted that the Pentagon has assured OpenAI that its services will not be provided to intelligence agencies such as the NSA.
Katrina Mulligan, OpenAI’s national security partnerships director, further outlined on social media that “defense intelligence components are excluded from this contract,” while expressing openness to future collaborations with the NSA, provided adequate safeguards are instituted.
Despite these assurances, industry observers remain skeptical. Many argue that the published excerpts of OpenAI’s agreement with the Pentagon are deliberately vague and could still allow domestic surveillance by military intelligence agencies. The full text of the contract has not been made public.
Brad Carson, a former congressman who now heads the policy group Americans for Responsible Innovation, voiced concerns about OpenAI’s claims.
“OpenAI has indicated that the Pentagon contractually agreed not to employ ChatGPT in agencies that surveil American citizens. However, they have selectively shared contractual language beneficial to them while withholding this critical provision from public scrutiny,” he stated.
Carson questioned whether the provision exists at all, suggesting that OpenAI may be misleading the public. He recently founded an AI-focused super PAC that has received significant funding from Anthropic.
Legal experts broadly agree that greater transparency about the full contract and its key clauses is needed to assess the company’s claims.
“We require the entire agreement to make any confident declarations,” remarked Brian McGrail, a senior counsel at the Center for AI Safety. “While this development signals progress, we should recognize the limitations of the current situation.”
OpenAI’s agreement with the Pentagon came to light shortly after Defense Secretary Pete Hegseth characterized Anthropic as a potential supply chain risk to national security, citing delays in contract negotiations.
The designation, unprecedented for an American company, could force the Pentagon to sever ties with Anthropic.
Retired Gen. Paul Nakasone, former director of the National Security Agency, said at an event in California that integrating technology from all leading AI firms, OpenAI included, into national defense is essential.
“We need partnerships with major language model companies to enhance our national security,” he stated, expressing discontent with the Pentagon’s recent communications concerning supply chain risks.
As Anthropic’s negotiations with the Pentagon broke down, OpenAI’s relationship with the department deepened, drawing renewed attention across the tech and defense sectors.
Altman conceded that the rushed process around the deal could look “opportunistic and sloppy,” but defended OpenAI’s intent to de-escalate tensions and avert a worse outcome.
Legal experts have since scrutinized the latest publicly disclosed contract language from OpenAI to determine whether it offers meaningful protections beyond the Defense Department’s “any lawful use” standard.
“I struggle to understand why the Pentagon would entertain this language after recently undermining Anthropic for proposing something similar,” commented Charlie Bullock, a research fellow at the Institute for Law and AI, following the release of the updated terms.
Many experts stress that contract language is critical, warning that the government is likely to interpret its terms expansively.
“A persistent theme in these surveillance discussions is that national security officials often take a remarkably broad interpretation of exceptions,” McGrail cautioned. “Given the opacity surrounding such agreements, public scrutiny is severely limited.”
Experts also question whether the contract will hold up against future shifts in legal interpretation or executive directives that could redefine “any lawful use.”
The debate over military use of AI for domestic surveillance has largely centered on the government’s ability to exploit commercially available data for operational purposes.
Companies in the targeted-advertising business have repeatedly aggregated extensive user data, including precise geolocation information, and sold it to government agencies seeking to identify behavioral patterns.
Mulligan reiterated in her announcement that the new contract language expressly prohibits domestic surveillance, including the use of commercially acquired data.
Sen. Ron Wyden has repeatedly raised concerns about the federal government’s purchase of commercially available data for surveillance and has criticized the Pentagon for dismissing Anthropic’s push for privacy safeguards.
“The Defense Department is exhibiting resistance to basic ethical guidelines regarding its product usage,” Wyden stated. “This is deeply troubling considering AI’s capacity to create in-depth profiles based on publicly accessible data.”
Critics argue that using AI to build comprehensive profiles of Americans would amount to an alarming expansion of mass surveillance, one impermissible under current legal frameworks.
Anthropic CEO Dario Amodei has repeatedly stressed the need for firm commitments from the Pentagon not to use AI to surveil citizens, arguing that legal frameworks have not kept pace with rapid advances in AI data analysis.

Meanwhile, protests against OpenAI’s preliminary Pentagon deal unfolded outside the company’s headquarters, with demonstrators urging skepticism toward the newly announced terms. Uninstalls of OpenAI’s ChatGPT app surged after the contract came to light.
Michael Horowitz, a former deputy assistant secretary of defense, said the discord between the Pentagon and Anthropic reflects a broader dynamic of mistrust.
“This dispute illustrates a fundamental lack of trust between Anthropic and the Pentagon, each questioning the other’s commitment to responsible technology use,” concluded Horowitz.
Source: NBCNews.com