The days leading up to the weekend's United States and Israeli military strikes on Iran were turbulent for another reason: the U.S. Department of Defense was locked in fraught negotiations with the AI company Anthropic over the appropriate use of the firm's technology in military settings.
Anthropic sought explicit assurances that its Claude systems would not be deployed for domestic surveillance within the United States or for the operation of autonomous weaponry that functions without human oversight.
President Donald Trump responded swiftly, issuing a directive on Friday ordering all federal agencies to stop using Anthropic's technology and proclaiming that he would “never permit a radical left, woke company to dictate how our illustrious military engages in warfare and secures victory!”
Rival AI company OpenAI, maker of ChatGPT, then revealed it had finalized its own agreement with the Department of Defense. The key difference appears to be that OpenAI permits “all lawful uses” of its technology, with no stated ethical boundaries.
This raises a pivotal question about the future of military AI. Does it signal the death of the aspiration to “ethical AI” in war?
AI Corporations and Regulatory Landscapes
The preceding week’s developments arrive amidst an already treacherous landscape concerning AI ethics. In the previous year, the Trump administration prohibited states from instituting regulations on AI, arguing that such measures impede innovation.
Simultaneously, numerous AI entities have aligned themselves closely with the administration, with leaders such as OpenAI’s Sam Altman making substantial donations to Trump’s inaugural fund. (Altman noted at that time his financial contributions to Democratic figures as well.)
Anthropic, in contrast, has taken a more measured stance. It has focused on national security while warning that AI could undermine democratic foundations, and it maintains that existing systems are not yet reliable enough for fully autonomous weapons.
An Emerging International Consensus
Much of the concern about military applications of AI has centred on lethal autonomous weapon systems: devices and algorithms capable of selecting and engaging targets without human intervention.
Only a few years prior, a budding international consensus regarding the risks associated with these armaments seemed to be coalescing among various governments and technological firms.
In February 2020, the U.S. Department of Defense promulgated principles governing AI use within the organization, stipulating that such technologies must be responsible, equitable, traceable, reliable, and governable.
In a similar vein, NATO articulated analogous principles in 2021, with the United Kingdom following this lead in 2022.
The United States holds a pivotal leadership position among its allies in shaping global standards of military conduct. These principles offered guidance to nations such as Russia, China, Brazil and India on how military AI deployments were expected to be governed.
Military AI and the Private Sector
Military applications of AI have heavily relied on collaboration with private enterprises, as the most cutting-edge technologies have predominantly emerged from the corporate sector.
Project Maven, initiated in 2017 to enhance machine learning and data integration within U.S. military intelligence, was substantially dependent on commercial technology firms.
The U.S. Defense Innovation Board noted in 2019 that in the realm of AI, the crucial data, expertise, and workforce reside overwhelmingly within the private sector.
That reality endures today. The prevailing norms around AI deployment, however, are shifting rapidly, both within government and across much of the industry.
Trump’s Influence on Silicon Valley
Following Trump’s re-election in 2024, numerous figures in Silicon Valley embraced the anticipation of diminished regulatory oversight. Billionaire venture capitalist Marc Andreessen, author of The Techno-Optimist Manifesto, expressed that Trump’s victory “felt like a boot off the throat”.
Joe Lonsdale, co-founder of the AI-driven data analytics firm Palantir, has emerged as a vocal proponent of Trump. Additionally, OpenAI president and co-founder Greg Brockman contributed US$25 million to a pro-Trump organization last year.
We have clearly come a long way from the world of 2019 and 2020.
Ethics of AI Within Democratic Frameworks
The ethical evaluation of AI systems is frequently perceived as a question focused on the technology itself, rather than its applications.
This perspective posits that with astute design, one can establish an inherently ethical AI system. Central to this notion is “algorithmic transparency”—providing clarity regarding the rules employed by the system to make decisions. Essentially, the belief is that ethics can be “integrated” into these governing rules.
The concept of ethical military AI presupposes its operation within democratic structures. The rationale behind algorithmic transparency is that the populace should be informed of how these systems function, as “the people” ultimately possess power in a democracy.
Within an autocratic regime, however, algorithmic transparency is beside the point. There is no presumption that civilians have a stake in, and deserve to know about, government actions, including whether those actions comply with the law.
Open and public discourse is often regarded as a fundamental characteristic of liberal democracies. While a consensus may be aspired to, constructive dissent and even conflict can serve as indicators of a thriving democracy.
Decisions and Their Ramifications
In this context, Anthropic’s effort to engage the government in substantive dialogue about ethical boundaries exemplifies democratic practice in motion. The firm signalled both a commitment to reasoned discourse and a willingness to offer constructive opposition.
In retaliation, the Trump administration on Friday designated Anthropic as a “supply chain risk,” a classification previously reserved for foreign entities. Secretary of Defense Pete Hegseth remarked that:
effective immediately, no contractor, supplier, or partner conducting business with the United States military may engage in any commercial activities with Anthropic.
Anthropic intends to contest this declaration in court, as it could result in severe economic and reputational repercussions for the company.
OpenAI, by contrast, appears to have accepted a framework with no ethical constraints beyond what the law requires. It is therefore poised to work with the U.S. government, though it too faces reputational risk as consumer discontent rises.
AI in a Landscape Devoid of Democratic Norms

What does this mean for the ethical use of AI in the military? One conclusion is unavoidable: if we want military AI to be used ethically, under transparent rules and laws, we need robust democratic norms. Those norms currently look jeopardized as the rules-based international order disintegrates.
So far, little has changed in practice. Just hours after Trump’s denunciation of Anthropic, the United States launched strikes on Iran, reportedly planned with assistance from the company’s technology.
Source link: Theconversation.com.