US Military Employs Anthropic’s Claude AI in Controversial Operation
The US military reportedly used Claude, the AI model developed by Anthropic, in a clandestine operation aimed at abducting Nicolás Maduro from Venezuela.
The revelation, reported by the Wall Street Journal on Saturday, underscores the Department of Defense's growing reliance on artificial intelligence in its operations.
The US raid, which included aerial bombardment of the capital, Caracas, killed 83 people, according to Venezuela's Defense Ministry.
Notably, Anthropic's usage policies explicitly prohibit the use of Claude for violence, weapons development, or surveillance.
Anthropic is the first AI company known to have been involved in a classified military operation commissioned by the US Defense Department.
It remains unclear, however, exactly how Claude, whose capabilities range from processing complex documents to piloting autonomous drones, was used in the operation.
An Anthropic spokesperson declined to confirm whether Claude was used in the operation but said that all use of the company's AI must comply with its usage policies. The Defense Department did not respond to requests for comment.
According to unnamed sources cited by the Wall Street Journal, Claude was made available for the operation through a partnership with Palantir Technologies, a contractor that works with the US military and federal law enforcement agencies. Palantir declined to comment.
The US and other militaries are increasingly incorporating AI into their operations.
Israel's military, for instance, has deployed drones with autonomous capabilities in Gaza and has used AI extensively to expand its targeting databases. The US military has likewise used AI targeting systems in operations in Iraq and Syria in recent years.
Critics, however, have raised serious concerns about the use of AI in weapons and autonomous systems, warning that computer-driven targeting errors could have catastrophic consequences.
AI companies are grappling with the ethical implications of their technology in defense contexts. Dario Amodei, Anthropic's CEO, has called for regulations to limit the risks of AI, particularly its use in lethal operations and in domestic surveillance in the US.
His more cautious approach has reportedly frustrated some Defense Department officials; Secretary of War Pete Hegseth said in January that the Pentagon would not adopt AI models that would constrain military operations.

Earlier this year, the Pentagon announced plans to work with xAI, Elon Musk's AI company. The Defense Department has also been using customized versions of AI systems developed by Google and OpenAI for ongoing research.
Source: theguardian.com.