Anthropic Faces Source Code Leak Due to Human Error
On Tuesday, Anthropic disclosed that it inadvertently released a portion of the internal source code for its AI-driven coding assistant, Claude Code, citing “human error” as the cause.
A file intended solely for internal use was mistakenly included in a software update, exposing an archive of nearly 2,000 files and 500,000 lines of code.
The leaked code quickly spread to the developer platform GitHub, where it drew more than 29 million views by early Wednesday.
A modified version of the source code became the fastest-downloaded repository in GitHub's history.
To contain the leak, Anthropic has issued copyright takedown requests. Among the leaked code, observers spotted concepts for a Tamagotchi-like coding assistant and a perpetual AI agent, The Verge reported.
An Anthropic representative stated, “Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed. This was a release packaging issue caused by human error, not a security breach.”
The leaked code related to the tool's internal architecture but did not reveal confidential details about Claude, the foundational AI model developed by Anthropic.
This was not the first exposure of Claude Code's source: parts had previously been reconstructed through independent reverse-engineering efforts, and an earlier version of the assistant had its code leaked in February 2025.
Claude Code has become a key product for Anthropic as its paid subscriber base grows rapidly. According to TechCrunch, paid subscriptions have more than doubled this year, a figure an Anthropic spokesperson confirmed.
Interest in Anthropic's Claude chatbot also surged amid CEO Dario Amodei's conflict with the Pentagon; just over a month ago, Claude topped Apple's chart of free apps in the US.
Amodei has remained steadfast regarding his company’s red lines concerning the potential use of their technology in mass surveillance and fully autonomous weaponry.
This is the second data leak Anthropic has experienced in recent weeks. Fortune previously reported a separate breach, revealing that the company had stored a substantial volume of internal files on publicly accessible systems, including draft communications about upcoming models dubbed "Mythos" and "Capybara."
Industry experts worry that the leaks point to internal security weaknesses at Anthropic, a particularly uncomfortable prospect for a company whose focus is AI safety.
The leaks may also give competitors such as OpenAI and Google useful insight into Claude Code's underlying AI framework.
The Wall Street Journal highlighted that the most recent breach contained commercially sensitive data, including tools and protocols designed to optimize AI models for coding applications.

The latest breach surfaces in the wake of the US government marking Anthropic as a supply chain risk, with the company currently contesting these allegations in court. Recently, a US district judge issued a temporary injunction aimed at suspending this designation.
Source: theguardian.com.