Anthropic blunder exposes 2,000 lines of Claude Code’s internal source code: What it reveals

Anthropic’s popular AI coding agent, Claude Code, suffered a data leak that made its underlying code public, the AI startup confirmed on Tuesday, March 31.

Parts of Claude Code’s internal source code were uploaded on code repository platforms such as GitHub. However, Anthropic denied that any sensitive customer data or credentials were exposed in the leak. “This was a release packaging issue caused by human error, not a security breach. We’re rolling out measures to prevent this from happening again,” a company spokesperson was quoted as saying by Bloomberg.

The leak has reportedly been traced back to version 2.1.88 of the Claude Code software package. When Anthropic pushed the update, it accidentally included a file that exposed nearly 2,000 source code files and more than 512,000 lines of code.

Security researchers spotted the leak almost immediately and shared links to the exposed code in posts on X. One such post has amassed more than 21 million views since it was shared on Tuesday morning. While the AI model itself has not been leaked, reports suggest that the leaked source code includes instructions that tell the model how to behave, what tools to use, and where its limits are.

The incident has fueled discussions across developer forums about what the leaked code reveals regarding how one of the most widely used AI coding agents currently on the market operates. Security experts have also raised concerns about potential security vulnerabilities following the incident.

The exposure further underscores Anthropic’s position as a closed AI model and tool provider, in contrast to the open-source approach, where the basic code behind even the most advanced AI models and tools is published and made freely available under a permissive licence. From a competition perspective, a source code leak is bad news for Anthropic because it could give rival AI companies and software developers rare insight into how it built the viral coding tool.

Dario Amodei co-founded Anthropic in 2021 with his sister, Daniela Amodei, and is behind the creation of the Claude series of large language models (LLMs). In May 2025, the company rolled out Claude Code to the general public, with the command-line tool letting users generate and edit code using AI, vibe-code software, fix bugs, and automate several other coding-related tasks. Since its release, Claude Code has seen massive adoption, with its run-rate revenue rising to more than $2.5 billion as of February 2026.

The early success of Claude Code has set off a race among AI giants such as OpenAI, Google, and xAI to roll out similar offerings and win the business of coders as well as enterprises.

Notably, the Claude Code data leak marks the second major instance in under a week in which details of Anthropic’s confidential operations have entered the public domain. Last month, security researchers obtained access to a draft announcement blog post by Anthropic containing a description of its unreleased family of LLMs and other details via an unsecured and publicly searchable data cache.

The new model described in the leaked blog post is referred to as ‘Claude Mythos’, which Anthropic claims outperforms every other LLM the company has released so far. Mythos’ capabilities also reportedly pose unprecedented cybersecurity risks, which have even Anthropic concerned about the model’s real-world implications.
