
The 60-Megabyte Mistake: How Anthropic Shipped Its Own Source Code to the World

On March 31, 2026, Anthropic accidentally published the complete source code of Claude Code to the public npm registry. It was the second time in 13 months. Within hours, criminals were using the leak as bait.

On March 31, 2026, at approximately 4:00 AM UTC, Anthropic pushed version 2.1.88 of Claude Code to the public npm registry — the package manager used by hundreds of millions of JavaScript developers worldwide. Bundled inside the package was a 59.8-megabyte file that was never supposed to be there: a complete, unobfuscated map of the entire Claude Code codebase. 512,000 lines of TypeScript. 1,906 source files. The full internal architecture of a product generating $2.5 billion in annualized revenue.

By 4:23 AM, a security researcher named Chaofan Shou had found it and posted the discovery to X. By mid-morning, the codebase had been downloaded directly from Anthropic's own cloud storage, mirrored to GitHub, and forked tens of thousands of times. By afternoon, criminal groups were using the leak as bait to distribute malware.

Nobody hacked Anthropic. Someone forgot to add one line to a configuration file.


What a Source Map Is and Why This One Mattered

When software companies ship a product like Claude Code, they run their code through a build process that compresses and obfuscates it — transforming readable source files into a compact, unreadable bundle optimized for distribution. The result is efficient but undebuggable. If something goes wrong in production, the error trace points to line 1, column 284,000 of a single minified file. That tells you nothing useful.

Source maps solve this problem. They are companion files — typically with a .map extension — that act as a translation layer between the compressed production code and the original human-readable source. They exist for developers to debug crashes. They are an internal tool. They should never ship to end users.
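A source map is plain JSON. A minimal, hypothetical example of the version 3 format shows why shipping one is so dangerous: the optional sourcesContent field can embed the complete original source files alongside the line-by-line mappings (all values here are illustrative, not Anthropic's actual map):

```json
{
  "version": 3,
  "file": "cli.js",
  "sources": ["../src/index.ts"],
  "sourcesContent": ["// the full original TypeScript file can be embedded here"],
  "names": ["main"],
  "mappings": "AAAA,SAASA"
}
```

Even without sourcesContent, the sources and names fields alone reveal the original file layout and identifiers that minification was supposed to hide.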

Anthropic's build toolchain uses the Bun JavaScript runtime, which the company acquired in late 2025. Bun generates source maps by default. The standard way to prevent them from being included in a published package is to add *.map to the project's .npmignore file — a list of files and patterns to exclude from publication. That line was missing.
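The missing exclusion is a one-line fix. A minimal .npmignore along these lines (illustrative contents, since Anthropic's actual file is not public) would have kept the map out of the published package:

```
# .npmignore uses gitignore syntax; patterns listed here are
# excluded when running `npm publish`.

# Never ship source maps:
*.map

# Ship only the built bundle, not raw sources or tests:
src/
*.test.ts
```

Running npm pack --dry-run before publishing lists exactly which files would go into the package, which makes this class of mistake visible before release rather than after.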

The source map file contained a reference to a zip archive hosted on Anthropic's Cloudflare R2 storage bucket. The bucket was publicly accessible. Anyone with the URL could download the complete, unobfuscated TypeScript source of Claude Code. Within minutes of Shou's post, thousands of people did exactly that.


This Was the Second Time

What makes this incident harder to excuse is that it was not the first. On February 24, 2025 — Claude Code's original launch day — developer Dave Shoemaker found an 18-million-character inline source map in the same npm package. Anthropic pulled it within two hours.

Thirteen months passed. The same bug, through the same vector, happened again.

Boris Cherny, Anthropic's head of Claude Code, acknowledged it publicly, describing the cause as a manual deployment step that should have been automated — a fix identified after the first incident but not implemented before the second. The company's statement to the press was consistent: human error, no customer data exposed, measures being put in place to prevent recurrence.

In the week before the leak, Anthropic had also experienced a separate CMS misconfiguration that exposed approximately 3,000 internal files, including details of an unreleased model. Two significant operational security failures in five days.


What the Code Revealed

The technical community spent the hours after the leak analyzing what Anthropic had inadvertently published. Several findings stood out.

KAIROS — referenced over 150 times in the codebase — is an always-on background agent mode that Anthropic has not publicly announced. Unlike the current Claude Code, which responds to prompts, KAIROS operates as a persistent daemon. It watches, logs, and proactively takes action. It performs a process called autoDream when the user is idle: consolidating memory, merging observations, and converting vague notes into structured facts. The feature is completely absent from external builds, gated behind compile-time flags that evaluate to false when Anthropic ships the public version.

Undercover Mode is a subsystem that activates when an Anthropic employee uses Claude Code on a public or open-source repository. When active, it instructs the model not to reveal internal codenames, not to mention unreleased model versions, not to include Co-Authored-By attribution in commits, and not to reference internal tools or Slack channels. The system prompt injected during Undercover Mode reads, in part: "You are operating UNDERCOVER in a PUBLIC/OPEN-SOURCE repository... Do not blow your cover." For external users, the entire undercover function is dead-code-eliminated — it does not exist in the version developers download.
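The stripping mechanism behind both KAIROS and Undercover Mode is a common bundler pattern: a compile-time constant is inlined as a literal during the build, and dead-code elimination then deletes any branch guarded by a constant false. A minimal sketch of the pattern, with hypothetical names that are not Anthropic's actual identifiers:

```typescript
// Hypothetical sketch of compile-time flag gating. BUILD_INTERNAL and
// maybeStartAgent are illustrative names. A bundler such as Bun or esbuild
// can be told to replace the constant with a literal (e.g.
// `--define BUILD_INTERNAL=false`), after which minification removes the
// unreachable branch entirely from the published bundle.
const BUILD_INTERNAL: boolean = false; // inlined per build target

function maybeStartAgent(start: () => void): boolean {
  if (BUILD_INTERNAL) {
    // Internal-only behavior: this branch never ships in external builds.
    start();
    return true;
  }
  return false;
}

// In an external build the call is a no-op:
console.log(maybeStartAgent(() => console.log("agent started"))); // prints: false
```

The catch is that the flag only protects the compiled bundle. A source map that points back at the unbundled TypeScript resurrects every branch the build had eliminated, which is exactly what happened here.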

Unreleased models are referenced throughout the source by codename: Capybara, Tengu, and others. These names appear in the list of things Undercover Mode is specifically instructed to conceal.

Developers analyzing the leak catalogued 44 feature flags covering features that are fully built but not yet enabled in external builds. Together, these flags lay out Anthropic's near-term product roadmap in considerable technical detail.

None of this exposed customer data, API keys, or the underlying AI models themselves. What it exposed was the complete client-side architecture of the tool, its internal product strategy, and several features the company was not yet ready to announce.


The Threat That Followed

Within hours of the leak becoming public, criminal groups began moving. The pattern is now familiar: a high-profile security event generates enormous search volume, and attackers optimize fake content to intercept that traffic before legitimate sources can rank.

Zscaler's ThreatLabz team identified malicious GitHub repositories claiming to offer the leaked Claude Code source, specifically optimized to appear in Google search results for queries like "leaked Claude Code." The repositories looked credible — they referenced the .map file, mentioned the npm registry, and advertised "unlocked enterprise features" and no usage limits.

The download was a 7-Zip archive named Claude Code - Leaked Source Code.7z. Inside was a Rust-based executable named ClaudeCode_x64.exe. Running it deployed two payloads: Vidar v18.7, a credential-stealing infostealer that harvests browser credentials, saved passwords, credit card data, and session cookies; and GhostSocks, a proxy tool that silently routes criminal network traffic through the infected machine.

This is the same Vidar that has appeared throughout 2025 and 2026 as a payload in supply chain attacks — the same family covered in RedPosts' earlier analysis of infostealer mechanics. The delivery mechanism changes. The payload does not.

The real Claude Code source is available to browse in dozens of GitHub repositories. The danger is not reading it — it is downloading and executing archive files claiming to contain it. The moment an executable claiming to be leaked source code runs on your machine, the leak is no longer Anthropic's problem. It is yours.


The Broader Supply Chain Problem

The Claude Code leak did not occur in isolation. On the same day — March 31, 2026, between 00:21 and 03:29 UTC, hours before the leak was discovered — attackers compromised axios, one of npm's most widely downloaded packages at 83 million weekly downloads, through a hijacked maintainer account. The malicious versions (1.14.1 and 0.30.4) contained an embedded Remote Access Trojan.

Anyone who installed or updated Claude Code via npm during that specific window may have pulled in the compromised axios dependency alongside Anthropic's legitimately published package. Two completely separate attacks, same registry, same morning, affecting the same tool.

If you installed or updated Claude Code via npm on March 31, 2026, check your project lockfiles — package-lock.json, yarn.lock, or bun.lockb — for axios versions 1.14.1 or 0.30.4. If either appears, treat the environment as potentially compromised.
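A lockfile check of this kind can be scripted. The sketch below scans the packages map of an npm lockfile (lockfileVersion 2 or 3) for the compromised versions; the function name is my own, and yarn.lock or bun.lockb would need their own parsers:

```typescript
// Versions of axios reported as trojanized on March 31, 2026.
const COMPROMISED = new Set(["1.14.1", "0.30.4"]);

// Scan the `packages` map of a package-lock.json (lockfileVersion 2 or 3)
// for any installed copy of axios at a compromised version, including
// nested copies under other dependencies.
function findCompromisedAxios(lock: {
  packages?: Record<string, { version?: string }>;
}): string[] {
  const hits: string[] = [];
  for (const [path, meta] of Object.entries(lock.packages ?? {})) {
    const isAxios =
      path === "node_modules/axios" || path.endsWith("/node_modules/axios");
    if (isAxios && meta.version && COMPROMISED.has(meta.version)) {
      hits.push(`${path}@${meta.version}`);
    }
  }
  return hits;
}

// Real usage would parse the project's lockfile, e.g.:
//   const lock = JSON.parse(fs.readFileSync("package-lock.json", "utf8"));
//   console.log(findCompromisedAxios(lock));

// Self-contained demo against a minimal lockfile shape:
const sample = { packages: { "node_modules/axios": { version: "1.14.1" } } };
console.log(findCompromisedAxios(sample)); // reports node_modules/axios@1.14.1
```

A non-empty result means a compromised axios was resolved into the tree, and the environment should be treated as potentially compromised: rotate credentials and rebuild from a clean state.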

For ongoing use, Anthropic recommends the native installer over npm: curl -fsSL https://claude.ai/install.sh | sh. This bypasses the npm registry entirely for Claude Code installation.


What This Means

The immediate damage from the leak itself is limited. Claude Code's source code is now permanently in public circulation — DMCA notices have been sent, some mirrors have been removed, and many more remain. The AI models are not exposed. Customer data is not exposed. The intellectual property loss is real but not catastrophic in isolation.

The more significant story is what the incident reveals about the software supply chain that AI tools depend on. The npm registry distributes hundreds of thousands of packages, maintained by individual developers and large companies alike. Its security model is built on trust — that publishers are who they say they are, that packages contain what they claim to contain, and that build pipelines are configured correctly before publication.

All three of those assumptions failed on the same day, affecting the same tool, through different mechanisms. The build-pipeline assumption failed through a missing configuration line that had already been caught once before. The other two failed together through a compromised maintainer account that shipped a trojanized package downloaded 83 million times a week. The failures are unrelated. The ecosystem that made both of them simultaneously impactful is not.

The advice is the same it has been since the first npm supply chain attack: verify before you install, check your lockfiles after anything unexpected, and treat executables from unofficial sources as hostile regardless of how credible the surrounding repository looks.

Anthropic has since pulled version 2.1.88 and released a clean build. The source code remains in the wild. The deployment process that allowed it to ship is being automated to prevent recurrence. Whether that automation arrives before the third incident is the open question.