Anthropic exposed 512,000 lines of proprietary source code and details of an unannounced AI model in back-to-back incidents this week, the latter marking its second such packaging failure in 13 months.
The first incident surfaced on March 26, when Anthropic was found to have left nearly 3,000 documents, including unpublished blog drafts, images, and PDFs, in a publicly searchable data cache tied to its content management system, accessible without a login. Among them was a draft announcement for a new model called Claude Mythos, internally codenamed Capybara.
> Anthropic accidentally leaked their entire source code yesterday. What happened next is one of the most insane stories in tech history.
>
> > Anthropic pushed a software update for Claude Code at 4AM.
> > A debugging file was accidentally bundled inside it.
> > That file contained… pic.twitter.com/4C4q3b6sOb
>
> — Jeremy (@Jeremybtc) March 31, 2026
Anthropic confirmed the model’s existence after the fact, describing it as “the most capable we’ve built to date” and noting it poses unprecedented cybersecurity risks. The company attributed the exposure to a CMS configuration error and pulled public access after being contacted by reporters.
Five days later, a packaging mistake exposed the complete source code of Claude Code, Anthropic's flagship AI coding assistant, through the public npm registry. Version 2.1.88 of the package shipped with a 59.8 MB JavaScript source map file that pointed directly to a downloadable zip archive of the company's full TypeScript codebase on its own Cloudflare R2 storage, no authentication required.
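Source maps are small JSON files that tell a debugger where a bundled JavaScript file's original sources live; if those references point at a reachable URL, anyone holding the `.map` file can follow them. The sketch below shows how the standard `sourceRoot` and `sources` fields resolve into full URLs. The bucket name and file paths are placeholders for illustration, not the actual leaked values.

```python
import json

# Hypothetical excerpt of a JavaScript source map ("cli.js.map").
# The "sourceRoot" and "sources" fields record where the original
# TypeScript files live -- which is how a bundled .map file can point
# readers at internal paths or an externally reachable storage bucket.
source_map = json.loads("""
{
  "version": 3,
  "file": "cli.js",
  "sourceRoot": "https://example-bucket.r2.example.com/source/",
  "sources": ["src/index.ts", "src/agent/loop.ts"],
  "mappings": "AAAA"
}
""")

# Resolve each source entry the way a debugger would.
resolved = [source_map.get("sourceRoot", "") + s for s in source_map["sources"]]
for url in resolved:
    print(url)
```

Stripping `.map` files from a published package (or keeping `sources` relative and the bucket private) is the usual guard against exactly this failure mode.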
Security researcher Chaofan Shou caught the file at 4:23 a.m. ET on March 31 and posted the download link on X. Within hours, developers had mirrored the codebase — 1,906 files and roughly 512,000 lines of code — across GitHub. Anthropic filed DMCA takedowns, but the code had already spread too widely to contain.
The exposed source revealed 44 built-but-unshipped feature flags, including KAIROS — an autonomous background mode that lets Claude Code keep working and consolidating memory while a user is idle — and an “Undercover Mode” that strips AI attribution from public git commits.
Internal model names and performance benchmarks also surfaced, identifying Capybara as a Claude 4.6 variant, Fennec as Opus 4.6, and Numbat as an unreleased model in pre-launch testing.
In the same early-morning window, a separate supply chain attack on the axios npm package published malicious versions containing a remote access trojan. Developers who updated Claude Code via npm between 12:21 a.m. and 3:29 a.m. UTC on March 31 should check their lock files for axios versions 1.14.1 or 0.30.4, rotate all credentials, and treat affected machines as compromised. Anthropic now recommends its native installer over npm.
Anthropic reported an annualized revenue run rate of roughly $19 billion as of March 2026. Claude Code alone accounts for an estimated $2.5 billion in annualized recurring revenue — more than doubling since the start of the year — with enterprise clients driving about 80% of the total.
No model weights or customer data were exposed, but competitors now have a detailed account of how Anthropic built and plans to extend its most commercially successful product.
Nor was it the company's first such mistake: a nearly identical source map error had exposed an earlier Claude Code version in February 2025, making the March 31 leak its second packaging failure of this kind in 13 months.
Anthropic said in a statement: “This was a release packaging issue caused by human error, not a security breach. We’re rolling out measures to prevent this from happening again.”
Information for this story was found via the sources and companies mentioned. The author has no securities or affiliations related to the organizations discussed. Not a recommendation to buy or sell. Always do additional research and consult a professional before purchasing a security. The author holds no licenses.