Anthropic Suffers Two Data Exposures in Five Days, Revealing Unreleased Model and Full Source Code

Anthropic exposed roughly 512,000 lines of proprietary source code and details of an unannounced AI model in back-to-back incidents this week, the second of which repeated a packaging failure the company first made 13 months ago.

The first exposure surfaced on March 26, when it emerged that Anthropic had left nearly 3,000 documents, including unpublished blog drafts, images, and PDFs, in a publicly searchable data cache tied to its content management system, accessible without a login. Among them was a draft announcement for a new model called Claude Mythos, internally codenamed Capybara.

Anthropic confirmed the model’s existence after the fact, describing it as “the most capable we’ve built to date” and noting it poses unprecedented cybersecurity risks. The company attributed the exposure to a CMS configuration error and pulled public access after being contacted by reporters.

Five days later, a packaging mistake pushed the complete source code of Claude Code — Anthropic’s flagship AI coding assistant — to the public npm registry. Version 2.1.88 of the package shipped with a 59.8MB JavaScript source map file that pointed directly to a downloadable zip archive of the company’s full TypeScript codebase on its own Cloudflare R2 storage — no authentication required.
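Source maps like the one that shipped here typically announce themselves either as a trailing "sourceMappingURL" comment inside a bundled file or as a standalone .map file whose "sources" and "sourcesContent" fields reference the original code. A pre-publish audit for this failure mode can be sketched as follows; the file names and fixture contents below are illustrative stand-ins, not Anthropic's actual package layout:

```shell
# Audit an unpacked npm package directory for source maps before publishing.
# Fixture files below simulate a leaky bundle; in practice, run the find/grep
# against the output of `npm pack` for your own package.
pkg=$(mktemp -d)

# Demo fixture: a bundled file carrying a sourceMappingURL pointer and its map.
printf 'console.log("hi");\n//# sourceMappingURL=cli.js.map\n' > "$pkg/cli.js"
printf '{"version":3,"sources":["src/cli.ts"]}\n' > "$pkg/cli.js.map"

# Flag standalone .map files and embedded sourceMappingURL comments.
maps=$(find "$pkg" -name '*.map')
refs=$(grep -rl 'sourceMappingURL=' "$pkg")

if [ -n "$maps" ] || [ -n "$refs" ]; then
  echo "source maps detected; strip them before publishing"
fi
```

Either signal warrants a look before release: a .map file that embeds sourcesContent carries the original source text itself, while one that only lists source paths or URLs, as in this incident, can still point a reader straight at the real codebase.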

Security researcher Chaofan Shou caught the file at 4:23 a.m. ET on March 31 and posted the download link on X. Within hours, developers had mirrored the codebase — 1,906 files and roughly 512,000 lines of code — across GitHub. Anthropic filed DMCA takedowns, but the code had already spread too widely to contain.

The exposed source revealed 44 built-but-unshipped feature flags, including KAIROS — an autonomous background mode that lets Claude Code keep working and consolidating memory while a user is idle — and an “Undercover Mode” that strips AI attribution from public git commits. 

Internal model names and performance benchmarks also surfaced, identifying Capybara as a Claude 4.6 variant, Fennec as Opus 4.6, and Numbat as an unreleased model in pre-launch testing.

Separately, a supply chain attack on the axios npm package planted malicious versions containing a Remote Access Trojan in the same early-morning window. Developers who updated Claude Code via npm on March 31 between 12:21 a.m. and 3:29 a.m. UTC should check their lock files for axios versions 1.14.1 or 0.30.4, rotate all credentials, and treat affected machines as compromised. Anthropic now recommends its native installer over npm.
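The lockfile check described above can be done with a quick grep. This sketch writes a demo fixture to a temporary directory; in practice, point it at your project's own package-lock.json (the version numbers come from the advisory above):

```shell
# Scan a package-lock.json for the compromised axios releases (1.14.1, 0.30.4).
# The lockfile written below is a demo fixture; replace the path with your own.
tmp=$(mktemp -d)
cat > "$tmp/package-lock.json" <<'EOF'
{
  "packages": {
    "node_modules/axios": {
      "version": "1.14.1"
    }
  }
}
EOF

# Find the axios entry specifically, then check its pinned version.
if grep -A 2 '"node_modules/axios"' "$tmp/package-lock.json" \
   | grep -Eq '"version": *"(1\.14\.1|0\.30\.4)"'; then
  status=compromised
  echo "compromised axios version pinned; rotate credentials"
else
  status=clean
fi
```

A clean lockfile does not rule out compromise on its own: a machine that installed one of the bad versions during the attack window should still be treated as compromised even if the lockfile has since been regenerated.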

Anthropic reported an annualized revenue run rate of roughly $19 billion as of March 2026. Claude Code alone accounts for an estimated $2.5 billion in annualized recurring revenue — more than doubling since the start of the year — with enterprise clients driving about 80% of the total. 

No model weights or customer data were exposed, but competitors now have a detailed account of how Anthropic built and plans to extend its most commercially successful product.

Nor was the npm leak a first. A nearly identical source map error exposed an earlier Claude Code version in February 2025, making this the company's second such packaging failure in 13 months.

Anthropic said in a statement: “This was a release packaging issue caused by human error, not a security breach. We’re rolling out measures to prevent this from happening again.”



Information for this story was found via the sources and companies mentioned. The author has no securities or affiliations related to the organizations discussed. Not a recommendation to buy or sell. Always do additional research and consult a professional before purchasing a security. The author holds no licenses.
