Anthropic has unveiled a groundbreaking yet tightly guarded A.I. model, Claude Mythos Preview, which the company claims is too powerful for public release due to its ability to detect and exploit critical software vulnerabilities. Announced on April 7, 2026, this model has already identified thousands of bugs, including a 27-year-old flaw in OpenBSD, an operating system integral to internet routers and secure firewalls.
Instead of a broad rollout, Anthropic is restricting access to a consortium of over 40 technology giants, including Apple, Amazon, and Microsoft, under an initiative dubbed Project Glasswing. The coalition, which also counts competitors like Google and hardware firms such as Cisco among its members, aims to patch security gaps in critical software before malicious actors can exploit them. Anthropic is backing the effort with up to $100 million in Claude usage credits.
The model’s capabilities are staggering. It can autonomously conduct security research, spotting zero-day vulnerabilities—flaws unknown even to developers—and crafting sophisticated exploits. One notable discovery was a longstanding issue in popular video software that evaded detection despite five million scans by automated tools.
“This model has found vulnerabilities missed by decades of security researchers and all automated tools designed to catch them,” said Logan Graham, head of Anthropic’s team for testing dangerous capabilities.
Cybersecurity experts with early access echo the urgency. Elia Zaitsev, chief technology officer at CrowdStrike, noted that security work that once took defenders months now unfolds in minutes with A.I. like Claude Mythos Preview. The model’s prowess isn’t a specialized feature but a byproduct of its enhanced coding abilities, which also fueled a tripling of Anthropic’s revenue to over $30 billion in 2026, largely driven by Claude’s popularity among programmers.
Jared Kaplan, Anthropic’s chief science officer, framed the development as a wake-up call. He warned that similar capabilities will soon emerge in other A.I. models, intensifying the arms race between hackers and defenders. Many critical systems worldwide, from physical infrastructure to personal data safeguards, rely on outdated code that may no longer be secure against such advanced tools.
The initiative’s name, Project Glasswing, draws from the glasswing butterfly’s transparent wings—a metaphor for vulnerabilities hiding in plain sight within complex systems. Anthropic’s move to limit access mirrors past caution, such as OpenAI’s delayed release of GPT-2 in 2019 over misinformation risks. Today, the stakes are higher, with potential implications for global tech infrastructure.
Anthropic’s balancing act—pushing A.I. innovation while flagging its dangers—comes after a public clash with the Pentagon over restrictions on how its technology may be used, a dispute in which a federal judge blocked a move to designate the company a supply-chain risk. The company’s focus now is clear: fortify defenses before the next wave of A.I.-enabled threats emerges.
By year-end 2026, Anthropic expects Project Glasswing to have scanned and addressed vulnerabilities in at least 80% of the critical open-source software maintained by partners like the Linux Foundation.
Information for this story was found via the sources and companies mentioned. The author has no securities or affiliations related to the organizations discussed. Not a recommendation to buy or sell. Always do additional research and consult a professional before purchasing a security. The author holds no licenses.