Anthropic Safety Head Abandons Tech for Poetry, Warns of Global Crises

Mrinank Sharma, who led AI safety research at Anthropic, resigned Monday, warning that “the world is in peril” and that the company struggles to let its stated values govern its actions.

Sharma announced his departure on X, sharing a resignation letter that drew more than 7 million views. The letter describes a world facing multiple crises and warns that technological capability risks outpacing human wisdom.

“Throughout my time here, I’ve repeatedly seen how hard it is to truly let our values govern our actions,” Sharma wrote. “I’ve seen this within myself, within the organization, where we constantly face pressures to set aside what matters most.”

Sharma joined Anthropic in August 2023 after completing his doctorate in machine learning at the University of Oxford. He built and led the Safeguards Research Team, which the company formed to address risks from advanced AI systems.

His work focused on AI sycophancy — when chatbots tell users what they want to hear rather than what is accurate — and defenses against bioterrorism risks from AI. He helped deploy those safeguards into Anthropic’s products and wrote what he described as one of the company’s first safety cases.

The resignation follows a familiar pattern in the AI industry. Last year, Jan Leike left OpenAI after disagreeing with company leadership about priorities, writing that safety had taken “a backseat to shiny products.” Former OpenAI researcher Gretchen Krueger similarly called for improved accountability and transparency when she departed.

Sharma’s letter avoids specific accusations but describes witnessing pressure to “set aside what matters most” throughout broader society and within organizations pursuing AI development.

“The world is in peril,” he wrote. “And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment.”

He warned that humanity is approaching “a threshold where our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences.”

Anthropic, backed by Amazon and Google, markets itself as a leader in safe AI development. The company created the Claude chatbot and CEO Dario Amodei frequently speaks publicly about aligning powerful AI systems with human values. Reports indicate the company seeks a valuation between $285 billion and $350 billion.

Sharma’s departure came days after Anthropic released Claude Opus 4.6, an upgraded model for coding and workplace tasks. Two other researchers, Behnam Neyshabur and Harsh Mehta, also left the company last week.

Rather than joining another tech company, Sharma plans to pursue poetry studies and what he calls “courageous speech.” He intends to return to the United Kingdom and focus on facilitation, coaching and community work.

“My intention is to create space to set aside the structures that have held me these past years, and see what might emerge in their absence,” he wrote, adding that he wants to place “poetic truth alongside scientific truth as equally valid ways of knowing.”



Information for this story was found via the sources and companies mentioned. The author has no securities or affiliations related to the organizations discussed. Not a recommendation to buy or sell. Always do additional research and consult a professional before purchasing a security. The author holds no licenses.
