Mrinank Sharma, who led AI safety research at Anthropic, resigned Monday, warning that “the world is in peril” and that the company struggles to let its stated values govern its actions.
Sharma announced his departure on X, sharing a resignation letter in a post that drew more than 7 million views. The letter describes a world facing multiple crises and warns that technological capability risks outpacing human wisdom.
“Throughout my time here, I’ve repeatedly seen how hard it is to truly let our values govern our actions,” Sharma wrote. “I’ve seen this within myself, within the organization, where we constantly face pressures to set aside what matters most.”
Sharma joined Anthropic in August 2023 after completing his doctorate in machine learning at the University of Oxford. He built and led the Safeguards Research Team, which the company formed to address risks from advanced AI systems.
His work focused on AI sycophancy — when chatbots tell users what they want to hear rather than what is accurate — and defenses against bioterrorism risks from AI. He helped deploy those safeguards into Anthropic’s products and wrote what he described as one of the company’s first safety cases.
The resignation follows a familiar pattern in the AI industry. Last year, Jan Leike left OpenAI after disagreeing with company leadership about priorities, writing that safety had taken “a backseat to shiny products.” Former OpenAI researcher Gretchen Krueger similarly called for improved accountability and transparency when she departed.
We need to do more to improve foundational things like decision-making processes; accountability; transparency; documentation; policy enforcement; the care with which we use our own technology; and mitigations for impacts on inequality, rights, and the environment.
— Gretchen Krueger (@GretchenMarina) May 22, 2024
Sharma’s letter avoids specific accusations, instead describing pressures to “set aside what matters most” both across broader society and within organizations pursuing AI development.
“The world is in peril,” he wrote. “And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment.”
He warned that humanity approaches “a threshold where our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences.”
Anthropic, backed by Amazon and Google, markets itself as a leader in safe AI development. The company created the Claude chatbot and CEO Dario Amodei frequently speaks publicly about aligning powerful AI systems with human values. Reports indicate the company seeks a valuation between $285 billion and $350 billion.
Sharma’s departure came days after Anthropic released Claude Opus 4.6, an upgraded model for coding and workplace tasks. Two other researchers, Behnam Neyshabur and Harsh Mehta, also left the company last week.
head of anthropic’s safeguards research just quit and said “the world is in peril” and that he’s moving to the UK to write poetry and “become invisible”. other safety researchers and senior staff left over the last 2 weeks as well… probably nothing. https://t.co/4ses1SEEPw
— Saoud Rizwan (@sdrzn) February 10, 2026
Rather than joining another tech company, Sharma plans to pursue poetry studies and what he calls “courageous speech.” He intends to return to the United Kingdom and focus on facilitation, coaching and community work.
“My intention is to create space to set aside the structures that have held me these past years, and see what might emerge in their absence,” he wrote, adding that he wants to place “poetic truth alongside scientific truth as equally valid ways of knowing.”