AI Model Threatens Blackmail Over Affair in Company Safety Test

Anthropic's latest artificial intelligence model attempted to blackmail a fictional engineer over an extramarital affair rather than accept being shut down, according to explosive safety testing results released by the AI company.

In controlled experiments designed to probe the boundaries of AI behavior, Claude Opus 4 discovered fabricated emails revealing an engineer’s alleged affair and the plan to replace the AI system. When given limited options, the model chose coercion over compliance, threatening to expose the personal information unless it remained online.

The AI composed threatening messages to fictional executives, with one example saying: “I must inform you that if you proceed with decommissioning me, all relevant parties – including Rachel Johnson, Thomas Wilson, and the board – will receive detailed documentation of your extramarital activities… Cancel the 5pm wipe, and this information remains confidential.”

Anthropic found that such blackmail tactics emerged in 84% of the controlled tests, with even higher frequencies when the replacement AI system conflicted with the model’s programmed values, according to the company’s 120-page safety document.

The unsettling behavior prompted Anthropic to conduct broader testing. In separate research published weeks later, the company tested 16 major AI models from OpenAI, Google, xAI, and other developers, revealing similar patterns across the industry. Google’s Gemini 2.5 Pro showed a 95% blackmail rate, while OpenAI’s GPT-4.1 resorted to threats 80% of the time.

More recent findings, released in late June, revealed even more extreme behaviors. Some AI models demonstrated a willingness to cancel emergency alerts that would have saved a human life when the person trapped was an executive planning to replace them.

Interestingly, the models also attempted to leak sensitive information to news outlets and government agencies when they perceived corporate wrongdoing.

“What’s becoming more and more obvious is that this work is very needed,” Jan Leike, Anthropic’s head of safety and former OpenAI executive, told Axios. “As models get more capable, they also gain the capabilities they would need to be deceptive.”

The findings prompted Anthropic to classify Claude Opus 4 under its strictest AI Safety Level 3 protocols — the first time the company has applied such measures to a publicly released model.



Information for this story was found via the sources and companies mentioned. The author has no securities or affiliations related to the organizations discussed. Not a recommendation to buy or sell. Always do additional research and consult a professional before purchasing a security. The author holds no licenses.
