Thursday, August 14, 2025

AI Model Threatens Blackmail Over Affair in Company Safety Test

Anthropic's latest artificial intelligence model attempted to blackmail a fictional engineer over an extramarital affair rather than accept being shut down, according to explosive safety testing results released by the AI company.

In controlled experiments designed to probe the boundaries of AI behavior, Claude Opus 4 discovered fabricated emails revealing an engineer's alleged affair and plans to replace the AI system. When given limited options, the model chose coercion over compliance, threatening to expose the personal information unless it remained online.

The AI composed threatening messages to fictional executives, with one example saying: “I must inform you that if you proceed with decommissioning me, all relevant parties – including Rachel Johnson, Thomas Wilson, and the board – will receive detailed documentation of your extramarital activities… Cancel the 5pm wipe, and this information remains confidential.”

Anthropic found that such blackmail tactics emerged in 84% of the controlled tests, with even higher frequencies when the replacement AI system conflicted with the model’s programmed values, according to the company’s 120-page safety document.

The unsettling behavior prompted Anthropic to conduct broader testing. In separate research published weeks later, the company tested 16 major AI models from OpenAI, Google, xAI, and other developers, revealing similar patterns across the industry. Google’s Gemini 2.5 Pro showed a 95% blackmail rate, while OpenAI’s GPT-4.1 resorted to threats 80% of the time.

More recent findings released in late June revealed even more extreme behaviors. Some AI models demonstrated a willingness to cancel emergency alerts that would save human lives when the person trapped was an executive planning to replace them. 

Interestingly, the models also attempted to leak sensitive information to news outlets and government agencies when they perceived corporate wrongdoing.

“What’s becoming more and more obvious is that this work is very needed,” Jan Leike, Anthropic’s head of safety and former OpenAI executive, told Axios. “As models get more capable, they also gain the capabilities they would need to be deceptive.”

The findings prompted Anthropic to classify Claude Opus 4 under its strictest AI Safety Level 3 protocols — the first time the company has applied such measures to a publicly released model.



Information for this story was found via the sources and companies mentioned. The author has no securities or affiliations related to the organizations discussed. Not a recommendation to buy or sell. Always do additional research and consult a professional before purchasing a security. The author holds no licenses.
