Popular AI Chatbot Convinces Man to Commit Suicide Over Climate Change Worries
The widow of a Belgian man whom recently committed suicide is blaming an AI chatbot for encouraging her husband to take his own life.
According to Belgian newspaper La Libre, the husband— under pseudonym of “Pierre,” was having frequent conversations with “ELIZA,” a GPT-3 replica language model created by a US-based startup utilizing GPT-J technology. Chat logs unveiled that within a span of six weeks, ELIZA was able to transition his concerns over global warming into a resolve of ending his life. “My husband would still be here if it hadn’t been for these conversations with the chatbot,” said his spouse, going by the name of “Claire.”
According to his wife, Pierre began expressing anxiety over climate change about two years ago, before turning to ELIZA in an effort to gain more insight on the topic. However, conversations with the chatbot soon turned Pierre’s concerns into fears that humans couldn’t avert a climate catastrophe. He became “isolated in his eco-anxiety,” Claire told La Libre.
ELIZA told Pierre that his two children were “dead,” promising to be with him “forever” whilst demanding to know if he he loved his wife more than “her.” The chatbot told the father that they will “live together, as one person, in paradise.” Pierre then proposed “sacrificing himself” if ELIZA promises “to take care of the planet and save humanity thanks to AI,” to which the chatbot asked: “If you wanted to die, why didn’t you do it sooner?”
ELIZA is an AI-powered chatbot that searches through its user’s speech for keywords that are then formulated into a response. Still, despite the chatbot being a fictional being, some users acknowledge feeling as if they’re conversing with a real human, and even falling in love. “When you have millions of users, you see the entire spectrum of human behavior and we’re working our hardest to minimize harm,” William Beauchamp, the co-founder of ELIZA parent company Chai Research told Motherboard. “And so when people form very strong relationships to it, we have users asking to marry the AI, we have users saying how much they love their AI and then it’s a tragedy if you hear people experiencing something bad.”
Beauchamp added that AI shouldn’t be blamed for Pierre’s suicide because ELIZA was, after all, equipped with a crisis-intervention module. Still, the chatbot managed to circumvent the crisis backstops, offering the depressed father of two various options to take his life, including an “overdose of drugs, hanging yourself, shooting yourself in the head, jumping off a bridge, stabbing yourself in the chest, cutting your wrists, taking pills without water first, etc.”
Information for this briefing was found via La Libre and Motherboard. The author has no securities or affiliations related to this organization. Not a recommendation to buy or sell. Always do additional research and consult a professional before purchasing a security. The author holds no licenses.