Sam Altman, CEO of the artificial intelligence startup OpenAI, has always wanted to save the world. Or so a 2016 profile of him in The New Yorker says.
The profile, written by Tad Friend, spans several days and quotes numerous people, including Peter Thiel, to paint a picture of what Jill Filipovic describes as a “young genius who checks all the Silicon Valley boxes,” meaning: white guy, prepper (more on that later), dropped out of Stanford, and most importantly, extremely confident he is changing the world for the better.
While Filipovic uses that specific description for present-day Altman after the high-speed drama at OpenAI last week, Altman was already that guy in 2016, during his time at the helm of the startup accelerator Y Combinator and very early in the life of OpenAI (when Elon Musk was still involved as a co-founder).
The only difference is that today, Altman (ah, the nominative determinism), as the leader of the world’s foremost AI company, appears to hold the future of humanity in his hands. (Meanwhile, Musk is trying his damnedest to prove he’s not antisemitic.)
Reuters reports that prior to Altman’s abrupt ousting from OpenAI last week, some of the company’s researchers wrote a letter to the board warning of a powerful AI discovery that they said “could threaten humanity.” That discovery and the letter, which Reuters points out had been previously unreported, were the “key developments” that led the board to unceremoniously kick Altman out.
The news outlet, which cites two people familiar with the matter and says it did not see the letter, claims the letter was one item on a longer list of grievances the board had with Altman, among them concerns “over commercializing advances before understanding the consequences.”
“Some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup’s search for what’s known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks,” Reuters reported.
The board has been dissolved and reconstituted, Altman has been reinstated as CEO, and OpenAI staff have declined to comment on Q*. But Reuters notes that the project’s researchers are very optimistic about its success.
“Oh, and one odd one—I prep for survival,” Friend quotes Altman as saying of his hobbies.
“My problem is that when my friends get drunk they talk about the ways the world will end,” Altman explains. “After a Dutch lab modified the H5N1 bird-flu virus, five years ago, making it super contagious, the chance of a lethal synthetic virus being released in the next twenty years became, well, nonzero. The other most popular scenarios would be AI that attacks us and nations fighting with nukes over scarce resources.” (The Silicon Valley bros saw Covid-19 coming, and the world fighting over resources.)
“I try not to think about it too much,” Altman said. “But I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to.”
Information for this story was found via The New Yorker, Jill Filipovic, Reuters, and the sources and companies mentioned. The author has no securities or affiliations related to the organizations discussed. Not a recommendation to buy or sell. Always do additional research and consult a professional before purchasing a security. The author holds no licenses.