OpenAI Board Found Out About the Launch of ChatGPT on Twitter

Former OpenAI board member Helen Toner said on a recent TED AI podcast that the board was not informed in advance about the company’s launch of its chatbot, ChatGPT, in November 2022. Toner and other board members instead discovered the launch on Twitter, one of the grievances that ultimately led to the firing of Chief Executive Officer Sam Altman in November 2023.

We all know how that went: the high-speed drama that started with Altman’s sudden dismissal saw nearly all of OpenAI’s employees threaten to quit, and ended with his reinstatement and the departure of Toner and other directors from the board, all in a matter of days.

The reasons behind Altman’s firing have been a topic of speculation in Silicon Valley, with the board initially stating that Altman had not been “consistently candid” in his interactions with directors. Toner also pointed to problems with how he handled safety.

“On multiple occasions, he gave us inaccurate information about the formal safety processes that the company did have in place,” she said, “meaning that it was basically impossible for the board to know how well those safety processes were working or what might need to change.”

She also revealed that Altman did not disclose his involvement with OpenAI’s startup fund. OpenAI’s for-profit subsidiary was built to raise the capital needed to support the non-profit, which had, as Toner and fellow former director Tasha McCauley wrote in an op-ed in The Economist last week, “a laudable mission: to ensure that AGI, or artificial general intelligence—AI systems that are generally smarter than humans—would benefit ‘all of humanity.’”

“The stated purpose of this unusual structure,” they wrote, referring to the non-profit and its for-profit subsidiary, “was to protect the company’s ability to stick to its original mission, and the board’s mandate was to uphold that mission. It was unprecedented, but it seemed worth trying. Unfortunately it didn’t work.”

Toner and McCauley wrote that ousting Altman in November was “an effort to salvage this self-regulatory structure.” The two believe that “self-governance cannot reliably withstand the pressure of profit incentives,” and that “for the rise of AI to benefit everyone, governments must begin building effective regulatory frameworks now.”

OpenAI’s current board chair, Bret Taylor, said in a statement to the podcast that the company is “disappointed that Ms. Toner continues to revisit these issues,” adding that an independent review of the short-lived ousting “concluded that the prior board’s decision was not based on concerns regarding product safety or security, the pace of development, OpenAI’s finances, or its statements to investors, customers, or business partners.”

