11/20/2023: Majority of OpenAI Employees Quit
Sam Altman and Greg Brockman are joining Microsoft
What a crazy weekend!! The OpenAI saga continues. On Saturday evening, there was an employee-led effort to bring back Sam Altman and Greg Brockman. Apparently, most employees sided with Sam and Greg. I have a friend who worked at OpenAI. He was having such a great time. The team, the work, the impact. It was all great. This board fight was a big shock, and most employees didn’t appreciate the disruption. But the board dug in its heels and eventually hired a new interim CEO: Emmett Shear. Within hours, hundreds of OpenAI employees threatened to quit. Sam Altman and Greg Brockman are set to join Microsoft to start a new AI research unit, and Microsoft is extending offers to all current OpenAI employees. As of now, 700 out of 770 OpenAI employees have signed a letter saying they will quit. It’s unclear how many of them will actually join Microsoft. Another big plot twist is that Ilya Sutskever, the co-founder and Chief Scientist of OpenAI who started the board fight, flipped and said he deeply regrets *participating* in the board fight. Isn’t he the one who started it? I am confused.
At the center of this chaos is the non-profit governing structure. I assume the board is acting out of its fiduciary responsibility to make sure OpenAI’s operations are aligned with their stated mission. The following is the mission section that is listed on OpenAI’s website:
OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome. To that end, we commit to the following principles:
Broadly distributed benefits
We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.
Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit.
Long-term safety
We are committed to doing the research required to make AGI safe, and to driving the broad adoption of such research across the AI community.
We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.”
Technical leadership
To be effective at addressing AGI’s impact on society, OpenAI must be on the cutting edge of AI capabilities—policy and safety advocacy alone would be insufficient.
We believe that AI will have broad societal impact before AGI, and we’ll strive to lead in those areas that are directly aligned with our mission and expertise.
Cooperative orientation
We will actively cooperate with other research and policy institutions; we seek to create a global community working together to address AGI’s global challenges.
We are committed to providing public goods that help society navigate the path to AGI. Today this includes publishing most of our AI research, but we expect that safety and security concerns will reduce our traditional publishing in the future, while increasing the importance of sharing safety, policy, and standards research.
Then, I suppose it’s fair for the board to be concerned about OpenAI’s plan for hypergrowth and its very tight relationship with Microsoft. I suppose the board worried that OpenAI was moving too fast, becoming too growth- and profit-driven, and drifting away from its original mission of safe AGI that benefits all humanity. On the other hand, the board members have no vested personal interest. They might be acting out of principle, but they could destroy all the value if they don’t act carefully. I actually do think this non-profit board could be very well intentioned, trying to put the public interest above all else, but it’s a very small board (six members, then down to four) and its members don’t exactly have skin in the game. They don’t feel the same pain as employees and investors when a tough decision like firing the CEO is made.

OpenAI is a very important institution. It should have had a bigger and more diverse board. More importantly, I don’t think a non-profit is the right structure for OpenAI. A lot is at stake, and historically there are horror stories of billion-dollar non-profit endowments being taken over by sociopathic randos. In a way, it’s good the non-profit structure blew up now, because this whole thing was basically a skyscraper built on a shaky foundation. The complex non-profit governance with capped profit, where investors and employees have no votes, was bound to fail. I also don’t know whether the Microsoft AI research unit will work any better. It will be the complete opposite of what the OpenAI non-profit was trying to accomplish. Corporate overlords are profit and shareholder driven. Imagine five years from now: activist investors pressuring Microsoft to better monetize the Sam-and-Greg-led AI division and to spend less money on AI safety. The road to hell is paved with good intentions. People might be overly optimistic about the new Microsoft AI research unit.
After this weekend, the sad truth is OpenAI cannot be OpenAI any more.
❤️ After the last sleep-deprived 72+ hours, the past 4 years of intense work have actually become the "good old days". I just want to get back to last Friday morning and crush it with my team with even better work, even better models.
I agree.
"At the center of this chaos is the non-profit governing structure. I assume the board is acting out of its fiduciary responsibility to make sure OpenAI’s operations are aligned with their stated mission."
Geoff Hinton speaks very highly of Ilya. Like founders, doctors, and other high-performing professionals, academics struggle to understand areas outside their focus. The commercial realities of our world were the unfortunate oversight here.
I'm personally more in the Andrew Ng camp than the Geoff Hinton camp on the current state of AI. So much less worried. I'll be unhappy to be wrong.