"The Reverse Exodus: Why Can't OpenAI Retain Its Top Talents?"

08/08/2024

Editor: Baishi

"Sam Altman once candidly shared with The New Yorker, 'Elon Musk is desperate to save the world, but only if he can be the hero who saves it.'"

Regarding Altman himself, I have an interesting analogy: He doesn't care if the world ends, but he must be on the Noah's Ark, and preferably as the captain.

Unsurprisingly, OpenAI has once again experienced a personnel shakeup.

"ChatGPT's architect" and co-founder John Schulman officially joined Anthropic; Peter Deng, the newly appointed VP of Product, also announced his departure; shockingly, Altman's staunch ally, Greg Brockman, OpenAI's President and co-founder, is rumored to have left as well.

If previous exoduses were Altman's way of purging dissidents, then Greg Brockman's "extended leave" (with no response from Altman) might signal the beginning of the new king's isolation.

At this point, only two of OpenAI's original 11 founders remain: Sam Altman and Wojciech Zaremba, the head of the Language and Code Generation department and protégé of Yann LeCun. Jimmy Apples, a prominent OpenAI whistleblower, tweeted: "The top brass is now a hollow shell, bring on the new blood!"

Meanwhile, Elon Musk, one of OpenAI's founders and Altman's former "mentor," has once again sued Altman and Brockman, citing "fraud and extortion" during OpenAI's inception.

Let's delve into the history of "OpenAI's betrayal" and explore why Sam Altman struggles to retain his top talents.

"Pre-coronation" Prelude: It's About Security, But Not Entirely There was once the PayPal Mafia, and now there are the OpenAI traitors. Statistics show that nearly 75 key employees have left OpenAI to found roughly 30 AI companies.

Dario and Daniela Amodei, siblings and former VP of Research and VP of Security & Policy respectively, founded Anthropic, valued at $18 billion;

Ilya Sutskever, former Chief Scientist, founded SSI, valued at $10 billion;

David Luan, former VP of Engineering, founded Adept AI (acquired by Amazon), valued at over $1 billion;

Jonas Schneider, former Head of Technology, founded Daedalus, a robotics startup, valued at $40 million;

Aravind Srinivas, former Research Scientist, founded Perplexity.AI, valued at $3 billion;

Tim Shi, a former technologist, founded Cresta AI, an AI customer service platform, valued at $1.6 billion;

Among them, Anthropic has emerged as OpenAI's primary competitor and a haven for former employees, while Perplexity.AI challenges Google's search engine capabilities.

For average researchers, leaving is about pursuing better opportunities. But for core members, especially those in the founding team, it's often due to fundamental disagreements. Typical examples include Elon Musk, the Amodei siblings, and Ilya Sutskever. Their clashes with Altman only solidified Altman's position at OpenAI.

Step 1: Overthrowing Elon Musk's "Tyranny"

In 2014, Google acquired DeepMind. Musk had teamed up with Luke Nosek, PayPal co-founder and Founders Fund creator, on a rival bid, which ultimately failed; the loss became a lingering regret for Musk.

Against this backdrop, in 2015 a worried Musk attended a dinner that went down in Silicon Valley history. Among the ten notable attendees, three stood out: Altman, Ilya Sutskever, and Greg Brockman. They discussed the potential AI apocalypse and what a project capable of rivaling Google would require. The four believed they had all the ingredients for success: Ilya Sutskever, Hinton's protégé and an AI scientist; Greg Brockman, Stripe's CTO and an operations expert; Sam Altman, YC's president, who could coordinate everything; and Elon Musk, Tesla's founder, with the funds. At the dinner, Musk pledged $1 billion and suggested naming the project OpenAI: a nonprofit focused on developing safe AI for humanity's benefit, not profit.

In 2017, Google published the seminal Transformer paper, revealing that the key to progress was processing vast amounts of data, which requires immense computational power (a realization Ilya Sutskever had reached early on at OpenAI). OpenAI began to run out of money (Musk had donated $44 million and covered the rent). Brockman and other OpenAI members proposed converting the organization into a for-profit entity so it could raise funds from investors like Microsoft. Musk initially opposed the idea but later demanded majority ownership, initial board control, and the CEO role, even suggesting merging OpenAI into Tesla. No one agreed, so Musk lobbied OpenAI researchers to join Tesla instead. Finally, Musk was voted off the board for his antics. Before leaving, he declared that OpenAI had a 0% chance of defeating DeepMind and Google.

However, insiders close to Altman claim that Musk was merely jealous of Altman's AI spotlight and cared more about defeating OpenAI than about AI safety. Musk's allies insist his concerns about AI safety were genuine, pointing to his founding of xAI as an alternative to OpenAI.
Regardless, ousting the "dictator" Musk was Altman's first step to the throne.

Step 2: Emerging from the Chrysalis to Maximize Profits

In 2019, OpenAI received $1 billion from Microsoft to continue developing "good" AI. With $1 billion comes the obligation to reward your benefactors, which raised doubts among some veterans.

Altman was flexible: he clung not to the nonprofit's substance but to its shell. He innovatively crafted a new structure in which OpenAI operates like a regular company, raising funds and granting equity, but investor returns are capped. Essentially, OpenAI became a for-profit company controlled by a nonprofit board. This unstable setup led to internal fractures.

In 2021, Dario Amodei, founder of Anthropic, said: "A group within OpenAI, after creating GPT-2 and GPT-3, strongly believed two things: First, the more compute thrown at these models, the better they'd get, with no apparent ceiling. This view is now widely accepted, but we were early believers. Second, something beyond scale was needed—alignment or safety. Scaling compute alone doesn't impart values. So, we founded our company with this in mind."

While Anthropic appears safer and prioritizes accuracy, it too must reward Amazon, making purely ethical AI operations nearly impossible. Recently, Anthropic was accused of scraping a website millions of times within 24 hours. "Question it, understand it, become it" may be the inevitable path for large AI startups.

Step 3: Purging Traitors and Ascending the Throne

Like the Yellow Robe Ceremony in ancient China, in which Emperor Taizu of Song was draped in a yellow robe by his own troops, OpenAI witnessed a Silicon Valley version last November. Hundreds of OpenAI employees signed a petition demanding the resignation of the "rebellious" board members and the restoration of Altman's position; otherwise, they would join Altman and Brockman's new Microsoft subsidiary. Such prestige and influence are every CEO's dream.

But there is another version of this story, in which money was a crucial factor. Shortly before the coup, OpenAI had organized a stock sale allowing employees to cash out. Before they got their money, their boss was ousted, and some investors threatened to halt the tender offer if Altman did not return. Losing the chance to retire early was infuriating. Signing the petition was also the popular move: with 95% of colleagues signing, it was hard to resist.

Ilya Sutskever's failed coup and subsequent departure stemmed from his poor grasp of human nature and from Altman's mastery of power politics.

"The New King's" Silhouette: Autocratic, Deceptive, Profit-Driven, Negligent

Back in 2016, OpenAI's office was Brockman's apartment: sofas, cabinets, even beds served as workstations. This humble space gathered 20 top AI minds. Altman and Musk were infrequent visitors; Brockman and Ilya Sutskever held the fort.

Ilya Sutskever was the AI visionary, while Brockman was OpenAI's operational backbone. Employees recall walks with Sutskever through San Francisco, discussing grand ideas and self-doubt about research directions. Sutskever had prescient insights into AI and explained complex concepts with simple analogies, such as comparing neural networks to computer programs or circuits. Early on, he recognized that AI's leaps forward came from accumulating data, not from specific tweaks or inventions. When Google's Transformer paper appeared in 2017, Sutskever led OpenAI in exploring and adopting the architecture, making the lab an early adopter.

Brockman's diligence was legendary. A former employee recalls seeing him at his computer first thing every morning and last thing every night. His wedding was held at the OpenAI office, officiated by Sutskever, and became part of company lore. Both were financially secure when they joined OpenAI, driven by a shared dream of using AI for humanity's future. They have since parted ways, though their paths may converge again.

First Crime: Autocracy

Many saw November's coup as Altman's misfortune, but it wasn't. The story starts with Anthropic. Altman and board member Helen Toner had long-standing tensions. In October, Toner co-authored a paper praising Anthropic's approach to safety and mildly criticizing ChatGPT's shortcuts. Mild as the criticism was, Altman was furious and secretly lobbied board members to oust Toner, falsely claiming that one of her supporters wanted her gone. As we know, Altman was the one voted out instead.

Second Crime: Deception

The board wanted Altman gone because they no longer trusted him. In July, OpenAI had formed a superalignment team led by Ilya Sutskever and Jan Leike to research AI safety. However, OpenAI prioritized launching "shiny products" over AGI safety.
The 20% of compute promised to the superalignment team was constantly whittled away, eroding trust that the company was serious about building a responsible AGI. Interestingly, John Schulman's departure to Anthropic was also alignment-related: "I hope to deepen my focus on AI alignment and embark on a new chapter in my career, returning to hands-on technical work." In late July, OpenAI reassigned Aleksander Madry, Senior Director of Security, to AI Inference, while claiming he would still work on security. Altman insisted OpenAI had upheld its pledge of "at least 20% of compute for the entire security team," subtly substituting "entire security team" for "superalignment team."

Third Crime: Profiteering

Taking the deception a step further: rumors suggest last year's board ouster was also related to Altman's hidden investments. In June, the WSJ revealed that Altman has invested in over 400 companies, holding shares worth at least $2.8 billion. His secret investment empire benefits from OpenAI's success.

On the surface, as CEO of OpenAI, Altman earns only $65,000 annually and does not hold any shares, as he does not want money to corrupt the secure development of AI.

In fact, in Altman's investment empire, an increasing number of companies are directly doing business with OpenAI, either as OpenAI customers or major business partners.

For example, OpenAI is in talks with nuclear energy startup Helion to purchase significant amounts of power for its data centers. The boss behind the scenes is Altman.

Another example is OpenAI's intention to pay to use Reddit content to train ChatGPT. Altman and entities he controls hold a 7.6% stake in Reddit. Even back in 2014, Altman briefly served as Reddit's CEO. After news of the partnership emerged, Reddit's stock price, which had been plummeting, surged 10%, resulting in a $69 million increase in Altman's shareholding, bringing his total holdings to $754 million.

Fourth Crime: Negligence

Is Altman negligent about AI safety partly because he is overly optimistic?

If you knew the real answer, you might feel a mix of emotions.

In 2015, Altman clearly expressed his concern that advanced AI poses the greatest threat to human survival. He even mentioned in early interviews, with a flippant tone, that AI could potentially end the world.

Speaking to a New Yorker reporter, Altman even revealed that he had stockpiled guns, gold, ammunition, antibiotics, and gas masks from the Israel Defense Forces. He also mentioned owning a plot of land in Big Sur, California, to ensure a refuge in case of an apocalypse.

However, when asked about this again by the media, Altman changed his tune: "Isn't that every little boy's dream of a secret base?"

His sister Anne said this fits her understanding of her brother: someone who is extremely safety-conscious and hoards resources in preparation for the worst case. It can be inferred that Altman does indeed have sufficient stockpiles, but he knows when to keep quiet.

It is said that Altman's ambitions continue to expand. For example, he is researching universal basic income to prepare for mass unemployment caused by AI. He has also launched the Worldcoin cryptocurrency project, which distributes income through eye scanning.

People who know Altman reveal that his goals extend far beyond this. He even aspires to become the "King of the World" and take over the entire globe. Indeed, Altman has considered running for Governor of California. Altman's former mentor Paul Graham said, "I think his goal is to control the entire future!"
