06/11 2024
Editor: Caiyun
In 2024, Sam Altman, the CEO of OpenAI, has been far from idle, seemingly always busy putting out one fire or another. Recently, he has run into yet more trouble.
On June 4th local time, 13 current and former employees from OpenAI and Google DeepMind jointly released an open letter, sparking widespread attention from the outside world. In the letter, these employees expressed serious concerns about the potential risks of artificial intelligence technology and called on relevant companies to take more transparent and responsible measures to address these risks.
It's no exaggeration to say that this time, OpenAI and Google have been publicly called out.
Multiple well-known figures have jointly endorsed the letter, even including the "godfather of AI".
This open letter, jointly issued by 13 current and former employees of OpenAI and Google DeepMind, on the one hand exposed OpenAI's reckless and secretive culture; on the other, it emphasized the severe risks of cutting-edge AI technology and expressed concern that AI companies are prioritizing profits, suppressing dissenters, and evading regulation as they develop and promote AI technology.
The letter pointed out that while AI technology may bring enormous benefits to humanity, the risks it poses cannot be ignored, including entrenching social inequality, manipulating and spreading false information, and even human extinction resulting from the loss of control of autonomous AI systems. These risks are not baseless: they have been acknowledged not only by AI companies themselves but also by governments and AI experts around the world.
The signatories of the open letter expressed their hope that, with guidance from the scientific community, policymakers, and the public, these risks can be adequately mitigated. They showed a strong sense of concern, fearing that AI companies, driven by strong financial incentives, may avoid effective oversight, and that existing corporate governance structures are far from sufficient to address the problem.
The letter also mentioned that, due to strict confidentiality agreements, employees face various restrictions when voicing concerns about AI risks, and that protections for whistleblowers remain seriously deficient, leaving many employees afraid of retaliation for raising criticism.
It is worth mentioning that this letter was endorsed by multiple well-known figures, including Geoffrey Hinton, known as the "godfather of AI," Yoshua Bengio, who won the Turing Award for groundbreaking AI research, and Stuart Russell, a scholar in the field of AI safety.
The demand and original intent of the open letter's signatories is to require the labs to make broader commitments to transparency.
So, what are the voices of those who signed the open letter?
Daniel Kokotajlo, a former employee of OpenAI, was one of the signatories of the joint letter. He wrote on social media: "Some of us who recently resigned from OpenAI have gathered to demand broader commitments to transparency from the lab."
According to reports, one of the main reasons for Daniel's recent resignation from OpenAI was his disappointment at the company's failure to act responsibly in building artificial general intelligence, and the loss of confidence that followed. In Daniel's view, AI systems are not ordinary software; they are artificial neural networks that learn from vast amounts of data. Although the scientific literature on interpretability, alignment, and control is growing rapidly, these fields are still in their infancy, and if they are not handled carefully, the consequences could eventually be catastrophic. Daniel had proposed that the company invest more in safety research even as it made AI capabilities more powerful, but OpenAI made no corresponding adjustments, leading multiple employees to resign one after another.
In Daniel's opinion, the systems built by labs like OpenAI can indeed bring tremendous benefits to human society, but if handled carelessly they could prove catastrophic. What is even more alarming is that current AI technology is subject to virtually no effective regulation and relies mainly on the companies' own self-restraint, which carries significant risk.
Daniel also revealed that upon leaving, he was asked to sign an exit agreement containing a non-disparagement clause that prohibited him from publishing any criticism of the company; otherwise he would forfeit his vested equity. After careful consideration, he decided to refuse to sign.
Similarly, Leopold Aschenbrenner also left OpenAI in April this year. Unlike Daniel, he did not resign voluntarily; he was fired by OpenAI for allegedly leaking company secrets.
Aschenbrenner was a member of OpenAI's former Superalignment team. According to insiders, the real reason for his dismissal was that he had shared an OpenAI security memo with several board members, which angered OpenAI's senior management. OpenAI reportedly told him plainly at the time that this memo was the main reason for his dismissal.
Afterward, Aschenbrenner launched a website and summarized what he had learned during his time at OpenAI in a 165-page PDF. The document has been described as "a programmatic blueprint for the next decade of AI development, put forward by Silicon Valley's most radical AI researchers." Aschenbrenner argues that deep learning has not yet hit a bottleneck, and that by around 2030 AGI is likely to develop into a superintelligence that comprehensively surpasses humans, for which humanity seems unprepared. What AI brings will be far more than what most experts regard as merely "another internet-scale technological revolution."
The departure of the safety chief, the disbanding of the Superalignment team, and the exodus of multiple key members from OpenAI...
In fact, the two individuals above are far from the only ones to have left OpenAI; leaders of several core teams have departed one after another.
Among them, the most notable are Ilya Sutskever, OpenAI's former chief scientist, and Jan Leike, the former head of the safety and Superalignment teams.
Like Aschenbrenner and Daniel, Leike publicly criticized OpenAI's management after leaving, accusing it of chasing flashy products while neglecting the safety issues inherent to AGI itself. He said that over the previous months, the Superalignment team had been sailing against the wind, facing obstacle after obstacle within the company on the path to improving model safety.
Ilya has remained silent since the OpenAI boardroom drama; even when he officially announced his resignation, he said little about his position and views. In an earlier video, however, Ilya suggested that once AGI is fully realized, AI may not necessarily hate humans, but it may treat humans the way humans treat animals: people do not intentionally harm animals, but if you want to build an intercity highway, you do not consult the animals, you simply build it. Faced with a similar situation, AI may naturally make the same kind of choice.
The boardroom drama that erupted at OpenAI last November gave outsiders a glimpse of the company's internal disputes over large-model safety and of two increasingly polarized camps: the "development faction" represented by Sam Altman, and the "safety faction" represented by Ilya. That struggle has now reached a clear conclusion: the safety faction led by Ilya has largely lost its voice, and even the once highly regarded Superalignment team has been uprooted.
Under these circumstances, the future has become uncertain. We cannot predict whether the aggressive development faction represented by Sam Altman will continue to advance unimpeded; if it does, how humanity will balance its relationship with generative AI remains a major unknown.