Before AI Can Exterminate Humanity, OpenAI Is Already Under Fire

06/12 2024

Author | Sun Pengyue

Editor | Da Feng

The controversy surrounding OpenAI continues, even rising to the historical level of human survival.

Just last week, 13 current and former employees of the leading AI companies OpenAI and Google DeepMind (7 former OpenAI employees, 4 current OpenAI employees, 1 current DeepMind employee, and 1 former DeepMind employee) publicly issued an open letter titled "A Right to Warn about Advanced Artificial Intelligence."

In the letter, they warned in stark terms of "the risk of human extinction posed by artificial intelligence."

Sam Altman, who wields absolute power

This open letter has since caused an earthquake in the AI world.

It even won the signatures of two 2018 Turing Award recipients, Yoshua Bengio and Geoffrey Hinton.

In the letter, the OpenAI and Google employees made an alarming claim:

"AI companies are aware of the real risks posed by the artificial intelligence technologies they are building, but because they are not required to disclose much to governments, the true capabilities of their systems remain a 'secret.' Without proper regulation, AI systems could become powerful enough to cause serious harm, even 'human extinction.'"

According to the letter, today's AI companies understand the severity of these safety risks, but senior management is divided, even fractured: most believe profits come first, while a minority insist that safety matters more.

It must be said that the open letter points straight at OpenAI's internal problems, even alluding to last year's "OpenAI coup crisis" and the downfall of Chief Scientist Ilya Sutskever.

In November last year, OpenAI went through a boardroom coup: Chief Scientist Ilya Sutskever joined forces with the board of directors to oust CEO Sam Altman and President Greg Brockman.

This palace intrigue drew worldwide attention and ultimately pulled in Microsoft, the financial backer behind OpenAI. Ilya Sutskever's coup failed, and Sam Altman returned to his post.

It is understood that the biggest conflict between the two sides was over AI safety.

Ilya Sutskever

Sam Altman accelerated OpenAI's commercialization, launching the GPT Store and allowing users to create custom GPT bots, even opening the door for "gray-area" bots to enter ChatGPT, with no certainty about how severe the safety consequences might be.

Ilya Sutskever, who led AI safety oversight at OpenAI, insisted that safety, not commercialization, was OpenAI's core mission, and was deeply dissatisfied with Sam Altman's aggressive push to commercialize.

After repeated high-level meetings and disputes throughout 2023, the conflict finally erupted into the coup.

After the coup failed, Sam Altman turned OpenAI entirely into a "one-man show," dismissing several dissenting directors and removing Ilya Sutskever after keeping him out of public view for six months.

A partner of ten years who had co-founded OpenAI was thus forced out, but Sam Altman did not stop there: he treated the safety department as "rebels" and purged them one by one.

The Superalignment team, which Ilya Sutskever led, had been created to prepare for the emergence of superintelligent AI capable of outsmarting and overpowering its creators. After his departure, the team was disbanded outright, and several of its researchers resigned.

With that, the OpenAI risk-control team responsible for guarding against superintelligent AI was gone.
