Behind AI's 'domination' of the Nobel Prizes, the depreciation of knowledge has begun

10/11 2024

In the past few days, the Nobel Prizes have been gradually announced, and AI has emerged as the biggest winner.

On October 8th, the Royal Swedish Academy of Sciences announced that the 2024 Nobel Prize in Physics was awarded to American scientist John J. Hopfield and British-Canadian scientist Geoffrey E. Hinton, in recognition of their use of physics tools to develop the foundational methods of today's powerful machine learning technologies.

A day later, the Royal Swedish Academy of Sciences announced that the 2024 Nobel Prize in Chemistry would be awarded to David Baker, Demis Hassabis, and John M. Jumper, in recognition of their contributions to protein design and protein structure prediction using AI.

In simple terms, the Nobel Prize jury awarded the Nobel Prize in Physics for machine learning, and the Nobel Prize in Chemistry for AI-based protein structure prediction and protein design.

Why has AI suddenly won two Nobel Prizes? What trend lies behind AI's ascendancy to the Nobel Prize stage?

/ 01 / AI Wins Two Nobel Prizes in Succession

Let's start with the Nobel Prize in Physics winners, Hopfield and Hinton.

In 1982, Hopfield created associative neural networks, now commonly known as Hopfield networks, which can store and reproduce associative memories of images and other data patterns.

In simple terms, the problem solved by Hopfield networks is how humans perform associative memory, i.e., how to recall an entire memory from just a part of it. For example, hearing someone's name can evoke their appearance in your mind.
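The idea can be made concrete with a toy sketch. Below is a minimal, illustrative Hopfield network (not the exact 1982 formulation): patterns of +1/-1 units are stored via the Hebbian rule, and a corrupted cue settles back into the stored memory. All names and the 8-unit pattern are invented for illustration.

```python
def train(patterns):
    """Hebbian learning: W[i][j] accumulates p[i] * p[j] over stored patterns (zero diagonal)."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, state, steps=10):
    """Asynchronously set each unit to the sign of its weighted input until the state stops changing."""
    n = len(state)
    s = list(state)
    for _ in range(steps):
        changed = False
        for i in range(n):
            h = sum(w[i][j] * s[j] for j in range(n))
            new = 1 if h >= 0 else -1
            if new != s[i]:
                s[i] = new
                changed = True
        if not changed:
            break
    return s

# Store one 8-unit pattern, then recover it from a cue with two flipped bits.
pattern = [1, -1, 1, 1, -1, -1, 1, -1]
w = train([pattern])
cue = list(pattern)
cue[0], cue[3] = -cue[0], -cue[3]   # corrupt two bits
print(recall(w, cue) == pattern)    # → True: the full memory is recovered from a partial/noisy cue
```

The corrupted cue is the "part of a memory" (the name), and the recovered pattern is the whole memory (the face): the network relaxes into the nearest stored attractor.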

As a leader in the field of deep learning, Hinton's greatest contribution lies in the development of a new neural network: the Boltzmann machine.

In our brains, neurons interact with each other, and the decisions of some neurons can influence those of others. Using a metaphor from a user on Zhihu:

Some neuronal decisions are observable, such as someone choosing to watch the animated film "30,000 Miles from Chang'an." But other neuronal decisions are invisible, such as why they watch it: perhaps because they love Tang poetry, because they enjoy animation, or because they want to watch it with someone they care about.

The Boltzmann machine aims to uncover the interactions between these visible and invisible neurons.

The Boltzmann machine greatly accelerated the development of machine learning. In the early days of deep learning in particular, Boltzmann machines were used to pre-train deep neural networks, giving them good initial weights before they tackled more complex learning tasks.
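As a rough illustration of how such a machine learns, here is a toy restricted Boltzmann machine trained with one step of contrastive divergence (CD-1). This is a simplified sketch, not Hinton's full pre-training pipeline (which stacked several such machines and included bias terms, omitted here for brevity); the 6-bit data and all dimensions are made up.

```python
import math
import random

random.seed(0)

N_VIS, N_HID = 6, 3  # visible units (observed data) and hidden units (invisible factors)
W = [[random.gauss(0, 0.1) for _ in range(N_HID)] for _ in range(N_VIS)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def hidden_probs(v):
    """Probability that each hidden unit turns on, given a visible pattern."""
    return [sigmoid(sum(v[i] * W[i][j] for i in range(N_VIS))) for j in range(N_HID)]

def visible_probs(h):
    """Probability of each visible unit, given a hidden state (the 'reconstruction')."""
    return [sigmoid(sum(h[j] * W[i][j] for j in range(N_HID))) for i in range(N_VIS)]

def sample(probs):
    return [1 if random.random() < p else 0 for p in probs]

def cd1_update(v0, lr=0.1):
    """One CD-1 step: push weights toward the data, away from the model's reconstruction."""
    h0 = hidden_probs(v0)
    v1 = visible_probs(sample(h0))
    h1 = hidden_probs(v1)
    for i in range(N_VIS):
        for j in range(N_HID):
            W[i][j] += lr * (v0[i] * h0[j] - v1[i] * h1[j])

data = [[1, 1, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1]]  # two easy-to-separate toy patterns
for _ in range(200):
    for v in data:
        cd1_update(v)
```

After training, the hidden units act as learned features of the visible data; in pre-training, those hidden activations would become the input layer of the next machine in the stack.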

Now, let's turn to the Nobel Prize in Chemistry.

Among the winners of the Nobel Prize in Chemistry, David Baker pioneered methods for designing and predicting the three-dimensional structures of proteins, creating novel proteins and solving medical challenges through innovative software and algorithms.

Demis Hassabis and John M. Jumper, on the other hand, co-created AlphaFold, an AI tool for protein structure analysis, ushering in a new era of protein prediction.

If we compare organisms to completed Lego sets, large molecules like proteins and nucleic acids are like individual Lego bricks. For the past fifty years, understanding the shape of each Lego brick has been the primary task of structural biologists.

But this is no easy feat. Proteins are polymers formed by the connection of 20 different amino acids in a specific sequence, which usually fold into a particular shape. Therefore, to truly understand how proteins function, scientists must accurately grasp their spatial structure.

Protein structure is described at four levels, from simple to complex. The primary structure, the amino-acid sequence itself, is relatively easy to determine with standard experiments such as mass spectrometry. Secondary and higher-order structures, however, have traditionally required techniques like X-ray crystallography, nuclear magnetic resonance (NMR), electrophoresis, and cryo-electron microscopy (cryo-EM).

These methods are time-consuming, labor-intensive, and costly. Electrophoresis, for instance, yields only indirect measurements and is easily distorted by experimental conditions, complicating structural analysis. Cryo-EM offers high-resolution imaging but is extremely expensive, at roughly 100 million yuan per machine; as of this year, China has only just over 60 of them.

The remarkable aspect of AlphaFold lies in its ability to predict higher-order protein structures quickly and accurately through deep learning models, significantly enhancing the efficiency of protein research.

In 2021, AlphaFold predicted 350,000 protein structures, covering 98.5% of human proteins, and released them in the AlphaFold-EBI database. By 2022, the database had grown past 200 million structures, encompassing nearly every protein known to science.

In essence, AlphaFold has virtually single-handedly revolutionized the prediction of protein structures, a crucial step in unlocking the secrets of human life.

/ 02 / The End of Knowledge is AI

While the awarding of the Nobel Prize in Physics to machine learning has been controversial, it is an undeniable fact that AI has infiltrated virtually all disciplines and exerted a significant impact.

The reason is simple: AI learns much more efficiently than humans. For a long time, Hinton believed that human intelligence surpassed that of AI. However, in recent years, his views have shifted as he has observed that AI excels in knowledge dissemination efficiency, learning mechanisms, and energy efficiency.

In terms of knowledge dissemination, when one AI agent acquires a piece of knowledge, all other AI agents can immediately learn it. In contrast, humans can only learn by observing and imitating teachers, a process that is time-consuming and inefficient.

Regarding learning mechanisms, the human brain has on the order of 100 trillion synaptic connections, while GPT models have at most around one trillion parameters, far fewer. Nevertheless, GPT-3, with 175 billion parameters, has absorbed a vast swath of recorded human knowledge and can engage in abstract reasoning.

This suggests that AI packs knowledge into its trillion connections far more efficiently. In other words, AI may have discovered learning methods superior to those of humans.

In the face of AI's formidable learning capabilities, knowledge is rapidly depreciating. Vinod Khosla, an early investor in OpenAI, has predicted that AI will eventually make virtually all professional expertise freely available.

Nick Bostrom, a professor at the University of Oxford, holds a similar, if more extreme, view. He believes that undergraduate and doctoral degrees will depreciate rapidly, and that the traditional 20-30 year investment in human capital centered on knowledge transfer will yield no return.

At the same time, however, interdisciplinary skills may become even more valuable: researchers who can bring computational tools and theories from other fields to bear on problems in physics, chemistry, materials science, biology, and medicine will be well positioned.

In other words, those who embrace AI are likely to work more effectively, make significant discoveries, and even compete for Nobel Prizes across fields, while those who resist it fall behind.

One day, it may even be possible for someone using GPT-X to write an article to win the Nobel Prize in Literature.

Written by Lin Bai

Disclaimer: copyright in this article belongs to the original author. It is reprinted here solely to share information more widely. If the author's information is listed incorrectly, please contact us promptly so we can correct or remove it. Thank you.