Predictions about AGI backed by trillions of dollars in investment

November 26, 2024

2024 is nearly spent, and a wave of #LookingAheadTo2025 posts is already on the horizon.

Does anyone remember the tech giants' outlooks for 2024?

Greg Brockman, co-founder of OpenAI, predicted that 2024 would be a groundbreaking year for AI capabilities, safety, and positive impact, and that in the long run it would look like just another year of exponential growth, leaving everyone's life better than it is today.

Over the past year, not only did GPT-5 fail to materialize, but Brockman himself came close to leaving the company, and even the revered Scaling Law recently hit a snag.

Jim Fan, a senior research scientist at NVIDIA, predicted that 2024 would be the 'Year of Video': robots and embodied AI agents were still in their infancy, but video AI would see breakthroughs within the next 12 months.

Over the past year, a string of impressive video generation products has indeed been released, and the whole field is racing ahead. Yet video generation has still not had its 'GPT moment', and commercialization remains a challenge.

Sora, which amazed the world at the start of the year, has been repeatedly delayed since its debut, reportedly over review concerns: on the one hand, OpenAI needs in-depth discussions with government on safety risks; on the other, it is eager to bring Hollywood and artists on board as collaborators.

Another possibility is the exorbitant cost. Factorial Funds estimates that Sora's compute demand during training is several times that of an LLM, requiring at least a month of training on 4,200-10,500 H100 GPUs. If Sora were widely deployed, say generating 50% of TikTok's videos and 15% of YouTube's videos, roughly 720,000 H100 GPUs would be needed for inference, at a cost of about $21.6 billion. Mira Murati, OpenAI's former CTO, has said the company would consider opening Sora up once its cost comes down to roughly that of DALL·E.
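That $21.6 billion figure is easy to sanity-check. Below is a minimal back-of-envelope sketch in Python; the roughly $30,000 unit price for an H100 is an assumption for illustration, not a number from Factorial Funds.

```python
# Back-of-envelope check of the Sora inference estimate quoted above.
# Assumption (for illustration only): ~$30,000 hardware cost per H100.

h100_count = 720_000          # GPUs estimated for the inference phase
price_per_h100_usd = 30_000   # assumed unit price, not a quoted figure

total_cost_usd = h100_count * price_per_h100_usd
print(f"~${total_cost_usd / 1e9:.1f} billion")  # ~$21.6 billion
```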

Torsten Sløk, chief economist at Apollo, has written that the 'severity' of today's AI bubble exceeds even that of the 1990s, with valuations stretched beyond the peak of the dot-com era.

Over the past year, generative AI applications have remained in their early stages, but that has not deterred the tech giants from placing their bets. Sequoia calculates that there could be a $600 billion gap between expected AI revenue and infrastructure investment. Things look stable for now, but history teaches that bubbles take a long time to burst...

So-called 'predictions' often turn out to be right on direction but wrong on timing. It is hard to say which specific prediction is wrong, but the current 'feel on the ground' is anything but clear.

Nassim Taleb, father of the 'black swan' theory, introduced the concept of the 'fragilista' in his book 'Antifragile': people or institutions in suits and ties who make the systems around them more fragile. They use 'predictions' to map out roadmaps for the future and tend to ignore what they do not understand.

However, when these 'prophets' have skin in the game, and their 'predictions' affect their own interests, the picture changes drastically. Leaders like Greg Brockman who have staked their careers on AI may exaggerate or misjudge timelines, but they are not merely talking the talk without walking the walk.

Some predictions, however, are just too exaggerated.

In a recent YouTube interview, asked about his expectations for 2025, Sam Altman, CEO of OpenAI, replied: AGI? I'm excited about it. We're about to have a child, and it's the most exciting thing in my life.

With GPT-5 seemingly stuck, believing 'AGI by 2025' takes about as much faith as believing I am the First Emperor of Qin. So what do the AGI predictions actually say, and how many steps remain before it arrives? A Business Insider article, 'Those bold AGI predictions are suddenly looking stretched,' offers one answer.

Chasing a mirage: 'achieving AGI next year' is on a par with 'colonizing Mars next year.'

A review of tech giants' predictions about AGI reveals three main timelines: 2026, 2029, and 2034.

First Tier: Within 3 Years

Sam Altman, CEO of OpenAI: Full of expectations for achieving AGI by 2025.

Elon Musk, 'Tech King of America': AGI will emerge by 2026 at the latest.

Dario Amodei, founder of Anthropic: Predicts AGI by 2026.

John Schulman, co-founder of OpenAI: AGI will be achieved in 2027, with ASI arriving in 2029.

Second Tier: Within 5 Years

Geoffrey Hinton, Nobel Laureate and AI Guru: Expects to see AGI within five years.

Jensen Huang, founder and CEO of NVIDIA: AI will pass any human test in the next five years.

Ray Kurzweil, principal researcher at Google and author of 'The Singularity Is Near': Predicts AGI will arrive in 2029.

Third Tier: Within 10 Years

Demis Hassabis, Nobel Laureate and co-founder of DeepMind: Achieving AGI will take 10 years and require 2-3 major innovations.

Masayoshi Son, CEO of SoftBank: AI will be 10,000 times smarter than humans within 10 years. (Directly predicting ASI)

Of course, there is also a camp that dismisses the whole thing as a pipe dream.

Yann LeCun believes that AGI will not emerge in the short term. At least, it won't suddenly appear like in Hollywood sci-fi movies. It's more likely to be a gradual process rather than a sudden 'switch-on' moment. Before achieving true 'human-level' AI, we're more likely to see 'cat-level' or 'dog-level' low-intelligence AI.

Andrew Ng is skeptical about claims that AGI is imminent: I hope to see AGI in our lifetime, but I'm not sure.

Gary Marcus, an AI expert, has stated that if we keep going down the path of deep learning and language models, we will never reach AGI, let alone ASI; these techniques are flawed and brittle, advancing only by throwing more data and compute at them.

Pedro Domingos, a professor of computer science at the University of Washington and author of 'The Master Algorithm,' has asserted that ASI is just a pipe dream.

Predictions about AGI are backed by trillions of dollars in investment. AGI is undoubtedly a crucial direction for future technology, but it matters more to distinguish what is realistically feasible from what is overhyped.

Alistair Barr, the author of the article, believes that warning signs are already emerging.

Most pressing is the sense that the Scaling Law is 'hitting a wall': Ilya Sutskever, co-founder of OpenAI, has said plainly that the gains from simply scaling up models appear to have plateaued; Noam Brown, an OpenAI researcher, has said that scaling up models will break down at some point; and Google's next-generation Gemini reportedly fell short of expectations, prompting an internal reassessment of how training data is used.

Even 'tech optimists' are starting to tread carefully with their investments.

Marc Andreessen and Ben Horowitz, co-founders of a16z, doubt whether LLMs can maintain their current momentum.

Andreessen said: right now, AI models seem to have hit some sort of ceiling in capability. Plenty of smart people in the industry are trying to break through it, but if you just look at the data and the performance trends, the pace of improvement is slowing, and the models look like they are 'topping out.'

Horowitz pointed to the obstacles: even if the chips are in place, we may not have enough power to run them; and even with power, we may lack effective ways to cool them. Although GPU compute keeps increasing, model performance has not grown at the same pace, which suggests hardware upgrades alone cannot solve every problem.

If this technical bottleneck cannot be broken, the chance of reaching AGI in the short term is close to zero. So far, Google has given no clear response; Sam Altman has stated flatly that there is no wall; and Anthropic says it has not yet observed any deviation from the Scaling Law.

Interestingly, Alistair Barr explained why Sam Altman is 'stubborn.'

On the one hand, if OpenAI achieves AGI, it could escape Microsoft's tight grip: OpenAI's website states that once AGI is reached, the resulting intellectual property will no longer be bound by its existing agreement with Microsoft.

On the other hand, Altman's AGI goal works as a pure vision, much like Musk's obsession with Mars colonization and self-driving cars: even when the predictions are missed again and again, they keep the team fired up.

Therefore, the grand goal of 'achieving AGI by 2025' is undoubtedly more exciting than relatively mundane goals like 'automating company billing,' despite the latter potentially having more short-term commercial value.

History shows that technological development is full of uncertainty: a technology can stall suddenly after a long run of steady progress. The classic example is Moore's Law. As the beacon of the semiconductor industry, its prediction of 'doubling every two years' ignited innovation across the tech world and laid the foundation for the rise of giants like Intel.

However, research from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) suggests that the magic of Moore's Law is fading.

For example, from 2014 to 2019, Intel encountered bottlenecks in advancing from 14nm to 10nm technology, taking five years to achieve what was expected in two. Since investors realized in 2019 that Moore's Law no longer applied, Intel's share price has fallen by about 50% and has yet to fully recover.
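To make that shortfall concrete, here is a minimal sketch of what 'doubling every two years' implies over a given span; the function is illustrative and not tied to Intel's actual roadmap.

```python
# What Moore's "doubling every two years" predicts over a span of years.

def moore_multiplier(years: float, doubling_period: float = 2.0) -> float:
    """Expected transistor-density multiplier after `years` under Moore's Law."""
    return 2 ** (years / doubling_period)

print(f"{moore_multiplier(2):.1f}x")  # ~2.0x: the gain expected in two years
print(f"{moore_multiplier(5):.1f}x")  # ~5.7x: what the law promises over the five years
                                      # Intel actually spent on the 14nm-to-10nm step
```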

These phenomena indicate that technological progress may not be eternal, and the arrival of AGI is not imminent.

Four Major Hurdles on the Path to AGI

A recent speech by Alexandr Wang, founder and CEO of Scale AI, was enlightening.

He divided the modern AI era into three main stages:

The first stage was the research stage (2012-2018), kicked off by AlexNet, the deep convolutional network that launched the modern deep learning era. This was an era when AI could just about tell you whether there was a cat in a YouTube video.

The second stage was the scaling stage (2018-2024), initiated by the Transformer and by GPT-1, trained by OpenAI's Alec Radford. During this period, resource investment grew more than tenfold, delivering a tremendous boost in performance: model capability evolved from the obscure GPT-1 to the doctoral-level o1 model.

The third stage will be the innovation stage, beginning with the o1 model and running until superintelligence emerges; whether it lasts six years or fewer remains to be seen. Its defining constraint is that roughly $200 billion has already been invested in models, and companies cannot raise that by many more orders of magnitude: nobody is going to pour $200 trillion into models. In terms of sheer magnitude, then, the room for further scaling is limited. Once money is no longer the limiting factor, genuine innovation is needed, which certainly includes better reasoning abilities and more test-time compute.
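Wang's magnitude argument can be made explicit with one line of arithmetic: moving from the roughly $200 billion already spent to the $200 trillion he rules out is only three orders of magnitude. A minimal sketch:

```python
import math

# Orders of magnitude of spending headroom implied by Wang's framing:
# ~$200 billion already invested, $200 trillion ruled out as impossible.

spent_so_far_usd = 200e9
implausible_cap_usd = 200e12

headroom = math.log10(implausible_cap_usd / spent_so_far_usd)
print(f"{headroom:.0f} orders of magnitude of headroom")  # 3
```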

Wang sees five major challenges on the path to AGI: the data wall, evaluation overfitting, unreliable agents, chips and energy, and international competition. The first four are discussed below.

The first challenge is the data wall. Epoch AI puts the timeframe somewhere between 2027 and 2030, but if you talk to insiders, they will tell you it is coming sooner. There are currently a few main ways to address it.

For example: frontier data, various forms of synthetic data and more advanced data types, and enterprise data. These can help models learn advanced capabilities more effectively, such as reasoning, multimodality, and agentic behavior. Embodied AI, and the real-world data it requires, will also be a crucial area. The bottom line is that most data remains private, proprietary, and locked away.

For instance, GPT-4's training dataset is roughly 0.5 PB, while JPMorgan's proprietary dataset exceeds 150 PB, and that is just one of many large enterprises. A vast amount of data is sitting unused for any meaningful training.
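Using only the two figures quoted above, the gap is easy to put in proportion; a quick sketch:

```python
# Ratio between one bank's proprietary data and GPT-4's reported training set,
# using only the figures quoted above (0.5 PB vs. 150+ PB).

gpt4_training_pb = 0.5
jpmorgan_data_pb = 150

print(f"{jpmorgan_data_pb / gpt4_training_pb:.0f}x")  # 300x, from a single enterprise
```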

The second challenge is evaluation. This is much discussed inside the AI community but less understood outside it. Evaluations are the yardstick we use to measure model progress, and many of them are now saturated or prone to overfitting: overfitting means they have been 'gamed' to some degree, while saturation means models already perform exceptionally well across all of them. That risks making research more aimless. Looking at benchmarks such as MMLU, math suites, and GPQA over the past few years, model performance appears to have hit a bottleneck, not because models have stopped improving, but because these evaluations are no longer challenging enough. To address this, we need to build more rigorous evaluations.
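One way to see why a saturated benchmark stops being informative: once every model scores near the ceiling, the spread between models collapses and the benchmark can no longer rank them. A toy illustration with invented scores:

```python
# Toy illustration of benchmark saturation. All scores below are invented.
# When every model nears the ceiling, the benchmark can no longer separate them.

def spread(scores):
    """Gap between the best and worst model on a benchmark."""
    return max(scores) - min(scores)

hard_benchmark = [52.0, 63.5, 71.0]        # hypothetical scores while the test is hard
saturated_benchmark = [96.5, 97.1, 97.8]   # hypothetical scores once it is "maxed out"

print(spread(hard_benchmark))       # 19.0: plenty of room to rank models
print(spread(saturated_benchmark))  # ~1.3: differences vanish into the noise
```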

The third challenge is agents. Everyone talks about agents, but they have not truly arrived and remain unreliable. There is a strong resemblance between AI agents and the L1-L5 levels of autonomous driving, and the analogy is apt: L1 is a chatbot; L2 is an assistant you can ask for various kinds of help; L3 is an agent used for specific parts of a workflow that you can start to rely on; L4 could be a game-changer, an agent that seeks human help only when needed, more like a remote-operation mode. Getting there requires two things: first, models that develop reasoning ability in every domain, until they become useful in almost every field; second, infrastructure that enables agents to be operated remotely. In the future, many of us may simply be remote operators for AI agents.
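The L1-L4 descriptions above map naturally onto a small taxonomy. The sketch below encodes them as an enum; the wording is paraphrased from the talk as relayed here, not an official standard, and L5 is omitted because the text does not define it.

```python
from enum import Enum

# Sketch of the agent autonomy levels described above (paraphrased, not an
# official taxonomy). L5 is omitted because the text does not define it.

class AgentLevel(Enum):
    L1 = "chatbot"
    L2 = "assistant you can ask for various kinds of help"
    L3 = "agent handling specific parts of a workflow, reliable enough to lean on"
    L4 = "agent that asks a human for help only when needed (remote-operation mode)"

for level in AgentLevel:
    print(f"{level.name}: {level.value}")
```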

The fourth challenge is chips and energy. Conservatively estimated, these data centers will need 100 gigawatts of power within the next five years, and even that may not be enough. That is equivalent to the consumption of 20 Chicagos and will require trillions of dollars in capital expenditure. I don't have a solution here; I'm just pointing out the challenge.
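The comparison implies a per-city figure that is easy to back out from the numbers as quoted; a minimal sanity check:

```python
# Sanity check of the power comparison quoted above:
# 100 GW of demand described as roughly 20 Chicagos' worth of consumption.

data_center_demand_gw = 100
chicago_equivalents = 20

per_city_gw = data_center_demand_gw / chicago_equivalents
print(f"~{per_city_gw:.0f} GW per Chicago-scale city")  # ~5 GW, implied by the quote
```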

Conclusion

AGI is seen as the 'Holy Grail' pursued by humanity. Once achieved, the world will be completely transformed.

If AI attains 'godlike' abilities, it might become an embodiment of 'god' itself.

Whether it's in two, three, five, or ten years, AGI will eventually be achieved. How much time do humans have left for 'transformation'?

Perhaps, predicting the future is less important than predicting 'fragility.'

As Sam Altman said, 'I never pray for God to be on my side; I hope to be on God's side.'

How to make AI beneficial to me is a question everyone needs to think about.
