AI's capabilities exaggerated? Here are 10 reasons why it might be overestimated

08/13/2024

Is AI overhyped? Since ChatGPT set off the generative AI explosion at the end of 2022, the technology has dominated industry and media buzz, and investors have poured billions of dollars into AI-related companies.

However, more and more voices of doubt are emerging, and many people are beginning to question whether generative AI will truly have a disruptive impact on the economy.

Given the recent buzz around AI, one can debate the question: Is AI overestimated or underestimated?

Due to the significant uncertainty surrounding the eventual economic impact of AI, it's appropriate to maintain a nuanced personal stance on this issue.

We interviewed MIT economist Daron Acemoglu, one of the prominent skeptics in the AI field. The focus was on the question, "Will generative AI revolutionize the economy in the next decade?"

'Definitely not, unless you count many companies overinvesting in generative AI,' said Acemoglu.

So, why might AI be overestimated? Here are ten reasons that substantiate this view.

01. The AI we have now isn't that 'intelligent'

When first using software like ChatGPT, it may seem magical – a truly thinking machine capable of answering any question.

But behind the scenes, these chatbots are fancy ways to aggregate the internet, presenting a mishmash of findings to users. Simply put, they're 'imitators,' fundamentally reliant on mimicking human creations and unable to generate truly innovative ideas.

Worst of all, much of the content AI reproduces is copyrighted. AI companies often feed people's work into their machines without authorization, which could be seen as systematic 'plagiarism.'

Indeed, at least 15 high-profile copyright infringement lawsuits against AI companies are underway. In The New York Times v. OpenAI, evidence suggests that in some cases, ChatGPT verbatim regurgitates news article paragraphs without attribution.

Fearing copyright infringement, AI companies are paying media companies for content. Meanwhile, many others are taking action to prevent AI companies from accessing their data. This could pose a significant problem for AI models reliant on human-generated data to masquerade as thinking machines.

The reality is that generative AI is far from achieving artificial general intelligence (AGI). As tech expert Dirk Hohndel puts it, these models are statistical predictors based on patterns in data. They lack judgment and reasoning, struggle with basic tasks like math, and can't discern right from wrong or truth from falsehood.
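A toy sketch can make the "statistical predictor" point concrete. This is hypothetical illustration code, not how any real model is built; it shows only that a system trained to count patterns in text can complete sentences without any notion of truth or meaning.

```python
from collections import Counter, defaultdict

# Train a toy bigram "language model": count which word follows which.
corpus = "the cat sat on the mat the cat ate the fish".split()
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    # Return the statistically most likely next word: no judgment,
    # no reasoning, just pattern frequency from the training data.
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "cat", because "cat" follows "the" most often
```

Real models are vastly larger and predict from far richer context, but the underlying objective is the same: produce the statistically plausible continuation, whether or not it is true.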

02. AI lies

The AI industry and media refer to AI-generated falsehoods and errors as 'hallucinations,' and they appear to be common. One study found that AI chatbots hallucinate, or conjure up false information, 3% to 27% of the time.

AI hallucinations have embarrassed companies. Google had to revise its 'AI Overviews' feature after absurd errors, like advising users to add glue to pizza sauce or to eat rocks for their health. Why rocks? Perhaps because an article from the satirical site The Onion was in its training data.

'Hallucinations' make these systems unreliable. The industry acknowledges the issue and is working to reduce errors, but progress has been limited. Since the models can't discern truth from falsehood and merely emit words based on patterns in data, many AI researchers and tech experts believe hallucinations can't be eliminated from these models, not in the short term and perhaps never.

03. AI can't handle most human jobs

A recent article asks, 'If AI is so good, why are there still so many translation jobs?' Translation has been at the forefront of AI R&D for a decade, with predictions it'd be among the first jobs automated away.

Despite AI's advances, the number of human translation jobs has actually grown. Translators increasingly use AI as a tool, but AI may not replace them because it lacks the intelligence, social awareness, and reliability the work demands.

The same seems true for many other jobs.

Take fast-food order takers. For nearly three years, McDonald's piloted AI ordering in some stores, and the experiment proved awkward. Videos show the AI making bizarre errors, like adding $222 worth of chicken nuggets to an order or putting bacon on a customer's ice cream.

New York Times journalist Julia Angwin says, 'Generative AI's fate may resemble that of the Roomba – a mediocre vacuum cleaner that works fine when you're alone but falters with guests.'

'Companies that can replace workers with Roomba-quality AI will still try. But in quality-critical workplaces, AI may not make significant strides.'

04. AI's capabilities are exaggerated

Last year, headlines claimed AI excelled on the bar exam. OpenAI, the company behind ChatGPT, said GPT-4 scored in the 90th percentile.

But MIT researcher Eric Martinez found GPT-4 scored only in the 48th percentile.

With vast training data, and Google-like recall of it at their fingertips, are these results truly impressive? A human could achieve similar scores given access to past exam answers and other ways of cheating.

Google claimed its AI could discover over 2 million previously unknown scientific compounds. But UCSB researchers found most were false. Perhaps the research was flawed, or AI companies overly exaggerate their products' capabilities.

More shockingly, AI is widely touted as powerful in coding. Like translation, coding jobs are said to be endangered due to AI's prowess. But researchers found much AI-generated code was subpar.

While AI boosts coders' efficiency, quality seems to suffer. A Stanford study found coders using AI assistants wrote 'significantly less secure code.'

Bilkent University researchers found that over 30% of AI-generated code was incorrect, and another 23% was partially incorrect.

A recent survey of developers found roughly half concerned about AI-generated code's quality and security.
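The kind of insecurity those studies flag is often mundane. Here is a classic example of the pattern, a hypothetical snippet rather than code from the studies themselves: building an SQL query by splicing in user input, which enables injection, versus the parameterized form that treats input as data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # Insecure: user input is spliced directly into the SQL string.
    return conn.execute(
        f"SELECT role FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name):
    # Secure: a parameterized query treats input as data, not SQL.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

# A classic injection string: the unsafe version matches every row,
# while the safe version correctly matches nothing.
payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # [('admin',)]
print(find_user_safe(payload))    # []
```

A code assistant that has learned from both styles may suggest either; a reviewer who trusts the output without checking is the failure mode the Stanford study describes.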

The U.S. Census Bureau found that only about 5% of businesses had used AI in the preceding few weeks.

This leads to the next question…

05. Few companies actually use AI

The proportion of companies genuinely using AI is small, and their usage doesn't seem to significantly affect the economy. Some companies are experimenting, but those that have integrated AI into daily operations, mainly for personalized marketing and automated customer service, are seeing mixed results.

Customers prefer talking to human customer service reps over AI chatbots.

Acemoglu calls this 'routine automation,' where companies replace humans with machines not because they're better or more productive but to save costs. Like self-checkout at grocery stores, AI chatbots in customer service often shift more work onto customers.

06. AI's killer app hasn't arrived

We haven't seen AI's killer app yet. In fact, AI-driven fraud, misinformation, and security threats may be its most influential real-world applications so far.

07. Productivity growth remains disappointingly low

If AI truly revolutionizes the economy, we'd expect a surge in productivity growth and rising unemployment. But no surge is evident, and unemployment is near historic lows. Even in white-collar jobs, AI's impact is limited.

While generative AI may not replace humans in most jobs, as an information tool, it can aid humans in certain professions. AI's productivity benefits may take time to permeate the economy.

However, there's good reason to believe generative AI won't drastically change our economy soon.

In a recent paper, Acemoglu estimated generative AI's potential economic impact over the next decade.

'I wrote this paper with the belief that some AI impacts are exaggerated. First, generative AI barely touches large swathes of the economy: construction, catering, factories, etc.,' said Acemoglu.

In his view, generative AI won't handle most tasks outside offices in the next decade. He's unsure about autonomous vehicles' timeline but believes they're coming.

Looking at office work, Acemoglu found current AI models inadequate for many tasks – they're too dumb and unreliable.

At best, AI is a tool office workers can use to do parts of their jobs slightly better, and Acemoglu estimates it will affect less than 5% of human tasks.

Finally, Acemoglu predicts generative AI won't significantly boost productivity or economic growth in the next decade, increasing GDP by roughly 1.5% at most over 10 years.

08. AI's progress may be slowing

When discussing AI, the conversation often turns to the future: things aren't great now, but in a few years we'll all be unemployed, bowing to our robot overlords, or whatever. But what evidence supports this? Is it just the collective influence of sci-fi movies?

Claims of AI's rapid progress abound. Some say it's advancing exponentially, putting us on a path to artificial general intelligence (AGI).

But there are serious issues. In fact, evidence suggests AI's development might be slowing down.

First, improvements rely heavily on feeding models vast amounts of data. The problem: they've essentially devoured the internet.

This includes consuming lots of copyrighted works. Meanwhile, because AI so easily 'steals' data, many firms are restricting AI companies' access to theirs.

Furthermore, data quality in these systems is questionable. Sites like The Onion and 4chan might help these systems mimic online humans but might not aid truly beneficial economic applications.

Even if AI companies overcome these hurdles, real-world data is limited. Researchers race to find more data, discussing issues like creating 'synthetic data,' but progress here is uncertain.

Second, special microchips powering AI are scarce, posing a huge cost and headache for AI companies.

OpenAI CEO Sam Altman has tried to convince investors to spend trillions transforming the global semiconductor industry and funding other improvements for ChatGPT. Is it worth it? Will investors really recoup their money?

Finally, data centers powering AI consume vast amounts of electricity, a significant cost. Can these companies recoup costs for building and powering data centers? Are consumers willing to pay AI's high operating costs?

This question matters not only for these companies' business models but also for America's power grid and environment.

09. AI could be very harmful to the environment

AI consumes enough energy to power a small country. Goldman Sachs researchers found, 'The proliferation of generative AI technologies – and the data centers needed to power them – will drive a generational increase in U.S. electricity demand.'

10. AI is overestimated, while humans are underestimated

The primary reason AI is overestimated is that AI 'can never experience what it's like to be human.'

'Many in the industry don't recognize how talented and diverse human skills and abilities are,' said Acemoglu.
