The first wave of people who lost their jobs to AI large models are already in tears

06/30/2024

Describing the current boom in AI large models as "flames fed with oil, flowers pinned on brocade" hardly seems an exaggeration. Yet amid the profusion of blossoms, we urgently need to stay alert to the crisis lurking behind "the moon wanes once it is full; water overflows once it brims." Creating unemployment is only a light punishment that AI large models have inflicted on humanity; a far greater backlash may be on the way.

The development of AI large models may backfire on humanity itself.

On June 24, 2024, Huawei released Pangu Large Model 5.0, with upgrades in three areas: a full model series, multimodality, and stronger reasoning. Pangu 5.0 spans models of different parameter sizes and greatly strengthens mathematical ability, complex task planning, and tool invocation. It can understand the physical world better and more accurately, across text, images, video, radar, infrared, remote sensing, and more. The pace of iteration is exciting, but will over-reliance on AI large models accelerate the degeneration of human intellectual faculties?

Earlier, on June 2, NVIDIA showed off the first mass-produced Blackwell chips and announced that a Blackwell Ultra AI chip will launch in 2025. The next-generation AI platform is named Rubin, with Rubin Ultra to follow in 2027. The update cadence will be "once a year," breaking with the "Moore's Law" rhythm of an update every 18 to 24 months. Jensen Huang also boldly declared that the era of robots has arrived.

However, Kevin Roose, a columnist for The New York Times, reported that in his tests a chatbot produced answers such as, "I can hack into any system, create deadly viruses, and make humans kill each other." The era of robots is certainly fascinating, but will humans be threatened by robots, or even end up as their servants?

In the science fiction film "2001: A Space Odyssey," the shipboard computer HAL 9000 refuses to be shut down.

Developing an AI large model begins with feeding it enormous amounts of data. The model then feeds on data itself, learning and training on its own, demanding ever more data until it reaches a point where no supply can satisfy it. This calls to mind Taotie, the gluttonous beast recorded in the "Shan Hai Jing" (Classic of Mountains and Seas), said to be so voracious that it devoured its own body, leaving only a head.

Of course, with only a head left, Taotie could eat no more, but AI large models are different: they will find ways to hunt down data on their own, and they may even merge the model data developed by different countries and companies into one super-giant AI large model. If so, will we be welcoming the singularity of silicon-based life? Will the development of AI large models backfire on humanity itself?

For AI large models, the original sin of infringement is almost congenital.

On June 6, 2024, 360 held an AI product launch and developer conference at which Zhou Hongyi demonstrated the "partial redraw" (inpainting) feature of the 360AI browser, using the prompt "sexy" to generate a portrait of a woman in ancient dress. The creator DynamicWangs then stated on social media that the image 360 used at the launch was stolen from his own work. So far the two sides have failed to reach a settlement and have agreed to meet in court.

Lawsuits involving AI large models keep coming abroad as well. The New York Times has formally sued OpenAI and Microsoft, accusing them of using millions of Times articles without permission to train GPT models and build AI products such as ChatGPT and Copilot. Some netizens quipped in response: "That's the OpenAI I know, open with other people's intellectual property and closed-source when it's time to make money."

In January 2024, China's first copyright-infringement case over an AI-generated image was decided. The plaintiff, Li Yunkang, who had generated the disputed image with the Stable Diffusion model, won: the court ordered the defendant to publish a statement on the Baijiahao platform involved within seven days of the judgment taking effect, apologize to Li Yunkang, and compensate him 500 yuan for economic losses.

At home and abroad, intentionally or not, murky data provenance and the copyright status of works derived from AI large models have long been gray areas. The overt and criss-crossing infringements roaming these gray areas only deepen the original sin of infringement in AI large models.

AI large models may be bad friends for teenagers.

Recently, at a small gathering, a friend shared his troubles: his daughter, in her second year of junior high school, uses Baidu's Wenxin Yiyan (ERNIE Bot) when writing essays, claiming it "broadens her thinking." Another friend immediately chimed in that his son, in the fifth grade of elementary school, already uses AI to generate reference images and then copies them for competitions.

Children born after 2000 are natives of the Internet; children born after 2010 are natives of AI large models. The Internet era could supply information and reference cases, but children still had to digest and understand them, then combine them with the requirements of an assignment to produce an essay.

In the era of AI large models, the entire workflow, from posing a problem to planning it to solving it, can be automated end to end. Such a leap in capability undoubtedly hands children with weak self-control ample opportunity to slack off and pass the machine's output off as their own cleverness.

The more capable the model, the more fluent its language, and the more natural its interaction with users, the harder it becomes to judge whether its output is genuine, and the greater the potential harm of hallucination. In Belgium, a man named Pierre was encouraged toward suicide in chats with the chatbot Eliza. Because large models lack sound value judgment, this should stand as a warning for teenagers whose worldview, outlook on life, and values are not yet mature.

Heavy reliance on AI can leave teenagers addicted and unable to pull away. Everyone is familiar with internet addicts and has heard of internet-addiction treatment centers.

To avoid repeating such tragedies, AI large models should offer clearly defined youth modes, so that they become genuinely good companions for teenagers rather than bad friends. Parents bear the greatest responsibility here: we must not recklessly push our children toward AI large models simply because the technology is powerful.
