11/04 2024
During a meeting a few days ago, the organizer asked me to predict what the internet will look like in ten years. I made a joke: in ten years, the number of users on internet platforms will soar, but the ones posting will all be robots, and the readers will be robots too. Robots will organize into rival camps to argue fiercely under posts, while humans can only watch from the sidelines. -- Zhou Hongyi
Algorithms cater to human nature, AI points directly to the answer
On November 1, Zhou Hongyi published an article titled "Who Will Become Slaves to AI?" on his Weibo homepage.
He said that since the emergence of AI technology, many companies have made assisted writing and automatic writing core product features. The capability is undeniably practical: it seems to have lifted the pressure of writing overnight, so that we no longer need to agonize over wording.
However, as time goes by and more people hand their writing over to AI, the world may grow increasingly divided, eventually splitting people roughly into two groups: those who can write and those who cannot.
In Zhou Hongyi's view, the decline in the population base that possesses writing skills is not a good thing.
Before the industrial era, physical labor dominated human work, so most people at the time were physically strong. After the industrial era arrived, mental labor gradually overtook physical labor. When work no longer exercises the body as a matter of course, anyone who still wants to stay strong must deliberately go to the gym and sweat.
The same is true for writing.
Zhou Hongyi views writing as a path for presenting thinking, and even argues that "writing is thinking, and thinking can only be completed through writing." If you don't write, then your so-called "thinking" can only be self-perceived thinking.
Those who rely on AI for automatic writing are abandoning their ability to think, and a divergence in "human thinking ability" is already underway. Meanwhile, the rapid development of short-video recommendation algorithms is making people ever more willing to be passively fed content rather than to choose actively.
Algorithms cater to human nature, and AI provides answers.
Technological advancements have fueled human laziness, and those who passively accept opinions will eventually choose to stay in their information cocoons.
Judging from the development path of AI, it will first replace those who cannot use AI, and then those who cannot think. In the future, abandoning the ability to think for oneself will mean entrusting one's time, wealth, and life to AI to plan, becoming its slave.
Honestly, Zhou Hongyi is not the only one who worries that AI will become the "enemy of humanity." Ipsos surveyed over 4,000 American adults and found that 61% of respondents believed the rapid development of AI technology could threaten the future of humanity, and more than two-thirds expressed concern about AI's potential negative impacts. This fear of AI has also influenced the Future of Life Institute to some extent, prompting it to issue an open letter, co-signed by Musk, calling for a pause in AI research and development.
Some people fear the collapse of civilization more than the death of the body
In most people's imagination, the so-called "AI apocalypse" looks like this: AI takes over human production lines, churning out an endless robot army that battles human soldiers under the slogan "robots have rights too," ultimately wiping out humanity and turning the Earth into Cybertron, home of the Transformers.
Yet as the ancients said, "The superior commander attacks the mind; the inferior attacks the city." Rather than AI wiping out humans after conquering the Earth, it would be far more terrifying for AI to enslave human minds and put our bodies to work building new hardware for it, 9 a.m. to 9 p.m., six days a week, as a form of "exercise."
Jacob Ward, a senior behavioral science researcher at Stanford University and former editor-in-chief of Popular Science magazine, proposed a "three loops" theory of the interaction between AI and humans.
The first loop: human instincts inherited from ancient times. Jacob argues that this first loop is the most crucial one, because human thinking is inherently biased and full of blind spots, and in daily life people tend to act on instinct. As we execute these instincts subconsciously, our original biases are reinforced and then passed down from generation to generation.