02/02 2026

Recently, a rather amusing news story surfaced. On January 25th, it was revealed that Sochi, a well-known club in the Russian second-tier league, disclosed the reason for dismissing its former coach, Robert Moreno: he had delegated his responsibilities to AI.
This ex-coach of the Spanish national team seemed to be the kind of early adopter of AI that everyone online envies. An avid user of ChatGPT, he relied on the AI to devise the team's training plans and adhered to them strictly. When ChatGPT demanded that, two days before a match, players wake up at 5 a.m. and begin training at 7 a.m., he complied. When ChatGPT suggested the team stay awake for 28 hours straight, he followed suit. The players were likely left bewildered—what was the purpose of all this? Was AI screening them for some future human enslavement camp?

Even for major decisions, such as player acquisitions, Robert Moreno relied on AI. Consequently, the forward selected by ChatGPT failed to score a single goal in ten games. This AI-believing avant-garde coach also left the team in disgrace, having secured just one point in seven games.
In recent years, our focus has often been on "what else can AI do?" and "AI is on the verge of replacing us." This has led to a widespread narrative that "AI is amazing." In reality, however, we are beginning to witness people being laid off due to AI replacing them.
Yet, on the flip side, many who eagerly embraced and placed their trust in AI's capabilities have also found themselves out of a job.

Last year, during a large model seminar I attended, I discovered that the top professional application scenario for many mainstream large models in China was surprisingly consistent: reviewing contracts and legal complaints. With the aid of large models, many legal professional barriers have indeed been significantly lowered. However, if you are a legal professional yourself, it's advisable to use AI with caution.
In 2023, the BBC reported a case where, in a personal injury lawsuit involving an airline in New York, the plaintiff's lawyer was penalized, and the law firm was held jointly liable. The reason? This senior lawyer, with thirty years of experience, probably thought such straightforward work wasn't worth his personal effort, so he let ChatGPT generate the legal complaint.

Then, the notorious large-model hallucination phenomenon kicked in. The judge discovered that among the cases cited in the plaintiff's filing, six judicial decisions were AI-fabricated, complete with false citations and internal references. Encountering AI-generated fakes for the first time, the judge must have been stunned—not just one fake case, but all six. Was this a failure of technology or a collapse of law? Either way, the judge decided to make an example of him.
Subsequently, the legal community discovered that no matter how rigorous the industry, it couldn't deter practitioners looking to cut corners. News of AI-generated complaints marred by large-model hallucinations kept emerging, and the European and American legal professions got a real taste of "artificial intelligence." Similar cases have even surfaced in China, where the Beijing Lawyers Association now explicitly requires that legal documents undergo cross-review by three levels of lawyers and be labeled "AI-assisted generation, manually reviewed."

When it comes to AI replacing jobs, designers and programmers are often the first to come to mind. However, upon closer inspection, these are also the jobs best suited for leveraging AI—as long as you don't make it too obvious.
Last year, there was a case where a senior programmer in Silicon Valley relied entirely on GitHub Copilot to generate his code, believing that using AI to do his job was both the smarter choice and his right. Since the programmer was quite senior and nearing retirement, the company initially didn't want to provoke him. But then he went too far and began refusing to do any work personally. For instance, he would ask why he should be the one to fix bugs in AI-generated code. When colleagues retorted that it was his responsibility, he would reply, "My job is to delegate it to AI." You have to admit the logic was airtight.

Although he relied entirely on AI for work, he was very adept at switching positions when pressuring his subordinates. He would intimidate new hires, warning them that they had to work hard, or AI would soon replace their positions.
At this point, I imagine his boss was also bewildered. "Oh, AI can replace the new hires' positions, but you're using AI to do your job. Then why don't I just replace you with AI?" So the company fired him for 'affecting project progress' and 'causing psychological harm.'
Coincidentally, last year we interviewed a Silicon Valley programmer ourselves. Perhaps, being Chinese, he wasn't quite so bold: he quietly used AI to do his work, then sat at his desk drinking coffee and watching dramas. But the dream life didn't last long—once the company found out, it fired him too.

The last story is a bit niche and unusual, but I think it's worth sharing.
First, we need to discuss some background: in 2025, Silicon Valley experienced a major round of layoffs, also known as the first AI-driven large-scale job reduction in human history. When analyzing the laid-off positions, we found that the most affected weren't programmers and designers but product managers.
For example, Amazon announced that starting in October 2025 it would lay off 30,000 employees. The position with the most cuts was PM (product manager), followed by TPM (technical program manager) and SDM (software development manager). Roles built on cross-departmental negotiation and product communication skills became the first targets of the AI transformation.

Almost at the same time, I ran into a friend at an exhibition whose experience confirmed how precarious the product manager role has become. He said he was a firm believer in AI technology, convinced that AI was the future. So, as a product manager at a software company, he was the first to propose an AI transformation in the company's all-hands chat group. The boss took it very seriously and listened to his reports twice in person—a boss three levels above him (his "+3"), someone he normally had no way to speak with directly. So for a while, he firmly believed that learning and understanding AI had brought him a real opportunity.
"What happened later?" I asked.
My friend looked a bit lost and hesitated for a moment before saying, "Later, I was laid off. The entire department was laid off." He heard that after several meetings with shareholders and external think tanks, the boss concluded that in the AI era, the product department was the least needed, and future products would definitely be designed by AI, not humans. So he was fired, but before leaving, the boss had his secretary give him a gift, saying that if he needed anything in the future, the company wouldn't forget him.
Hearing this, I was stunned. It was the most darkly humorous AI story I'd ever encountered. So AI is a boomerang that circles back to hit the very person who threw it? I steadied myself and told him, "Leaving a company like that is a blessing in disguise."

They say that believing everything in books is worse than having no books, so today, believing everything in AI might not be such a good idea either.
AI is indeed full of promise, and the media keeps exaggerating its capabilities and effects. But when we really let go of the steering wheel and let AI drive for us in the workplace, things often unfold in three steps: first, "Hey, it actually works!"; second, "Since AI can drive, why don't I just take a nap?"; third, "The car crashed."
AI is far from being as wonderful as imagined. Blindly believing in it often leads us to ignore AI's mistakes, like the football coach; ignore teamwork and respect for others, like the programmer; or be so swayed by "AI" that we lose touch with reality, like the product manager's boss.

A key issue is that AI won't take responsibility for its mistakes. Ultimately, we humans have to bear the consequences.
If there's one lesson to draw from these stories, perhaps it's this: reliability is the greatest human virtue of the 21st century. Staying competitive as a human in the AI era might be simple: be reliable enough, and find a reliable company.
