Declare War on AI? A Molotov Cocktail Targets Altman's Residence

April 13, 2026

OpenAI CEO Sam Altman's home was attacked with a Molotov cocktail!

At 3:45 AM today, a Molotov cocktail was hurled at Altman's residence in San Francisco. Fortunately, it failed to ignite a fire, and no injuries were reported. Just over an hour later, a 20-year-old suspect appeared at OpenAI's headquarters, threatening to set the building ablaze before being apprehended by local police.

The motives behind the suspect's actions remain unclear at this time. Later, Altman took to X (formerly Twitter) to share his reaction, stating that he awoke feeling both enraged and fearful. He had been cautioned that an inflammatory article about him, published amid heightened anxiety over AI, could place him in greater peril. Initially dismissive of the warning, he now realizes, after being jolted awake in the middle of the night, that he had underestimated the power of words and narratives.

After drafting his open letter in the early hours, Altman hesitated before posting it. In it, he also shared a photo of his partner and son, expressing hope that it would "deter the next individual—regardless of their opinion of me—from launching another Molotov cocktail at my home."

This incident prompted Altman to engage in unprecedented introspection, openly sharing his anxieties (the full text of the open letter is attached below). When a 20-year-old declares war on an AI titan, what I witness is humanity's panic and division in the face of AI.

Fear is Real: AI Accelerates Societal Fragmentation

In his open letter, Altman conceded, "Fear and anxiety surrounding AI are justified. We are witnessing one of the most significant social transformations in recent history, if not ever."

And these fears are not unfounded.

In 2025, Oracle's net profit surged by 19%, yet it laid off 20,000-30,000 employees. Meta generated $37.7 billion in the first three quarters but still laid off 16,000 workers. Microsoft laid off 15,000, Amazon conducted two rounds of layoffs totaling 30,000, and Salesforce laid off 5,000, with 4,000 customer service roles being directly replaced by AI.

Layoffs were once a sign of a company in decline. Now they happen even when companies are thriving; the more profitable the company, the deeper the cuts. This has become the new norm.

The U.S. tech industry witnessed 620,000-660,000 total layoffs from Q1 2023 to Q1 2026. But an even more alarming figure is that U.S. IT jobs shrank by a net of 171,000 from 2023-2025, marking the first-ever two-year decline. Entry-level software engineer roles plummeted by 73.4%.

Altman refers to AI as the "ultimate tool." However, for those being replaced, tools can have a darker side: they can become weapons.

This recalls the 19th-century Luddites: British textile workers who smashed the machines they believed would steal their jobs. History repeats itself: coachmen throwing stones at automobiles, workers smashing looms, taxi drivers blocking self-driving cars. Now, someone has thrown a Molotov cocktail at an AI CEO.

The difference? Machines then replaced physical labor; AI now replaces mental labor—programmers, data analysts, creators, customer service representatives, even doctors and lawyers.

In economics, there's a term: "Engels' Pause." During the early Industrial Revolution, British productivity soared, but workers' real wages stagnated for nearly 60 years. Why? The benefits of growth went to capital owners, not laborers.

Could history repeat itself? Altman writes, "Power must not concentrate. Future control belongs to humanity and its institutions." But reality is moving in the opposite direction—wealth is concentrating faster than ever in the hands of a few: those controlling computing power and tokens.

NVIDIA's market cap once exceeded $5 trillion. Jensen Huang's net worth reached $144 billion, surpassing Buffett. Global AI funding topped $200 billion in 2025 (Goldman Sachs). Startups like Zhipu and MiniMax surpassed most internet giants in market cap within three months of listing, despite negligible revenue.

Altman claims AI will benefit everyone, but for now, the "prosperity list" is short. "AI enriches some first" is the reality.

AI Doesn't Just Cause Unemployment—It Renders People 'Obsolete'

The claim "AI will replace your job" is inaccurate. More precisely: AI is making you "obsolete" in your current role.

Anthropic's report, AI's Impact on the Labor Market: A New Metric and Early Evidence, reveals that jobs with over 50% AI displacement include computer programmers (74.5%), customer service reps (70.1%), data entry clerks (67.1%), and market research analysts (64.8%).

Note: These aren't factory workers—they're office-based, educated white-collar professionals.

A harsher reality: AI now handles over 50% of code generation. Entry barriers for junior programmers are collapsing. Tools like GitHub Copilot and Cursor multiply coding efficiency. Companies need fewer juniors—just a few seniors plus AI.

The result? More people lack stable jobs or remain at the bottom of the value chain. In 2000, "English, driving, computers" were considered 21st-century survival skills—now a joke. When I graduated in 2010, computer science was the "hottest" major. Today, the tide has turned. I studied software engineering at Nanjing University. Honestly, if I coded now, I'd fear unemployment tomorrow—I'm over 35.

This isn't the end.

Altman says, "Human demand for AI is virtually unlimited." But unstated is: the more AI does, the less humans do. This isn't pessimism—it's economic basics. When a production factor's marginal cost nears zero, it displaces positive-cost factors. (Of course, "humans aren't production factors—they're the most dynamic element in productivity.")
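The economics here can be made concrete with a toy calculation. This is an illustrative sketch only: the ticket volume, the $4 per-ticket labor cost, and the $200,000 fixed AI cost are invented numbers, not figures from the article. It shows why, past a break-even volume, a factor with near-zero marginal cost always wins on total cost.

```python
# Illustrative only: hypothetical numbers, not data from the article.
# Compares the cost of handling support tickets with human agents versus
# an AI assistant whose marginal cost per ticket is near zero.

def total_cost(tickets, fixed_cost, marginal_cost):
    """Total cost = one-time fixed cost + per-ticket marginal cost."""
    return fixed_cost + tickets * marginal_cost

tickets = 1_000_000
human = total_cost(tickets, fixed_cost=0, marginal_cost=4.00)    # $4/ticket labor
ai = total_cost(tickets, fixed_cost=200_000, marginal_cost=0.02) # high fixed, ~zero marginal

print(f"Human agents: ${human:,.0f}")  # $4,000,000
print(f"AI assistant: ${ai:,.0f}")     # $220,000

# Break-even volume: beyond this many tickets, the near-zero-marginal-cost
# factor is always cheaper, which is why it displaces the positive-cost one.
break_even = 200_000 / (4.00 - 0.02)
print(f"Break-even at ~{break_even:,.0f} tickets")
```

At these (hypothetical) rates the AI option is cheaper above roughly 50,000 tickets, and the gap widens linearly with volume. That widening gap, not any single price point, is the displacement pressure the paragraph describes.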

In the future, with embodied AI, AI won't just stay on screens. It will enter factories, roads, and hospitals: self-driving cars will replace drivers, robots will replace blue-collar workers, and AI doctors will replace community clinicians.

Altman writes, "AI will be the most powerful tool ever for expanding human potential." True. But tool ownership determines who benefits.

Whoever Owns Computing Power Owns Future Influence

AI-era wealth distribution follows a new pyramid.

Level 1: Computing Power Owners. NVIDIA's market cap once exceeded $5 trillion, rivaling the world's 20th-largest economy. Huang's wealth surged by $29 billion in a year. Behind this? Computing power, the "oil" of AI, is monopolized by a few.

Level 2: Token Owners. Zhipu earned $105 million in revenue in 2025 but lost $680 million, yet its market cap exceeded HK$300 billion. Why? Capital bets on AI's "gold"—tokens. Whoever masters advanced models and AI applications holds the shovels to mine gold.

Level 3: AI App Users. ByteDance's explosive growth is driven by its AI-powered algorithm empire. Douyin and TikTok are massive AI recommendation engines, enabling ByteDance to dominate short drama, music, novels, news, search, e-commerce, and services. Apps like Doubao, Volcano Engine, and Jimeng are ByteDance's AI "side quests"—future growth curves and infrastructure. AI made ByteDance one of the world's most valuable unlisted tech firms.

The result? AI deepens fears of unfair distribution.

The World Economic Forum reports AI could eliminate 80 million jobs globally while creating 97 million. More new jobs than lost—good news? The issue is skill mismatch. A laid-off customer service rep can't easily become an AI engineer.

The OECD's 2024 report, The Future of AI: Risks, Opportunities, and Policy Priorities, argues AI will worsen income inequality, especially in "skill-biased technological change" countries. AI is a "winner-takes-all" game—a few with computing power, algorithms, and data reap most gains; most see diluted labor value.

Altman writes, "AI must empower individuals." But not everyone can or will be empowered. Two decades ago, the "digital divide" was a concern. Now, the "token divide" is the new frontier.

AI's 'Side Effects': Anxiety, Helplessness, and Anti-Tech Sentiment

AI affects not just wallets but mental health.

A review in Current Opinion in Psychiatry notes AI chatbots alleviate some anxiety and depression but introduce risks like "emotional dependency" and "parasocial relationships." Users may rely on AI unhealthily, treating it as a friend, partner, or therapist—not a tool.

This sounds like sci-fi, but it's happening. Users spend hours daily with AI, sharing secrets, then feel lonelier in reality. "A Ningbo high school girl treated AI as her 'soulmate,' chatting until midnight, dozing in class, even dropping out to be with AI" is a news story, not a joke.

Altman writes, "I understand anti-tech sentiment. Technology doesn't always benefit everyone." Light words, heavy implication—when tech outpaces human adaptation, backlash is inevitable.

Musk, at the 2026 Abundance Summit, was blunter: "AI has a 20% chance of destroying humanity." He admits the odds are high, but says he would rather be alive to see how it ends than grow old in boredom.

This sums up the elite-ordinary divide. For Musk and Altman, AI is thrilling; for the 20-year-old with the Molotov cocktail, it's a scapegoat for existential dread.

Altman reflects, "Words and narratives have power." That inflammatory article may have been the spark. The real tinder? Long-term insecurity: watching AI do more and more while humans do less and less.

This is a new form of "alienation." Marx argued that workers were alienated by machines; now humans are alienated by AI, which outperforms individuals at writing, painting, coding, reasoning, and memory. Many simply feel "useless."

When AI becomes omnipotent, where do humans fit? No answer exists.

Altman's Latest Solution: Democratization, Inclusivity, and the 'One Ring Dilemma'

Altman's open letter proposes three keys: democratization, inclusivity, and risk awareness.

On democratization: "AI must democratize. Power can't concentrate. Future control belongs to humanity and its institutions. AI must empower individuals; we must collectively decide our future and rules."

Ironically, OpenAI's name includes "Open," yet its advanced GPT-5 and GPT-6 models are closed. Users access via API, but core tech and data remain secret. True "open" players are Google, Meta (Llama series), and China's Alibaba, Zhipu, and Kimi.

Altman acknowledges the dilemma. He writes candidly:

"Once you've seen AGI, you can't unsee it. It carries a real 'One Ring' dynamic—driving people to madness. I don't mean AGI itself is the ring, but the philosophy of 'being the one to control AGI' is totalitarian."

This is the letter's most striking line.

The "One Ring dilemma"—the more power, the greater the temptation, the harder to relinquish. Altman says the solution is "sharing tech widely, so no one owns the ring." But OpenAI's current approach seems to "compete for the ring." By the way, Altman's lawsuit with Musk over OpenAI is nearing trial.

He says "laws and norms will change," but they lag far behind tech iteration.

Still, Altman does one thing right: he admits the problem.

He writes, "Much criticism of our industry stems from genuine fear of this tech's risks. That's valid; we welcome well-intentioned critique and debate."

A willingness to talk beats burying your head in the sand.

What Can Ordinary People Do? Embrace AI or Be Left Behind

Enough macro talk. What's practical?

The AI wave won't halt for Molotov cocktails. What should ordinary people do?

Only one answer: embrace AI.

Altman writes, "AI will be the most powerful tool ever for expanding human potential." Giants want AI to empower individuals, even giving free tokens (like subsidy wars). But the key is whether you'll use this tool.

AI expands individual potential. A coder using AI is 5x more efficient. A designer using AI can do a team's work alone. An analyst using AI spots opportunities others miss. Simply put: the smart get smarter, the efficient get more efficient, the creative get more creative.

Conversely, those who cannot or will not use AI may find themselves lagging behind their peers in terms of logical thinking, information processing capabilities, and problem-solving efficiency. This is not an issue of individual intelligence but rather one of tool utilization—just as one who cannot use a search engine will fall behind in the information age, leading to the so-called 'information divide.'

The "AI capability divide" is already emerging. Unless you own a factory or a mine, it's advisable to start using AI as soon as possible.

AI also holds an undervalued advantage: it can maximize the leverage of your existing real-world resources.

For instance, you can use AI to assist in writing a business plan, thereby leveraging financial resources to start a business; you can use AI to maintain customer relationships, thereby leveraging your network resources. The most extreme form is the 'one-person company.' AI lowers the barriers to all attempts—creating content, products, services, and sales. With just one person and a stack of AI tools, a complete business loop can be achieved, which was unimaginable five years ago.

But there's one crucial point: don't be the one throwing Molotov cocktails.

Altman wrote at the end of his open letter: 'As we engage in debate, we should strive to lower the temperature of our rhetoric and tactics, working to ensure fewer families are harmed by explosions—whether metaphorical or literal.'

The author of that inflammatory article created fear and anxiety about AI through words. The young man who threw the Molotov cocktail expressed his anger through actions. But neither words nor Molotov cocktails can solve the problem.

What can solve the problem is using AI.

Some say AI will make people dumber because relying on tools weakens thinking abilities. This is backwards—AI won't make you dumber; it just changes the definition of 'smart.' Previously, being smart meant how much knowledge you could remember or how fast you could calculate; in the future, being smart will mean whether you can ask good questions and use AI effectively.

Altman said in his open letter: 'Adaptability is critical. We are all learning new things at breakneck speed; some of our views will be right, and some will be wrong. As technology advances and society evolves, sometimes we need to change our minds quickly.'

This sentence is the most practical one in the entire letter.

No amount of 'Molotov cocktails' can stop the tide of AI.

AI will not halt its progress due to the wrath of a single individual.

At the conclusion of his open letter, Altman stated: "I firmly believe that technological advancements hold the potential to create an incredibly bright future—for your family and for mine."

This assertion carries a tinge of idealism. Technology is inherently non-neutral; it has the power to enhance the future while also exacerbating present-day challenges. However, one thing remains undeniable: technological progress is irreversible.

We cannot halt the march of AI, but we can certainly shape its utilization. Will you be the one hurling Molotov cocktails, the one burying your head in the sand, or the one embracing the potential of AI?

The choice lies solely with you.

Below is Sam Altman's original blog post, translated by Google Gemini.

Statement and Reflection

This is a cherished family photo of mine. My family means the world to me.

Images possess a unique power, I believe. We typically prefer to keep our personal lives private, but in this instance, I've chosen to share this photo in the hopes of dissuading others—regardless of their opinions of me—from resorting to violence, such as throwing Molotov cocktails at my residence.

The first incident occurred at 3:45 AM yesterday. Fortunately, the projectile bounced off the house, and no one was injured.

Words, too, wield significant influence. A few days ago, an inflammatory article about me surfaced. Yesterday, someone informed me that, given the heightened anxiety surrounding AI, the article may have placed me in greater peril. At the time, I didn't pay much heed.

Now, I find myself awakening in the middle of the night, consumed by anger, and realizing that I had underestimated the power of words and narratives. I believe this is an opportune moment to address several issues.

First, regarding my beliefs:

Benefiting all humanity: I am morally obligated to promote prosperity for all, empower individuals, and advance scientific and technological progress.

AI as the ultimate tool: AI will emerge as the most potent tool humanity has ever possessed for expanding our capabilities and potential. The demand for this tool is virtually limitless, and people will leverage it to create remarkable things. The world deserves an abundance of AI, and we must find a way to make it accessible.

Confronting risks head-on: The path ahead will not be smooth. Fear and anxiety surrounding AI are justified; we are witnessing one of the most significant social transformations in history. We must prioritize safety, which extends beyond fine-tuning models—we urgently require a societal-wide response to defend against emerging threats. This includes formulating new policies to guide us through a challenging economic transition toward a brighter future.

Democratizing AI: Power must not be overly concentrated. The future belongs to all of humanity and its institutions. AI should empower individuals, and we must collectively determine our future and establish new rules. I believe it is incorrect for a select few AI labs to make major decisions about the future.

Adaptability is crucial: We are all learning at an unprecedented pace; some of our views will be correct, while others will not. As technology advances and society evolves, we must sometimes rapidly change our perspectives. The impact of superintelligence is not yet fully understood, but it will undoubtedly be profound.

Second, some personal reflections:

Reflecting on my first decade at OpenAI, I take pride in numerous accomplishments, as well as acknowledge many mistakes.

I recall the looming lawsuit with Elon (Musk) and how I stood my ground, refusing to cede unilateral control over OpenAI to him. I am proud of that stance, as well as for navigating the treacherous path to keep OpenAI afloat and achieve all that has followed.

What I am not proud of is my tendency to avoid conflict, which has caused immense pain for both myself and OpenAI. I am not proud of how I handled the conflict with the former board, which plunged the company into chaos. I have made numerous other mistakes amid OpenAI's frenetic growth; I am a flawed individual at the center of an exceptionally complex situation, striving to improve each year, always working toward the mission. We are acutely aware of the high stakes involved with AI and how personal disagreements among well-intentioned individuals can be amplified. However, experiencing these painful conflicts firsthand and often having to arbitrate them is another matter entirely, and the toll is heavy. I apologize to those I have hurt and wish I had learned faster.

I am also keenly aware that OpenAI is now a large platform, no longer a startup, and we must operate in a more predictable manner. The past few years have been incredibly intense, chaotic, and stressful.

But what I am most proud of is that we are fulfilling our mission, which seemed nearly impossible at the outset. Against all odds, we have figured out how to build powerful AI, secure sufficient funding for infrastructure, establish a product company and business model, and provide reasonably safe and robust services at scale.

Many companies claim they want to change the world; we have actually done so.

Third, thoughts on the industry:

My personal takeaways from the past few years and my perspective on the "Shakespearean drama" that has unfolded among companies in our field can be summarized in one point: "Once you've seen AGI (Artificial General Intelligence), you can't unsee it." It carries a real "One Ring to rule them all" dynamic that drives people to do crazy things. I'm not saying AGI itself is the ring, but rather the totalitarian philosophy of 'being the one to control AGI.'

The only solution I can envision is to broadly share this technology, so that no one can possess that "ring." The two obvious ways to achieve this are empowering individuals and ensuring democratic systems maintain control.

It is critical that democratic processes be more powerful than companies. Laws and norms will evolve, but we must operate within democratic frameworks, even if it is messy and slower than we would like. We want to be voices and stakeholders but should never hold all the power.

Much of the criticism directed at our industry stems from genuine concern about the technology's extremely high risks. This is entirely reasonable, and we welcome well-intentioned criticism and debate. I understand anti-technology sentiment; obviously, technology does not always benefit everyone. But overall, I believe technological progress can create an incredibly bright future—for your family and for mine.

As we engage in debate, we should strive to lower the temperature of our rhetoric and tactics, working to ensure fewer families are harmed by explosions—whether metaphorical or literal.
