04/16 2026

Prioritizing People: Hassabis Stands in Contrast to Altman
Written by | Zhao Weiwei, Lan Dong Business
By 2026, OpenAI's market share is on the decline, while Google's Gemini and Anthropic's Claude are gaining momentum.
The recent release of Demis Hassabis: The Brain Behind Google AI offers the Chinese-speaking world a firsthand look inside Google's AI efforts. Author Michael Lewis held a series of two-hour conversations with Hassabis, ultimately distilling some 30 hours of dialogue and supporting research into a comprehensive biography.
The book not only delves into Hassabis's inner world and growth trajectory but also reveals how he steered DeepMind on a development path distinct from OpenAI's. DeepMind's approach is centered on science and follows a cautious development path, while OpenAI embodies a corporate culture of rapid action and breaking conventions.
Most notably, the book portrays Hassabis as a hybrid figure, an "entrepreneur + scientist." For this group of early AI pioneers, including Hassabis, the starting point in constructing AI is scientific exploration, seeking a new philosophy of cognition. In contrast, OpenAI's Altman represents another group that sees AI technology as a means to gain power and wealth.
Understand People First, Then Understand the Issues
Hassabis grew up in an immigrant family in London and became a chess prodigy at the age of five; he later studied computer science at Cambridge and earned a PhD in cognitive neuroscience. After founding DeepMind, he achieved remarkable feats: defeating human Go champions with AlphaGo, cracking protein-structure prediction with AlphaFold, and winning the Nobel Prize in Chemistry. His ultimate vision for AI is a machine capable of unlimited generalization, ultimately reaching AGI.
Looking back over the past fifteen years, Google has been a driving force in AI.
Currently, 90% of key breakthroughs in the global AI industry have been achieved by Google Brain, Google Research, or DeepMind, including milestones like AlphaGo, reinforcement learning, and the Transformer architecture. According to Hassabis's vision, AI should progress along a steady research and development path, with more AlphaFold applications in curing cancer, new energy, and new materials to benefit humanity.
However, reality deviated from the script. Technological development is unpredictable. OpenAI took the lead in bringing the conversational and generative capabilities of large models to the public. Initially lagging, Google caught up by reorganizing, creating Gemini 3.0, and eventually rising to 21.5% of the global general AI traffic market share, while ChatGPT's share fell from 86.7% to 64.5%.
Hassabis is one of the core minds unlocking all of Google's AI capabilities.
Hassabis Stands on the Opposite Side of Silicon Valley
Demis Hassabis: The Brain Behind Google AI unveils many behind-the-scenes stories of tech history.
For instance, when Google acquired DeepMind, Facebook's Mark Zuckerberg offered a higher price, but Hassabis chose Google because he valued people over price. Zuckerberg did not demonstrate a long-term understanding of AGI, while Google co-founder Larry Page did.
"He essentially told me that while I could start a company like Google, it would consume the most precious years of my career," Hassabis recalled of Larry Page. "But if my true mission is to build artificial general intelligence, then why not leverage all the resources he has accumulated? I found that argument very compelling."
In late January 2014, Google acquired DeepMind for $650 million, a deal that seems like a bargain by today's standards.
But Hassabis's real gain was in the following decade, as Google invested billions of dollars in DeepMind's research. Hassabis's pursuit of superintelligence, which he had harbored since his teenage years, quickly entered a stage of rapid development.
Later, Hassabis was at the core of Google's leadership but chose to live in London rather than Silicon Valley. He believed the UK was more egalitarian than Silicon Valley and was unwilling to be completely assimilated by Silicon Valley's profit-driven culture.
Although Hassabis engaged with Silicon Valley and accepted its funding, he remained on the periphery, criticizing corporate leaders who prioritized profit, speed, and market dominance as too "short-sighted and profit-driven" in his conversations with Lewis.
In fact, DeepMind also had conflicts with Google over power struggles. Internally, DeepMind was dissatisfied with Google's control over AI governance and safety oversight, fearing that future AGI deployment would be driven solely by commercial profits rather than safety and ethics. They hoped to establish a governance mechanism for DeepMind independent of Google's commercial board.
However, this secret spin-off plan, named "Project Mario," ultimately failed, as Google could not accept external independent individuals having veto power over its core proprietary technology. DeepMind co-founder Suleyman left as a result and later became the head of Microsoft AI.
This is the first complete disclosure of the internal power struggle between DeepMind and Google.
This also drove Hassabis's growth, as he began to transform from an idealist into a realist. He knew that an overly idealistic, trust-based governance structure would not work inside a for-profit company. The only realistic path was to gain actual power within the company.
More importantly, the biography casts Hassabis as the antithesis of Altman: for Hassabis, the purpose of AI is scientific enlightenment, not power and wealth. This is a crucial starting point for comparing the two men.
Altman is often simplified into an AI leader pursuing power and wealth, while Hassabis is a scientist + entrepreneur. This is not just a difference in public image but a fundamental divergence in underlying values and development logic.
Altman understands the survival rules of Silicon Valley well, and being first to market ahead of competitors is a survival-level requirement. The clearest manifestation of Altman's desire for power is what his mentor Graham said: "Sam is extremely skilled at acquiring power. You could drop him on a cannibal island, and when you come back five years later, he'd be the king."
This also explains why DeepMind once lagged behind OpenAI in large language models, as Hassabis insisted on a "science-first" neuroscience path and remained skeptical of large language models. He did this for the sake of knowledge and science.
The Gap at the Top Widens, and Open Source Always Lags a Generation
In recent interviews, Hassabis has made numerous judgments about the industry landscape and future development, such as open-source models always lagging a generation behind, the competitive gap at the top widening, and the future industry needing an international organization for cooperation.
He believes that the gap between the current global top three to four leading labs and all other institutions is widening in ways that did not exist two years ago. The reasons are structural, not just about throwing more money.
"Scaling laws, the Transformer architecture, and the reinforcement learning from human feedback pipeline"—these current technological ideas have been widely disseminated and replicated. The marginal returns on these approaches are declining, and institutions that only skillfully apply known technologies are gradually hitting growth ceilings. What will determine success next is who can invent a new generation of algorithms, not who can execute existing technologies most efficiently.
Moreover, AI tools themselves are accelerating the research and development of the next generation of systems. Code assistants speed up researchers' iteration of architectures; mathematical reasoning tools help prove the properties of new models. Leading labs are not just running faster on the same track—they are changing the track itself.
"It's becoming increasingly difficult to extract the same benefits from the same technological paths. Therefore, I believe that in the coming years, labs capable of inventing entirely new algorithmic approaches will gain a greater advantage—because the potential of the previous generation's approaches has been thoroughly exhausted."
On the issue of open source, Hassabis is cautious but clear: open-source models will always lag about a generation behind the cutting edge.
Not because the open-source community lacks talent or commitment, but because catching up takes time. When a frontier lab publishes a breakthrough, the open-source community needs about six months to replicate, deeply understand, and cleanly implement it. During those six months, the frontier lab does not stand still.
The gap does not close; it only moves forward.
DeepMind's response to this is its own Gemma series of models: lightweight open-source models aiming to be the strongest at their scale, rather than stubbornly pursuing the absolute frontier.
Hassabis is therefore explicit about who Gemma is for: early developers without large-scale computational infrastructure, academic researchers with limited compute, startups that do not want to depend on APIs, and edge scenarios that must run locally. For these cases, he believes, open-source models are not compromises; they are often the more appropriate choice.
The key is that people must be clear-eyed about their positioning and boundaries.
A more important question: asked which aspects of the current AI industry lag behind earlier expectations, Hassabis points to continual learning. Once training is complete and real-world deployment begins, these systems stop learning; their ability to learn incrementally and continuously adapt to new knowledge remains weak.
"Because the industry has not yet found a fully viable solution, all leading labs are researching how to integrate new knowledge into mature systems that have already been trained for months. The human brain does this extremely naturally, probably through mechanisms like sleep and reinforcement learning."
Moreover, current AI development still has many problems to overcome, the biggest of which may be consistency. "I sometimes call these systems 'jagged intelligence' because they perform amazingly when asked in a specific way, but a slight change in questioning can lead to errors on basic questions. General intelligence should not have this flaw."
How Did Google Turn the Tide?
Organizational change precedes product change. The reshaping of organizational capabilities leads technological transformation. This is the main reason Google's Gemini was able to catch up.
In mid-November last year, Google released the Gemini 3.0 model, which brought significant pressure to ChatGPT due to its strong performance in reasoning and multimodal capabilities. A very important reason was that earlier, Google DeepMind and Google Brain, two AI teams, merged, with Hassabis becoming the head of the new combined department, while Google Brain head Jeff Dean served as the chief scientist of the new team.
Team changes are the foundation of all changes. Hassabis later said, "We integrated the company's global talent to move in the same direction; at the same time, we concentrated all computational resources to build the largest-scale models, rather than making two or three versions dispersed within the company.
"To a large extent, we just brought together the various strengths we already had and then sprinted with the extreme focus and rhythm of a startup, allowing us to return to the frontier and take the lead in multiple areas."
In fact, the outside world did not favor Hassabis at the time because Google Brain head Jeff Dean had more extensive product experience.
But Hassabis proved himself because the outside world only saw him as a scientist leading AlphaFold and overlooked his gaming career. "If you ask me to make truly innovative products, I'm very willing—that's what I tried to do in the gaming industry, where every game was based on revolutionary technology."
Overcoming the "innovator's dilemma" aboard Google's battleship is no easy feat. It requires maintaining long-term, free scientific research while relentlessly pushing results to market, iterating, and releasing them at a nearly ruthless pace.
Hassabis replicated the "commando" model from the gaming industry, where all members jointly develop a unified model; anyone can propose improvements, but only those that enhance the model's performance on the leaderboard are adopted; everything is data-driven.
The outside world saw the success of Gemini 3.0, but in fact, as early as September 2024, Google used this approach to form a counterattack team to defend against OpenAI's release of the o1 reasoning model.
At the time, legendary Google engineer Noam Shazeer (one of the core inventors of the Transformer architecture, whom Google had brought back from outside in a deal valued at $2.7 billion) and Jake Raich co-led the project. More than 250 scientists attended the preparatory meeting, each bringing a single slide. The plan was to recruit only 40 volunteers, but in the end 150 people signed up. The atmosphere was: "This is RL (reinforcement learning), this is DeepMind, we must nail it!"
The gap continued to narrow. By the fall of 2025, Gemini 2.5 Pro and OpenAI's GPT-5 performed very closely in multiple blind test duels, with Gemini often having the advantage in long-context and multimodal tasks, while GPT-5 was stronger in mathematical reasoning tasks.
If one keyword could summarize Hassabis's management philosophy, it might be relentless. "Relentless progress, relentless releases. A relentless innovation production machine. It's almost an oxymoron—can you have a continuously iterating innovation production engine? I think you can."