Microsoft CEO Nadella's Latest 10,000-Word Interview: In the AI Era, Having the Right Paradigm Doesn't Guarantee Winning

November 23, 2025

Author: Lin Yi

Editor: Key Focus

Recently, Microsoft CEO Satya Nadella had an in-depth conversation with Stripe co-founder John Collison about AI technology, the essence of business, and organizational evolution. Nadella drew a sober comparison between the current AI wave and the internet bubble of the 1990s, arguing that this wave is not a bubble but a genuine capacity crisis, with demand for computing power outstripping supply.

In the interview, Nadella painted a counterintuitive vision of the future of software: the boundaries of applications are dissolving, and Integrated Development Environments (IDEs) will return in a completely new form. He believes future interfaces will no longer be single chat windows but task control centers that integrate spreadsheets, documents, and message streams. In this vision, programmers, accountants, and lawyers alike will each have their own IDE, and the nature of work will shift to the micro-steering of thousands of AI agents. This is not just a UI innovation but a reconfiguration of human-machine collaboration, in which humans are no longer mere operators but commanders who delegate at a macro level.

During the interview, Nadella emphasized that corporate sovereignty is a core value in the AI era. He argued that with general-purpose large models knowing everything, a company's moat is no longer just traditional intellectual property but the ability to turn internal tacit knowledge into the weights of private models. If Bill Gates dreamed 30 years ago of structuring the world into an SQL database to make information accessible at one's fingertips, today Nadella is using neural networks and agents to capture a company's unique tacit knowledge and keep its core advantages from leaking into general-purpose models.

At the end of the interview, Nadella dissected Microsoft's organizational culture. He is committed to thoroughly reshaping Microsoft's cultural core from a 'Know-it-all' mentality to an open 'Learn-it-all' approach. In his view, to combat external stereotypes and internal bureaucracy, Microsoft must possess a unified 'growth mindset' to navigate every technological paradigm shift.

Highlights from Satya Nadella's Interview

1. The Truth About Enterprise AI: Don't Envy Others' Factories, Build Your Own Data Moat

Reject Model Anxiety: Nadella states that a company's core task is not to envy others' AI agents but to build its own AI factory. The most complex and crucial work is organizing the data layer so that enterprise data is ready to serve intelligence.

Memory and Association: The real killer app lies in establishing a 'Graph.' Work is not chaotic but revolves around business events. The value of AI lies in recovering these semantic connections lost in systems.

Three Pillars of an Agent: An effective agent system must have three elements beyond models: memory (long-term credit assignment), permissions (strict adherence to access restrictions), and an effective action space.

2. Redefining 'Corporate Sovereignty': Weighting Tacit Knowledge

The AI Version of Coase's Theorem: Since general-purpose large models know everything, what is the meaning of a company's existence? Nadella argues that a company's value lies in its internal transaction costs being lower than market transaction costs—its 'tacit knowledge.'

Future IP is Model Weights: 'Corporate sovereignty' means a company owns its foundational model, which captures the organization's unique tacit knowledge. Future intellectual property will exist in the form of LoRA (Low-Rank Adaptation) weights, a key to preventing core advantages from leaking into general-purpose models.
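To make this concrete: a minimal sketch, assuming the Hugging Face transformers and peft libraries, of how private fine-tuning could be captured as small LoRA adapter weights on top of an untouched general-purpose base model (the model name, training data, and output path are placeholders):

```python
# Minimal LoRA sketch: capture "tacit knowledge" as small adapter weights.
# Assumes Hugging Face transformers + peft; model name and paths are placeholders.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-2-7b-hf"  # any open base model (placeholder)
model = AutoModelForCausalLM.from_pretrained(base)

# Low-Rank Adaptation: train small adapter matrices instead of the full model.
config = LoraConfig(
    r=16,                                  # rank of the adapter matrices
    lora_alpha=32,                         # scaling factor
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    lora_dropout=0.05,
)
model = get_peft_model(model, config)
model.print_trainable_parameters()         # typically well under 1% of base parameters

# ... fine-tune here on private, internal documents (omitted) ...

# Only the adapter weights are written out: a small artifact that can be
# versioned and guarded as corporate IP while the base model stays generic.
model.save_pretrained("acme-tacit-knowledge-lora")
```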

3. Infrastructure Construction: This Time, It's Not a Bubble but Capacity Hell

Difference from the 2000 Bubble: Nadella points out that the 2000 bubble was about dark fiber: excessive infrastructure was laid but utilization stayed low. In today's AI buildout, all computing capacity is sold out, with bottlenecks in power supply, turbines, and data-center shells. Facing increasingly strict data regulations, Microsoft must build data centers globally to meet countries' data sovereignty requirements.

Technology Stack Layout: Nadella conceptualizes Microsoft's AI stack into two core layers. The bottom layer is the infrastructure business, the 'Token Factory,' with a core metric of pursuing extreme capital efficiency—how many tokens can be produced per dollar and per watt. The upper layer is the 'Agent Factory,' with a core focus on how to most effectively utilize these tokens to drive business outcomes, maximizing the value of each token.

Rejecting the Temptation of Over-Bundling: Although Microsoft possesses full-stack capabilities from chips and cloud facilities to applications, Nadella emphasizes that each layer (infrastructure, data layer, applications) must have independent market competitiveness. He opposes excessive reliance on ecosystem bundling, arguing that customers should not be forced to accept a 'full package' but should have the right to choose 'which door to enter' Microsoft's ecosystem.

4. The Future of Software Interfaces: Everyone Will Have an 'IDE'

The Return of the IDE: Although people often say 'applications will disappear,' Nadella believes IDEs will return in a new form. Future software interfaces will be a fusion of inboxes, messaging tools, and 'blinking cursor canvases.' Not just programmers but accountants and lawyers will have their own 'task control centers' to micro-steer thousands of AI agents.

5. Historical Lessons: Having the Right Paradigm Doesn't Guarantee Winning

Microsoft's Internet Past: In the 1990s, although Microsoft saw the direction of the 'information highway,' its initial bet on 'interactive television' was defeated by the open internet. The lesson for the AI era is that even if the paradigm is correct, specific architectural choices and business models still determine success or failure.

The Inevitability of Organizational Layers: Even in open ecosystems, 'organizational layers' with gatekeeping power eventually emerge (e.g., search engines, app stores). In the AI era, who will become the new organizational layer (e.g., ChatGPT's current role) remains uncertain.

Here is the transcript of Satya Nadella's interview

1. About the Ignite Conference and Enterprise AI

John Collison: So, what should everyone be excited about at the Ignite Conference?

Satya Nadella: For us, the core task at the Ignite Conference is to ensure that AI can be widely adopted within enterprises. If there's only one thing that matters, it's not to envy others' 'AI factories' or AI agents but to figure out how to build your own AI factory.

Organizing the data layer is crucial in this process, and it has proven to be perhaps the most complex part. You need to cover the entire enterprise's data so that it is ready to serve intelligence. This will be our focus moving forward.

John Collison: We don't seem to have seen truly in-depth applications in enterprise environments yet. Although we have Copilot, most people don't have this capability in their daily work. Do you think people are underestimating or not fully utilizing the AI that already exists?

Satya Nadella: Yes, this is very interesting. Because for me, this is precisely the killer feature. The most important thing we've done is establish the 'Graph.' In my view, the Graph that sits beneath all the databases is the most crucial thing for any company—it carries your emails, documents, Teams calls, and other data.

The key lies in the relationships between this data. People's work is not chaotic or unstructured; all work revolves around a business event. These semantic connections exist in people's minds but are often lost in systems. Now, for the first time, we can better recover these memories and associations through AI.

John Collison: Why is the penetration rate of AI in enterprises still insufficient compared to individual users?

Satya Nadella: Because I think when people use many large language model (LLM) tools now, they may only upload a single document. But most companies haven't integrated all functions or connected the company's complete context to the AI they use daily.

In reality, there are two layers of challenges here. The first is change management. This is the fastest deployment we've had in all our office suites historically, but ultimately someone has to use it. In an enterprise environment, this means all data discovery processes must be feasible, and all data governance must be in place. We must connect permission scopes to Copilot to ensure that when I retrieve content, if it's confidential (e.g., protected by IRM), it can be correctly identified and handled. We've done a lot of work, and now we're starting to see results.

The second point I want to make is that running AI across the entire Microsoft 365 Graph is one thing, but the next challenge is ERP system data. Current connectors are like 'two thin straws,' not efficient enough. You need a better data architecture, basically bringing all this data into one layer through semantic embedding.
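To make the shape of this concrete, here is a toy sketch (not Microsoft's implementation) of permission-aware semantic retrieval: every record carries an access-control list, the permission filter is enforced before ranking, and a bag-of-words similarity stands in for a real embedding model and vector index:

```python
# Toy permission-aware retrieval: filter by ACL first, then rank by similarity.
from dataclasses import dataclass
import math

@dataclass
class Record:
    text: str
    acl: set  # principals allowed to read this record

def embed(text: str) -> dict:
    """Stand-in embedding: bag-of-words term frequencies."""
    vec = {}
    for token in text.lower().split():
        vec[token] = vec.get(token, 0) + 1
    return vec

def cosine(a: dict, b: dict) -> float:
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, user: str, corpus: list, k: int = 3) -> list:
    qv = embed(query)
    visible = [r for r in corpus if user in r.acl]   # enforce permissions first
    ranked = sorted(visible, key=lambda r: cosine(qv, embed(r.text)), reverse=True)
    return [r.text for r in ranked[:k]]

corpus = [
    Record("Q3 revenue forecast for the enterprise segment", {"cfo", "finance"}),
    Record("Teams call notes: supplier renegotiation timeline", {"ops", "cfo"}),
    Record("Confidential M&A memo (IRM protected)", {"cfo"}),
]
print(retrieve("revenue forecast", user="finance", corpus=corpus))
# Only the record the 'finance' principal is entitled to see is returned.
```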

John Collison: Making company data 'accessible at one's fingertips' has been a vision for decades. I read the book 'Softwar' about Oracle's history, which described Larry Ellison pitching executives in the late 1990s on putting all company data in one place so executives could get answers with one click instead of emailing analysts to go investigate. Why does this pitch persist? Is it because companies never actually ate their data-infrastructure vegetables? Are we finally going to solve the data pipeline problem this time?

Satya Nadella: You could challenge that premise, but that's precisely the point. If I remember correctly, Bill Gates coined the term 'Information at your fingertips' in a COMDEX speech in the 1990s. Bill has always been fascinated by this.

I clearly remember him saying in a review in the 1990s, 'There's only one category of software, and that's information management. You must model people, places, and things, and that's it. You don't need to do anything else because all software is essentially information management.'

This has always been Bill's dream. He hated file systems because they were unstructured. If everything were an SQL database, he would be happy because he could directly run SQL queries and program all information. For him, that was the elegant solution to make information accessible.

But the problem is that people are messy. Even if data is structured, it's not truly centralized in one index, and I can't run a single SQL query to get all information. This has always been the fundamental challenge.

I would say that's a product of the old world. Back then, none of us thought that AI and large-scale deep neural networks would suddenly become the key to solving this problem. What we need to do is not build some stylized data model but use neural networks to find those patterns. In fact, for a long time, we were obsessed with how complex data models needed to be to capture the essence of a company. It turns out that this can be achieved through massive parameters in neural networks with huge computing power.

John Collison: Like some smart remote employees who can grasp the key points by reading documents just five minutes after joining. Models can be arbitrarily intelligent; they can perform RAG (Retrieval-Augmented Generation) and access all content in an enterprise. But this is different from the model truly 'knowing' something. Unless you train a customized model internally within the company, these models won't truly become smarter about your business. If the 1000th query isn't smarter than the first, where do you think this is heading?

Satya Nadella: This involves two points. If we're talking about contextual learning or continuous learning, that is indeed the ultimate goal. This echoes my previous point: if you separate the model's 'cognitive core' from its 'knowledge,' you have a formula or algorithm for continuous learning.

I believe three things must exist outside of model runtime, and we need to get them right:

Memory: Including short-term and long-term memory. Humans are very good at long-term credit assignment. When an AI model can both reward and punish based on long-term memory (i.e., has long-term credit assignment capability), you know you have true memory.

Permissions/Entitlements: The model must strictly adhere to the permission system at runtime. Who am I? What do I have access to? The model must meet these restrictions.

Actions: The action space must be effective.

If you combine these three—actions, permissions, and memory—this constitutes the context. By definition, these must be located outside the model rather than built into it. For example, in today's Copilot, you need the system to run between OpenAI models and Claude models. I believe this is precisely the direction frontier technologies need to develop.
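As an illustration of these three elements living outside the model, here is a minimal hypothetical sketch; the model itself is stubbed out, and all names are invented:

```python
# Sketch of agent context held outside the model: memory, entitlements, actions.
from dataclasses import dataclass, field

@dataclass
class AgentContext:
    user: str
    memory: list = field(default_factory=list)       # long-term memory of outcomes
    entitlements: set = field(default_factory=set)   # what this user may do
    actions: dict = field(default_factory=dict)      # name -> callable action space

    def remember(self, event: str, reward: float) -> None:
        # Long-term credit assignment: keep outcomes so later decisions
        # can be rewarded or penalized based on what actually happened.
        self.memory.append((event, reward))

    def act(self, name: str, *args):
        if name not in self.entitlements:             # permissions checked at runtime
            raise PermissionError(f"{self.user} is not entitled to '{name}'")
        return self.actions[name](*args)

def send_report(recipient: str) -> str:
    return f"report sent to {recipient}"

ctx = AgentContext(
    user="analyst",
    entitlements={"send_report"},
    actions={"send_report": send_report, "delete_ledger": lambda: "deleted"},
)
ctx.remember("drafted Q3 summary", reward=1.0)
print(ctx.act("send_report", "cfo@example.com"))      # allowed
# ctx.act("delete_ledger")                            # would raise PermissionError
```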

2. CEO's Daily Routine and Management Style

John Collison: I'd like to ask some questions about your work style. What does your typical day look like? How do you stay informed about what's happening inside Microsoft through modern 'walk-around management'?

Satya Nadella: My typical day mainly consists of two ends. The first is customer-related matters. I have at least one or two Teams calls with customers every day. This is one of the most helpful ways to keep me grounded.

The second is meetings. As CEO, I realize that meetings mainly come in two types. One is where I just need to call everyone together and then stay silent because the act of 'calling' is what matters, and the work is either already done or will be done later. The other is more important meetings where I need to learn, make decisions, or convey information. You mentioned that you 'lurk' in Teams channels, and yes, Teams channels are almost omnipresent for me. It's like strolling through virtual corridors and lurking around those channels. If there's anything I've gained, it's that I've learned the most there. This is also where I establish the most connections. I discover, 'Wow, this person is developing an Excel Agent,' or learn about the evaluation methods they want. I learn more from this than from doing anything else.

John Collison: It's like the Microsoft Teams team focusing on their product, and then Satya suddenly jumps in with a question?

Satya Nadella: Yes, although sometimes I feel like I should have more access. Actually, my biggest gripe is that I can't just wander into all the places I want (laughs). But being able to dive right in and experience things firsthand is fascinating. It somehow normalizes this kind of communication. Now employees don't hold back from sharing their true opinions with you.

John Collison: You're well-known in Silicon Valley's inner circle for maintaining a high sense of connectedness. I remember you visiting Stripe's office when we were still a small company. Why are you more willing than most other CEOs to spend time meeting with startups?

Satya Nadella: I grew up in Microsoft's culture, and I have this 'evangelist gene' for developer relations deeply ingrained in me. Two things are deeply rooted in my mind: If you don't pay attention to where developers are going, it's hard to stay relevant as a technology platform. You need to understand new workloads to build a technology platform.

If you don't pay attention to startups, it's hard to understand trends in platforms or workloads. So this is almost something I have to do. The other thing is, I get tremendous energy from it. I've always thought founders are like magicians, creating something out of nothing. It's like performing magic tricks—I always want to figure out, 'How exactly is that done?'

Actually, one thing we learned from you is rediscovering what Microsoft is really good at—following developers and being where startups gather.

That somehow also led me to GitHub. Obviously, GitHub is a fantastic asset, and we need to be excellent stewards of the open-source ecosystem. But more importantly, every startup's repository is on GitHub. For us, being involved isn't just about strategic positioning—it's about simply learning and building better products. Because sometimes you lose your aesthetic sense and forget how to deliver products with minimal friction, and startups have the least patience—time to value has to be as short as possible.

3. The Future of Software Interfaces: Generative UI and the Return of IDEs

John Collison: Is Microsoft considering Generative UI? Current software is still stuck in old paradigms, like 'write code, finalize, release.' In the era of cloud delivery, could we render UIs in real-time based on personal needs?

Satya Nadella: Absolutely, that's where things are heading. As generative capabilities improve, you can generate code and also generate UX skeletons around any customization. At Microsoft, we've long been thinking about what the differences between documents, websites, and apps really are. You could generate any of them instantly based on the format you want to present.
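A purely illustrative sketch of the idea: a stubbed model call emits a UI specification, and a thin renderer turns it into a skeleton on the fly instead of shipping a fixed app (the spec format and renderer are invented for illustration):

```python
# Generative-UI toy: a model (stubbed) proposes a UI spec; a renderer builds it on demand.
def propose_ui(intent: str) -> dict:
    """Stand-in for a model call that returns a UI specification."""
    return {
        "title": f"Workspace for: {intent}",
        "components": [
            {"type": "table", "columns": ["item", "status", "owner"]},
            {"type": "chat", "placeholder": "Ask about this data..."},
        ],
    }

def render(spec: dict) -> str:
    parts = [f"<h1>{spec['title']}</h1>"]
    for c in spec["components"]:
        if c["type"] == "table":
            header = "".join(f"<th>{col}</th>" for col in c["columns"])
            parts.append(f"<table><tr>{header}</tr></table>")
        elif c["type"] == "chat":
            parts.append('<input placeholder="' + c["placeholder"] + '">')
    return "\n".join(parts)

print(render(propose_ui("reconcile Q3 supplier invoices")))
```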

But here's the funny thing: even though everyone talks about 'apps disappearing,' our old-school IDEs—whether Excel or VS Code—are somehow back in a way. Because the reality is, AI will generate outputs, and I need to understand what those outputs mean. Actually, I need an excellent editor that lets me diff and iterate with AI.

I think one of the most exciting things is the emergence of entirely new categories of 'highly refined IDEs.' They're more like Mission Control. If I have thousands of agents running, how do I make sense of it all? That requires 'micro-steering of thousands of agents.' That's where the future of IDEs, inboxes, and messaging tools is headed. It's no longer just about sending messages or triage—it's about macro-delegation and micro-steering.

John Collison: Are you saying that in the future, not just programmers but accountants and lawyers will all have their own IDEs?

Satya Nadella: Exactly. That's how I think about collaborating with agents. I give a bunch of instructions, they start executing, sometimes running for hours or even days, and then come back to report.

For this kind of workflow, we need context for micro-steering. It can't just turn into the next 'notification hell' where I get a five-word alert with no context.

I think software will ultimately look like this: it resembles an inbox, a messaging tool, and has a canvas with a blinking cursor. We love the tabular form of spreadsheets, the linear form of documents, and message streams. Future interfaces will blend these forms. For example, GitHub Copilot Workspace, which we're experimenting with, is this kind of 'Mission Control.' You'll have five or six different branches, launch autonomous agents to execute tasks, then they return, and you triage code merges. I think the next great IDE will emerge here.
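As a toy sketch of that 'Mission Control' pattern: macro-delegate tasks to stubbed agents running on different branches, then micro-steer by triaging what comes back (all names and thresholds are invented):

```python
# Mission-control toy: fan work out to agents, then triage their results.
import random

def run_agent(branch: str, task: str) -> dict:
    """Stand-in for an autonomous agent that works on a branch for a while."""
    return {"branch": branch, "task": task, "confidence": random.random()}

def mission_control(tasks: dict) -> None:
    # Macro-delegation: fan the work out.
    results = [run_agent(branch, task) for branch, task in tasks.items()]
    # Micro-steering: triage what comes back instead of reading every diff.
    for r in sorted(results, key=lambda r: r["confidence"], reverse=True):
        verdict = "review and merge" if r["confidence"] > 0.7 else "send back with guidance"
        print(f"{r['branch']}: {r['task']} -> {verdict} ({r['confidence']:.2f})")

mission_control({
    "feature/invoice-matching": "implement fuzzy matching for supplier invoices",
    "fix/timezone-bug": "repair daylight-saving offset in reports",
    "chore/upgrade-deps": "bump vulnerable dependencies",
})
```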

4. Lessons from History: The Internet Wave

John Collison: There's a common pattern in tech: excitement about a technology far exceeds its maturity. Like the voice AI in '2001: A Space Odyssey'—it took us 50 years to get there. I often think about Microsoft in the '90s. Bill Gates wrote the famous 'Internet Tidal Wave' memo, explicitly stating that the internet was the top priority. But Microsoft's vision for the internet then was the 'information superhighway' and set-top boxes, not the PC internet we later saw. What lessons should we take from that in the current AI wave?

Satya Nadella: That's a great question. Even back then, as a junior employee, my interpretation of that history was: we thought we understood the internet, but we really didn't.

5. The Future of Software, Workflows, and Business

John Collison: That brings me to another topic. We have IDEs for non-software engineers. I still feel like this will be a product for financial professionals in the next decade. In hindsight, this is clearly the right user interface. Like spreadsheets—when they emerged as a UI, it felt like they appeared out of nowhere. Speaking of spreadsheets, trying to challenge Excel has almost become a rite of passage for some software companies, yet Excel seems to have endured for forty years. Why is it so durable?

Satya Nadella: Yeah, it's incredible. I think it's the power of 'lists' and 'tables' combined with software's malleability—the two are a perfect match. That's why it's like an ever-present 'blinking canvas.' We might add lots of fancy features on top, but the core logic remains unchanged. And Excel is Turing Complete. We often don't give it the credit it deserves—it's the world's most accessible programming environment. You can start programming without even thinking about it. There's also something beautiful about it. Like how we now discuss AI needing 'change management,' but when spreadsheets emerged, no one talked about change management—people just started using them, and workflows naturally changed.

It's like someone described to me: he joined General Reinsurance during the fax machine era, and he remembers how email and Excel fundamentally disrupted entire workflows. I think AI will do the same—work outputs and workflows will be re-examined from the ground up. This is an incredibly interesting time to be in software, much more so than five or ten years ago.

If you asked me what was hot back then, it was cloud computing, multi-region databases like Cosmos DB, etc. We thought we'd reached some stable state. Then the pandemic hit, and cloud computing entered another hyper-growth phase (with the explosion of Teams and other apps).

John Collison: At Stripe, we've seen similar charts—there was a clear discontinuity where e-commerce activity jumped stepwise and never fell back. Even after people returned to physical offices, online activity remained at that elevated level or even continued rising. I believe Azure saw the same.

Satya Nadella: Exactly, it never decreased. Since we're talking about business, let's discuss the work we're doing together.

What we're exploring is: What are the most merchant-friendly rules? What are the most customer-friendly rules? Is there a perfect match? 'Conversational Commerce' has been a topic of discussion. Now I think, with the progress you and we are making, we can truly bring merchants and end-users together to create this 'agentic' experience. This is still early days—it must be done tastefully, in a way that earns user trust. I'm very excited about it.

John Collison: We've seen how past attempts differ from now. There have been similar attempts before, like buying directly on Twitter or Instagram, but the difference this time is: first, with AI, merchant integration is vastly easier than ever before; second, this experience is highly appealing to end-users. Early customer feedback data confirms this. We launched related functionality on ChatGPT a few weeks ago, and the data shows this approach is far more convenient for end customers.

Satya Nadella: I'm always looking for related gear. But whether it's Amazon or Walmart, the search experience within websites can sometimes be really poor. The current chat experience starts great but often ends up directing users back to traditional product catalogs. While catalogs remain core, seamlessly integrating 'checkout' with 'catalog' would be truly smooth.

John Collison: When doing product research, I found that searching with AI apps works far better than traditional keyword searches. Surprisingly, until last year, we thought keyword searches were acceptable.

Satya Nadella: It's like having a catalog tailored just for you. This isn't just a search engine results page. For example, when buying furniture for our home, we'd discuss: 'There's so much space here—what would look good? The dimensions need to fit, and the style should be high-end but not flashy.' We couldn't search like that before—it was crazy. Now, this ability to customize, convey ambiance, and overall aesthetic is within reach.

My wife is an architect, and she uses Copilot Notebook to record all her drawings. She can ask AI very high-level reasoning questions, like 'What should I put inside?' The AI can read architectural sketches, combine them with public furniture catalogs, reason, and compose elements. It's truly magical.

John Collison: In Stripe's commerce space, we're embracing AI wholeheartedly. We believe a massive amount of work will shift here. If you're doing open-ended exploration, like 'I want to buy an outfit for an occasion,' AI provides a far superior experience than clicking through a list of search results. Even for highly targeted searches, like 'I need this specific bike component,' specifying exact parameters through AI is much better.

If AI can encompass both non-directed discovery and highly directed search, that basically covers all commercial behavior on the internet. The only thing left might be recurring essential purchases, like 'reordering pet food.' Of course, Etsy is a great first partner for us because all their products are custom.

Satya Nadella: Yes, that makes perfect sense. At the 'discovery' level, platforms like Instagram have already done well. Now the question is, what's the new discovery layer in this conversational interface? Pinterest made interesting attempts. If we combine this discovery layer with the conversational interface, it will benefit everyone.

John Collison: One thing we're working on is making merchants' product catalogs, inventory, etc., remotely discoverable and purchasable. Users don't have to jump to the merchant's side to complete the entire process—they can do it directly within an intelligent experience like Copilot. That's what we're connecting at the infrastructure level. That's also why we think social platforms like Pinterest, Instagram, and Twitter will try e-commerce experiences again—because now there's more merchant support and adoption.

Satya Nadella: We have a project called NLWeb that aims to give every merchant's catalog a natural-language interface, much like a website, so AI agents can interact with it and perform deep searches. Yes, one of the biggest challenges today is catalog quality and the ability to do deep search using reasoning. If you can solve that, every product can find its precise query match.
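For illustration only (this is not the actual NLWeb protocol), here is a sketch of what a natural-language catalog endpoint might do: turn a free-form query into structured constraints and return the matching items; the catalog, parser, and attributes are all invented:

```python
# Toy natural-language catalog search: parse a query into constraints, then filter.
CATALOG = [
    {"name": "Walnut side table", "style": "mid-century", "width_cm": 45, "price": 180},
    {"name": "Oak console",       "style": "minimalist",  "width_cm": 120, "price": 420},
    {"name": "Brass floor lamp",  "style": "art deco",    "width_cm": 30,  "price": 260},
]

def parse_query(query: str) -> dict:
    """Stand-in for an LLM that extracts structured constraints from natural language."""
    constraints = {}
    if "under" in query:
        constraints["max_price"] = int(query.split("under")[1].split()[0].strip("$"))
    for style in ("mid-century", "minimalist", "art deco"):
        if style in query.lower():
            constraints["style"] = style
    return constraints

def search(query: str) -> list:
    c = parse_query(query)
    return [item for item in CATALOG
            if item["price"] <= c.get("max_price", float("inf"))
            and item["style"] == c.get("style", item["style"])]

print(search("something mid-century under $200 for a narrow hallway"))
```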

John Collison: We're building this platform. In the agentic commerce space, we've launched open-source protocols like the Agentic Commerce Protocol. Of course, we also have our regular Stripe payment products. From a payments perspective, this is tricky because you want AI apps to act on a user's behalf and make payments across different sites on the web without sharing all of the payment details everywhere.

What we're building in Agentic Commerce is a platform business. You're excellent at this—what advice do you have for us as we build products in this early stage, especially when product-market fit is already evident?

Satya Nadella: I think you're on the right path. That means participating in this agentic workflow. Right now, every merchant has to go to a provider like Stripe and say, 'Hey, I have a catalog, I have a checkout page—please help me connect with AI agents in the most frictionless way.'

That's exactly why I'd use Stripe. I think the ability for long-tail merchants to easily click and enable agentic commerce will be a huge driver. The good news is, while ChatGPT is currently the biggest gateway, Google, Meta, Perplexity, and we will all be in this space—there will be plenty of competitive entry points.

What's even more interesting is that these platforms themselves also want to support natural language queries on their websites or apps. Therefore, this needs to be effectively addressed. You can't expect a small merchant to deploy an MCP (Model Context Protocol) server or implement various complex protocols. There must be a 'simple button.'

John Collison: I think another trend we'll see is that many agent-based experiences are gradually converging. Des Traynor from Intercom is creating AI-assisted or even AI-replaced customer service. They've found that users initially come for help but discover it's a better way to browse the website. It's almost like a command line.

I wonder to what extent these experiences will converge? For example, we handle purchases on one end and customer service on the other. When will it become a universal command-line application? Returning to the example in the fashion sector, the current experience is still based on poor keyword searches and manual tagging. In my view, this should entirely be an interactive, AI-based experience, similar to Midjourney's prompts, where you can say, 'The image isn't quite right; please modify it this way.' Doing this in the commercial sector would be highly interesting.

Satya Nadella: Intuitively, customer service is also a form of inside sales. In an agent-based world, the seams between these spliced-together components will no longer be as apparent as they are today.

6. AI Brand Loyalty

John Collison: Perhaps those 'swim lanes' (functional divisions) created due to software and organizational structure limitations, such as the distinction between customer service and SDR (Sales Development Representative), will likely be abandoned.

We talk a lot about models like Copilot, ChatGPT, and Gemini. There's debate over how important model quality is. Will people be loyal to an AI brand the same way they are to Coke? While Coca-Cola faced backlash for changing its formula, people still have brand preferences. For instance, I use the o3 model, while my wife uses GPT-5. I'm always surprised she doesn't use a smarter one, but she's loyal to GPT-5. Users also rebelled when GPT-4 was taken away. Do you think people will remain loyal to specific models or AI brands? How will this affect business strategies?

Satya Nadella: In consumer products, this is the first time we're seeing this. When models change, the impact isn't uniform for everyone. Factors like personality and style become new dimensions. This is also an argument that 'style' could become a differentiating factor. It's like a combination of IQ (Intelligence Quotient), EQ (Emotional Quotient), and stylistic elements.

But in the long run, I believe we must ensure that models are most capable of handling the toughest, high-value tasks. As a product builder, my view is: while we'll showcase the strongest model, what's actually used in production is a combination of multiple models.

One of my favorite examples is the new feature on GitHub that uses 'Auto' mode. While people obviously still prefer models like Sonnet and want to use them, ultimately, what I truly want is an intelligent 'model selector.' It can't just be a simple router; it must have intelligence to judge: 'This task requires this level of cognitive resources or this type of intelligence; this is the complexity level for a code repository or PR task.'

That's the future of agents. You need an ensemble of models and some agents in the middle to coordinate this combination to meet your needs.
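A hedged sketch of that 'intelligent model selector' idea: estimate task complexity and route each request to a model tier accordingly; the model names, signals, and thresholds here are invented for illustration:

```python
# Toy model router: score task complexity, pick the cheapest adequate model tier.
MODEL_TIERS = [
    ("small-fast-model",    0.0),  # cheap default
    ("mid-reasoning-model", 0.5),  # moderate complexity
    ("frontier-model",      0.8),  # toughest, highest-value tasks
]

def estimate_complexity(task: str) -> float:
    """Stand-in scorer; a real router would use a learned classifier."""
    score = 0.4 if len(task) > 200 else 0.0
    for signal in ("refactor", "multi-file", "legal", "architecture", "proof"):
        if signal in task.lower():
            score += 0.3
    return min(score, 1.0)

def route(task: str) -> str:
    c = estimate_complexity(task)
    eligible = [tier for tier in MODEL_TIERS if c >= tier[1]]
    return max(eligible, key=lambda tier: tier[1])[0]

print(route("fix a typo in the README"))                                # small-fast-model
print(route("plan a multi-file refactor of the billing architecture"))  # frontier-model
```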

John Collison: Isn't letting users choose themselves also a form of intelligence? For queries like 'Where should I go for ice cream?' I'd manually select o3 because I always want the best.

Satya Nadella: Maybe, but it's more about habit. Indeed, we all dislike having default settings changed. If the model selection feature were removed now, it would indeed be a problem. But I also believe that if I can trust the system to always make choices in my best interest, this 'handoff' would bring a sense of pleasure. If we can build that trust, that's indeed the goal.

7. Microsoft's Technology Stack Layout

John Collison: So, regarding Microsoft's models, you have a presence at every layer of the technology stack: Copilot, stake in OpenAI, Azure layer, chips, etc. What must you win in this stack? Will you create industry solutions?

Satya Nadella: At its core, I conceptualize it in two layers.

First is our infrastructure business. We must be highly skilled at building what I call 'Token factories.' It's about how many tokens can be generated per dollar, per watt, and we need to be extremely efficient in this regard.

Then there's another layer, which I call the 'Agent factory.' The difference from the Token factory is that the Agent factory most effectively utilizes these tokens to drive business outcomes or consumer preference results. It's about the value of each token.
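As a back-of-the-envelope illustration of the token-factory metric (tokens per dollar and per watt), with every number below invented:

```python
# Illustrative "token factory" efficiency math; all inputs are assumed, not real figures.
fleet_throughput_tps = 2_000_000   # tokens/second across a hypothetical GPU fleet
utilization = 0.6                  # fraction of time serving real traffic
power_draw_watts = 5_000_000       # IT power draw of the fleet
hourly_cost_usd = 4_000            # amortized capex + energy + ops per hour (assumed)

tokens_per_hour = fleet_throughput_tps * utilization * 3600
tokens_per_dollar = tokens_per_hour / hourly_cost_usd
tokens_per_watt_hour = tokens_per_hour / power_draw_watts

print(f"tokens per dollar:    {tokens_per_dollar:,.0f}")
print(f"tokens per watt-hour: {tokens_per_watt_hour:,.0f}")
```

The agent-factory question is then the complement: given those tokens, how much business outcome each one drives.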

Around these two cores lies a whole set of tools. It's somewhat like the new application server layer. Every new platform has its counterpart, like web servers in the World Wide Web era. Now, this is the AI server or AI cloud.

So, we'll definitely build our own intelligent systems, namely the Copilot family: Microsoft 365 Copilot for information work, GitHub Copilot for software development, and security, where we'll definitely be a major force. These will be the three horizontal lines. Additionally, we have business applications. In vertical sectors, we've done a lot in healthcare and science.

Healthcare: We acquired Nuance and now have a product called DAX Copilot for speaker separation and recording of doctor's notes. This gives doctors more time with patients, with AI handling everything from coding to meeting minutes. It's also embedded within the Epic system.

Science: This is a vast field I call 'outer-loop orchestration.' The scientific method essentially involves formulating hypotheses, conducting experiments in computers, and refining through feedback. This is a toolchain. We're trying to combine GitHub Copilot with Microsoft 365's knowledge work capabilities to serve scientists. This might even involve interfacing with MCP servers in labs to coordinate everything and accelerate the scientific loop.

John Collison: As a platform company, you always need to decide when to bundle products together and when to let them stand alone. Apple initially only allowed iPods to be used with Macs to drive Mac sales before opening them up to Windows. Microsoft's history is also full of such examples. Early on (around 1985), Microsoft was very open, with most revenue coming from Macintosh applications and third-party apps like Lotus 1-2-3 on the operating system. Later, there was a time of tight coupling between Windows and Office. Azure initially followed suit but later fully embraced Linux. Now, as a platform company, Microsoft seems increasingly accepting of modularity, like how Stripe Radar can be used even without using Stripe payments. How do you generally view this framework? When to couple, and when to sell independently?

Satya Nadella: That's a good point. The way I think about this issue is that we often exaggerate the so-called 'zero-sum game.' In reality, analysis in many areas should be more nuanced because they are, by definition, multi-player interactions.

Cloud computing is a classic example. When I first started working on Azure, AWS was already far ahead. People would say to me, 'Oh my gosh, hasn't AWS already won?' But it turned out the market was large enough and multi-participant. So, was there still room for a second cloud provider? After all, there was competition from companies like Oracle and IBM. In all the middle-tier server areas, my feeling at the time was that enterprise and commercial customers would generally demand some diversification. This was the structural insight that motivated us to dive in, and the rest is history.

Over-packaging things might actually narrow your Total Addressable Market (TAM) to some extent, making you uncompetitive. For example, if Azure had stayed what we originally called it, Windows Azure, and had really only been that, it would have been a problem. Because Azure couldn't just serve Windows; it had to support Linux as a 'first-class citizen,' and it had to support MySQL and PostgreSQL as 'first-class citizens.'

To some extent, this was also to ensure we could excel at SQL Server. But we had to make our work as impactful as possible, just like Amazon using PostgreSQL or MySQL. The main driving force was the overall Total Addressable Market (TAM), which is also what customers expect from us. While we'll face fierce competition, this, to me, precisely defines modularity.

What truly maximizes the market opportunity for my technology stack? We're a focused company, which is why we're not a conglomerate. Therefore, there should be a theoretical framework for integration benefits and platform effects. What is it? How do we excel at it?

I believe this should be the case at every layer of the technology stack. Even at Azure's infrastructure layer, customers should be able to say, 'I only want to use Azure's bare-metal services; I just need Kubernetes distributed clusters, but I only need you to manage that part; I'll bring all my software.' No problem, we have to win that workload. Maybe someday, when they find managing multi-region databases too cumbersome, they might say, 'Oh, I'll use Cosmos DB then.' But that should be an independent decision.

John Collison: Isn't this always debatable, though? Does supporting Linux actually sell more Azure? People from the Windows team might say, 'Yeah, but you're holding back Windows Server.' Some areas are open as you described, but in others, like Microsoft Flight Simulator not being available on PlayStation but on Xbox, the integration feels natural. For example, Teams' chat and video features aren't sold separately; they're part of the same thing, making the whole package more attractive. So, don't you always end up in this debate: do the costs of bundling outweigh the benefits?

Satya Nadella: Yes, I think some examples, like Teams, are classic cases. Teams as a product was born by integrating several things, much as Outlook was. The introduction of Outlook was similar: previously we had a PIM (Personal Information Management) app, a standalone email client, and a separate calendar. Outlook was the first 'scaffolding' to combine those three to accomplish a task. Teams did the same, integrating chat, channels, video, and other features. In this case, the bundling is the product; it is the product's scaffolding.

Of course, you could also say, 'Hey, there needs to be an open market, and it needs to integrate with other things.' So modularity must be well thought out and meaningful at the atomic level. You can't overthink those synergies or integration effects; otherwise, you'll lose competitiveness. A classic example is if you build an amazing public cloud, but it only runs Windows workloads or SQL workloads, you'll basically only capture a small portion of the market.

So, meeting customer needs aligns with our interests. My understanding of the AI stack is similar: we have an infrastructure business, an application server/data layer business, and an application business. This is just a simplified explanation. I hope those three things can exist independently, standing on their own merits. Of course, we hope there's a feedback loop between these three layers, but customers and partners should be able to choose which door to enter autonomously.

John Collison: My impression is that when you took over Microsoft, you shifted the company culture from a highly bundled state. Previously, you bought a Windows machine, ran Microsoft Access and SQL Server, and everything was neatly packaged together, with users living in this Microsoft ecosystem. Now, you've moved towards a more open and interoperable strategy.

Satya Nadella: I think I'd say my approach actually harks back to an earlier period, perhaps the 1980s at Microsoft. Because most of the well-known things did happen in the 1990s when Microsoft was almost just Windows. But before that, many of our things started coming together, whether on the client or server side.

Like the metaphor you mentioned, it's like returning to the 1980s. For example, we developed Office for the Mac back when Windows itself arrived relatively late. In fact, Bill Gates' vision when founding Microsoft was to treat it as a 'software factory.' I don't have a preference for any single category; I just want to build the best software factory, constantly producing various things: you want a flight simulator? No problem, we have one; you want a BASIC interpreter? We have one; you want an operating system? We have one too. So, in a sense, that was the original idea.

And at some point, we got stuck in a stalemate between four or five parts, like Windows, Windows NT, and client/server architectures. So when I became CEO, and even before that when I was in charge of the cloud business, I realized: this is precisely the time when the market will become larger and vastly different. At that time, we didn't have a mobile platform either, so it was indeed necessary to ensure we remained relevant in the largest markets by configuring our product portfolio reasonably to cover them.

Frankly speaking, if this is not in the company's core DNA, I don't think it can be executed very well just because I, as the CEO, say 'I want to do this.' In fact, we can bring software to every platform, and this itself is part of the company's core DNA.
