03/11 2026
Recently, a new narrative has emerged in Silicon Valley: Cursor is outdated.
This viewpoint comes from Jerry Murdock.
If it were just an ordinary investor speaking casually, people might just listen and move on. But who is Jerry Murdock? In 1995, he co-founded Insight Partners with Jeff Horing.
This fund has invested in Twitter, DocuSign, Shopify—all top-tier companies. Now, their assets under management exceed $90 billion.
This man is a true titan in the software investment circle.
More interestingly, he didn't reach this conclusion by reading macro reports or listening to industry gossip. Instead, he looked directly at the companies he invested in.
Jerry's statement is very direct: the AI-native companies he invested in have stopped using Cursor.
This is quite intriguing. After all, Cursor has been the brightest star in this wave of AI entrepreneurship. In the AI programming space, its growth rate is almost unbeatable.
At the beginning of 2025, Cursor's annualized revenue was around $100 million. By November last year, its ARR had surpassed $1 billion. Its latest funding round pushed the company's valuation to nearly $30 billion.
And just a few days after Jerry's "outdated" remark spread, another news broke:
Cursor's ARR had surpassed $2 billion.
In just three months, it had doubled again.
So, the current situation is: on one side, a top investor calls it "outdated," while on the other, it's posting epic, almost crazy growth.
This is the most intriguing aspect of the story.
/ 01 / Construction Crews Replace Smart Drills
Cursor itself, to some extent, validates the investor's view.
Not long ago, Cursor held an all-hands meeting. During the meeting, management directly warned employees: the coming months would be highly turbulent.
Projects might be cut, priorities reshuffled. The company set a new top priority (P0): building the best programming model.
To understand their anxiety, we need to look at Cursor's product logic.
In 2024, Cursor CEO Michael Truell described Cursor to Forbes as "Google Docs for programmers."
By this, he meant Cursor aims to create an editor where humans and AI collaborate and refine code together.
And that's exactly what Cursor does.
Technically, Cursor IDE is essentially an AI-native editor rebuilt on top of VSCode. It deeply embeds AI into the development environment, allowing AI to understand the entire codebase and directly modify projects.
You could think of it this way: programmers still write the code, but now they have an insanely capable AI assistant by their side.
However, its rival, Claude Code, takes a completely different approach.
Claude Code is more like a true "AI programmer." You give it a task, and it writes, reviews, and fixes the code itself—sometimes even delivering a complete product.
To use a simple analogy: if you're building a house, Cursor is an incredibly powerful smart drill. You still do the work, but your tools are upgraded, making it incredibly satisfying.
Claude Code, on the other hand, is a construction crew. You just point to an empty lot and say, "Build me a house," and it organizes the workforce and starts construction.
In this process, the developer's role quietly changes.
They're no longer the laborer typing line by line but the commander overseeing the entire system's operation.
Then, a fatal problem arises:
If AI no longer needs human collaboration line by line to write code, what's the point of an "editor"?
In the face of an agent that can directly get the job done, the value of an auxiliary tool is being reassessed.
Just like how, after map navigation apps became widespread, asking for directions on the street didn't disappear entirely, but its importance plummeted.
On the other side, Claude Code's growth numbers are equally terrifying.
In November 2025, Claude Code's ARR surpassed $1 billion. By early this year, its annualized revenue had reached about $2.5 billion.
At least in terms of making money, it's already ahead of Cursor.
/ 02 / Will Models Devour Applications?
Cursor's predicament actually points to a bigger open question for the industry:
Will model companies eventually consume the application layer?
Especially in programming—the most core, money-driven AI application space.
Over the past year, countless startups have bet on one logic: large models will keep getting stronger, but the real money will be made at the application layer. Whoever can package models into user-friendly products will take the profits.
But Cursor's story is like a brutal stress test, directly grinding this assumption into the ground.
Cursor's management previously bet on one route: enterprise customers will definitely prefer products that "don't bind to a single model."
The logic sounded perfect. Because in the model world, things change too fast.
A year ago, everyone was hyping OpenAI. Then Anthropic rose. Later, open-source models like DeepSeek, Kimi, and Qwen caught up like mad.
When the capabilities of underlying models leapfrog every few months, what do enterprises fear most?
Being locked in by a single model vendor.
So, Cursor positioned itself as a "neutral layer." I don't take sides; I let you switch freely between models.
But they overlooked one thing: cost.
How big is the cost gap between Cursor and Anthropic? Overseas Unicorn previously did the math.
Among Claude's current lineup, Opus delivers the strongest capabilities. But Cursor defaults to the relatively cheaper Sonnet. If you want to use Opus, sorry—you'll have to pay extra per token.
And Opus costs much more than Sonnet. Some developers calculated that heavy Opus usage could run up to $20–$40 per hour in inference fees.
For developers coding nonstop, using Opus could push monthly costs to $4,000–$5,000.
But what if you use Claude Code's subscription plan?
You can directly choose the Opus model. The top-tier plan costs about $200 per month.
For the same workload, the cost might be just one-twentieth of Cursor's.
Even if you stick with Sonnet, Cursor's usage-based pricing could still cost you $400–$500 per month. On Claude Code, it's probably $100–$200.
The math makes the gap obvious.
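The comparison above can be sketched as a quick back-of-the-envelope calculation. All figures are the article's own estimates, not official pricing from either vendor:

```python
# Back-of-the-envelope version of the cost gap described above.
# All numbers are the article's estimates, not official pricing.

# Heavy Opus usage on Cursor's metered, per-token billing (monthly)
cursor_opus_monthly = (4000, 5000)

# Claude Code's top-tier subscription: a flat monthly rate
claude_code_monthly = 200

low_ratio = cursor_opus_monthly[0] / claude_code_monthly   # 20x
high_ratio = cursor_opus_monthly[1] / claude_code_monthly  # 25x

print(f"Cursor (Opus, metered): ${cursor_opus_monthly[0]:,}-${cursor_opus_monthly[1]:,}/month")
print(f"Claude Code (subscription): ${claude_code_monthly}/month")
print(f"Estimated gap: {low_ratio:.0f}x-{high_ratio:.0f}x")
```

Which lines up with the article's "roughly one-twentieth" figure at the low end of the range.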
Besides money, there's experience.
Developers noticed that the biggest difference between Claude and Cursor lies in contextual understanding. Different understanding leads to different planning levels and, ultimately, different delivery results.
Yang Zhilin also touched on this in an interview. He compared model companies building applications versus pure application companies:
Model companies design the tools and contextual engineering methods first, then train models in that environment. So, models naturally perform best in their own environment.
Application companies? They can only reverse-engineer, guess, and fit—trying to figure out which prompts and contexts work better.
To escape being strangled by model suppliers, Cursor reluctantly started building its own model.
According to sources, Cursor now has about 20 AI researchers working on a model called Composer.
This model isn't trained from scratch. Instead, it takes open-source models like DeepSeek and Kimi as a base, adds Cursor's massive code data, and retrains them with reinforcement learning.
Now, Composer 1.5 is the second-most popular model on Cursor's platform.
While its costs are much lower than buying Anthropic's large models, it's still not cheap for developers.
According to Cursor's pricing, Composer 1.5 costs $3.5 per million tokens for input, while OpenAI's GPT-5.3 Codex costs just $1.75 on Cursor.
Some analysts calculated: when Cursor's annualized revenue was $500 million, its annual inference fees paid to Anthropic were nearly $650 million.
Negative gross margins. Every heavy user on the platform was costing Cursor money.
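The analysts' margin math works out like this, using the article's figures (which are estimates, not audited financials):

```python
# The negative gross margin implied by the analysts' estimates above.
# Both figures come from the article, not from audited financials.

arr = 500_000_000             # Cursor's annualized revenue at the time
inference_fees = 650_000_000  # annual inference fees reportedly paid to Anthropic

gross_margin = (arr - inference_fees) / arr
print(f"Implied gross margin: {gross_margin:.0%}")
```

In other words, at those numbers Cursor was paying out $1.30 in inference costs for every $1.00 of revenue.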
/ 03 / New Rules of the AI Business
What's happening to Cursor is actually a perfect lens to see the true logic of the AI business.
Model wars, application-layer anxiety, cost inversions, agent explosions—these seemingly scattered phenomena all point to the same truth:
The AI industry is establishing a new set of competitive rules.
First, AI-era moats are paper-thin.
In the old internet days, the core barriers were network effects and user stickiness.
More users made the product better; a better product attracted more users. Once the snowball rolled, latecomers stood no chance.
But AI products flip this completely.
Large models' capabilities are accessed via APIs. The application layer is essentially "model + workflow + UI."
When everyone's underlying capabilities are similar, user migration costs are laughably low. Switching from one tool to another tomorrow is no big deal.
So, in AI, you see a bizarre phenomenon: growth is insane, but moats are pathetically shallow.
Even Cursor, with an ARR exceeding $2 billion and still growing wildly, gets rolled over by competitors from all directions.
This isn't just Cursor's problem—it's the AI industry's destiny.
Second, large models resemble manufacturing more than software.
Many still judge AI companies through the lens of SaaS companies. But in reality, large models operate more like traditional manufacturing.
Training models is a mass-production process: pour in massive amounts of money, build vast compute clusters, and endure lengthy engineering optimizations.
In this logic, scale effects and your position in the supply chain directly determine your profits. Just like manufacturing, upstream and downstream concentration locks in your margin space.
Viewed this way, vertical integration becomes a matter of survival in AI.
The application layer seems close to users, but if you rely entirely on others' APIs, your profits will eventually be sucked dry by upstream model vendors.
So, by 2026, you'll see an obvious trend: closed-source model companies are frantically moving down into applications.
Because capturing application revenue makes overall margins look much better.
At the same time, capital markets are more willing to give model companies high valuations. Because they have both technology and platforms.
Third, speed is everything.
If technological leads last only months and users can leave anytime, how do you win?
Only through speed.
Whoever releases faster and iterates more aggressively survives.
That's why Silicon Valley investors now repeat this mantra daily: execution speed is itself a moat.
Cursor clearly understands this.
Recently, they made a major move: releasing a significant Cloud Agents update, allowing multiple agents to handle different tasks simultaneously in isolated spaces while recording the processes.
This isn't something an "AI editor" should do. This is a multi-agent task scheduling system.
In other words, Cursor is desperately transforming from an AI IDE into a software engineering automation platform.
Looking back, when Jerry Murdock called Cursor "outdated," he didn't mean it would die.
He believes Cursor's team is smart enough and holds enough users to pivot.
The only test is how fast they can turn.
Jerry's remark could serve as a survival rule for the entire AI industry:
In AI, you can't stare at yesterday. You must go where things are headed, not where they are now.
