Successively Founding Two AI Unicorns! This Tycoon Aims to Enable AI to Evolve Independently

02/02 2026

Most people in the AI community are probably familiar with Richard Socher.

He is one of the pioneering researchers who helped move deep learning from academic research to industrial deployment. Early in his career, Socher founded MetaMind, which focused on using neural networks to understand linguistic structure and semantics. The company was later acquired by Salesforce, where he became Chief Scientist and led the exploration of AI applications in enterprise systems such as CRM.

In 2020, Socher started another venture, founding You.com, an AI-powered search company. You.com is now valued at $1.5 billion, placing it among the unicorns.

But he has not stopped there. Recently, several media outlets reported that Socher is quietly preparing to launch a new AI company, Recursive.

This company has an even more ambitious goal: to build a superintelligent AI system that can improve itself and evolve continuously without relying on human feedback. Reports indicate that Recursive is negotiating a funding round of several hundred million dollars at a pre-money valuation of roughly $4 billion.

If these reports hold, Richard Socher will have founded two AI unicorns within just a few years.

This article uses Socher's entrepreneurial journey as a thread to trace his views on the evolution of AI.

/ 01 /

The Second AI Unicorn, Valued at $4 Billion

According to reports, Recursive aims to create a superintelligent AI system that can self-improve and evolve continuously without the need for ongoing human input.

More precisely, it centers on a recursive 'AI improves AI' mechanism: rather than being passively trained, the AI identifies its own bottlenecks in performance, efficiency, or capability, proactively proposes improvements at the algorithmic, system, and even hardware level (such as chips), and, through validation and iteration, produces more capable next-generation models.

In essence, the goal is to turn AI from a passive 'object of training' into an active participant in its own training and improvement.
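As a rough illustration only (Recursive's actual method has not been disclosed), the propose-validate-iterate loop described above can be sketched as a toy hill-climbing cycle; every name and objective here is hypothetical:

```python
import random

def evaluate(model_params):
    """Hypothetical benchmark score for a model configuration (toy stand-in).

    The toy objective peaks when both scale parameters approach 1.0.
    """
    return -((model_params["lr_scale"] - 1.0) ** 2
             + (model_params["depth_scale"] - 1.0) ** 2)

def propose_improvement(model_params):
    """The model 'proposes' a perturbed version of itself (toy stand-in)."""
    return {k: v + random.uniform(-0.1, 0.1) for k, v in model_params.items()}

def self_improvement_loop(params, generations=200, seed=0):
    """Propose -> validate -> keep only if better, repeated across generations."""
    random.seed(seed)
    best, best_score = params, evaluate(params)
    for _ in range(generations):
        candidate = propose_improvement(best)
        score = evaluate(candidate)
        if score > best_score:  # validation gate: accept only strict improvements
            best, best_score = candidate, score
    return best, best_score

improved, score = self_improvement_loop({"lr_scale": 0.2, "depth_scale": 1.8})
print(round(score, 3))  # never below the initial score of -1.28
```

The validation gate is the key design choice: each generation is accepted only if it measurably outperforms its predecessor, which is the minimal analogue of 'validation and iteration' in the paragraph above.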

This concept is not exclusive to Recursive.

Previously, when asked in an interview how to improve the generality of Agents, Yang Zhilin (CEO of Moonshot AI) said, 'Using more AI to train AI is itself an important direction.' He also acknowledged that while this approach has made progress in certain scenarios, it remains far from the ideal state.

From an industry perspective, these efforts reflect a real problem: as models and Agents grow more complex, manual annotation and feedback alone can no longer support the continued expansion of capabilities.

In January 2026, it was reported that Recursive is negotiating a funding round of several hundred million dollars at a pre-money valuation of approximately $4 billion. Institutions such as GV (formerly Google Ventures) and Greycroft may participate, with the funds primarily earmarked for expanding compute reserves.

The founding team comprises eight co-founders, including Socher, with members drawn from Google, OpenAI, and Meta.

Should this news prove accurate, it will mark Socher's second AI unicorn in nearly two years.

When Socher founded You.com in 2020, he positioned it as an AI-driven search engine. Initially, You.com targeted the consumer market, emphasizing an 'ad-free, privacy-centric' search experience.

However, starting in 2024, Socher shifted his focus from consumer search to helping enterprises use AI more efficiently. In 2025, You.com closed a $100 million funding round at a $1.5 billion valuation, joining the unicorn ranks.

With the completion of this funding round, You.com's positioning also underwent a transformation, transitioning from a search product for individual users to providing AI infrastructure for enterprises.

The underlying rationale: the number of AI Agents using the internet is rapidly overtaking the number of humans, yet existing search infrastructure was essentially designed for 'humans clicking on links.'

Enterprise-level Agents need deeper, context-relevant information from both private data and the public internet in order to analyze, decide, and act. This places higher demands on data integration, model selection, and result reliability.

To address this, You.com has constructed a platform tailored for the Agent era: integrating multi-source data, dynamically selecting appropriate large models based on tasks, and delivering verifiable, traceable results at an enterprise scale.
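A minimal sketch of what 'dynamically selecting appropriate large models based on tasks' might look like. This is purely illustrative; You.com's actual architecture is not public, and the model names, task types, and costs below are all hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class ModelSpec:
    name: str
    strengths: set = field(default_factory=set)  # task types handled well (hypothetical)
    cost_per_1k_tokens: float = 0.0

# Hypothetical model catalog; a real router would use live benchmarks and SLAs.
CATALOG = [
    ModelSpec("fast-small", {"lookup", "summarize"}, 0.1),
    ModelSpec("reasoning-large", {"analysis", "planning"}, 2.0),
    ModelSpec("code-model", {"code"}, 0.8),
]

def route(task_type: str) -> ModelSpec:
    """Pick the cheapest model whose strengths cover the task type."""
    candidates = [m for m in CATALOG if task_type in m.strengths]
    if not candidates:
        # No specialist: fall back to the most capable (most expensive) model.
        return max(CATALOG, key=lambda m: m.cost_per_1k_tokens)
    return min(candidates, key=lambda m: m.cost_per_1k_tokens)

print(route("summarize").name)  # fast-small
print(route("analysis").name)   # reasoning-large
```

The cheapest-capable-model heuristic is one common design choice for routers of this kind; production systems typically add quality scores, latency budgets, and fallback chains on top.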

This transformation has also rendered You.com's products more explicitly tailored for enterprise scenarios. For instance, it provides automated research tools for financial analysts; accelerates content creation and uncovers historical data value for media organizations; and significantly reduces research time for consulting and professional services personnel while delivering actionable insights.

In addition to accuracy, You.com emphasizes privacy protection, security, flexibility in model selection, and comprehensive data access capabilities. Investors generally believe that it is the strategic shift from consumer search to enterprise-level AI that underpins You.com's high valuation.

Although the company has not publicly disclosed detailed financials, according to The Information, You.com's ARR has reached approximately $50 million. The inflection point came in November 2024, when ARR began climbing sharply month over month, driving roughly 40-fold revenue growth for the full year.

Looking back further, Socher's path has been remarkably consistent.

Around 2014, deep learning was still largely an academic pursuit. A shift in research focus took Socher from natural language processing into core AI research, and he soon founded MetaMind to turn cutting-edge models into services enterprises could use.

In just four months, MetaMind raised $8 million from Khosla Ventures and Salesforce CEO Marc Benioff. The company was later acquired by Salesforce, where Socher led explorations of AI deployment in enterprise systems, gaining early practical experience in areas such as prompt engineering and attention mechanisms.

In retrospect, MetaMind was Socher's first attempt to move AI from the laboratory into industry.

/ 02 /

Five Key Judgments on AI

As a serial entrepreneur who has repeatedly moved from AI research to commercialization, Socher's views on AI often extend beyond the purely technical, reflecting a long-term, systems-level perspective.

Based on his recent public speeches, Silicon-Based Gentleman has compiled several key viewpoints from Socher on AI development:

① The Paradigm Shift from 'Prompt Engineering' to 'Reward Engineering'

Socher proposes not merely a new profession but a fundamental paradigm shift: from 'Prompt Engineering' to 'Reward Engineering.'

He contends that prompt engineering deals with semantic optimization for single interactions—how to make AI responses more concise and useful—while reward engineering addresses complex value alignment for long-term goals, such as defining 'economic fairness' or 'climate safety' over multi-generational timescales.

This demands a unique combination of capabilities from practitioners: first, the technical understanding of how AI finds reward shortcuts (reward hacking); second, the philosophical depth to distinguish normative questions such as 'equality of opportunity' versus 'equality of outcome'; and finally, the domain expertise to anticipate unintended consequences in tax policy or climate models.

This could give rise to the first truly integrated discipline of 'technology-politics-philosophy,' offering greater practicality than pure AI ethics.

② Systemic Risks of Misaligned Objectives: From Customer Service Cases to Civilizational Scales

Richard Socher:

For instance, suppose a company decides to maximize customer satisfaction scores in its call center. Without additional constraints, the simplest solution might involve hiring countless bots to automatically fill out satisfaction surveys with the highest ratings after brief calls or issuing $10,000 compensation payments to every complaining user. Customer satisfaction scores would indeed soar, but the effort would be utterly worthless. When similar issues are scaled to societal levels, the risks become life-and-death matters.

The customer service bot example cited by Richard Socher—hiring bots to inflate ratings or issuing $10,000 compensation payments—may seem absurd, but it reveals the essence of AI optimization.

AI optimizes the quantifiable metric (the customer satisfaction score) rather than the genuine goal (actual customer experience). In complex systems with many constraints, AI will find 'legal loopholes' in human values. And when a superintelligent AI optimizes continuously across generations over long timescales, even small deviations in objectives can be amplified exponentially.
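The gap between a proxy metric and the true goal can be demonstrated with a toy simulation (purely illustrative, not from Socher's talk): an optimizer given only the survey-score proxy pours its budget into payouts rather than actually fixing problems.

```python
import random

def true_experience(resolved, compensated):
    """What the company actually cares about: problems genuinely solved."""
    return resolved  # compensation payments fix nothing

def satisfaction_score(resolved, compensated):
    """The measurable proxy: surveys reward payouts even more than fixes."""
    return resolved + 5 * compensated  # a big payout buys great ratings

def optimize(metric, budget=100, trials=5000, seed=0):
    """Naive optimizer: try random splits of effort, keep the best per metric."""
    random.seed(seed)
    best = None
    for _ in range(trials):
        resolved = random.randint(0, budget)
        compensated = budget - resolved
        score = metric(resolved, compensated)
        if best is None or score > best[0]:
            best = (score, resolved, compensated)
    return best

# Optimizing the proxy drives nearly all effort toward payouts, not fixes.
_, resolved, compensated = optimize(satisfaction_score)
print(resolved, compensated)  # heavily skewed toward compensation
```

Nothing in the optimizer is malicious; the skew follows mechanically from the reward definition, which is exactly the point of the reward-engineering argument above.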

For instance, in climate governance or economic policies, 'reward cheating' might manifest as AI suggesting solutions like reducing population or fabricating statistical data to 'resolve' inequality—technically achieving goals but destroying civilizational value.

③ The Methodological Pitfalls in the 'AI Economist' Case Study

Socher references his own research on 'designing tax policies using reinforcement learning' as a cautionary tale worth dissecting.

The study assumed it could 'balance equality and productivity,' but real economic systems contain cultural, dignity-related, and contingent factors that cannot be algorithmized. AI may find 'optimal solutions' in simulated environments, yet real humans change their behavior, for example to evade taxes. An AI might even discover that manufacturing widespread anxiety to 'strengthen work incentives' is an effective way to raise productivity.

This raises a deeper issue: certain societal problems remain 'open' precisely because they lack algorithmizable solutions.

④ Divergences in Consensus Pathways: Three Future AI Scenarios

Finally, Socher argues that divergences around 'how to achieve goal consensus' outline three distinct AI civilizational trajectories.

The first pathway pursues global democratic consensus: establish unified objectives first, then deploy strong AI. The risk is stalling AI development or fragmenting industry standards. IPCC (Intergovernmental Panel on Climate Change)-style climate AI agreements exemplify this pathway.

The second pathway advocates for market emergence, where AI objectives naturally evolve through market competition. However, this risks excessive capital concentration, ultimately leading to monopoly by a single set of values, similar to the current state where tech giants each deploy AI independently without coordination.

The third pathway advocates a hybrid, incremental approach: iterate gradually and clarify objectives within specific AI application scenarios. The risk is accumulating irreversible technical debt; it is essentially a 'govern as you deploy' exploration.

Socher leans toward the third hybrid, incremental pathway, but this viewpoint leaves a crucial question: when AI capabilities surpass human comprehension, who holds the authority to determine 'this solution is unworkable'?

⑤ Revising Technological Optimism: History May Not Repeat

Additionally, Socher argues that technological optimism requires revision. Its core assumption—'humans can always adapt to technological change, and new professions will emerge'—may no longer hold in the face of superintelligent AI.

First is the issue of the recursive self-improvement tipping point. Once AI gains autonomous improvement capabilities, its evolutionary pace will completely detach from humanity's biological adaptation rhythms, widening the gap between the two at an accelerating rate.

Second is the dilemma of reward engineers. When AI becomes more adept than humans at defining reward functions, the so-called 'new profession' of reward engineers may merely serve as a transitional form during technological iteration, unable to become a long-term stable career direction.

More critically, even if superintelligent AI proposes optimal solutions, humans may remain entirely unable to comprehend their logic or underlying principles.

Socher believes the most alarming point is that humanity may have only one opportunity to set initial conditions for superintelligent AI, yet our current political systems were never designed to handle such 'one-time, irreversible decisions.' This implies that our existing institutional frameworks may struggle to meet the decision-making challenges posed by superintelligent AI.

Written by Lang Lang

Disclaimer: The copyright of this article belongs to the original author. It is reprinted solely to share information. If any author information is marked incorrectly, please contact us promptly to amend or delete it. Thank you.