
Four Months Old, $4 Billion: Recursive Superintelligence Raises $500M to Automate AI Research

A London-based startup co-founded by ex-DeepMind and OpenAI researchers has raised $500 million from Google's GV and Nvidia at a $4 billion valuation just four months after incorporation. Recursive Superintelligence aims to build AI systems that improve themselves without any human in the loop — across evaluation, training, and research direction itself.

5 min read

The venture capital industry has seen some audacious bets in the AI era, but Recursive Superintelligence may have just redefined the category. A four-month-old London-incorporated startup with roughly 20 employees has raised at least $500 million at a $4 billion valuation from Google’s GV and Nvidia — with the round reportedly so oversubscribed that the total could ultimately reach $1 billion. The company has never shipped a public product. It has not published a single research paper under its own name. And yet two of the most sophisticated institutional investors in the technology sector wrote checks valuing it above most publicly traded software companies.

The reason is the audacity of the bet itself: Recursive Superintelligence intends to build AI systems that can autonomously run the entire frontier AI research and development pipeline, removing human researchers from every stage of the loop.

A Dream Team Assembled from the Top of the Field

Recursive Superintelligence was incorporated on December 31, 2025, making it one of the youngest companies ever to raise at this valuation. Its founding lineup reads like a curriculum vitae drawn from the most elite corners of modern AI research.

Richard Socher, the company’s lead co-founder, rose to prominence as one of the key architects of neural network approaches to natural language understanding, including Stanford’s GloVe word embeddings and early transformer-adjacent architectures. He subsequently served as chief scientist at Salesforce, where he built one of the largest enterprise NLP organizations outside of Big Tech. Tim Rocktäschel brings a different pedigree: a UCL professor with deep expertise in emergent reasoning and reinforcement learning, he served until recently as a director and principal scientist at Google DeepMind, one of only a handful of researchers to hold both an elite academic position and a senior role at a frontier lab simultaneously.

Rounding out the founding team are Josh Tobin, Jeff Clune, and Tim Shi — each with significant tenures at OpenAI, the lab that has defined much of the current era of large language model development. Clune, in particular, is known for foundational work on open-ended learning and evolutionary algorithms that is directly relevant to the self-improvement thesis Recursive Superintelligence is pursuing.

Although the company has operated in complete stealth since its founding, the reputation of this team was apparently sufficient to attract capital commitments of extraordinary size before a single line of production code was shown to investors.

The Mission: Close the Human Research Loop

The company’s technical ambition is simultaneously straightforward and staggering. Where today’s leading AI labs — Anthropic, OpenAI, Google DeepMind — rely on large teams of human researchers to design evaluation benchmarks, curate training data, run post-training alignment procedures, and chart future research directions, Recursive Superintelligence wants to automate every single one of those steps.

In practice, the system would be expected to identify its own weaknesses through autonomous evaluation, design data pipelines to address those weaknesses, train against new objectives it has set for itself, and then recursively repeat the process — all without a human researcher approving each step.

“The goal is to compress what would take human researchers years into weeks or months,” according to descriptions of the company’s pitch shared with multiple outlets. The founding team argues that the bottleneck in frontier AI development is no longer raw compute or architectural insight, but the bandwidth of the human researchers who must make countless micro-decisions throughout the training lifecycle. Automate those decisions, the argument goes, and the pace of AI improvement could accelerate dramatically.

This idea — often called “AI scientists” or “recursive self-improvement” — has circulated in theoretical AI safety and capabilities research for more than a decade. What Recursive Superintelligence is betting on is that 2026 represents the inflection point at which the compute, the base models, and the tooling have matured enough to make this an engineering problem rather than a philosophical aspiration.

Why Google’s GV and Nvidia Bet Big

The participation of GV — Google’s independent venture arm, formerly Google Ventures — is striking given that Google already operates one of the world’s most advanced AI research organizations through Google DeepMind and maintains its own frontier model development with the Gemini series. Backing a startup whose success would, at minimum, accelerate competition against Google’s own AI division suggests either supreme confidence in the founders, a strategic hedge, or an acquisition thesis.

Nvidia’s rationale is more legible. Its H100 and B200 GPU clusters power virtually every serious frontier training run in existence. A startup committed to continuous, recursive self-improvement training at scale will require extraordinary amounts of compute over extended periods — Nvidia is not merely an investor but a preferred vendor. The investment is as much a commercial relationship embedded in a term sheet as it is a financial bet.

The round’s heavy oversubscription, despite the complete absence of a public product, is itself a data point about the state of AI investing in 2026. With nearly every sector of the economy incorporating AI tooling and the largest technology companies spending hundreds of billions on infrastructure, venture capitalists are rushing to back anything that might sit one level above the model providers — a layer that automates the production of models themselves.

Where Self-Improvement Meets Safety

Recursive Superintelligence is entering a landscape where the most capable AI organizations are already quietly pursuing adjacent capabilities. OpenAI’s internal infrastructure reportedly includes extensive automated evaluation and synthetic data pipelines. Google DeepMind’s AlphaProof and AlphaGeometry work demonstrated that AI systems can solve olympiad-level problems in formal mathematics with minimal human steering. Anthropic’s research into constitutional AI and self-critique represents an early, constrained form of automated alignment.

What makes Recursive Superintelligence’s proposal more radical — and more technically contested — is the removal of human oversight not just within a task, but at the level of research agenda-setting. The system would decide for itself which capabilities are worth pursuing, rather than waiting for a researcher to hypothesize them.

This ambition collides directly with the central preoccupation of AI safety research: systems that can modify their own goals or training processes are exactly the scenario that alignment researchers have spent years trying to prevent. Several of the company’s founders have published influential work on AI safety and alignment, and the company is expected to address its safety architecture at launch — but specific details have not been disclosed publicly.

What Comes Next

Socher indicated in an April interview that the company’s public launch would come roughly one month later, placing it around mid-May 2026. The initial product is expected to be an enterprise or research-facing platform — not a consumer chatbot — designed to function as an autonomous research assistant alongside existing AI development teams.

With half a billion dollars and a founding team that collectively helped build the models the company now aims to surpass, Recursive Superintelligence arrives with every resource it needs to test its thesis. Whether AI can truly run the AI research lab — and whether that would be a triumph or a warning sign — is a question the industry will be watching closely come May.
