
Anthropic Hits $30B Revenue Run Rate and Locks In 3.5 Gigawatts of Compute with Google and Broadcom

Anthropic has tripled its annualized revenue in just 90 days, crossing $30 billion and surpassing OpenAI to become the AI industry's top revenue generator. Simultaneously, the company sealed a landmark compute agreement with Google and Broadcom for 3.5 gigawatts of next-generation TPU capacity starting in 2027—powering a future that CEO Dario Amodei admits he keeps underestimating.


Numbers of this magnitude invite skepticism. Yet Anthropic’s revenue trajectory over the past three months is, by any conventional business metric, extraordinary: from approximately $9 billion in annualized revenue at the end of 2025, the company has reached a $30 billion run rate by early April 2026—a more than threefold increase in roughly ninety days.

Simultaneously, Anthropic has secured the compute infrastructure it will need to sustain that growth: a landmark agreement with Google and Broadcom for 3.5 gigawatts of next-generation TPU capacity, expected to come online in 2027. Taken together, the two announcements paint a picture of an AI company growing faster than almost anyone, including its own CEO, anticipated.

The Revenue Milestone in Context

Anthropic’s $30 billion run rate, revealed in conjunction with the compute deal announcement on April 6-7, technically makes it the highest-revenue AI company measured on an annualized basis—surpassing OpenAI, which had reported approximately $25 billion in annualized revenue in recent months.

The caveat is that “run rate” is a forward projection, not a trailing revenue figure: it annualizes current monthly or quarterly revenue, and will overstate the year’s actual results if growth stalls or reverses. Even accounting for that, the velocity is remarkable. Anthropic ended 2025 at roughly $9 billion annualized; in the first quarter of 2026 alone, it added more than $20 billion to that annualized figure.
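The annualization arithmetic is simple, which is exactly why run rates can flatter a fast-growing business. A minimal sketch (the monthly figure below is an illustrative assumption implied by the reported $30 billion run rate, not a number Anthropic has disclosed):

```python
def run_rate(monthly_revenue_usd: float) -> float:
    """Annualize a single month's revenue: run rate = monthly revenue x 12.

    This is a forward projection, not trailing revenue -- it assumes the
    most recent month repeats for a full year.
    """
    return monthly_revenue_usd * 12


# A $30B run rate implies roughly $2.5B in revenue for the latest month.
implied_monthly = 30e9 / 12
assert run_rate(implied_monthly) == 30e9
```

The fragility is visible in the formula: the projection is anchored entirely to the latest month, so a single soft quarter changes the headline number immediately.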

Enterprise adoption is the primary driver. Anthropic now counts more than 1,000 business customers spending more than $1 million on an annualized basis—a figure that doubled from roughly 500 in less than two months. That enterprise concentration is significant: it means Anthropic’s revenue is relatively predictable and sticky, rather than dependent on consumer subscriptions that can churn.

CEO Dario Amodei has been notably candid about being repeatedly surprised by his company’s growth. In multiple recent contexts, he has acknowledged being “always very conservative” on revenue forecasts and “wrong every time”—consistently underestimating how fast organizations would shift critical workflows onto Claude. That admission from the CEO of a $380 billion company is unusual, but it tracks with how rapidly enterprise demand for frontier AI has accelerated industry-wide.

The Compute Deal: 3.5 Gigawatts and Counting

The revenue story is only sustainable if Anthropic can acquire the compute to serve demand at scale. That is where the Google-Broadcom deal comes in.

Broadcom CEO Hock Tan confirmed that Anthropic was already receiving approximately 1 gigawatt of compute from Google’s Tensor Processing Units in 2026—capacity secured under a Google Cloud agreement announced in October 2025. The new deal commits an additional 3.5 gigawatts starting in 2027—3.5 times the existing capacity, bringing the combined total toward 4.5 gigawatts.

To put those numbers in perspective: 1 gigawatt of data center power can support roughly 100,000 to 300,000 high-end AI accelerators, depending on the chip design and cooling infrastructure. A 3.5 GW deployment is roughly equivalent to the total power draw of a mid-sized American city. It is a compute commitment at infrastructure-of-nations scale.

Google’s TPUs are the central element of the deal. Unlike NVIDIA’s GPUs—which dominate AI training and inference broadly—Google’s TPUs are custom-designed specifically for tensor operations, the mathematical primitives underlying neural network computation. For a workload like Claude that runs continuously at massive scale, the cost and efficiency profile of purpose-built silicon matters enormously. Broadcom’s role is as the systems integrator and supply-chain intermediary, coordinating chip supply across Google’s TPU design and manufacturing pipeline.

Mizuho analysts estimated that Broadcom’s AI revenue attributable specifically to the Anthropic relationship would reach $21 billion in 2026 and $42 billion in 2027, based on the compute volumes implied by the new agreement. Those figures, if accurate, would make the Anthropic account one of Broadcom’s largest individual revenue relationships.

Importantly, Anthropic and its partners have stipulated that the vast majority of the new compute will be sited in the United States—a requirement that aligns with Anthropic’s broader commitment to direct $50 billion in AI computing capacity toward American data center infrastructure. That commitment, made earlier in 2026, was partly a response to geopolitical pressure to ensure that frontier AI training does not migrate offshore.

Why This Changes the Competitive Calculus

The combination of Anthropic’s revenue trajectory and compute commitment reshapes the competitive dynamics of the frontier AI market in several important ways.

Supply lock-in: Every gigawatt of compute secured under a long-term agreement is a gigawatt that competitors cannot easily acquire. AI compute is the scarce resource constraining every frontier lab, and Anthropic has now locked in supply at a scale that only a handful of organizations in the world can match.

Enterprise flywheel: The $1M-per-year customer cohort is self-reinforcing. Large enterprises, once they have embedded Claude deeply into their internal workflows—document processing, code generation, customer service automation, legal document review—face significant switching costs. The enterprise momentum Anthropic has built makes it substantially harder for late entrants to compete for the same accounts.

IPO positioning: Anthropic’s $380 billion valuation and $30 billion run rate put it in rare company among private technology companies globally. Reports have circulated that the company is exploring a public listing, though no timeline has been confirmed. A revenue trajectory of this slope, paired with credible long-term compute supply, makes an eventual IPO narrative significantly more compelling than it would have been even six months ago.

The Road to ASL-4

Underpinning all of this is a less frequently discussed driver: compute is not just necessary for serving existing customers, but for training the next generation of Claude models. Anthropic has publicly stated its intention to pursue what it calls ASL-4 capabilities—a reference to its internal “AI Safety Levels” framework, in which higher levels represent greater model capability and require correspondingly more rigorous safety measures.

Training at ASL-4 capability levels requires compute at a scale that no existing cluster can provide. The 3.5 gigawatt commitment through Google and Broadcom is widely understood by industry observers as providing the physical substrate for that next training run—positioning Claude 5 (or whatever Anthropic designates its next major model generation) to represent a qualitative leap over current models.

That context matters for understanding why Anthropic is willing to commit to compute infrastructure at infrastructure-of-nations scale, and why Google and Broadcom are willing to provide it. This is not just a capacity deal for an AI API company; it is the physical foundation of what Anthropic believes will be transformative AI systems.

Whether the revenue growth that has driven these commitments continues at the same pace—and whether the compute, when deployed, produces models that justify the investment—are the two questions on which Anthropic’s next chapter will turn.
