Anthropic Hits $30B ARR, Surpasses OpenAI as World's Highest-Revenue AI Company
Anthropic has reached $30 billion in annualized revenue, overtaking OpenAI's $25 billion ARR to become the world's highest-earning AI company. The milestone arrives alongside a massive compute deal with Google and Broadcom for multi-gigawatt TPU capacity coming online in 2027, and signals that both companies are on accelerating paths toward IPOs — even as compute costs threaten to outpace revenue growth.
In a remarkable reversal of the competitive narrative that has defined the AI industry since ChatGPT’s launch, Anthropic has overtaken OpenAI in annualized revenue. The company’s annualized recurring revenue has reached $30 billion, surpassing OpenAI’s $25 billion ARR — making Anthropic, for the moment, the highest-earning AI company in the world. The milestone is accompanied by a sweeping compute deal and growing signs that both companies are accelerating toward public markets.
How Anthropic Got Here
Just 18 months ago, Anthropic’s revenue was a fraction of OpenAI’s. The reversal reflects several compounding factors:
Enterprise adoption of Claude: Claude’s API has become deeply embedded in enterprise workflows across financial services, healthcare, legal, and technology sectors. Claude’s longer context window, more consistent instruction-following, and generally higher scores on safety and reliability benchmarks compared to GPT-4 variants drove significant enterprise migration throughout 2025.
Claude Code’s explosive growth: The launch of Claude Code — Anthropic’s AI coding assistant — proved to be a breakout product. Unlike GitHub Copilot or OpenAI’s competing Codex-based offerings, Claude Code attracted a dedicated community of professional developers and engineering teams with strong word-of-mouth. Seat-based enterprise licensing for Claude Code has been a significant contributor to ARR growth.
Google partnership leverage: Anthropic’s deep commercial relationship with Google — including Claude’s integration into Google Cloud’s Vertex AI platform and Google Workspace — has funneled a substantial stream of enterprise revenue through Google’s sales channels, effectively giving Anthropic a global enterprise sales force it didn’t have to build from scratch.
API pricing efficiency: While OpenAI has been under pressure to reduce API prices to remain competitive with open-source alternatives, Anthropic has maintained relatively higher per-token pricing by emphasizing capability differentiation and reliability — a premium positioning that has held in the enterprise segment.
The Compute Mega-Deal
Alongside the revenue milestone, Anthropic has signed a landmark deal with Google and Broadcom for access to multiple gigawatts of TPU computing capacity coming online starting in 2027. The arrangement represents one of the largest single compute commitments in AI history.
The deal has a dual character. On one side, Google is providing TPU capacity through its infrastructure at preferential rates — an arrangement consistent with Alphabet's broader $2 billion investment in Anthropic. On the other side, Broadcom contributes the custom chip architecture: it has been Google's primary partner in designing the custom ASIC logic that underpins TPU generations, and its involvement suggests the next-generation TPU hardware will be co-specified with Anthropic's training and inference workload requirements in mind.
For Anthropic, securing gigawatt-scale compute commitments through 2027 and beyond addresses what has been the company’s most significant operational constraint. Unlike OpenAI, which has exclusive access to Microsoft’s Azure AI infrastructure, Anthropic has had to navigate compute procurement in a competitive market. The Google-Broadcom deal effectively gives Anthropic a guaranteed supply of frontier-grade compute at industrial scale.
The IPO Race
Both companies are now openly pursuing public listings. Sources with knowledge of both processes indicate that October 2026 is being discussed internally as a potential IPO window for one or both companies — though market conditions, regulatory review timelines, and competitive positioning will ultimately determine timing.
The financial disclosures ahead of IPO preparation reveal an industry in a peculiar position: revenues are growing faster than almost any technology sector in history, but costs are growing even faster.
Anthropic's $30 billion ARR sounds enormous until viewed against its compute expenditure: the company is spending in the range of $7-10 billion annually on training and inference compute, a figure expected to grow significantly as next-generation models require substantially more compute to train. Once total operating costs are layered on top — personnel, facilities, and safety research — the path to profitability is not imminent.
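The scale of that compute burden can be made concrete with a back-of-envelope calculation using only the figures cited above (the article's estimates, not disclosed financials):

```python
# Back-of-envelope sketch: what share of revenue compute consumes, and the
# resulting ceiling on gross margin. Inputs are the article's estimates.
arr = 30e9                             # Anthropic annualized revenue ($)
compute_low, compute_high = 7e9, 10e9  # annual training + inference spend ($)

for compute in (compute_low, compute_high):
    ratio = compute / arr          # share of revenue consumed by compute
    margin_ceiling = 1 - ratio     # best-case gross margin before personnel,
                                   # facilities, and safety research costs
    print(f"compute ${compute/1e9:.0f}B -> {ratio:.0%} of ARR, "
          f"gross margin ceiling {margin_ceiling:.0%}")
```

Even at the low end of the range, compute alone absorbs roughly a quarter of revenue before any other operating cost is counted — which is why the article's framing of costs growing faster than revenue matters.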
OpenAI’s finances tell a similar story. Despite $25 billion ARR, the company’s net losses remain substantial, driven by the staggering cost of running GPT-5 and successor models at production scale.
Anthropic Tightens Subscription Access
In a move that signals the company’s focus on higher-margin revenue streams, Anthropic has quietly restricted Claude subscription plans from powering certain third-party agent tools. The change specifically affects tools like OpenClaw, a popular third-party client that routed paid Claude subscriptions through the API — effectively letting users access API-tier capabilities at subscription pricing.
Under the new policy, users relying on such tools must migrate to direct API access (billed per token) or Anthropic’s new “extra usage” tier within the subscription product. The change is framed as a terms-of-service clarification but has the economic effect of pushing heavy API users — the highest-value customers — toward higher-revenue pricing plans.
The move mirrors a similar enforcement action OpenAI took in late 2025, when it cracked down on subscription plan abuse in the API. It reflects a broader maturation of both companies’ monetization strategies: the land-grab phase of discounted access is ending, replaced by a disciplined focus on revenue quality.
What the Numbers Mean for the Industry
Anthropic’s revenue overtaking OpenAI is more than a competitive scorecard update. It signals several things about the state of the AI industry:
Enterprise, not consumer, is the business: The shift in revenue leadership to Anthropic — a company with minimal direct consumer presence compared to OpenAI’s ChatGPT — suggests that enterprise API and seat-based licensing, not consumer subscription fees, is where the real money is in AI.
Capability differentiation still commands premium pricing: Despite the rapid commoditization of AI capabilities through open-source models and low-cost API providers, frontier model vendors still extract meaningful pricing premiums in the enterprise segment. The premium is shrinking, but it hasn’t collapsed.
Compute is existential: Both companies’ financial profiles make clear that access to compute — not talent, not data, not algorithms — is the binding constraint on growth. The companies that secure the best long-term compute arrangements at the best prices will have structural cost advantages that compound over time.
IPO pressure is real: As both companies approach public markets, the pressure to demonstrate a credible path to profitability will intensify. That means both companies will face difficult choices in the next 12-18 months: raise API prices, cut costs, find new revenue streams, or accept that the path to profitability runs through a longer private-company timeline than originally anticipated.
The race between Anthropic and OpenAI is entering a new phase — one measured not just by benchmark scores and model releases, but by revenue per dollar of compute, enterprise retention rates, and ultimately, the balance sheet math that public market investors will scrutinize relentlessly.