Mira Murati's Thinking Machines Lab Signs Multi-Billion Dollar Google Cloud Deal for Nvidia GB300 AI Infrastructure
Thinking Machines Lab, the AI startup founded by former OpenAI CTO Mira Murati just 14 months ago, has signed a multi-billion dollar deal with Google Cloud for Nvidia GB300-powered infrastructure to train and deploy models for its flagship 'Tinker' platform. The deal makes the company — which raised a record $2 billion seed round at a $12 billion valuation — the third frontier AI lab to lock in Google compute capacity this month, after Anthropic and Meta. It underscores how quickly a new tier of AI challengers is consolidating around Google as an alternative to constrained Nvidia hardware supply.
When Mira Murati left OpenAI in September 2024, few predicted that less than two years later she would be running one of the world’s most heavily capitalized AI startups and signing infrastructure agreements measured in the billions of dollars. On April 22, TechCrunch reported that Thinking Machines Lab — the company Murati founded in February 2025 — had signed a new multi-billion dollar deal with Google Cloud, expanding an existing relationship to secure access to Nvidia’s latest GB300 chips for training and deploying its frontier AI platform.
The deal places Thinking Machines alongside Anthropic and Meta as the three frontier AI developers to lock in major compute commitments with Google in the span of a single month — a striking concentration of demand that reflects both Google’s hardware ambitions and the pressure frontier labs face in securing reliable access to next-generation compute.
Who Is Thinking Machines Lab?
Murati joined OpenAI in 2018 and served as its Chief Technology Officer from 2022 until her surprise resignation in September 2024, during which she oversaw the development and launch of GPT-4, DALL-E 3, Sora, and the o1 reasoning model series. After leaving, she spent several months in stealth before announcing Thinking Machines Lab in February 2025.
The company’s founding round was immediately exceptional. Thinking Machines raised $2 billion in seed funding at a $12 billion valuation — the largest seed round in venture capital history by a significant margin, and a signal that Murati’s reputation carried enough weight to attract institutional investors at terms normally reserved for late-stage companies with proven revenue.
The investor syndicate for the seed round included some of the most prominent names in technology investment, drawn by Murati’s technical track record and the thesis underpinning the company’s flagship product.
Tinker: Building Custom Frontier Models on Demand
Thinking Machines Lab’s core product is called Tinker, a platform that automates the creation of custom frontier AI models. Rather than selling a single general-purpose model as a product — the approach taken by OpenAI, Anthropic, and Google — Tinker is designed to let enterprises commission and receive models specifically adapted to their own data, domain vocabulary, and performance requirements.
The technical underpinning of Tinker is a heavy reliance on reinforcement learning, the training paradigm that has driven some of the most significant recent breakthroughs at DeepMind, OpenAI’s reasoning lab, and Chinese labs including DeepSeek. Reinforcement learning is computationally expensive and architecturally demanding — it requires infrastructure capable of running tightly coupled feedback loops between model, reward signal, and training environment, often across thousands of chips simultaneously.
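The feedback loop the paragraph above describes can be illustrated with a deliberately tiny toy: a multi-armed bandit trained by epsilon-greedy value estimation. This is a generic sketch of the model-reward-environment cycle, not Tinker's actual training stack, and every name in it is illustrative — frontier RL runs the same loop across thousands of chips with far richer models and reward signals.

```python
import random

def train_bandit(arm_means, steps=5000, epsilon=0.1, seed=0):
    """Epsilon-greedy value estimation over a set of arms.

    A toy stand-in for the RL feedback loop: a 'model' (per-arm value
    estimates) picks actions, an 'environment' returns noisy rewards,
    and each reward immediately updates the model.
    """
    rng = random.Random(seed)
    estimates = [0.0] * len(arm_means)  # the "model": one value per action
    counts = [0] * len(arm_means)
    for _ in range(steps):
        # Explore occasionally; otherwise exploit the current best estimate.
        if rng.random() < epsilon:
            arm = rng.randrange(len(arm_means))
        else:
            arm = max(range(len(estimates)), key=estimates.__getitem__)
        # Noisy reward signal from the environment.
        reward = rng.gauss(arm_means[arm], 1.0)
        counts[arm] += 1
        # Incremental mean update: feedback changes the model in place.
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates

estimates = train_bandit([0.2, 1.0, 0.5])
best_arm = max(range(len(estimates)), key=estimates.__getitem__)
```

The point of the sketch is the tight coupling: every step requires the model, the environment, and the reward computation to exchange data before the next step can proceed — which is why, at frontier scale, interconnect bandwidth and latency between chips matter as much as raw FLOPS.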
Google noted explicitly in announcing the deal that its infrastructure is well-suited to Thinking Machines’ RL workloads. That is not an incidental observation: Google has spent years building specialized inter-chip interconnects and memory pooling capabilities for exactly the kind of distributed RL training that Tinker depends on.
Deal Structure and Strategic Logic
The Google Cloud agreement is valued in the single-digit billions and is not exclusive. Thinking Machines retains the right to use other cloud providers in parallel, giving the startup flexibility as its compute needs evolve and preventing single-vendor lock-in. The deal builds on a cloud relationship dating to 2025, when Thinking Machines first used Google infrastructure for its initial model training runs.
The infrastructure provided under the new agreement centers on Nvidia’s GB300 chips, hosted through Google Cloud rather than procured directly from Nvidia. That routing matters commercially: it allows Google to serve as both infrastructure vendor and relationship anchor for a startup that will need years of sustained compute access to bring Tinker to maturity.
For Google, the strategic calculus is equally transparent. Signing Thinking Machines — the highest-profile independent frontier AI lab founded in the past 18 months — as an anchor customer before the company has a commercial product in the market gives Google an early foothold in what could become a long and lucrative compute relationship. If Tinker gains commercial traction, the infrastructure requirements will scale dramatically, and the vendor relationship established today will carry enormous inertia.
The pattern is deliberate. Google has spent the past several months building a coalition of frontier AI relationships: Anthropic’s compute agreement, which is expanding to 3.5 gigawatts of capacity in 2027; a multi-billion dollar commitment from Meta; and now Thinking Machines. Combined, these commitments represent a structural challenge to Nvidia’s current dominance of the AI compute market — not by beating Nvidia’s chips on raw specification, but by giving frontier labs a credible, scaled alternative supply chain backed by Google’s financial capacity and infrastructure investment.
The New Class of AI Challengers
Thinking Machines is part of a cohort of high-profile AI labs founded in 2024 and 2025 by veterans of OpenAI, DeepMind, and other frontier organizations. The cohort includes Ilya Sutskever’s Safe Superintelligence and several others, each backed by capital at valuations that would have seemed implausible for pre-revenue companies in any prior era of technology investment.
What unites them is a common bet: that the frontier of AI capability is not yet determined, that current models have exploitable weaknesses or untapped architectural directions, and that a small team of exceptional researchers with sufficient compute can still produce a model competitive with — or superior to — those produced by the largest labs. Tinker’s focus on customization and RL-driven training represents one specific such bet: that enterprises will pay a premium for models that are genuinely adapted to their domain rather than fine-tuned general models wrapped in a thin product layer.
Whether that bet pays off depends on factors that no amount of capital can guarantee: research breakthroughs, execution quality, and whether the market actually values the customization advantage enough to pay for it. But in securing multi-billion dollar infrastructure access just 14 months after founding, Thinking Machines has bought itself substantial time to find out.
For Murati personally, the deal marks a visible milestone in a reinvention that the technology world has watched closely since her departure from OpenAI. She left one of the most powerful positions in AI to build from scratch. Fourteen months later, she is negotiating with Google Cloud at the same table as Anthropic and Meta. Whatever Tinker ultimately becomes, the trajectory has been difficult to dismiss.