Google Signs Classified Pentagon AI Deal Despite Fierce Employee Backlash
Google confirmed on April 28 that it has signed a classified agreement with the US Department of Defense allowing its Gemini AI systems to be used for sensitive military operations under terms permitting “any lawful government purpose.” The deal was announced one day after 580 Google employees, including senior leaders from Google DeepMind, signed an open letter urging CEO Sundar Pichai to refuse it. Anthropic reportedly declined a similar arrangement.
On the afternoon of April 28, Google formalized what its own employees had spent the weekend desperately trying to prevent.

At approximately 4 p.m. Pacific time, the company confirmed it had signed a classified agreement with the US Department of Defense, granting the Pentagon access to its Gemini AI models for use in sensitive military operations. The terms, as reported by Bloomberg, allow the military to use Google’s AI for “any lawful government purpose,” language broad enough to encompass mission planning, intelligence analysis, and potentially weapons targeting. The agreement came one day after 580 Google employees, including more than 20 directors, senior directors, and vice presidents, signed an open letter urging CEO Sundar Pichai to reject the deal.
The deal marks the most significant expansion of Google’s relationship with the US military since the 2018 Project Maven crisis, and it arrives in a geopolitical environment that has fundamentally shifted the calculus for AI companies caught between national security imperatives and employee ethics.
What the Agreement Actually Covers
The contract places Google’s AI inside classified networks — air-gapped systems physically isolated from the public internet — which sit at the center of the military’s most sensitive operations. Gaining access to classified environments marks a qualitative shift from the commercial or public-sector AI work that Google has done historically: it means the company’s models can be used in contexts where Google itself cannot monitor, audit, or modify their behavior in real time.
The agreement includes contractual language intended to prevent use of the AI for “domestic mass surveillance” or for “autonomous weapons without human oversight.” However, as critics inside the company have noted, once an AI system is deployed inside a classified network, Google has no technical means to enforce those boundaries. The company can define them in principle; it cannot veto how the technology is actually used.
This opacity is precisely what drove the employee opposition. The open letter, which circulated over the weekend and gathered signatures from staff across Google DeepMind, Google Cloud, and other divisions, warned that classified military AI work could cause “irreparable damage to Google’s reputation, business, and role in the world.” The letter argued that on air-gapped systems, “trust us” becomes the only guardrail — and that the company cannot meaningfully uphold its AI principles without visibility into deployment.
Andreas Kirsch, a research scientist at Google DeepMind who signed the letter under his own name, told Business Insider he was “incredibly ashamed” when the deal was confirmed, saying he had woken to a “worst-case version” of what employees had feared. Two-thirds of the letter’s signatories agreed to be named; the remaining third requested anonymity, citing concerns about retaliation.
The Anthropic Context
The deal’s significance is sharpened by a piece of context that TechCrunch reported separately: Anthropic, Google’s largest AI competitor in the frontier model space (and a company that receives significant Google investment), had reportedly declined a similar arrangement with the Pentagon.
Anthropic’s refusal — if confirmed — would represent a meaningful divergence in how two of the most prominent AI safety-focused companies are navigating the national security landscape. Anthropic’s Constitutional AI approach and its public emphasis on responsible development have been central to its positioning; a refusal to enter classified military AI work would be consistent with that brand. Google’s decision to proceed, by contrast, signals that the competitive pressure to serve US government contracts — and the revenue those contracts represent — has overridden internal and external safety concerns.
The comparison is likely to follow Google for years. When Anthropic declined, the Pentagon came to Google. Google said yes.
Project Maven’s Long Shadow
The current backlash echoes, but also differs from, the 2018 Project Maven episode that defined a generation of tech-military relations. In 2018, roughly 4,000 Google employees signed a petition opposing a contract to use AI to analyze drone footage for the Pentagon. Several senior researchers resigned. Google ultimately chose not to renew the contract.
This time, the employee opposition is smaller in absolute numbers but more senior in composition: the participation of directors and vice presidents from Google DeepMind — the company’s flagship AI research division — signals that the dissent reaches into the organization’s technical leadership rather than being confined to rank-and-file engineers.
The outcome, however, is the opposite. In 2018, employee pressure forced a policy reversal. In 2026, the deal was signed.
The difference reflects a shifted environment. Since 2018, the competitive landscape for government AI contracts has intensified dramatically. Microsoft has secured massive DoD relationships through its Azure Government cloud and its OpenAI partnership. Amazon Web Services operates Top Secret cloud infrastructure for the intelligence community. The calculation for Google leadership in 2026 is not whether to engage with national security customers — it is whether falling further behind Microsoft in government AI would be more damaging than the employee relations and reputational costs of engagement.
A Quiet Exit From Drone Swarms
The Pentagon deal arrived alongside a separate, quieter disclosure: Google has exited a $100 million competition to develop autonomous drone swarms for the US military, according to The Next Web. The decision was not formally announced, and the quiet handling appears designed to avoid drawing attention to the tension between Google’s classified AI engagement and its retreat from autonomous weapons programs, two moves now unfolding simultaneously.
The juxtaposition is instructive. Google is willing to put its AI inside classified military networks for general-purpose operational use. It is not willing, or not yet willing, to attach that AI directly to autonomous weapons platforms. Where exactly that line sits, and whether the classified agreement’s “human oversight” language is enforceable in practice, are the questions that 580 Google employees, including some of the company’s most prominent AI researchers, have raised explicitly without receiving a satisfying answer.
The Broader Industry Reckoning
The Google-Pentagon deal is not occurring in isolation. The US military has been systematically expanding its AI partnerships across the frontier lab ecosystem, and the pressure on companies to participate is driven not only by commercial incentives but by explicit government policy. The Trump administration’s AI executive orders have emphasized national competitiveness and deemphasized safety-focused regulation, creating a policy environment in which refusal to serve national security customers carries political as well as commercial costs.
For employees at AI companies who joined believing their work would be governed by explicit ethical constraints, Google’s decision on April 28 is a stark signal: the constraints are negotiable when the national security imperative is strong enough, and the company’s leadership will make that determination without requiring consensus from the people building the technology.
Whether that signal drives talent departures, as Project Maven did in 2018, or whether the current generation of AI researchers concludes that influence from inside is preferable to exit will shape workforce dynamics at Google and its competitors for the remainder of the decade.