Google Signs Classified Pentagon AI Deal After Anthropic's Refusal, Sparking Employee Revolt

Google has granted the U.S. Department of Defense access to its Gemini AI for classified networks under an 'any lawful purpose' clause — the same terms Anthropic refused, leading the DoD to designate the startup a 'supply-chain risk.' Over 600 Google employees have signed an open letter urging CEO Sundar Pichai to reverse course.


Google has signed a classified agreement with the United States Department of Defense granting the Pentagon access to its Gemini AI models for use on air-gapped, classified networks under a sweeping “any lawful government purpose” clause — the exact terms that Anthropic refused to accept months ago, triggering a remarkable chain of events that has now cleaved the AI industry into two camps: those willing to hand Washington the keys, and those who are not.

The Deal and What It Permits

The agreement allows the Pentagon to deploy Google’s Gemini models across classified infrastructure for, as the contract language states, “any lawful government purpose.” In practice, this means the DoD can run Gemini on air-gapped networks without Google having visibility into how queries are formed, what outputs are generated, or what operational decisions are made using those outputs.

The deal does include a safeguards clause: “The parties agree that the AI System is not intended for, and should not be used for, domestic mass surveillance or autonomous weapons (including target selection) without appropriate human oversight and control.” But critics — including many of Google’s own engineers — note that on a classified, air-gapped network, there is no technical mechanism for Google to enforce that language. The company can write it into a contract, but it cannot audit compliance.

Google’s move places it alongside OpenAI and Elon Musk’s xAI as classified AI suppliers to the U.S. military — a trio now firmly on one side of what is becoming the defining ethical fault line in the industry.

Why Anthropic Said No

The Pentagon’s request for “any lawful purpose” access was not new. The DoD approached Anthropic with essentially the same terms earlier this year. Anthropic, led by CEO Dario Amodei, declined. The company argued that such open-ended language could authorize the government to use its models for domestic mass surveillance programs and for autonomous weapons systems that select and engage targets without meaningful human oversight — two use cases Anthropic said were incompatible with its Constitutional AI safety principles.

The refusal was not quiet. Anthropic made its position public, framing it as a principled stand: the company would support national security applications where appropriate guardrails were in place, but would not sign a blank check.

The Department of Defense responded by designating Anthropic a “supply-chain risk” — a classification normally reserved for foreign adversaries or companies with demonstrated ties to hostile state actors. The designation effectively bars federal agencies from procuring Anthropic’s models and allows other contractors to be warned away from integrating them. Anthropic sued, and a federal judge last month issued an injunction blocking enforcement of the designation while the case proceeds — but the litigation continues, and the reputational and commercial damage from being labeled a supply-chain risk by the U.S. government is difficult to quantify.

The Google Employee Revolt

Inside Google, the deal has provoked the most visible internal rebellion since the company's controversial Project Maven AI drone work in 2018, when thousands of employees signed a protest petition, a number resigned, and Google ultimately declined to renew the contract.

More than 600 Google employees — a disproportionate number from the company’s DeepMind AI research division — signed an open letter addressed to CEO Sundar Pichai urging him to reject classified military AI work. The letter described potential uses as “inhumane” and called on leadership to follow Anthropic’s example, demanding that any military use of Google AI be subject to public, auditable safeguards.

Pichai announced the deal anyway.

The employee letter notes a particular concern: unlike enterprise cloud deployments where Google retains some visibility into how its products are used, classified networks are specifically designed to prevent that transparency. “We cannot see what is done with our work,” the letter reportedly argues. “That is not a feature. It is an abdication of responsibility.”

A Strategic Calculation

Google’s decision is not hard to understand from a business perspective. Pentagon AI contracts represent billions of dollars annually, and the DoD has made clear it intends to accelerate AI integration across intelligence analysis, logistics, communications, and — more controversially — operational planning. Companies that establish early classified-network deployments stand to build deep, durable integrations that competitors will struggle to displace.

The company is also acutely aware of the competitive optics. OpenAI signed its classified AI deal with the DoD shortly after Anthropic's refusal, and xAI followed. For a company intent on maintaining market-leader positioning, watching two major competitors secure classified government work made the calculus straightforward, even if the ethical questions are not.

In a notable side development, The Next Web reported that Google quietly withdrew from a separate $100 million DoD drone swarm contest around the same time the classified deal was announced — suggesting the company may be drawing selective lines about which specific military applications it will support, even as it opens the broader classified door.

The Fracture in the AI Industry

This episode marks a genuine inflection point in how the AI industry relates to government power. For years, major AI labs operated under a shared — if often implicit — assumption that they would set ethical limits on how their technology could be used, even by state actors. Anthropic’s refusal was the most explicit public test of that assumption, and the DoD’s response was swift and punitive.

The outcome has now created a powerful deterrent against other labs holding firm. If refusing the Pentagon’s terms results in a “supply-chain risk” designation that chills federal procurement for years, the rational commercial calculation for any venture-backed AI company is to comply.

Advocates for AI accountability argue this is precisely why statutory guardrails — not voluntary corporate ethics policies — are necessary. Without enforceable rules about what government agencies can ask AI companies to do, and what AI companies are permitted to do in classified environments, the current race to secure DoD contracts will systematically erode whatever ethical limits labs currently maintain.

What Comes Next

The Anthropic lawsuit is likely to set important precedents about government procurement power and the limits of “supply-chain risk” designations. Legal scholars have noted that applying that designation to a domestic company for refusing commercial terms is legally unusual and potentially overreaching.

For Google, the near-term calculus appears to be paying off: the company enters earnings season on April 29 with a significant classified government AI portfolio alongside its commercial Gemini deployments. But the longer-term reputational and talent costs — particularly as it competes with Anthropic and others for the top AI researchers who are increasingly outspoken about where their work goes — remain to be seen.

The 600 employees who signed the letter are still at their desks. For now.

