
Pentagon Clears 7 Tech Giants for Classified AI Networks While Freezing Out Anthropic

The U.S. Department of War has granted AI deployment rights on its most sensitive classified networks to seven companies — Microsoft, Amazon, Google, OpenAI, SpaceX, Nvidia, and Reflection — explicitly excluding Anthropic, which was designated a national security supply chain risk in February after refusing to remove restrictions on autonomous weapons and mass surveillance use cases.


The U.S. Department of War has formalized what may be the most consequential line drawn in American AI policy: seven technology companies have been cleared to deploy their artificial intelligence systems on the Pentagon’s most sensitive classified networks, while Anthropic — until recently the only AI company with such access — has been explicitly shut out.

The agreements, announced on May 1, 2026, cover deployment on Impact Level 6 and Impact Level 7 network environments — the infrastructure used for classified and top-secret military operations. The cleared companies are Microsoft, Amazon Web Services, Google, OpenAI, SpaceX, Nvidia, and Reflection, a newer AI startup backed by Nvidia. Oracle has also been reported as an eighth signatory by some outlets. Together, these firms will integrate their AI capabilities into the Pentagon’s operational and intelligence systems, spanning computer vision, generative AI decision support, and data operations.

The Company That Got Left Behind

The most striking aspect of the announcement is who is absent. Anthropic was, until February 2026, the only AI company whose services were cleared for use on the Defense Department’s classified networks. That exclusive position — the product of years of cautious engagement with government customers — was revoked in a single stroke when Defense Secretary Pete Hegseth signed an order designating Anthropic a “supply chain risk.”

The designation marked an unprecedented moment in American tech policy. Supply chain risk labels have historically been reserved for foreign adversaries — most notably Chinese companies like Huawei and SMIC that were added to the Entity List during the Trump administration’s first term. Applying the same framework to a San Francisco AI startup founded by former OpenAI safety researchers was, by any measure, extraordinary.

The root of the conflict is straightforward: acceptable use policy. Anthropic’s terms of service prohibited the use of Claude both for mass domestic surveillance of American citizens and in fully autonomous weapons systems — AI that selects and engages targets without a human in the decision loop. The Pentagon, under its current leadership, sought to renegotiate those terms, insisting that Anthropic allow military use of Claude “for all lawful purposes” without carveouts. Anthropic refused.

Anthropic’s response was immediate. In March 2026, the company filed suit against the Trump administration seeking to reverse the Pentagon’s blacklisting, arguing the designation was procedurally improper and violated due process. The legal fight has produced its own drama: in April, Anthropic lost an appeals court bid to temporarily block the blacklisting while litigation proceeds, meaning the designation remains in effect as the underlying case works through the courts.

The White House and the Pentagon have drifted apart on the issue. Some senior administration officials have privately expressed discomfort with applying a supply chain risk designation to a domestic company, while others in the defense establishment have backed Hegseth’s move. The Pentagon’s chief technology officer confirmed in early May that Anthropic remains blacklisted, characterizing the dispute as “a separate issue” from other AI procurement decisions.

What Access to IL6/IL7 Means

For any technology company with government ambitions, the significance of Impact Level 6 and 7 authorization is hard to overstate. IL6 covers classified national security information up to the Secret level; IL7 encompasses Special Access Programs and other top-secret compartmented material. Clearance for these environments is effectively a prerequisite for deploying AI in serious military applications — targeting assistance, signals intelligence analysis, logistics optimization under contested conditions, battlefield simulation, and more.

For companies like Microsoft (through Azure Government), Amazon Web Services (GovCloud), and Google (through its Federal Cloud), IL6/IL7 authorization builds on existing classified cloud infrastructure. For OpenAI and SpaceX, which have been aggressively expanding their government presence, the agreements signal a deepening relationship with the defense establishment that few in the private sector have matched. Nvidia’s inclusion reflects the centrality of its GPU infrastructure to military AI compute.

Reflection, the least-known of the seven, has gained attention as a startup specifically architected for defense use — built from the ground up to meet government security requirements rather than retrofitting consumer infrastructure.

Scale AI’s Separate $500M Win

Running parallel to the classified network agreements is a separate major defense AI contract. Scale AI, the data infrastructure and model evaluation company backed by Meta, secured a $500 million expanded deal with the Pentagon’s Chief Digital and Artificial Intelligence Office (CDAO) — five times its previous $100 million agreement. The expansion reflects the rapid uptake of Scale AI’s platform for computer vision model development, generative AI decision support, and MLOps across Pentagon components.

Scale AI’s deal structure, an Other Transaction Authority vehicle, was specifically designed to bypass the slow traditional acquisition process and allow any Pentagon component to initiate its own AI project agreements without a separate competitive bid. The $500 million ceiling represents an acknowledgment that demand across the Department has already exceeded what the original contract could accommodate.

The Pentagon’s Strategic Calculus

Pentagon officials framed the seven-company agreements in explicitly competitive terms. One official stated that the Defense Department would “never again” rely on a single AI provider — a direct reference to the concentration risk created by Anthropic’s previous status as the sole vendor approved for classified networks. The new strategy establishes redundancy across the major AI platforms while ensuring the military is not dependent on any single company’s continued cooperation.

The framing also reflects a broader Trump-era defense technology doctrine that prioritizes broad commercial AI adoption over the kind of carefully negotiated acceptable use frameworks that characterized the prior administration’s approach. The message to the AI industry is clear: companies that impose restrictions on military use cases will be sidelined.

Implications for the AI Safety Debate

The Anthropic situation has become a flashpoint in the ongoing debate about the relationship between AI safety commitments and commercial viability. Anthropic’s refusal to permit autonomous weapons use was not a private internal policy — it was a public commitment, embedded in its acceptable use policy and central to its brand positioning as a safety-conscious AI developer.

The Pentagon’s response demonstrates that safety constraints, however sincere, are not free. Exclusion from classified network deployment cuts Anthropic off from a significant category of government revenue and from influence over how AI is applied in national security contexts. It also creates a perverse incentive structure: companies willing to accept fewer restrictions on military use are rewarded with the most sensitive access.

Industry observers have noted the irony: at the very moment Anthropic is overtaking OpenAI as the largest revenue-generating AI company in the private sector, it is being systematically excluded from one of the largest government procurement categories over a policy dispute about autonomous weapons. The dual reality — commercial triumph and regulatory exile — has become one of the defining tensions of AI policy in 2026.

The outcome of Anthropic’s lawsuit may ultimately determine whether safety-constrained AI companies can participate in classified defense markets at all, or whether the Pentagon’s “all lawful purposes” standard becomes the de facto industry requirement for government AI contracts.

