
California Breaks from Washington: Newsom Signs Historic AI Procurement Executive Order

Governor Gavin Newsom signed Executive Order N-5-26, establishing first-in-the-nation AI procurement standards for California state contracts and explicitly decoupling the state's supply chain risk assessments from federal designations. The move escalates the battle between California and the Trump administration over who gets to govern American AI.


The collision between California and the federal government over AI governance reached a new intensity on March 30, when Governor Gavin Newsom signed Executive Order N-5-26 — a sweeping directive that establishes the nation’s first comprehensive AI procurement standards for state government contracting and, in a pointed rebuke to Washington, explicitly instructs state agencies to conduct their own independent AI risk assessments rather than deferring to federal designations.

The order is simultaneously a governance document and a political declaration. It sets concrete requirements for how AI systems may be used in California government operations, creates new protections against discriminatory and harmful AI applications in public services, and — most consequentially — severs the link between California’s supply chain decisions and the Pentagon’s own AI blacklisting process.

The Anthropic Flashpoint

To understand why the supply chain language landed like a bomb in Washington, you need to understand what happened in February 2026. The Department of Defense, using authorities asserted under the administration’s AI executive order from late 2025, designated Anthropic as a supply chain risk — a move that effectively barred the San Francisco AI company from numerous federal contracting opportunities and sent shockwaves through the AI investment community.

The DOD’s stated rationale was that Anthropic had refused to grant defense department officials override rights for autonomous weapons use cases — a refusal Anthropic characterized as a principled red line consistent with its responsible AI commitments and its published guidance on acceptable use.

A federal district court subsequently enjoined the DOD’s Anthropic designation, finding that the administration had likely overstepped its statutory authority. But the episode made clear that the federal government was prepared to use procurement power as a tool to extract compliance from AI companies on sensitive national security use cases — a precedent that alarmed California officials, AI industry executives, and civil liberties advocates alike.

Newsom’s executive order responds to that precedent directly. It instructs the California Department of Technology and related agencies to conduct their own independent assessments of AI supply chain risk, applying California-specific criteria focused on civil rights compliance, consumer protection, and public accountability — rather than automatically adopting federal risk designations that may reflect national security calculations unrelated to those values.

What the Order Actually Requires

Beyond the supply chain independence provisions, EO N-5-26 contains a set of substantive AI governance requirements that will apply to any company seeking a California government technology contract:

Mandatory AI use disclosure. Vendors must proactively disclose how AI systems are being used in products or services delivered to California agencies, including whether AI is being used in ways that affect state residents.

Anti-discrimination and civil rights standards. Companies must demonstrate that their AI systems do not produce outputs that result in illegal discrimination, civil rights violations, or harmful bias against protected categories. This is not a general aspiration — it is a contractual requirement.

Synthetic media watermarking. The order directs the California Department of Technology to establish best practices for AI watermarking of synthetically generated media — the first such state-level requirement in the country. Vendors working in content generation for state government will be required to comply once those standards are finalized.

Expanded GenAI in public services. The order also directs state agencies to accelerate the use of generative AI to improve public services, including the deployment of a new life-event AI navigator that will guide California residents through complex government processes such as benefits enrollment, permit applications, and emergency assistance.

Independent risk framework. The California Department of Technology is tasked with developing and maintaining an AI risk framework for state procurement that will be updated on a rolling basis — explicitly designed to be independent from, and potentially inconsistent with, federal frameworks.

The Federalism Stakes

Analysts who study the intersection of technology law and federalism describe EO N-5-26 as the most significant state-level AI governance action in the current cycle — more consequential, in some respects, than the New York RAISE Act or the various state-level AI disclosure bills that have proliferated in recent months.

The reason is market power. California’s state government is one of the largest technology procurement markets in the world; by some estimates, the state spends over $15 billion annually on technology goods and services. When California sets procurement requirements, it does not merely affect California — it establishes de facto standards that vendors must meet if they want access to that market, and vendors rarely build two separate versions of their compliance programs.

This is the same dynamic that has made California’s environmental and consumer protection standards effectively national in scope for decades. Because of the state’s market size, complying with California rules everywhere is often cheaper than maintaining a separate California-specific version of a product, so companies simply adopt the California standard nationwide.

Legal scholars are divided on whether the federal government can preempt California’s AI procurement standards. The administration has signaled interest in using the president’s executive order on AI — which asserts a policy of limiting “patchwork” state AI regulation — as a basis for challenging state procurement requirements. California’s lawyers argue that the state has broad constitutional authority to set conditions on its own spending, and that federal preemption would require an act of Congress that hasn’t occurred.

A Calculated Political Escalation

It would be naive to read EO N-5-26 as purely a governance document. Newsom has positioned himself as the most visible and combative Democratic counterweight to the Trump administration across a range of policy fronts, and AI is now firmly on that list.

The order was timed and framed explicitly in contrast to federal AI policy. The official announcement, published by the Governor’s office, was titled “As Trump Rolls Back Protections, Governor Newsom Signs First-of-Its-Kind Executive Order to Strengthen AI Protections and Responsible Use.” The framing leaves nothing to interpretation.

For the AI industry, the political theater is less important than the practical implications. Companies like Anthropic, Google, Microsoft, and OpenAI — all of which have significant California state government business or ambitions — are now operating in an environment where state and federal AI governance frameworks are not merely different but potentially contradictory. Compliance officers at major AI vendors are already mapping the gaps.

What Comes Next

The immediate regulatory calendar will determine whether EO N-5-26 develops real teeth. The California Department of Technology has 90 days to publish an initial AI risk framework for procurement; the quality and specificity of that framework will decide whether the order functions as a genuine governance mechanism or as a political statement with limited practical effect.

The synthetic media watermarking standards process is expected to take six to nine months and will likely draw heavily on work already done by the Content Authenticity Initiative and C2PA (Coalition for Content Provenance and Authenticity), in which California-based companies including Adobe and Google have been active participants.

For the federal-state governance battle, the next major flashpoint will likely come when the federal administration attempts a formal preemption action — either through executive order, FTC rulemaking, or congressional pressure. California has indicated it will litigate any such attempt aggressively.

In the interim, EO N-5-26 stands as the clearest signal yet that, in the absence of federal AI legislation, the states — and California in particular — intend to fill the governance vacuum on their own terms.

