California Becomes First State to Mandate AI Safety Disclosures for Government Contracts
Governor Newsom signed Executive Order N-5-26, requiring AI vendors to disclose safety policies on CSAM, civil rights, and anti-discrimination as a condition for state contracts. The order also asserts California's authority to independently overrule federal AI supply-chain risk designations, setting up a direct constitutional clash with the Trump administration.
California has fired the opening shot in what promises to be a prolonged federal-state battle over AI governance authority. Governor Gavin Newsom signed Executive Order N-5-26 on March 30, making California the first US state to impose mandatory AI safety disclosure requirements as a condition for vendors seeking state government contracts — and explicitly reserving the right to overrule the federal government on which AI companies the state can do business with.
The timing is pointed. President Trump’s administration has systematically dismantled the Biden-era AI safety framework, rescinding Executive Order 14110, withdrawing the United States from the Global AI Safety Partnership, and defunding the AI Safety Institute within NIST. California’s order lands as a direct repudiation of that direction — and as a statement that the most economically significant US state intends to set its own AI governance standards regardless of federal posture.
What the Order Requires
The executive order establishes several concrete obligations for AI vendors seeking California state contracts:
Safety policy disclosure: Companies must publicly document their policies on preventing child sexual abuse material (CSAM) generation, on compliance with civil rights statutes, on anti-discrimination in model outputs, and on transparency about training data provenance. These disclosures must be updated annually and made available to state procurement officers.
Watermarking standards: The order instructs California’s technology agencies to adopt AI output watermarking best practices across all state-procured generative AI tools. This applies to text, image, and audio generation systems used in government workflows.
Federal risk designation independence: Most significantly, the order directs California’s Chief Information Security Officer (CISO) to independently review any AI supply-chain risk designations issued by the federal government. The CISO retains authority to overrule federal determinations and permit California to contract with AI vendors that Washington has flagged as security risks — or conversely, to block vendors that federal agencies have cleared.
Public AI assistant: The order mandates the development of a publicly accessible government AI assistant to help California residents navigate state services, with strict requirements on data minimization and user privacy.
The Federal Collision Course
The supply-chain risk provision is the most legally contentious element of the order. The Trump administration has been expanding the use of entity lists and supply-chain risk frameworks to restrict which technology vendors can receive government contracts — a power it has wielded against both Chinese firms and, in some cases, US companies it views as insufficiently aligned with administration priorities.
California’s order asserts that state procurement decisions are a matter of state sovereignty and that the federal government cannot compel California to adopt its vendor risk determinations. Legal scholars are divided on the strength of this position. Federal contracting law and national security statutes give Washington broad authority over supply chains for federally funded programs. But California’s state budget — at roughly $320 billion annually — dwarfs many countries’ total government spending, and the state sources billions in AI and technology services funded entirely by state revenues.
“California is essentially saying: if you want to sell AI to the largest state economy in the country, you play by our rules — not Washington’s,” said one technology policy attorney familiar with the order. “That’s a significant market signal even if the legal position is ultimately tested in court.”
Implications for AI Vendors
For the major AI labs, the order creates a new compliance obligation and a potential wedge between federal and state business. Anthropic, which has significant Department of Defense contracts, may face a scenario where federal agencies flag its models for security review while California actively seeks to contract with it. OpenAI, Google, and Microsoft all have substantial California state government relationships that would now come with enhanced disclosure requirements.
Smaller vendors face a different calculus. The disclosure requirements — particularly around training data provenance and civil rights compliance — impose documentation burdens that enterprise-scale companies can absorb but that could strain smaller AI startups. Industry groups have already begun lobbying Newsom’s office for a tiered compliance framework based on contract value.
The watermarking requirement is drawing particular interest from AI detection companies. By mandating watermarking best practices across all state AI procurement, California effectively creates a ready market for watermark verification tools and establishes a template that other states may follow.
California as AI Regulatory Trendsetter
This is not California’s first move to establish AI standards that outpace federal action. The California Consumer Privacy Act (CCPA) effectively shaped US data privacy norms for years before any federal privacy legislation materialized. California legislators are also advancing SB 1047’s successor bills, which would impose liability on AI developers for harms from foundation models.
Whether California can successfully maintain AI governance autonomy in the face of federal pushback is a live question. The Trump administration has indicated it views state AI regulations as barriers to a unified national AI strategy. The Department of Justice is reportedly evaluating whether California’s supply-chain risk independence provision conflicts with federal statutes.
But for now, California has staked out a clear position: AI safety is not negotiable, even when Washington disagrees about what it means. For the 39 million Californians who interact with state services — and for the hundreds of AI companies headquartered in the Bay Area and Los Angeles — the order represents a substantial shift in the ground rules of government AI deployment.
What Comes Next
Observers are watching whether other large states follow California’s lead. New York, Illinois, and Colorado all have active AI governance legislation in various stages of development. A handful of state procurement officers from these jurisdictions have reportedly requested briefings from California’s technology agency on the implementation details of N-5-26.
At the federal level, the order is likely to accelerate efforts to establish preemption doctrine around AI regulation — the legal mechanism by which federal law supersedes conflicting state rules. The AI industry’s national trade associations have long sought federal preemption to avoid a patchwork of fifty different state AI regimes. California’s executive order makes that patchwork significantly more consequential, and the fight over preemption more urgent.
For the broader AI ecosystem, the California order is a reminder that in the absence of coherent federal AI policy, the regulatory vacuum gets filled — by states, by courts, or by industry self-governance. The Frontier Model Forum’s anti-distillation alliance and California’s procurement order are both, in their different ways, responses to the same underlying instability. The rules of AI are still being written, and increasingly, they are being written at the state level first.