New York's RAISE Act Becomes Law, Igniting Federal-State AI Regulation War
Governor Kathy Hochul signed New York's Responsible AI Safety and Education Act into law on March 27, 2026, establishing the most stringent AI safety framework enacted by any U.S. state. The law takes effect January 1, 2027, but the Trump administration's DOJ AI Litigation Task Force is already positioned to challenge it, setting up a defining constitutional battle over who governs AI in America.
When President Trump signed an executive order in December 2025 directing federal agencies to challenge state AI regulations, most observers expected the order to act as a chilling signal — a legal warning shot that would push state legislators toward caution. New York’s response was to pass one of the most stringent AI safety laws in the country anyway.
On March 27, 2026, Governor Kathy Hochul signed the final version of the Responsible AI Safety and Education Act — the RAISE Act — into law. The signing concluded a legislative journey that began with Hochul vetoing an earlier version in late 2024 over concerns that it was too broad. The law she signed in March reflects months of negotiation between the governor’s office, industry lobbyists, and civil society advocates, resulting in a more precisely scoped but still substantively demanding framework.
The RAISE Act takes effect January 1, 2027. The Trump administration’s Department of Justice has established an AI Litigation Task Force specifically to challenge state AI laws. The collision is now a matter of timing.
What the RAISE Act Actually Requires
The law targets what it calls “large frontier AI developers” — a category defined by compute thresholds and model capability assessments, designed to capture companies like Anthropic, OpenAI, Google, and Meta while exempting smaller developers and open-source contributors. For covered entities, the law imposes three core obligations.
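Before any of those obligations attach, coverage is the threshold question. The exact cutoffs matter less here than the shape of the test; in the sketch below, every numeric threshold and parameter name is an illustrative assumption, not language from the statute.

```python
# Rough model of the RAISE Act's coverage test. The thresholds below are
# assumptions for illustration only, not figures from the statute.

ASSUMED_FLOPS_THRESHOLD = 1e26       # hypothetical training-compute cutoff
ASSUMED_COST_THRESHOLD_USD = 100e6   # hypothetical training-cost cutoff

def is_covered_developer(training_flops: float,
                         training_cost_usd: float,
                         meets_capability_criteria: bool,
                         is_small_or_open_source: bool) -> bool:
    """True if a developer counts as a 'large frontier AI developer' in
    this simplified model: frontier-scale compute plus a capability
    assessment, with the carve-outs the law provides for smaller
    developers and open-source contributors."""
    if is_small_or_open_source:
        return False
    exceeds_scale = (training_flops >= ASSUMED_FLOPS_THRESHOLD
                     or training_cost_usd >= ASSUMED_COST_THRESHOLD_USD)
    return exceeds_scale and meets_capability_criteria
```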
Safety protocol publication. Covered developers must create, maintain, and publicly disclose documentation of their safety frameworks. This includes descriptions of how the organization identifies risks from its models, what internal evaluation procedures it uses before deployment, and how safety incidents are escalated internally. The requirement is modeled loosely on the EU AI Act’s system transparency provisions, though New York’s disclosure requirements are in some respects more granular.
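In practice, a disclosure like that tends to live as structured documentation. The sketch below is one guess at how a compliance team might organize it; the field names are hypothetical, not statutory language.

```python
from dataclasses import dataclass

# Hypothetical shape of a published safety-framework disclosure.
# Field names are illustrative assumptions, not statutory language.

@dataclass
class SafetyFrameworkDisclosure:
    risk_identification_methods: list[str]  # how risks from the models are identified
    predeployment_evaluations: list[str]    # internal evaluation procedures run before release
    incident_escalation_path: list[str]     # who is notified internally, and in what order
    last_revised: str                       # the framework must be maintained, not published once
```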
72-hour incident reporting. When a covered developer determines that a “critical safety incident” has occurred, it must report the incident to a new oversight office within 72 hours. The law defines critical safety incidents to include outputs that cause significant harm, evidence of unexpected capability gains, and security breaches that expose model weights or training data. The 72-hour window is notably shorter than the 15-day reporting requirement under California’s own AI safety framework, making New York’s standard the most demanding in the country.
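The deadline arithmetic is simple but unforgiving, because the clock runs from the developer’s determination that an incident is critical rather than from the incident itself. A minimal sketch, with the California window included for contrast:

```python
from datetime import datetime, timedelta, timezone

NY_WINDOW = timedelta(hours=72)  # RAISE Act reporting window
CA_WINDOW = timedelta(days=15)   # California's window, for comparison

def ny_reporting_deadline(determined_at: datetime) -> datetime:
    """The 72 hours run from the moment the developer determines the
    incident is critical, not from when the incident occurred."""
    return determined_at + NY_WINDOW

determined = datetime(2027, 2, 3, 9, 0, tzinfo=timezone.utc)
print(ny_reporting_deadline(determined))  # 2027-02-06 09:00:00+00:00
print(determined + CA_WINDOW)             # 2027-02-18 09:00:00+00:00
```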
Oversight office jurisdiction. The law creates an oversight body within the New York Department of Financial Services, which will have authority to assess covered developers, request additional documentation, and refer violations for enforcement. The financial services agency was chosen as the host partly for its existing technical capacity around algorithmic systems and partly for its demonstrated willingness to act aggressively — a history that reassures safety advocates and makes industry nervous.
The Political Context
The RAISE Act’s passage is politically remarkable. Hochul vetoed a related bill in December 2024, citing concerns that it would chill AI investment in New York and burden developers with unworkable compliance obligations. The legislature responded with a revised version advanced through New York’s chapter-amendment process, under which the governor negotiates specific modifications as a condition of signing. The final law that emerged on March 27 represents what insiders describe as a genuine compromise: tighter jurisdictional scope, clearer definitions of covered entities, and more explicit safe harbors for research and open-source development.
That Hochul signed it at all, less than four months after Trump’s December EO and with the DOJ AI Litigation Task Force already in place, reflects the internal dynamics of New York politics more than any federal-state comity. The state’s congressional delegation has been vocal about AI risks; consumer advocacy groups have run sustained public pressure campaigns; and the tech industry, which lobbied heavily against earlier versions, concluded that a negotiated RAISE Act was preferable to the more sweeping legislation that might have followed another veto.
For the AI industry, the law’s real practical significance lies in the incident reporting requirement. The 72-hour window creates an obligation to determine whether an incident is “critical,” and then to report it, on a timeline that many safety engineers consider operationally challenging. A frontier model deployment serving hundreds of millions of users generates numerous potential anomalies daily; building the triage infrastructure to distinguish reportable incidents from noise within 72 hours is a non-trivial engineering and legal-compliance challenge.
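What that triage layer has to decide can be sketched in a few lines. Everything here, from the anomaly fields to the filtering rule, is an assumption about how a compliance team might structure the problem rather than anything the law prescribes; the three boolean flags mirror the statute’s three categories of critical safety incident.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Anomaly:
    observed_at: datetime
    caused_significant_harm: bool      # harmful outputs (first statutory category)
    unexpected_capability_gain: bool   # surprise capability jump (second category)
    exposed_weights_or_data: bool      # breach touching weights or training data (third)

def is_reportable(a: Anomaly) -> bool:
    # An anomaly becomes a "critical safety incident" if it falls into
    # any of the three categories the law enumerates.
    return (a.caused_significant_harm
            or a.unexpected_capability_gain
            or a.exposed_weights_or_data)

def triage(stream: list[Anomaly], determined_at: datetime) -> list[tuple[Anomaly, datetime]]:
    """Filter a day's anomaly stream down to reportable incidents, pairing
    each with the 72-hour deadline that runs from the determination."""
    return [(a, determined_at + timedelta(hours=72))
            for a in stream if is_reportable(a)]
```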
The Federal Challenge
The Trump administration’s December 2025 executive order directed the DOJ to establish an “AI Litigation Task Force” to challenge state laws that the federal government considers burdensome to AI development. The task force was announced formally on January 9, 2026. Its mandate is to identify and challenge state AI regulations that conflict with or undermine federal AI policy — a broadly written mandate that covers preemption claims under the Supremacy Clause as well as First Amendment challenges to compelled disclosure requirements.
Legal analysts are divided on how strong a preemption case the federal government has. Federal AI policy under the current administration is deliberately permissive — the guiding principle has been that overly restrictive regulation would cede AI leadership to China. But preemption doctrine generally requires either explicit congressional action or a demonstrated conflict between state and federal law. The federal government has passed no comprehensive AI legislation; the December EO is a policy statement, not a statute. Courts have historically been skeptical of implied preemption claims when Congress has not occupied the regulatory field.
The more likely angle for a federal challenge is a First Amendment argument against the RAISE Act’s compelled disclosure requirements — the argument that forcing companies to publish internal safety documentation constitutes compelled speech. This is a genuinely contested area of constitutional law, and several tech companies have quietly funded legal research exploring exactly this theory.
States vs. Washington: The New Frontier
New York’s RAISE Act is the most prominent but not the only state-level AI safety law that has been enacted or is being considered in the first quarter of 2026. California passed its own AI safety framework earlier in 2025; Colorado has enacted an algorithmic transparency law for consequential automated decisions; and at least eight other states have active AI legislation in progress.
This proliferation of state laws was the proximate cause of the Trump administration’s preemption strategy. Industry groups, particularly those representing frontier AI developers, have long argued that a patchwork of 50 different state regulatory regimes would be unworkable, each with different definitions, thresholds, reporting timelines, and enforcement mechanisms. Their preferred outcome is federal preemption that establishes a single, lighter-touch national standard, leaving no room for more demanding state laws to operate.
Civil society groups and state attorneys general see the same patchwork and draw the opposite conclusion: that the diversity of approaches represents regulatory experimentation that is ultimately valuable, and that federal preemption would lock in a permissive standard before the risks of frontier AI systems are adequately understood.
The RAISE Act’s January 2027 effective date means there is roughly nine months before any enforcement can begin — which is also roughly nine months for a federal challenge to work through the courts. Whether the DOJ files suit, and when, will be the most consequential policy decision in U.S. AI governance since the December EO. New York has made clear that it intends to defend the law. The battle is now fully joined.