Trump Executive Order Activates DOJ Task Force to Override State AI Laws
The DOJ's AI Litigation Task Force, operational since January 10, is now actively challenging state AI statutes that conflict with the Trump administration's December 2025 executive order preempting local regulation. With over 20 states having enacted comprehensive AI laws, the outcome of this federal-state standoff will define who governs AI in America for the next decade.
When President Trump signed an executive order in December 2025 directing federal agencies to preempt state AI laws that conflict with national AI policy, legal scholars disagreed about whether it would have teeth. An executive order cannot, on its own, override state laws enacted under the states' constitutional authority; under the Supremacy Clause, that requires an act of Congress, or a court ruling that federal law preempts the state statute.
What the EO could do — and has done — is direct the Department of Justice to challenge those laws in court, on the theory that they conflict with federal interests in ways that trigger implied preemption. The DOJ AI Litigation Task Force became operational on January 10, 2026, and as of early April, it is actively pursuing litigation against several state AI statutes across the country.
The patchwork is about to be tested.
The Stakes: 20+ State AI Laws in Effect
More than twenty states have enacted comprehensive AI-related statutes that had taken effect by January 1, 2026. These laws vary widely in scope and substance: Colorado's SB 205 imposes algorithmic impact assessment requirements on high-risk AI decision systems. Texas has enacted disclosure requirements for AI-generated political content. California has a suite of bills covering everything from synthetic media to automated hiring tools to healthcare AI decision-making.
Taken together, these laws represent the most expansive subnational AI regulatory framework in the world. The EU AI Act is comprehensive but jurisdictionally unified; American AI companies instead face a patchwork of varying, sometimes conflicting requirements across states, with different definitions of "high-risk AI," different disclosure formats, different audit requirements, and different enforcement mechanisms.
The Trump EO’s argument is that this patchwork constitutes an unconstitutional burden on interstate commerce and conflicts with the federal government’s exclusive authority over national security-related AI applications. The DOJ Litigation Task Force is the enforcement mechanism for that argument.
What the Commerce Department Published
On March 11, 2026, the Commerce Department published its evaluation of “burdensome state AI laws” — the result of a mandate in the December EO to catalog the regulatory landscape. The evaluation identifies specific statutes in at least eight states as candidates for federal challenge based on three criteria:
- Extraterritorial reach: Laws that attempt to regulate AI systems or developers located outside the state’s borders.
- Conflict with federal AI standards: Laws that impose requirements that directly contradict emerging federal guidelines for AI in healthcare, financial services, or national security contexts.
- Barriers to federal AI procurement: Laws that would prevent state government agencies from using AI tools acquired under federal contracts.
The FTC also issued its AI policy statement on March 11, clarifying that it considers unfair or deceptive AI practices to fall within its existing Section 5 authority — a move designed to establish a federal regulatory floor that could support preemption arguments for duplicative state consumer protection laws.
The Funding Leverage Mechanism
Beyond litigation, the EO includes a less-discussed but potentially more powerful tool: it conditions certain federal broadband and technology funding on states pausing enforcement of AI statutes that conflict with federal priorities.
This funding conditionality mechanism mirrors approaches used in other federal-state regulatory contexts — most notably the use of highway funding to enforce the national minimum drinking age in the 1980s, which the Supreme Court upheld in South Dakota v. Dole (1987). If the same logic applies here, states that want access to federal AI infrastructure grants from the CHIPS Act and broadband programs may have to agree to pause enforcement of specific AI laws during the litigation period.
Legal scholars are divided on whether the funding mechanism is constitutionally permissible in this context. “The Dole framework requires that the condition be related to the federal interest being served,” notes a Ropes & Gray analysis published in March. “Whether ‘pause your algorithmic hiring bias law’ is sufficiently related to ‘broadband deployment funding’ is not obvious.”
Carve-Outs That Reveal Priorities
The EO’s explicit carve-outs are as revealing as its targets. The following categories of state AI law are explicitly excluded from preemption efforts:
- Child safety laws: State laws requiring age verification, content moderation, or parental consent for AI systems used by minors.
- State procurement rules: States retain authority to set their own standards for AI systems used in state government operations.
- Data center permitting: States retain authority over land use, zoning, and utility regulations for AI infrastructure — a significant carve-out given the enormous amount of data center construction planned across the country.
The carve-outs suggest the EO is targeting state laws that create friction for private sector AI deployment, not laws that protect state sovereignty or vulnerable populations. That framing will be central to the constitutional challenges the litigation task force will face.
Industry Response: Complicated
The tech industry’s response to the EO has been more complicated than a simple “industry supports federal preemption” narrative would suggest. Large AI companies — OpenAI, Google, Meta, Anthropic — have generally supported federal-level AI regulation as preferable to a state patchwork, and some have lobbied explicitly for preemption of conflicting state laws.
But the same companies have expressed concern about the EO's breadth. Federal preemption that eliminates meaningful state AI oversight before Congress has passed comprehensive federal AI legislation could create a regulatory vacuum: a period in which neither federal nor state protections are effectively in force. That outcome serves neither industry nor the public.
Civil liberties organizations are less ambivalent. The ACLU, Electronic Frontier Foundation, and several state attorneys general have already announced intentions to intervene in DOJ litigation challenges, arguing that state AI laws represent the exercise of constitutional police powers that federal executive orders cannot override.
What Comes Next
The DOJ litigation is unlikely to produce final judicial resolution before 2027 or 2028, given typical federal court timelines. In the interim, the regulatory landscape for AI deployment in the US will be defined by a combination of:
- Preliminary injunctions sought by the DOJ in states where the litigation task force can argue that continued enforcement causes immediate and irreparable harm
- FTC enforcement actions that establish federal precedents on specific AI practices
- Congressional action — or inaction — on the AI legislation proposals pending in both chambers
- State compliance decisions: some states may voluntarily pause enforcement of challenged laws rather than spend limited resources defending them against the DOJ
The December 2025 EO is the most aggressive assertion of federal authority over AI regulation in U.S. history. The DOJ’s task force activation turns it from a policy statement into an active legal contest. Whatever the courts ultimately decide, the fight itself will force clarity on constitutional questions that were, until very recently, entirely theoretical.