OpenAI, Anthropic, and Google have activated the Frontier Model Forum as a live threat-intelligence operation, pooling defenses against an industrial-scale adversarial distillation campaign (in which rival labs bombard U.S. AI systems with automated prompts to clone their behavior) traced to DeepSeek, Moonshot AI, and MiniMax. Anthropic alone documented 16 million unauthorized extraction attempts across 24,000 fake accounts and has banned all Chinese-controlled companies from Claude.
A bipartisan group of U.S. lawmakers introduced the MATCH Act on April 2, legislation that would ban exports of deep-ultraviolet (DUV) immersion lithography systems and related chipmaking equipment, including tools from ASML and Tokyo Electron, to Huawei, SMIC, CXMT, YMTC, and other Chinese firms. The bill also pressures allied nations — including the Netherlands, Japan, and South Korea — to align their own export controls within 150 days or face U.S. sanctions.
With federal AI legislation stalled, U.S. states have become the de facto regulators of consumer AI. Nebraska's Conversational AI Safety Act passed this week, mandating chatbot disclosures and crisis protocols for minor users. Across the country, a patchwork of state-level bills is creating a fragmented compliance landscape that industry groups warn could stifle innovation — while advocates argue it is the only protection consumers have.
Maine's LD 2082, which bans licensed mental health providers from using AI for independent therapeutic decisions, passed both legislative chambers on April 7–8 and heads to the governor. Missouri's HB 2372 passed the full House on April 2 with a $10,000 first-offense penalty clause, and now sits in the Senate. The twin developments signal an accelerating state-level movement to draw hard legal lines around AI in mental healthcare.
OpenAI has published a sweeping 13-page policy document urging governments to prepare for approaching superintelligence by taxing automated labor, creating public wealth funds seeded by AI companies, and piloting a 32-hour workweek with no pay cuts. Released days before a likely IPO roadshow, the proposals position OpenAI as a responsible steward of economic disruption — while critics note the company stands to benefit from the regulatory frameworks it is proposing.
Governor Gavin Newsom signed Executive Order N-5-26, establishing first-in-the-nation AI procurement standards that require vendors to disclose safety policies on CSAM, civil rights, and anti-discrimination as conditions for state contracts. The order also grants California the authority to independently overrule federal AI supply-chain risk designations, setting up a direct constitutional clash with the Trump administration.
Utah's regulatory sandbox has approved Legion Health to let its AI chatbot autonomously renew prescriptions for 15 psychiatric medications — making Utah the first government in the world to authorize AI for autonomous psychiatric prescribing. The approval comes weeks after a previous Utah AI prescription pilot was successfully jailbroken, raising immediate questions about whether deployment is outpacing clinical safeguards.
Governor Kathy Hochul signed New York's Responsible AI Safety and Education Act into law on March 27, establishing the most stringent AI safety framework enacted by any U.S. state. The law takes effect January 1, 2027, but the Trump administration's DOJ AI Litigation Task Force is already positioned to challenge it — setting up a defining constitutional battle over who governs AI in America.
The DOJ's AI Litigation Task Force, operational since January 10, is now actively challenging state AI statutes that conflict with the Trump administration's December 2025 executive order preempting local regulation. With over 20 states having enacted comprehensive AI laws, the outcome of this federal-state standoff will define who governs AI in America for the next decade.
Three landmark cases are working their way through U.S. courts right now. The outcomes will determine whether AI companies owe billions in licensing fees — or whether training on public data remains fair use.
Brussels isn't bluffing. Three companies received formal notices this week under the AI Act's transparency provisions. The real question is whether this helps or hurts European AI competitiveness.
New export restrictions just dropped, and NVIDIA is caught in the middle: the company stands to lose more than $15 billion in China revenue while Beijing accelerates domestic chip development. Nobody wins this game.