Trump White House Eyes Mandatory Pre-Release Reviews for AI Models in Major Policy Reversal
The Trump administration is weighing an executive order that would establish a government working group to vet AI models before public release, a striking reversal for an administration that rescinded Biden's AI safety rules in January 2025. The shift was triggered by Anthropic's Mythos model, a cybersecurity AI already deployed by the NSA that the company refused to release publicly because of its offensive capabilities.
In January 2025, one of Donald Trump's first acts upon returning to the White House was to rescind the Biden administration's AI executive order, an October 2023 directive that required frontier AI developers to share safety test results with the federal government and directed agencies to establish standards for the technology. The message was clear: Washington would get out of Silicon Valley's way.
Sixteen months later, the Trump administration is considering doing the opposite.
Senior White House officials have met in recent weeks with representatives from Anthropic, Google, and OpenAI to discuss a new executive order that would insert the federal government directly into the AI model release process, according to reporting by The New York Times that multiple outlets have since confirmed. The proposal, which is still under discussion and not finalized, would establish a working group of technology executives and government officials to review new AI models before they reach the public, a gate that no American AI lab has ever been required to pass through.
A White House official told reporters Monday that reports of the potential order should be treated as “speculation” and that any policy announcement would come directly from the president. But the mere fact that briefings occurred — and that all three frontier AI labs were in the room — signals a meaningful shift in the administration’s posture toward AI oversight.
The Catalyst: Anthropic’s Undisclosed Cybersecurity Model
The immediate trigger for the policy reconsideration is Anthropic’s Mythos model, which the company built and then declined to release publicly. Unlike Anthropic’s consumer-facing Claude family, Mythos was developed primarily for cybersecurity applications, with documented capabilities to identify software vulnerabilities, assist in exploit development, and autonomously probe network defenses at a level the company’s own safety teams described as potentially precipitating “a cybersecurity reckoning” if released broadly.
Anthropic made the unusual decision to share Mythos with the National Security Agency, which has incorporated it into classified operations, while withholding it from commercial release. The situation created an anomaly that alarmed officials in both the intelligence community and the executive branch: a private company had built a potentially nation-state-grade cyber capability, deployed it selectively to government clients, and unilaterally decided the rest of the market couldn’t have it.
The episode exposed a regulatory vacuum. Under current law, no federal agency has authority to require AI developers to submit models for review before release. The question now under discussion is whether to create that authority — and if so, how to structure it without creating a bottleneck that America’s geopolitical rivals would exploit.
What the Proposed Review Would Look Like
Details remain sparse and contested, but the executive order under consideration would reportedly create a formal working group with membership drawn from both private-sector technical experts and government agencies including the NSA, CISA, and the newly formed AI Safety Institute. The group would establish criteria for which models require review — almost certainly keyed to capability thresholds rather than covering all AI development — and set a review timeline that labs would need to meet before public deployment.
The structure resembles the Biden administration's requirement that labs share safety test results with the government for frontier models above a certain compute threshold, but the Trump proposal goes further in one key respect: discretion. The working group would have the power to delay or condition a release, not merely receive information about it.
Whether that discretion constitutes overreach or common sense depends heavily on who is being asked. Representatives of OpenAI and Google engaged constructively in the White House briefings, according to people familiar with the conversations. Anthropic, the company whose Mythos situation is most directly relevant, has publicly supported some form of government coordination on model releases, consistent with its safety-focused founding mission.
The Industry’s Complicated Position
The AI industry’s response to potential pre-release review requirements is more nuanced than a simple opposition front. OpenAI CEO Sam Altman has publicly stated that he welcomes government engagement on frontier model oversight, even as OpenAI’s commercial interests favor speed to market. Google DeepMind has invested heavily in safety research and has, at least rhetorically, supported the principle of coordination with governments on powerful model releases.
The concern, voiced more quietly, is about competitive dynamics with China. Any review process that delays American AI labs’ model releases could create windows in which Chinese counterparts — not subject to equivalent constraints — ship capable systems first. DeepSeek’s rapid cadence of model releases, and its demonstrated ability to match or approach frontier performance with significantly less compute, has made this argument more concrete.
The counter-argument, advanced within the administration by officials who have seen Mythos and similar classified AI capabilities, is that the greater risk is an uncoordinated release landscape where a private company can deploy a potential weapon of strategic significance based solely on its own judgment.
Echoes of Nuclear and Biotech Precedent
The analogy most frequently invoked in these discussions is the nuclear and biotechnology review frameworks established in the post-World War II era. The Atomic Energy Act created federal oversight of nuclear technology not because it was uneconomical, but because the downside scenarios of uncoordinated proliferation were deemed unacceptable. Biosafety review requirements for certain pathogen research emerged from similar logic.
AI safety advocates have long argued that frontier AI systems, particularly those with a demonstrated ability to assist in designing cyberweapons or biological agents, should be subject to an analogous framework. The Mythos episode may have provided the concrete, classified-level evidence that finally moved the administration in that direction.
The difference, critics of the proposal note, is that nuclear and biotech reviews operate in industries with much slower release cycles and clearer physical chokepoints. AI development runs on software, can be duplicated instantly, and crosses borders as easily as an API call. A review regime that works for a new reactor design may be structurally unsuited to a world where new model weights can be released by updating a GitHub repository.
What Happens Next
The White House has not committed to a timeline for a final decision. Congressional interest in AI oversight has grown substantially over the past year, with bipartisan efforts in both chambers producing competing frameworks, none of which has passed. Any executive action the administration takes would exist in tension with — or potentially preempt — legislative approaches.
For the three frontier labs, the practical implication is to begin building compliance infrastructure in case pre-release review becomes a legal requirement, even while the outcome remains uncertain. Those conversations are reportedly already underway inside all three companies.
The broader tech industry, which has largely been shielded from ex-ante regulatory requirements, is watching closely. If the administration moves forward with mandatory AI model review, it will establish a precedent for how the world’s most consequential technology is governed — one that other nations will likely use as a reference point for their own frameworks.