
US and China Eye Historic AI Governance Talks at Upcoming Trump-Xi Summit in Beijing

The United States and China are preparing to add formal AI dialogue to the agenda of a mid-May summit in Beijing between President Donald Trump and President Xi Jinping — a potential breakthrough in managing the world's most consequential technological rivalry. Discussions are expected to center on preventing AI from triggering autonomous weapons escalation, curbing nonstate actor misuse of open-source models, and establishing guardrails for AI involvement in nuclear command decisions.


The United States and China are edging toward a historic milestone in AI diplomacy. As President Donald Trump prepares for a May 14–15 summit with President Xi Jinping in Beijing, officials from both governments are reportedly working to include formal AI governance on the agenda — a development that would represent the first substantive bilateral AI dialogue under the Trump administration and potentially the most consequential step yet toward managing the world’s most dangerous technological competition.

What’s on the Table

According to multiple reports from early May 2026, the proposed talks would focus on several specific danger zones in the AI race, rather than attempting broad philosophical agreement about AI development.

The primary areas under discussion include: preventing AI systems from autonomously triggering or accelerating military escalation; establishing shared guardrails to ensure AI does not assume control of nuclear launch decisions; and coordinating approaches to limit the ability of nonstate actors — including terrorist organizations and criminal networks — to exploit advanced open-source AI models for mass-casualty attacks or critical infrastructure disruption.

The focus on specific, bounded risks reflects the pragmatic approach both governments appear to favor. The US and China are far from consensus on broader AI governance questions — from data regulation to export controls to definitions of “AI safety” — but the catastrophic-risk scenarios (autonomous weapons going rogue, AI-enabled bioweapons synthesis) represent areas where both governments have obvious shared interest in establishing some form of communication, even if deep trust remains absent.

Treasury Secretary Bessent Leads the US Side

On the American side, Treasury Secretary Scott Bessent is reportedly leading the AI track within the summit preparation, according to people familiar with the matter. The assignment of Treasury — rather than the State Department or the Office of Science and Technology Policy — to the AI diplomacy portfolio is notable. It suggests the administration views AI primarily through an economic competitiveness lens: the risk to be managed is not just military escalation but the economic and strategic consequences of an uncontrolled AI race that neither power has the capacity to win alone.

China has not yet publicly designated its counterpart. That silence may itself be a negotiating tactic, preserving flexibility over whether to engage at the technical, economic, or foreign-policy level depending on how the American side frames the conversation.

Historical Context: A Long Road to This Moment

Formal US-China AI dialogue is not entirely new. In November 2023, at the Biden-Xi summit in Woodside, California, the two presidents agreed to establish ongoing government-to-government communication on AI risk. The resulting talks, however, produced limited concrete progress. China placed its foreign ministry — rather than technical AI experts or defense officials — in charge of negotiations, a framing that critics argued was designed to generate diplomatic optics without committing to operationally meaningful agreements.

Under the Trump administration, those talks were largely suspended as the broader US-China technology relationship deteriorated through the chip export control escalations of 2024 and early 2025. The potential resumption of dialogue at the leader level represents a significant course correction — or at minimum, an acknowledgment by both sides that even adversaries benefit from some shared communication about catastrophic risks.

The model being discussed draws loosely on Cold War precedents: the US-Soviet hotline established after the Cuban Missile Crisis, and the Strategic Arms Limitation Talks that defined the bipolar nuclear order of the 1970s. Neither precedent translates cleanly to the AI context, but both offer institutional blueprints for managing competition while maintaining communication at moments of peak danger.

Why Now: The Autonomous Weapons Pressure Point

The urgency driving the May discussions has a specific technical catalyst. Across both the US and Chinese military establishments, autonomous AI systems are being integrated into weapons platforms, logistics operations, and intelligence analysis at an accelerating rate. Neither country has established clear doctrine for how AI-augmented systems should behave in escalating conflict scenarios — particularly in the gray zone between conventional deterrence and open warfare.

Experts on both sides have flagged the risk of an “AI-triggered incident”: a scenario in which autonomous systems on both sides interpret a low-level provocation as requiring a higher-level military response, creating an escalation loop that human decision-makers cannot easily interrupt. The Taiwan Strait, the South China Sea, and the Korean Peninsula are all theaters where such an incident is considered plausible within the current decade.

Establishing AI communication protocols, even informal red lines that both sides agree not to cross, would reduce the risk of an autonomous miscalculation spiraling into a full military confrontation. The Trump administration, despite its hawkish stance on economic competition with China, appears to have concluded that this specific risk category is worth managing through direct engagement.

Open-Source AI: A Shared Concern

A secondary focus of the proposed talks is the proliferation of powerful open-source AI models and their availability to nonstate actors. Both the US and Chinese governments share concerns — from different directions — about advanced AI capabilities reaching actors outside their control.

For the United States, the concern is primarily about adversary-state-adjacent actors using open-source models to develop bioweapons, coordinate attacks on critical infrastructure, or conduct large-scale influence operations. For China, the concern includes domestic stability: powerful open-source models that circumvent Chinese content controls represent a political risk that Beijing takes seriously regardless of international context.

This creates an unusual area of potential cooperation. Neither government wants frontier AI capabilities — the kind that can synthesize novel pathogens or identify zero-day vulnerabilities at scale — to proliferate beyond their ability to monitor and contain. A modest agreement on information sharing about nonstate AI misuse could be politically achievable even in an otherwise adversarial relationship.

What the Summit Could Produce

Expectations should remain calibrated. Even optimistic scenarios for the Beijing summit do not include a formal AI governance treaty or a binding framework. The most realistic positive outcome is an agreement to establish a working-level channel — separate from the broader diplomatic relationship — dedicated to AI risk communication, analogous to the US-Soviet military hotline but scoped to the AI domain.

Such a channel would not slow AI development in either country, and it would not resolve the fundamental competition over AI leadership. But it would provide a mechanism for communication during the specific high-stakes scenarios — an autonomous incident in a contested waterway, a suspected AI-enabled cyberattack — where the absence of communication is most dangerous.

Taiwan’s Strategic Position

For Taiwan, the prospect of US-China AI talks carries particular weight. Taiwan sits at the intersection of the two countries' AI rivalry: it is the primary manufacturer of the advanced semiconductors that power AI systems in both nations, and the most likely flashpoint for any military confrontation triggered by escalating AI-enabled military capabilities.

Any formal US-China AI dialogue that includes protocols around Taiwan Strait scenarios would directly affect Taiwan’s security environment. Taiwanese officials and analysts will be watching the Beijing summit closely, seeking signals about whether the Trump administration’s engagement with China on AI risks is being conducted in ways that strengthen or weaken deterrence around the island. The outcome of the proposed AI track may prove as consequential for Taiwan’s near-term security as any of the other agenda items the two presidents will discuss.

