
Japan's Ruling Party Moves to Criminalize AI Deepfakes, With Anime and Manga in the Crosshairs

Japan's Liberal Democratic Party has drafted legislation seeking criminal penalties for repeat offenders who use generative AI to create unauthorized images of anime and manga characters, or to produce non-consensual explicit deepfakes of real people — marking a significant hardening of the country's otherwise innovation-first AI policy and the Indo-Pacific region's first move toward criminal liability for specific categories of AI content misuse.


Japan has long positioned itself as among the world’s most AI-friendly jurisdictions — a country that explicitly welcomed AI training on copyrighted works, resisted the punitive impulses of the EU AI Act, and passed its first national AI law in May 2025 deliberately without financial penalties. That posture is now under revision.

Japan’s ruling Liberal Democratic Party (LDP) has drafted new legislation that would for the first time impose criminal penalties on repeat offenders who misuse generative AI, with a particular focus on two categories of harm that have become acute in Japan’s cultural and social landscape: the unauthorized generation of anime and manga character images, and the creation of non-consensual explicit deepfakes featuring real people. If enacted, Japan would become the first major democratic nation in the Indo-Pacific region to criminalize specific categories of AI-generated content misuse.

From Innovation-First to Enforcement-Ready

Japan’s existing AI framework — the Act on Promotion of Research and Development, and Utilization of Artificial Intelligence-related Technology, which became law in May 2025 — was drafted with a deliberately light regulatory touch. It empowers the government to investigate serious incidents and issue guidance, but includes no monetary penalties and no criminal liability. The philosophy was intentional: Japanese policymakers were wary of hampering the nation’s AI development aspirations with the kind of compliance overhead that drew sustained criticism of the EU’s approach.

But eighteen months of real-world AI deployment have shifted the political calculus. Japan has witnessed a significant rise in AI-generated content that violates intellectual property rights and personal dignity, from synthetic images of living celebrities in compromising contexts to AI-generated reproductions of iconic anime characters used in unauthorized merchandise and content distribution.

The LDP proposal, reported by UPI on April 23, calls for penalties on businesses that repeatedly ignore government requests to take down infringing or harmful AI-generated content. It also advocates criminal consequences for individuals who are serial offenders — those who, after receiving cease-and-desist notices, continue to generate and distribute prohibited content.

Anime: Japan’s Cultural and Commercial Flashpoint

The focus on anime and manga is not incidental. Japan’s anime industry generated approximately ¥2.7 trillion ($18 billion) in revenue in 2024, with character intellectual property representing a substantial portion of that value. The emergence of AI tools capable of generating near-perfect imitations of copyrighted character designs — at scale and without licensing fees — has alarmed studios, original artists, and the creative unions that represent them.

The problem is compounded by geography. Many of the AI services generating unauthorized anime-style content are operated by companies based outside Japan — in the United States, Europe, and China — making domestic enforcement orders practically difficult to execute against the actual operators. The LDP proposal explicitly addresses this, calling on the Japanese government to adopt a more active posture toward overseas operators that repeatedly produce infringing content, including through international legal cooperation frameworks.

Several major Japanese studios, including those behind globally recognized franchises, have filed formal complaints with Japan’s Agency for Cultural Affairs over AI-generated reproductions of their characters. The complaints argue that even though existing Japanese copyright law carves out broad exceptions for AI training, the downstream commercial use of AI-generated character images falls outside those exceptions and constitutes infringement — a legal interpretation that remains contested and largely untested in court.

Non-Consensual Deepfakes: A Growing Social Crisis

Alongside the intellectual property concerns sits a more personal dimension of harm. Japan has seen a sharp increase in AI-generated explicit images featuring identifiable real people — overwhelmingly women, including celebrities, social media influencers, and private individuals. These images, often distributed across social media platforms and encrypted messaging applications, have caused documented psychological harm to victims and created serious challenges for law enforcement.

Under Japan’s current legal framework, the existence of an AI-generated image is not itself a criminal act unless it meets specific definitions of defamation or privacy violation — definitions that have proved difficult to apply to synthetic media depicting identifiable people in fabricated scenarios. The LDP’s proposed amendments would create a more direct legal pathway for prosecuting those who generate and distribute non-consensual explicit deepfakes, particularly repeat offenders who continue after receiving formal warnings.

A government panel established in April 2026 is simultaneously reviewing how existing tort law should be interpreted and applied to AI-generated harms, indicating that the legislative push is accompanied by a parallel effort to build civil liability mechanisms. OECD data shows Japan has seen a disproportionate number of AI-generated sexual deepfake incidents relative to its population, contributing to the political urgency behind the LDP draft.

Balancing Innovation Against Harm

The LDP’s proposal must navigate a genuine tension. Japan’s approach to AI development has been a competitive differentiator: the country’s flexible copyright rules have made it an attractive location for AI companies to train models, and several major Japanese corporations — Sony, SoftBank, Fujitsu, NTT — have made large bets on domestic AI capability development.

There is a legitimate concern among Japanese AI researchers and industry groups that introducing criminal penalties — even targeted at clear cases of abuse — could create chilling effects on legitimate AI development and creative applications. The question of where “unauthorized anime character reproduction” ends and “AI-generated art in the style of anime” begins is not legally trivial; that boundary matters enormously to the thousands of independent artists and developers who use AI tools in their creative workflows.

The LDP’s current draft attempts to thread this needle by focusing penalties on repeat offenders and commercial-scale abuse rather than individual creative uses. The proposed framework targets “businesses that ignore requests” and individual “repeat offenders” — language designed to avoid criminalizing one-time or accidental violations while creating clear deterrence against systematic abuse.

Whether that framing survives parliamentary debate is uncertain. Japan’s legislative process will expose the draft to significant legal scrutiny, and the difficulty of defining “repeat offender” and “commercial scale” precisely enough to withstand constitutional challenge may require substantial revision before the bill progresses.

A Regional Policy Signal

Japan’s moves carry weight well beyond its borders. As one of Asia’s most influential technology policy voices, Japan’s regulatory evolution is closely watched by South Korea, Taiwan, and Southeast Asian nations that have similarly adopted permissive frameworks for AI development — favoring innovation promotion over prescriptive regulation.

If the LDP legislation passes in its current form, it signals that even the most AI-friendly democratic governments are concluding that voluntary self-regulation and light-touch oversight are insufficient to address the specific, documented harms from non-consensual deepfakes and copyright infringement at scale.

For global AI companies, the Japan draft carries practical implications regardless of its final form. The call for enforcement against overseas operators — backed by the threat of international legal cooperation mechanisms — suggests that geographic distance will become less reliable as a shield against content-related liability in major markets. Companies whose services can generate unauthorized anime character images or non-consensual explicit content may face increasing pressure to implement proactive content controls for the Japanese market, or risk being targeted under the new framework if it passes.
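What "proactive content controls" might look like in practice varies widely by vendor; as a purely illustrative sketch, a generation service could add a market-specific screening gate before any image request reaches the model. The function name, denylists, and region code below are hypothetical examples, not any company's actual system — real deployments would rely on licensed IP registries, trained classifiers, and image-similarity checks rather than keyword lists.

```python
# Hypothetical pre-generation content gate for a market with stricter rules.
# Denylist entries stand in for what would really be IP registries and
# ML classifiers; they are illustrative only.
PROTECTED_CHARACTERS = {"totoro", "pikachu", "luffy"}   # licensed character IP
DEEPFAKE_MARKERS = {"nude", "explicit", "undress"}      # non-consensual-content signals

def screen_prompt(prompt: str, region: str) -> tuple[bool, str]:
    """Return (allowed, reason), applying stricter checks for the 'JP' market."""
    words = set(prompt.lower().split())
    if region == "JP":
        if words & PROTECTED_CHARACTERS:
            return False, "protected character IP"
        if words & DEEPFAKE_MARKERS:
            return False, "possible non-consensual explicit content"
    return True, "ok"
```

The design point is the regional branch: the same service can keep a permissive default while layering jurisdiction-specific refusals on top, which is roughly the posture the LDP draft would pressure overseas operators to adopt.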

The global AI governance picture remains far from settled. Japan’s proposed shift from promotion to enforcement — however targeted and narrowly scoped — represents a meaningful data point in the evolving international consensus about what AI companies owe the societies whose cultural output and personal dignity their systems can so easily reproduce, distort, or exploit.

