Japan’s emerging framework for responsible AI: legislation, guidelines and guidance

Wednesday 16 July 2025

Kensuke Inoue
Anderson Mori & Tomotsune, Tokyo
kensuke.inoue@amt-law.com

Chika Kamata
Smartnews Inc, Tokyo

Introduction

As artificial intelligence (AI) reshapes economies and societies, Japan is adopting a distinctive regulatory approach: layered, flexible and non-binding. Instead of rigid regulation, Japan relies on three pillars: (1) the non-binding Act on the Promotion of Research, Development and Utilisation of Artificial Intelligence-Related Technologies (the ‘AI Promotion Act’), which sets the strategic direction; (2) the 2024 AI Business Operator Guidelines (the ‘AI Guidelines for Business’), which translate principles into practice; and (3) guidance on the interpretation of existing statutes, such as the Copyright Act and the Act on the Protection of Personal Information (APPI), to address issues raised by generative AI. This integrated framework balances innovation with accountability and international compatibility.

The AI Promotion Act: a strategic framework for AI promotion

On 28 May 2025, the Diet (Japan’s national legislature) passed the AI Promotion Act into law. The title of Japan’s first legislative blueprint for AI accurately reflects its aim: to promote the development and use of AI while addressing the potential risks that such tools may pose.

The AI Promotion Act defines AI broadly as technologies simulating human cognition and positions it as a driver of public welfare and economic resilience. The AI Promotion Act rests on four core principles: viewing AI as a strategic asset, promoting industrial use, mitigating the risks through transparency and actively contributing to international AI norms. It adopts a multi-stakeholder model, assigning roles to central and local governments, academia, businesses and citizens, with an emphasis on coordination.

The implementation measures include support for AI research and development, computing infrastructure, talent development and public literacy. Oversight will be led by the Prime Minister through the AI Strategy Headquarters and operationalised via a Basic Plan for AI.

A defining feature of the AI Promotion Act is its non-binding character. It does not create enforceable rights or duties, but serves as a soft-law instrument, encouraging voluntary compliance through political signalling and administrative coordination. This mirrors Japan’s broader administrative tradition, according to which ‘regulation by guidance’ is often preferred over punitive enforcement.

Nonetheless, the AI Promotion Act contains several provisions, particularly those relating to transparency, safety and international alignment, that may foreshadow future regulation. Ministries may eventually issue binding rules or ministerial ordinances pursuant to delegated authority, particularly in high-risk sectors, such as healthcare or critical infrastructure.

The AI Guidelines for Business: soft law in practice

Published by the Ministry of Economy, Trade and Industry (METI) and the Ministry of Internal Affairs and Communications (MIC) in April 2024, the AI Guidelines for Business respond to global discussions, including the Hiroshima AI Process launched at the 2023 G7 Hiroshima Summit. They consolidate earlier efforts and provide a practical framework for ethical AI operations. Within a year of their release, the AI Guidelines for Business were updated to version 1.1 on 28 March 2025.

Structured in three tiers, the Guidelines define foundational values (human dignity, inclusion, sustainability), set out ten cross-sector principles (eg, fairness, privacy, safety) and offer practical tools, such as checklists and case studies. They tailor expectations to developers, providers and users, promoting internal accountability.[1][2]

A key feature of the Guidelines is the call for executive-level responsibility. Ethical AI should be governed like cybersecurity: embedded into organisational governance, not isolated within technical departments. Annex 2 encourages senior management oversight.

Although non-binding, the Guidelines may influence court interpretations, administrative guidance, procurement policies and environmental, social and governance (ESG) assessments. By aligning with international principles, they also enhance Japan’s position in global AI discussions. For businesses, they serve as a benchmark for good governance, particularly as AI becomes increasingly subject to scrutiny by regulators, investors and the public.

There are also several voluntary industry groups focused on AI and AI governance, in which companies share knowledge and discuss best practices, with broad corporate participation. Many of these groups are establishing their own guidelines, and cross-company collaboration is advancing. The Guidelines may further influence such self-regulatory organisations and the standards they develop.

Japanese copyright law and generative AI

In May 2024, the Agency for Cultural Affairs, which is the agency responsible for copyright law and policy, published a document entitled the ‘General Understanding on AI and Copyright in Japan’ to clarify the interpretation of relevant copyright statutes in the context of generative AI.

Japan’s Copyright Act introduced limitations on copyright to address the training of AI systems in 2018, predating the explosion of generative AI. Article 30-4 permits non-expressive uses, such as data analysis for training purposes, without the need for authorisation by the rightsholder, so long as the output does not replicate the expressive content of the works. According to the General Understanding, the use of copyrighted works in this way would generally fall under this exception.[3]

However, if models are fine-tuned (eg, via LoRA) or trained on databases that imitate specific styles, the exception no longer applies. Expressive intent invalidates the safe harbour.

Output liability depends on two factors: substantial similarity and dependency (ikyo-sei). If a generated output resembles a known work and that work was included in the training data, dependency is presumed. In this situation, developers must prove otherwise or face legal exposure. Technical safeguards, like filters and style constraints, are essential in this context.

Responsibility in this regard is shared. While users are primarily liable, developers and service providers may be implicated if they trained their models on copyrighted works or facilitated expressive replication.[4] Possible remedies include the removal of datasets, injunctions and, in extreme cases, model destruction under Article 112.

Japanese data privacy and generative AI

Generative AI has also raised red flags under Japan’s APPI. In June 2023, the Personal Information Protection Commission (PPC) warned that inputting personal data into AI systems may violate the law if that data is retained or reused for training purposes.

Key legal anchors include Article 18 (limiting use to disclosed purposes) and Article 27 (requiring consent for third-party provision of data). Even casual user inputs can be deemed personal data. Businesses must adopt policies, obtain informed consent and carefully vet vendor terms.

In the same month, the PPC issued a formal warning to OpenAI, citing transparency and safeguard failures. While not legally binding, such warnings establish compliance baselines. Developers must now disclose their data handling practices clearly, prevent unauthorised data collection and offer user consent opt outs for data reuse.

However, the PPC has yet to issue any further public warnings or take specific enforcement action. This is in contrast with other jurisdictions, which have imposed injunctions or other sanctions on AI developers and service providers.

At the same time, as part of the triennial review of the APPI, the PPC is considering a series of regulatory updates in response to the evolving data landscape. One notable aspect of the proposed revisions is the potential relaxation of the consent requirements for the provision of personal data to third parties, as well as for the acquisition of publicly available sensitive personal information, provided that specific conditions are met. These conditions include ensuring that the data is used exclusively for the creation of statistical information, with the term ‘creation of statistical information’ interpreted broadly to encompass AI development activities. This approach aligns closely with Japan’s broader policy objective of fostering AI innovation while maintaining robust safeguards.

Conclusion

Japan’s emerging AI governance framework blends non-binding strategy, ethical implementation and legal adaptation. The AI Promotion Act provides a national vision, the AI Guidelines for Business translate that vision into practice and existing laws are evolving to address the unique challenges posed by generative AI. While soft in form, the framework’s influence is significant: it invites businesses to move beyond compliance towards proactive, values-based leadership. In contrast to prescriptive models elsewhere, Japan offers a flexible, trust-oriented path, one that prioritises global alignment and shared responsibility in the age of AI.


[1] The Ministry of Internal Affairs and Communications and the Ministry of Economy, Trade and Industry, Outline of AI Guidelines for Business version 1.1, page 7, https://www.meti.go.jp/shingikai/mono_info_service/ai_shakai_jisso/pdf/20240419_15.pdf, last accessed on 10 July 2025.

[2] Ibid, page 11.

[3] Legal Subcommittee under the Copyright Subdivision of the Cultural Council, General Understanding on AI and Copyright in Japan, page 5, https://www.bunka.go.jp/english/policy/copyright/pdf/94055801_01.pdf, last accessed on 10 July 2025.

[4] Ibid, page 13.