Japan's Diet enacted the Basic Act on the Promotion of Development and Utilization of Artificial Intelligence Technology in May 2024, establishing a principles-based national framework for AI. It does not mirror the EU AI Act's risk-tier structure or its enforcement architecture. For global operators deploying AI agents in Japan or processing data belonging to Japanese residents, understanding what the framework does and does not require is the starting point for a defensible governance posture.

Key takeaways

  • Japan's AI Promotion Act is a framework law, not a regulation with direct enforcement against private operators. The binding layer comes from sector-specific METI and FSA guidelines and from the APPI framework administered by the Personal Information Protection Commission.
  • The Act establishes seven core principles: human-centricity, safety, fairness, accountability, privacy protection, security, and innovation promotion. These map reasonably well onto ISO/IEC 42001:2023 management system requirements.
  • Japan's AI Safety Institute (AISI), established February 2024 under AIST/METI, conducts model evaluations and participates in international AI safety standard-setting alongside UK and US counterparts.
  • Sector-specific guidelines from METI (Business Guidelines for Provision and Utilization of Generative AI, April 2024), the FSA, and the MHLW carry stronger practical weight for operators than the Act itself.
  • Global operators subject to both EU AI Act and Japan's framework should build compliance to the EU standard. Japan's principles map into the EU governance artefacts with limited additional documentation work.

Structure of the Act

The Basic Act on the Promotion of Development and Utilization of Artificial Intelligence Technology was enacted by the Japanese Diet on 24 May 2024 and came into force in June 2024. It sits in the tradition of Japan's technology promotion legislation rather than in the tradition of European regulatory law. The Act creates a national framework for government AI policy, designates responsibilities across ministries, and articulates the principles that should guide AI development and use. It does not create a prohibition list, a mandatory registration system, or an administrative penalty structure directed at private companies.

The Act designates the national government as the primary responsible party for implementing AI policy and requires the Cabinet to adopt a Basic Plan for AI. The Basic Plan is the operational document: it sets targets, allocates ministry responsibilities, and establishes the policy initiatives that translate the Act's principles into government actions. The most recent Basic Plan, updated following the Act's passage, includes directives for METI, the Ministry of Internal Affairs and Communications (MIC), and the Ministry of Health, Labour and Welfare (MHLW) to develop sector-specific guidance.

For private operators, the Act creates a normative environment rather than a compliance checklist. Ministries use its principles as the basis for their sectoral guidance, and courts or regulators may reference its framework when evaluating whether an operator's conduct met an expected standard. The practical compliance question is therefore not "does my organisation comply with the Act" but "does my governance programme align with the principles and guidelines that ministries are issuing under the Act's framework."

The seven core principles

The Act codifies seven principles for AI development and utilization. These built on and superseded the April 2019 Social Principles of Human-Centric AI published by the Cabinet Office's Integrated Innovation Strategy Promotion Council. Understanding each principle helps operators map their existing governance against the Japanese framework.

The seven principles are as follows.

  1. Human-centricity: AI must be used in ways that respect fundamental human rights and promote human dignity.
  2. Safety: AI systems must not endanger the life, body, or property of individuals.
  3. Fairness and non-discrimination: AI must not produce outputs that unjustly disadvantage individuals based on personal attributes.
  4. Accountability: developers and deployers must be prepared to explain AI system behaviour and outputs, and to accept responsibility for harm caused.
  5. Privacy protection: AI must handle personal information in compliance with the APPI and related data protection norms.
  6. Security: AI systems must be designed and operated with resilience against cyberattacks and unauthorised access.
  7. Innovation promotion: the framework should not unduly constrain the development and deployment of beneficial AI applications.

The seventh principle is a deliberate counterweight to the first six. Japan's regulatory approach reflects a government view that excessive restriction would disadvantage Japanese companies relative to US and Chinese competitors. This distinguishes Japan from the EU, where the precautionary principle carries more regulatory weight.

Japan AISI and model safety evaluation

Japan's AI Safety Institute was established in February 2024 within AIST (the National Institute of Advanced Industrial Science and Technology), which sits under METI. Japan AISI joins the UK AI Safety Institute, the US AI Safety Institute at NIST, and the institutions formed under the Bletchley Declaration's international framework for frontier AI safety evaluation.

Japan AISI's primary function is to evaluate large-scale AI models for safety properties, with a particular focus on models capable of generating hazardous content or enabling cyberattacks. It publishes evaluation reports and methodology documents, and coordinates its findings with international counterparts through the Seoul and Bletchley process. For operators deploying frontier models in Japan, AISI evaluation findings carry reputational and practical weight: a negative evaluation from AISI affects the model provider's position with Japanese government clients and large corporate deployers.

For most enterprise deployers of commercial AI agents, direct AISI engagement is not a near-term concern. The more relevant function of AISI is its participation in developing evaluation standards that eventually feed into METI operational guidelines and procurement requirements. Operators maintaining ISO/IEC 42001 management systems and conducting regular red-team testing of their AI deployments are already aligned with the governance expectations AISI's methodology reflects.

Sector-specific guidelines: where the practical obligations sit

The most operationally significant documents for global operators are not the Act itself but the sector guidelines published by METI, the FSA, and the MHLW.

METI published the AI Business Guidelines (Generative AI Edition) in April 2024. These are voluntary but represent the reference document that Japanese courts and regulators will apply when evaluating enterprise AI governance. The guidelines distinguish between AI developers, AI providers, and AI users; the latter two map roughly onto the EU AI Act's providers and deployers, while the EU framework folds the developer role into the provider category. For each category, the guidelines specify expected governance practices including risk assessment, documentation, transparency to end users, and incident response.

The Financial Services Agency has issued AI-related supervisory guidance that builds on both the METI Business Guidelines and Japan's Banking Act framework. For financial services operators deploying AI agents in credit decisioning, investment advice, or insurance underwriting in Japan, the FSA guidance creates de facto mandatory governance requirements, since supervisory examination uses the guidance as a standard against which actual practice is measured. The guidance aligns with the Basel Committee on Banking Supervision's principles for operational resilience and the FSB's work on AI and machine learning in financial services.

The MHLW has issued healthcare AI guidance covering clinical decision support, imaging analysis, and pharmaceutical applications. As with the FSA, these create strong de facto obligations for operators in the healthcare sector.

Operators in sectors not covered by specific ministerial guidance face a more diffuse standard: the METI Business Guidelines, the APPI framework, and the Act's seven principles together define the expected governance posture.

The APPI dimension. The Act on Protection of Personal Information applies to any organisation that handles the personal information of Japanese residents, regardless of where the organisation is established. The PPC's 2023 AI guidance clarifies that automated processing of personal information using AI, including profiling, recommendation generation, and agent-driven decisions, is subject to APPI's purpose limitation, accuracy, and security management principles. An EU-based operator deploying an AI agent that processes personal information of Japanese customers needs APPI compliance in addition to GDPR compliance. The two regimes are broadly compatible but not identical: notably, APPI's rules on third-party provision of personal information have extraterritorial effect and require documented transfer agreements for cross-border data flows.

Comparison with the EU AI Act

The contrast with Regulation (EU) 2024/1689 is structural. The EU AI Act is a directly applicable regulation with a four-tier risk architecture, legally binding obligations for providers and deployers, mandatory conformity assessments for high-risk systems, an EU database for registration, and administrative penalties reaching EUR 35 million or 7 per cent of worldwide annual turnover. Japan's Act creates no equivalent penalty mechanism at the company level. The Act directs the government to create a supportive policy environment; it does not direct companies to comply with specific requirements under threat of sanction.

This does not mean Japan is ungoverned. The FSA, MHLW, and METI exercise supervisory authority through existing regulatory frameworks. A financial institution operating in Japan that deploys an AI agent in violation of FSA guidance faces the full range of supervisory tools available to the FSA under the Financial Instruments and Exchange Act and the Banking Act. Those tools include business improvement orders, licence suspension, and criminal referral. The difference from the EU model is that the AI-specific obligations are embedded in sector regulators' existing authority rather than in a standalone AI regulation.

For an operator building a cross-jurisdictional AI governance programme, the practical implication is that EU AI Act compliance provides a strong foundation for Japan alignment. An organisation that has built the Article 9 risk management system, the Article 26 deployer oversight arrangements, and the Article 11 technical documentation required under the EU AI Act has documented its AI governance against a standard that substantially covers Japan's seven principles. The additional work for Japan compliance is typically limited to mapping that documentation to the METI Business Guidelines structure and ensuring that APPI requirements for any Japanese-resident personal data are addressed.
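The gap analysis behind that mapping exercise reduces to a set difference. The sketch below is illustrative only: the artefact names and the principle-to-artefact assignments are editorial assumptions, not an official crosswalk.

```python
# Sketch of the EU-to-Japan cross-map. Which of Japan's seven principles each
# existing EU AI Act artefact evidences is an assumption made for illustration.
JAPAN_PRINCIPLES = {
    "human-centricity", "safety", "fairness", "accountability",
    "privacy protection", "security", "innovation promotion",
}

# Hypothetical artefact inventory keyed by the EU obligation it satisfies.
EU_ARTEFACT_COVERAGE = {
    "Article 9 risk management system": {"safety", "security", "fairness"},
    "Article 11 technical documentation": {"accountability"},
    "Article 26 deployer oversight arrangements": {"human-centricity"},
    "GDPR/APPI records of processing": {"privacy protection"},
}

def uncovered_principles(coverage):
    """Return the principles that no existing artefact evidences yet."""
    covered = set().union(*coverage.values())
    return JAPAN_PRINCIPLES - covered

print(sorted(uncovered_principles(EU_ARTEFACT_COVERAGE)))
# → ['innovation promotion']
```

The remaining gap is documentation-light by design: innovation promotion is a policy posture rather than something a conformity artefact demonstrates.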

OECD AI Principles and Japan's international commitments

Japan was a founding signatory of the OECD AI Principles, adopted in May 2019 and revised in May 2024. The 2024 revision updates the principles to address generative AI, agentic systems, and frontier model risks. Japan's AI Promotion Act is designed to be consistent with the revised OECD Principles, and the Cabinet Office's Basic Plan references them directly. For operators building an international compliance programme, the OECD AI Principles provide a useful common framework that is recognised in Japan, the EU (through the EU AI Act's recitals), the United States (through Executive Order 14110 and OMB M-24-10), and the Council of Europe Framework Convention on AI adopted in September 2024.

Japan also participated in the negotiation of the Council of Europe Framework Convention on AI and Human Rights, Democracy and the Rule of Law (the Framework Convention), which opened for signature in September 2024. The Framework Convention is the first binding international treaty on AI governance. Its obligations fall primarily on state parties in their public-sector use of AI, but its principles of transparency, accountability, and non-discrimination inform how Japan's courts will eventually interpret operator liability in AI-related disputes.

Practical steps for global operators

An operator deploying AI agents in Japan or processing data belonging to Japanese residents should take four practical steps.

First, conduct a Japan-specific scope review. Map each AI agent deployment against Japan's seven Act principles and the METI Business Guidelines. For any deployments in financial services, healthcare, or regulated sectors, cross-reference the relevant ministerial guidance. Identify gaps between current governance documentation and the expected standard.
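A minimal sketch of that scope review, assuming a hypothetical internal inventory format. The field names, the required-document list, and the sector-to-ministry table are illustrative, not drawn from the METI guidelines.

```python
from dataclasses import dataclass, field

# Governance documents the review expects per deployment (illustrative).
REQUIRED_DOCS = {"risk_assessment", "transparency_notice", "incident_response_plan"}
# Sectors where ministry-specific guidance must also be cross-referenced.
SECTOR_GUIDANCE = {"financial_services": "FSA", "healthcare": "MHLW"}

@dataclass
class Deployment:
    name: str
    sector: str
    docs: set = field(default_factory=set)

def scope_review(deployments):
    """Return (name, missing documents, extra ministry guidance) per deployment."""
    rows = []
    for d in deployments:
        missing = sorted(REQUIRED_DOCS - d.docs)
        rows.append((d.name, missing, SECTOR_GUIDANCE.get(d.sector)))
    return rows

inventory = [
    Deployment("credit-scoring-agent", "financial_services",
               {"risk_assessment", "transparency_notice"}),
    Deployment("support-chatbot", "retail", set(REQUIRED_DOCS)),
]
for name, missing, guidance in scope_review(inventory):
    print(name, missing, guidance)
```

The output flags the credit-scoring agent both for its missing incident response plan and for the FSA guidance that applies to its sector, which is exactly the two-layer check the step describes.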

Second, address the APPI layer. Review data flows involving personal information of Japanese residents. Confirm that purpose statements, accuracy measures, and third-party transfer agreements are in place. The PPC's 2023 AI guidance is the reference document for how APPI applies to AI-generated outputs.
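The data-flow review can be sketched as a simple rule check per flow record. The field names below are assumptions made for illustration, not APPI terms of art.

```python
def appi_issues(flow: dict) -> list:
    """Flag the gaps step two checks for on a single data-flow record."""
    issues = []
    if not flow.get("purpose_statement"):
        issues.append("no purpose statement (purpose limitation)")
    if not flow.get("accuracy_measures"):
        issues.append("no accuracy measures documented")
    if flow.get("cross_border") and not flow.get("transfer_agreement"):
        issues.append("cross-border provision without documented transfer agreement")
    return issues

# Hypothetical flow: an agent sending Japanese-customer records to an
# overseas-hosted model endpoint with no transfer agreement on file.
flow = {"purpose_statement": True, "accuracy_measures": True,
        "cross_border": True, "transfer_agreement": False}
print(appi_issues(flow))
# → ['cross-border provision without documented transfer agreement']
```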

Third, establish Japan AISI monitoring. Subscribe to AISI evaluation publications and METI guidelines updates. The Japanese regulatory environment is evolving quickly, and the gap between the current soft-touch Act and future binding measures may close faster than many operators expect if AI incidents with significant consumer harm occur in the Japanese market.

Fourth, document the cross-map to ISO/IEC 42001. If the organisation holds or is pursuing ISO 42001 certification, document how each Japan governance requirement maps to the management system standard. ISO 42001 certification is increasingly recognised by Japanese enterprise clients and financial regulators as evidence of a mature AI management programme.
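One way to keep that cross-map auditable is a simple principle-to-clause table. The clause assignments below are editorial assumptions to show the documentation structure, not an official ISO mapping.

```python
# Editorial sketch: Japan's seven principles against ISO/IEC 42001
# management-system clause areas (assignments are illustrative assumptions).
PRINCIPLE_TO_ISO42001 = {
    "human-centricity":     ["Clause 5 (Leadership)"],
    "safety":               ["Clause 6 (Planning)", "Clause 8 (Operation)"],
    "fairness":             ["Clause 6 (Planning)"],
    "accountability":       ["Clause 5 (Leadership)", "Clause 9 (Performance evaluation)"],
    "privacy protection":   ["Clause 8 (Operation)"],
    "security":             ["Clause 8 (Operation)"],
    "innovation promotion": ["Clause 10 (Improvement)"],
}

def as_rows(mapping):
    """Flatten the cross-map into audit-register rows, sorted by principle."""
    return [(p, clause) for p, clauses in sorted(mapping.items()) for clause in clauses]

for principle, clause in as_rows(PRINCIPLE_TO_ISO42001):
    print(f"{principle:22} -> {clause}")
```

Keeping the map flat like this makes it trivial to show an auditor, per principle, where in the management system the supporting evidence lives.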

For the cross-jurisdictional comparison covering the EU, US, and UK, see three approaches to AI liability in 2026. For the APAC governance landscape more broadly, see Asia-Pacific AI governance 2026. For how the NIST AI RMF intersects with Japan's governance expectations, see NIST AI RMF and the emerging standard of reasonable care.

Frequently asked questions

Does Japan's AI Promotion Act impose direct compliance obligations on private companies?

No. The Act is a framework law that directs government policy. Binding obligations for private operators come from sector-specific guidelines issued by METI, the FSA, and the MHLW, and from the APPI framework administered by the Personal Information Protection Commission.

What is Japan AISI and why does it matter for operators?

Japan's AI Safety Institute was established in February 2024 under AIST/METI. It evaluates AI models for safety properties and participates in international AI safety standard-setting. Its evaluation reports feed into METI operational guidance and inform procurement requirements.

How does the Japan framework compare with the EU AI Act for operators?

The EU AI Act (Regulation 2024/1689) is a binding regulation with risk tiers, conformity assessments, and penalties up to EUR 35 million. Japan's Act is a principles-based framework law. Operators subject to both regimes should build to the EU standard and document the mapping to Japan's seven principles for METI due diligence purposes.

Does Japan's Personal Information Protection Act cover AI agent outputs?

Yes. The PPC's 2023 AI guidance confirms that AI-driven processing of personal information, including agent-generated outputs, is subject to APPI's principles of purpose limitation, accuracy, and security management. This applies to any organisation handling personal information of Japanese residents, regardless of where the organisation is established.

References

  1. Basic Act on the Promotion of Development and Utilization of Artificial Intelligence Technology, enacted by the Japanese Diet, 24 May 2024.
  2. Cabinet Office of Japan. Social Principles of Human-Centric AI. April 2019.
  3. Ministry of Economy, Trade and Industry (METI). AI Business Guidelines (Generative AI Edition). April 2024.
  4. METI. AI Governance Guidelines for Implementation of AI Principles, version 1.1. July 2022.
  5. Japan AI Safety Institute (AISI), established under AIST, February 2024. See meti.go.jp/english.
  6. Personal Information Protection Commission (PPC). Guidelines on AI and Personal Information. 2023.
  7. Act on Protection of Personal Information (APPI), Act No. 57 of 2003, as amended 2022.
  8. OECD AI Principles, adopted May 2019, revised May 2024. OECD/LEGAL/0449.
  9. Council of Europe Framework Convention on AI and Human Rights, Democracy and the Rule of Law, adopted September 2024. CETS No. 225.
  10. Financial Services Agency (FSA). Supervisory guidance on AI governance in financial services. 2024.
  11. ISO/IEC 42001:2023, Artificial intelligence management system.
  12. Regulation (EU) 2024/1689 (EU AI Act), Articles 9, 11, 17, 26. OJ L, 12 July 2024.