New global norms on responsible AI beyond the EU: the G7 Hiroshima Process international guiding principles for developing advanced AI systems and its application in Japan
Shinwa Law, Tokyo
The importance of responsible AI rules from a business and human rights perspective
We are experiencing rapid technological innovation driven by artificial intelligence (AI), with generative AI and general purpose AI at the forefront. While AI can contribute to economic development and help solve societal issues, it may also pose risks to society, including negative impacts on various types of human rights.
The potential for bias in AI algorithms may increase discrimination against socially vulnerable people, for example. AI may also infringe privacy rights and copyright. There are also concerns that AI might put workers and consumers in vulnerable positions. Notably, the misuse of AI-based surveillance technology has the potential to restrict the political freedom of citizens. The proliferation of disinformation amplified by AI also poses the risk of exacerbating conflicts and violence, particularly in conflict-affected regions.
Addressing the abovementioned risks posed by AI is crucial from the viewpoints of business and human rights (BHR) and responsible business conduct (RBC). In fact, in light of the growing concerns regarding AI risks, there has been swift progress in establishing responsible AI rules across certain jurisdictions. These rules are often built with reference to international BHR and RBC standards, such as the UN Guiding Principles on Business and Human Rights (UNGPs) and the Organisation for Economic Cooperation and Development (OECD) Guidelines for Multinational Enterprises on RBC.
The European Union at the forefront of regulation
The EU has adopted a regulatory approach to ensure the responsible use of AI and to address the risks posed by AI to fundamental rights. In December 2023, the trilogue negotiations between the European Commission, the European Parliament and the Council of the EU led to an agreement on harmonised rules on AI, in the form of the EU AI Act.
According to the press releases issued by the European Parliament and the Council of the EU, the AI Act will ban certain applications of AI that pose potential threats to citizens’ rights and democracy. In addition, the Act will oblige deployers of high-risk AI systems to conduct a fundamental rights impact assessment before putting such systems into use.
Regarding general purpose AI, the Act will impose transparency obligations, including the requirement to draw up technical documentation, comply with EU copyright laws and disseminate detailed summaries about the content used for training the AI. More stringent obligations will apply to high-impact models.
The G7’s international guiding principles and the code of conduct for advanced AI systems
Many countries are reluctant to adopt strict AI regulations similar to the EU’s due to concerns that such regulations might impede innovation. With the rapid development of advanced AI systems such as generative AI, however, the need to ensure responsible AI has increased. In October 2023, the leaders of the Group of Seven (G7), chaired by Japan, agreed on the Hiroshima AI Process Comprehensive Policy Framework.
The Framework includes the adoption of the International Guiding Principles and the International Code of Conduct for organisations developing advanced AI systems, based on the OECD’s report Towards a G7 Common Understanding on Generative AI.
The Hiroshima Process International Guiding Principles for Organizations Developing Advanced AI Systems consist of the following 11 principles, which apply to all AI actors involved in the design, development, deployment, provision and use of advanced AI systems:
1. take appropriate measures to identify, evaluate and mitigate risks prior to and throughout deployment;
2. identify and mitigate vulnerabilities, incidents and patterns of misuse after deployment;
3. publicly report the capabilities, limitations and domains of appropriate and inappropriate use for transparency and accountability;
4. work towards responsible information sharing and reporting of incidents;
5. develop, implement and disclose AI governance and risk management policies, grounded in a risk-based approach;
6. invest in and implement robust security controls;
7. develop and deploy reliable content authentication and provenance mechanisms;
8. prioritise research to mitigate risks and investment in effective mitigation measures;
9. prioritise addressing the world’s challenges such as the climate crisis, global health and education;
10. advance the development and adoption of international technical standards; and
11. implement data input measures and protections for personal data and intellectual property.
The International Code of Conduct for Organizations Developing Advanced AI Systems provides more concrete guidance for organisations developing the most advanced AI systems, in line with the 11 points set out in the Guiding Principles described above.
Notably, both the preamble to the Guiding Principles and the Code of Conduct explicitly state that private sector activities should be in line with international BHR and RBC frameworks, such as the UNGPs and the OECD Guidelines. This means that companies are expected to conduct human rights due diligence and establish effective grievance mechanisms for addressing the risks posed by AI to society, especially to human rights, as required by the UNGPs and the OECD Guidelines. The OECD Guidelines, revised in June 2023, also clarify the recommendation that businesses carry out risk-based due diligence concerning the development, financing, sale, licensing, trade and use of technology, including AI.
Japan: publication of draft AI guidelines
Based on the G7 Hiroshima Process AI documents, Japan released draft AI Guidelines in December 2023. The government plans to put the draft out for public consultation and to finalise and publish the guidelines by the end of March 2024.
Japan’s draft guidelines are set out in five chapters. Chapter one sets out definitions related to AI. Chapter two presents common principles and guidelines applicable to all businesses involved in AI-related activities. Chapters three to five provide detailed guidelines specifically for AI developers, AI providers and AI users.
The common guidelines set out in chapter two are further divided into two parts: one part for all AI systems and one specifically for advanced AI systems, such as generative AI.
The guidelines for all AI systems comprise ten principles: (1) human centric; (2) safety; (3) fairness; (4) privacy protection; (5) ensuring security; (6) transparency; (7) accountability; (8) education/literacy; (9) fair competition; and (10) innovation. These principles have been developed in accordance with the OECD’s AI Principles and Japan’s Social Principles of Human-Centric AI, adopted in 2019.
The human-centric principle under the draft Guidelines emphasises that business operators should ensure that they do not violate human rights guaranteed by Japan’s Constitution or internationally recognised rights during the development, provision and use of AI systems. It is worth noting that the Japanese government also published its Guidelines on Respecting Human Rights in Responsible Supply Chains in 2022, which likewise encourage companies to respect internationally recognised human rights.
The guidelines for advanced AI systems encourage businesses to comply with the G7’s International Guiding Principles and the Code of Conduct described above.
What are the implications for businesses?
Due to the continuous rapid development of AI technologies, norms and regulatory environments on responsible AI are likely to continue evolving. Businesses engaged in developing, providing and using AI systems need to comply with the applicable norms, such as the EU AI Act, the G7 International Guiding Principles and the Japanese guidelines, while remaining attentive to upcoming regulatory developments.
Businesses should also carry out risk-based human rights due diligence and establish effective grievance mechanisms in accordance with the UNGPs and the OECD Guidelines. Through these measures, companies can effectively address stakeholder concerns regarding the risks posed by AI to society and human rights. This approach can boost accountability and transparency without impeding innovation, particularly amid an unpredictable regulatory landscape.
European Parliament, ‘Artificial Intelligence Act: deal on comprehensive rules for trustworthy AI’ (9 December 2023), available at: https://www.europarl.europa.eu/news/en/press-room/20231206IPR15699/artificial-intelligence-act-deal-on-comprehensive-rules-for-trustworthy-ai