Guidelines on the use of generative artificial intelligence in mediation
These guidelines were prepared by the IBA Mediation Committee, reflecting our commitment to exploring the role of generative artificial intelligence (AI) in mediation practice. The growing use of AI presents an unprecedented opportunity to facilitate mediation by improving efficiency, reducing costs, and broadening access to justice, provided that AI is integrated into mediations with appropriate safeguards.
- part one of these guidelines provides a non-exhaustive list of suggestions for how AI can enhance mediations, including uses for mediators, parties and party representatives, and mediation institutions. These suggestions are subject to the safeguards in part two;
- part two of these guidelines identifies risks that may result from the use of AI and makes proposals for managing those risks; and
- part three provides a sample statement that mediation participants can use to communicate that AI tools have been or will be used in a mediation.
Scope of AI definition
In these guidelines, ‘AI’ refers to generative systems that create content based on user-provided data. This includes generating text, images, or other media; recognising patterns; and providing insights or recommendations. For example, large language models can generate relevant responses to the prompts they receive from users. These guidelines do not cover non-generative AI, which operates based on fixed rules or algorithms rather than creating new content. Non-generative AI includes search engines and chatbots that draw on pre-existing data to provide responses.