AI in Pakistani courts of law
Sahar Iqbal
Akhund Forbes, Karachi
Abstract
A judge in a Pakistani court recently employed GPT-4, OpenAI's most advanced chatbot, to help render a judgment in a case. This decision sparked widespread debate regarding AI's capabilities and the possibility of it replacing legal professionals, including judges. This article explores each aspect of that debate and discusses the potential shortcomings and detriments of AI in a court of law.
The judgment
The case of Muhammad Iqbal v Zayad in the Sessions Court in Phalia, Punjab was a civil suit brought by the plaintiffs over a petrol-pump property dispute. Judge Amir Munir dismissed their appeal for an injunction. The court used GPT-4 to formulate the decision based on existing laws, finding that the chatbot's suggestions were consistent with Pakistani law, specifically the Code of Civil Procedure, 1908. The judgment includes an explanation of how AI is shaping the future of legal decision-making, citing countries such as the UAE and China, which have already used AI in courtrooms. It further notes that the chatbot provides a logical explanation of relevant laws and procedures, and states that the difference between GPT-4's answers and the judge's is 'only in form and not in substance'. Notably, the use of GPT-4 did not influence the judgment; it was only a test to explore the use of technology in deciding cases and reducing the burden on courts.
AI’s capabilities, shortcomings, and its role in courts
First and foremost, ChatGPT's knowledge cut-off date is September 2021, which makes it an unreliable legal assistant: it cannot advise on any laws or amendments introduced after that date.
A more pressing issue is the political orientation of OpenAI's models. The software is trained in part on human feedback, which results in it inheriting human-like qualities, including the general political biases of its users. While this may be tolerable in content writing, there is no place for political bias in a court of law. Researcher David Rozado gave GPT-4 four political orientation tests, all of which came out as 'broadly progressive'. Although 'OpenAI is trying to make their latest GPT model more politically neutral', Rozado notes, 'this is a very hard problem to solve' because much of the training material is itself biased. While publicly available AI is still in its nascent stages of development, it will require strict moderation towards political neutrality in the future if it is to be properly used in a court of law.
Unclear standards of accountability are another shortcoming of the use of AI in courts. If a case were misjudged on the basis of an AI chatbot's recommendations, it is unclear on whom liability would fall: the judge who relied on the AI, the AI system itself, or its developers.
Lastly, there is currently no statutory framework governing the use of AI in courts in Pakistan, or indeed in most other jurisdictions. AI cannot be fully integrated into courts unless it is closely regulated by law that sets out its limits and liability procedures.
Australia, however, offers an instructive precedent. McConnell Dowell Constructors (Aust) Pty Ltd v Santam Ltd & Ors (No 1), a 2016 decision of the Supreme Court of Victoria, involved such a large number of documents that manual review would have taken over 23,000 hours. The judge ordered the use of Technology Assisted Review (TAR), stating that a practically impossible manual review would undermine the overarching purposes of Victoria's Civil Procedure Act 2010. Evidently, AI can be used to perform simple but time-consuming tasks involving legal expertise, but it is not capable of formulating a judgment that takes into account not just the law but also societal implications, human emotions and nuances.
Will AI replace legal professionals?
The use of AI, as mentioned above, can significantly reduce the burden of manual tasks that demand more time than skill or judgment. TAR of the kind ordered by the Supreme Court of Victoria is a ground-breaking development that could help expedite cases. This would be a welcome change, particularly in countries such as Pakistan, where court cases are often delayed by backlogs, procedural complexities and overburdened personnel. However, it is important to distinguish between legal services that require human judgment, emotion, awareness of societal norms and a nuanced approach, and those that do not.
Paralegal work, such as drafting contracts and agreements, sorting and reviewing documents, and conducting research and discovery, can be greatly aided by AI. Nevertheless, AI cannot completely replace human expertise, not least because of its current knowledge cut-off date. Nor can it replace a judge entirely, because it cannot take nuances and societal norms into account when formulating judgments. While AI could be used to decide non-complex issues and minor misdemeanours, more complicated matters such as child custody or sentencing for criminal offences require human judgment.
In 2019, a court in Hangzhou, China began using an AI system called Xiao Zhi 3.0, or 'Little Wisdom', to expedite a trial involving ten people who had defaulted on bank loans. Instead of requiring ten separate trials, Xiao Zhi 3.0 facilitated a single hearing that delivered a decision in 30 minutes. Initially employed for repetitive tasks such as announcing court procedures, the technology now records testimony, analyses case materials and verifies information from databases in real time. While Xiao Zhi 3.0 is used primarily in simple financial disputes, similar AI has been implemented in Suzhou to settle traffic-accident cases, saving judges time by examining evidence and drafting verdicts. Another legal AI platform, the Xiao Baogong Intelligent Sentencing Prediction System, is used by judges and prosecutors in criminal law, suggesting penalties based on case information and prior judgments. Despite the potential benefits, experts warn that ethical issues may arise when AI-assisted decisions are considered more credible than human judgments, potentially swaying decision-making through cognitive bias.
Conclusion
Artificial intelligence can potentially revolutionise the legal landscape by efficiently streamlining certain tasks. However, there are limitations and ethical concerns associated with its application in courts of law. While AI has demonstrated its usefulness in specific areas, such as document review and the handling of simple disputes, it cannot replace the nuanced judgment, empathy and understanding of societal norms that only a human judge possesses.
Despite advancements in AI technology, the need for human expertise in complex legal matters remains paramount. Moving forward, it is crucial to establish clear legal frameworks and ethical guidelines to ensure the responsible integration of AI in the judicial system while preserving the irreplaceable value of human judgment.