ChatGPT and the legal profession

Arthur Piper, IBA Technology Correspondent
Thursday 23 March 2023

Global Insight considers the implications as AI chatbots increasingly become part of the lawyer’s toolkit – and even enter the courtroom.

Journalists love a good gimmick. So, when a flurry of articles hit the press recently about the latest artificial intelligence (AI) chatbot – ChatGPT – many couldn’t resist the temptation to fool their readers. The program answers questions by generating text responses that mimic natural language. Predictably, journalists had the program create their opening paragraphs, followed by jokey disclaimers.

ChatGPT’s creator, OpenAI – a research and programming company – currently makes the chatbot accessible on a free-trial basis. ChatGPT uses a so-called large language model (LLM) based on its parent program, InstructGPT. This AI application is trained by humans and algorithms, and is designed to understand what people want in a ‘more truthful and less toxic’ way than previous bots.
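
For readers curious what sits behind the chat window, the short sketch below shows roughly how a developer might query such a model through OpenAI’s Python client as it worked in early 2023. The model name, prompt and interface here are illustrative assumptions – the library has since been revised – and the reply, like ChatGPT’s, is generated text, not verified fact.

```python
# A minimal sketch of querying a ChatGPT-style model via OpenAI's Python
# client, using the interface as it stood in early 2023 (since revised).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[
        {"role": "user",
         "content": "In plain English, what is a large language model?"},
    ],
)

# The reply mimics natural language but carries no guarantee of accuracy.
print(response["choices"][0]["message"]["content"])
```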

Because the results generated by ChatGPT are so impressive, a wide range of concerns has been raised in reaction, centred on the chatbot’s potential to displace humans across broad areas of economic activity. In other words, the robots are coming for our jobs. Whether those fears will prove valid remains to be seen, but it’s worth noting that all of the major tech players, as well as numerous start-ups, have developed or are developing similar AI assistants. Google Bard, for example, which is open to a small group of ‘trusted testers’, uses internet data to achieve similar results. Microsoft has revamped its Bing search engine to include its new Prometheus model, at the same time as it plans to invest $10bn in OpenAI, ChatGPT’s parent.

Digital technologies are already widely used in the legal profession, but two recent court cases have broken new conceptual ground by introducing them into the courtroom. In February, Judge Juan Manuel Padilla Garcia – who presides over the First Circuit Court in Cartagena, Colombia – said he used AI to help him reach a legal decision in a recent hearing. The judge posed ChatGPT four questions, including one that asked whether minors diagnosed with autism are exempt from paying for therapies. The program said they were, advising: ‘Yes, this is correct. According to the regulations in Colombia, minors diagnosed with autism are exempt from paying for their therapies’.

AI’s day in court

Padilla Garcia’s decision to deploy the technology simply acted on a Colombian law passed in 2022 that urges lawyers to use tools that make their work more efficient. ‘The purpose of including these texts produced by the AI is in no way to replace the decision of the judge’, said Padilla Garcia. ‘What we really seek is to optimise the time spent in drafting sentences [judgments], after corroborating the information provided by AI.’ He likened the work done by ChatGPT to that of a legal secretary.

In fact, the potential for the use of AI in the judicial system is much broader than that. DoNotPay, for instance, is a US legal services company that uses AI to challenge parking tickets and to bring mostly consumer-related legal cases against companies via an app. The chatbot asks users a series of questions and generates the legal documentation they need to become a plaintiff. It also produces a script for customers to use during their day in court. In many ways, the app aims to bring justice within reach of those who cannot or will not pay for professional representation.
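
The pattern behind such an app is straightforward to sketch. The fragment below is a hypothetical illustration – the questions, field names and template wording are all invented, not DoNotPay’s actual implementation – of how a scripted question flow can be turned into a filled-in legal document.

```python
# Hypothetical sketch of the question-and-answer pattern an app like
# DoNotPay follows: collect facts from the user, then fill a template.
# All questions and template wording here are invented for illustration.

QUESTIONS = {
    "name": "What is your full name?",
    "ticket_number": "What is the citation number on the ticket?",
    "reason": "In one sentence, why do you believe the ticket is unjust?",
}

APPEAL_TEMPLATE = (
    "To whom it may concern,\n\n"
    "I, {name}, contest parking citation {ticket_number}. "
    "Grounds for appeal: {reason}\n\n"
    "I respectfully request that the citation be dismissed."
)

def build_appeal() -> str:
    # Ask each question in turn, then merge the answers into the template.
    answers = {field: input(prompt + " ") for field, prompt in QUESTIONS.items()}
    return APPEAL_TEMPLATE.format(**answers)

if __name__ == "__main__":
    print(build_appeal())
```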

But in January 2023, the company dropped a plan to allow a customer to use the app during a live court hearing over a speeding ticket. The client would have been able to put questions to the same kind of LLM that powers ChatGPT, with the answers relayed in real time through a pair of wireless headphones during the hearing. Apparently, the court wouldn’t have been able to tell that the defendant was using the device to seek help – but that’s beside the point: the defendant would have had AI-powered legal representation on the sly. Although DoNotPay ultimately wasn’t used in the case, it’s possible that AI chatbots will provide courtroom advice in the not-too-distant future.

LLM-backed AI has already reached the London legal scene. In February, Allen & Overy announced it had introduced an AI chatbot called Harvey to help its lawyers with tasks such as drafting contracts and preparing documents for mergers and acquisitions. The firm said the program needed to be supervised by a licensed legal professional, not least because Harvey ‘hallucinated’ – in other words, it sometimes produced inaccurate or misleading results. Even so, dedicated LLMs trained on legal documents and built by legal professionals may not be far away. An AI system equipped with up-to-date and accurate case law and regulatory data could be a game-changer for some firms.
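
One small automated aid to that kind of supervision can be sketched simply. The fragment below – with invented case names and a deliberately naive citation pattern, not a description of Harvey’s actual workings – flags any authority in a generated draft that is absent from a verified database, so that a human reviewer knows where to check first.

```python
# Hypothetical sketch of a hallucination check: flag any authority cited
# in an AI-generated draft that is absent from a verified case database.
# Case names and the citation pattern here are invented for illustration.
import re

VERIFIED_CASES = {
    "Smith v Jones [2001] UKHL 1",   # invented entries standing in for
    "Brown v Crown [2015] EWCA 99",  # a firm's verified citation database
}

def unverified_citations(draft: str) -> list[str]:
    # Deliberately naive pattern for citations like "X v Y [2001] UKHL 1".
    pattern = r"[A-Z]\w* v [A-Z]\w* \[\d{4}\] [A-Z]+ \d+"
    return [c for c in re.findall(pattern, draft) if c not in VERIFIED_CASES]

draft = ("As held in Smith v Jones [2001] UKHL 1 and in "
         "Doe v Roe [2019] UKSC 42, the duty arises on notice.")
print(unverified_citations(draft))  # ['Doe v Roe [2019] UKSC 42']
```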

Angelo Anglani, a Commissioner for the IBA Future of Legal Services Commission and an equity partner at ADVANT Nctm in Rome, says the usefulness of AI tools for a wide range of legal applications is clear, but that it is essential such tools remain verifiable at all times. At a minimum, he says, it should be possible to verify what information has been fed to the AI system and how, what type of processing has been performed, and that the tool has been used under continuous human supervision.

‘Bias, the potential for plagiarism, the possibility of even unintentional inaccuracy – due to incomplete information entered into, or instructions given to, the system – are ever-present risks’, he says, noting that the impact could even be amplified by the AI tool. ‘Also for this reason, without control in some way over the processes being maintained, overconfidence in the AI system could cause more damage than the anticipated benefits.’

Even so, since AI tools exist today, applying such programs to legal drudgery under human supervision makes sense. Where speed is of the essence, for instance in carrying out due diligence processes for mergers, AI promises to lighten the load.

The imitation game

But Anglani is also open to the development of predictive analytics to support legal decision-making. A well-designed legal AI system, for example, may be able to provide statistical probabilities for the outcome of corporate or tax law court decisions based on current trends. Directors and their advisers would then be in a position to feed those results into their reasoned deliberations. ‘These are positive developments’, he says. ‘But we need to be cautious about removing humans altogether from the decision-making process. AI is a tool and must be considered as such.’ This caution stems partly from concern for the evolution of the law: if that evolution were left to algorithms and to machines capable of autonomously developing reasoning – the new and evolving frontier of AI tools – it could take unpredictable and unwelcome directions.
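
As a toy illustration of the kind of predictive analytics Anglani describes, the sketch below fits a simple classifier to invented case features and returns an outcome probability. The features, figures and model choice are all assumptions made for illustration; a real system would need far richer, verified inputs.

```python
# Toy illustration of predictive analytics for case outcomes: a logistic
# regression over invented features. Real systems would need far richer,
# verified data; this only shows the shape of the approach.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented features per case: [claim value (EUR m), similar prior rulings
# won, number of precedents cited]. Labels: 1 = claimant won.
X = np.array([
    [0.5, 3, 10],
    [2.0, 1, 4],
    [1.2, 5, 12],
    [0.3, 0, 2],
    [4.0, 4, 9],
    [0.8, 1, 3],
])
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# Probability estimate for a new, hypothetical dispute.
new_case = np.array([[1.5, 2, 7]])
p_win = model.predict_proba(new_case)[0, 1]
print(f"Estimated probability of success: {p_win:.0%}")
```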

LLM-fuelled AI seems to have pushed technology across a boundary: it is no longer easy to distinguish what an algorithm can tell us from what an average human might say on a particular issue. This is what the mathematician and computer scientist Alan Turing called the ‘imitation game’ – now known as the Turing Test. The question now is what kind of human AI simulates, and how lawyers can best learn to live with it. While AI can rapidly collect data from a wide range of sources and collate it in an easily accessible way, it also suffers from ‘hallucinations’ and remains prone to bias and error.

It’s a question that’ll take years to answer. But ChatGPT suggested this useful starter for our column: ‘Legal professionals should be aware of the limitations and potential biases of AI technologies, and should use them in conjunction with human expertise and judgment’.

Arthur Piper is a freelance journalist. He can be contacted at arthur@sdw.co.uk

Image credit: Sutthiphong/AdobeStock.com