AI’s implications for law firms

Arthur Piper, IBA Technology Correspondent

Thursday 16 November 2023

Artificial intelligence has come into its own in 2023. Global Insight discusses what this means for the future of the legal profession.

From the perspective of technology watchers, 2023 will probably be regarded as the year that artificial intelligence (AI) finally took off in a significant way. Given that the field of AI was defined as long ago as 1955 by Stanford Professor John McCarthy as ‘the science and engineering of making intelligent machines’, such success has been a long time coming.

The game changer has been algorithms called large language models (LLMs), as made famous by the chatbot ChatGPT (see ‘ChatGPT and the legal profession’, Global Insight April–May 2023). Such programs are trained on vast bodies of text and generate content that reads as if it were written by a human. The sheer scale of information that large language models can process at speed promises to transform professional fields such as law, journalism and any other industry that depends heavily on written communication.

Technology innovation culture

They could also transform a legal culture that has been relatively conservative about how it adopts technology – often with well-founded reservations. Until now, for instance, most partners at law firms have been careful about investing in large-scale technology projects. Home-grown initiatives are generally expensive and hard to make work, and it’s difficult to predict whether there will be a decent return on investment. Even implementing off-the-shelf programs can be fraught with difficulty. A survey by software company Onit’s ContractWorks group in spring 2022 found that poor technology management had even been a factor behind some in-house lawyers leaving their jobs.

Law firms may be able to develop models that have a reasonable chance of predicting the outcome of cases with some accuracy

By comparison, ChatGPT and the large language models developed by Google, Microsoft and other companies have acted as a proof of concept that AI can be an effective tool in any information-rich endeavour. They work out of the box and need no special skills or training to get going – although they also tend occasionally to ‘hallucinate’ by providing very convincing-sounding answers that are wildly incorrect. While other forms of AI have been embedded in off-the-shelf legal software for decades, the relative ease of deployment means large language models are a radical break with the past. And we have already seen the legal profession act. Allen & Overy’s AI chatbot Harvey, for instance, may have been the first of these new-wave AI initiatives, but it will certainly not be the last.

Early results from such real-life applications have been encouraging. Professor Dan Hunter, Executive Dean at the Dickson Poon School of Law at King’s College London, says a growing body of studies shows that large language models deliver significant boosts to productivity, efficiency and speed. ‘You can average out the gains at between a 30 per cent and 50 per cent increase across a wide range of different areas, with the biggest improvement for those people who are least skilled, such as paralegals’, he says. ‘Even without instruction and training everyone can become above average, faster and more efficient.’

These findings are consistent with recent studies in other research-intensive fields. For example, academics at Willamette University in Oregon found that while ChatGPT on its own was unreliable for generating research reviews for undergraduates, with a ‘human in the loop’ and well-targeted questions its potential for research and scholarship in general was transformative. ‘The capacity to examine and condense substantial quantities of text could transform the procedure of literature review, enhancing its efficiency and comprehensiveness rapidly and precisely’, the academics said.

Hunter is convinced that large language models, when used properly, can work extremely well as an enhancement to legal practice – a kind of cognitive prosthetic that adds speed and efficiency to almost any legal process. Because the legal profession deals with large quantities of structured documents, such as contracts and advice on relatively straightforward legal issues, large language models can help more junior staff provide such services quickly and accurately. Broadly speaking, that can include legal research, contract and legal document analysis, as well as proofreading, error correction and document organisation, according to a research paper by the American Bar Association. Further down the line, law firms may be able to develop models that have a reasonable chance of predicting the outcome of cases with some accuracy.

Challenger firms and challenges for firms

Hunter, who is currently working with the IBA on the intersection between technology and law, has been active in this field for about 30 years. Even so, he has been surprised by how good large language models are and says that trying to understand the potential consequences of the many rapid developments in the field has become hugely challenging simply because of the pace and scale of change.

But he expects that once such programs become widely adopted, they will have serious implications both for the workforce and for law firms. ‘Straightforward legal work won’t be done by [large language models] but by people using them efficiently and effectively’, he says. That could see the creation of firms with, say, three highly experienced partners supported by around 50 paralegals doing a significant amount of the work. Niche firms targeting specific segments of the legal market with rapid, cheap services could also appear, making it difficult for firms with traditional working practices to compete in those areas. In more transactional legal areas, Hunter predicts that apps will start appearing on mainstream commercial stores on an experimental basis – perhaps in as little as two or three months from now.

Challenger firms may spring up just as they have in the banking sector, to attract more tech-savvy clients and mop up routine, low-risk work

Further ahead, challenger firms may spring up, just as they have in the banking sector, to attract more tech-savvy clients and mop up routine, low-risk work. ‘One question is whether there is a static amount of billable work in the legal sector’, Hunter says. If so, there could be some hollowing out of traditional firms if they cannot reduce their cost base to compete with leaner rivals.

But Hunter also cautions firms against ignoring the potential downsides, including the reputational damage that undetected errors could cause. Replacing trained lawyers outright with juniors armed with large language model programs would currently be unwise, and there are legitimate concerns over privacy, client confidentiality and accuracy.

In May, for instance, Steven A Schwartz, a highly experienced attorney, told a US court that, in preparing research for a colleague, he had used ChatGPT to look for previous cases similar to the one brought by a client. The judge said the court was faced with an ‘unprecedented circumstance’ after the defendant’s legal team said they were unable to find several of the cases cited in the brief. Schwartz said he had been unaware that the chatbot’s content could be false.

Because of such slips, the potential for discriminatory outputs and wider ethical concerns, Hunter says he expects most innovation over the coming year or two to focus on areas where firms can use large language models to produce legal material at scale. ‘Large language models are fast at producing the relatively petty stuff that we as lawyers produce 90 per cent of the time’, he says, half-jokingly, ‘because as much as we believe we are always involved in big thinking, much of our life is taken up with basic documents’.

Arthur Piper is a freelance journalist. He can be contacted at arthur@sdw.co.uk