Technology: UK judge warns lawyers about risks of AI use in court

In a ruling delivered in June, a senior UK judge warned lawyers they could face criminal charges if they rely on fictitious AI-generated cases when presenting written arguments in court.
Dame Victoria Sharp, President of the King’s Bench Division of the High Court, noted that while AI is powerful, widely available generative tools such as ChatGPT ‘are not capable of conducting reliable legal research’ and that there are ‘serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused.’
Sitting alongside Justice Jeremy Johnson in the Divisional Court, Sharp was assessing the role of lawyers following the referral of two cases in which AI tools were, or were suspected to have been, used to prepare written arguments later presented in court.
In the Ayinde case, counsel cited five non-existent cases. Counsel denied that generative AI had been used to compile the list and said she had made general internet searches, but she was later unable to identify any online sources for the fake cases. In the second case referred to the Divisional Court, Al-Haroun, the claimant’s solicitor submitted a witness statement citing numerous authorities in support of the claimant’s position, 18 of which were found not to exist, while others were misquoted or did not support the arguments for which they were cited. The solicitor had relied on research undertaken by his client without independently verifying it.
Just because judges appear to have been lenient so far does not mean that they will be in future, particularly if more and more cases come to light
Melissa Stock
Member, IBA Business Law International Editorial Board
In the Ayinde case, the judge had made a wasted costs order requiring the claimant’s legal representatives and counsel each to pay £2,000 to the defendant. The solicitor was also referred to the Solicitors Regulation Authority (SRA), while Sharp referred the barrister to the Bar Standards Board (BSB). No contempt proceedings were initiated, however. In Al-Haroun, the solicitor referred himself to the SRA, and the Divisional Court also referred him. Sharp concluded the threshold for contempt hadn’t been met.
There’s no shortage of professional guidance available about the limitations and risks of AI. The Bar Council published guidance in early 2024 entitled Considerations when using ChatGPT and generative artificial intelligence software based on large language models, which warns that AI-generated content that misleads the court – however inadvertently – would still be classed as incompetent and grossly negligent, could risk bringing the profession into disrepute and could result in disciplinary and legal proceedings. Knowingly presenting false material could be regarded as contempt of court or as perverting the course of justice.
Similar warnings are contained in the SRA’s Risk Outlook report: The use of artificial intelligence in the legal market, published in 2023, as well as in a blog post written by the BSB on the theme of ChatGPT in the courts.
Guidance on AI published by the Courts and Tribunals Judiciary, last updated in April, states that ‘all legal representatives are responsible for the material they put before the court/tribunal and have a professional obligation to ensure it is accurate and appropriate’ and warns lawyers that ‘AI tools are a poor way of conducting research to find new information you cannot verify independently.’
‘While AI offers many potential opportunities in terms of how lawyers work – and its use is to be cautiously encouraged – as in all areas of business it’s important to understand and manage the emerging risks,’ says Fergal Cathie, Co-Chair of the IBA Regulation of Lawyers Committee. ‘AI should not compromise the standards clients expect in terms of work quality and confidentiality.’
The issue of so-called ‘hallucinations’ – fictions generated by AI – isn’t limited to the UK. For example, in Mata v Avianca Inc in 2023, lawyers in the US put material before the court that had been generated by ChatGPT; neither the opposing lawyer nor the court could locate a number of the cited cases. The following year, the American Bar Association issued an ethics opinion on the responsibilities of lawyers using generative AI.
Chris Howard, Co-Vice Chair of the IBA Future of Legal Services Commission, says that while it’s in the best interest of the client to use AI to ensure a cost-effective and swift outcome, ‘the many emerging examples of malpractice such as hallucinated case citations illustrate the accompanying dangers.’ The solution, he says, ‘is ringfenced AI research tools, providing reliable, secure functionality, combined with strong human oversight and lawyers maintaining a full understanding of basic legal research methods. The cost of such provision will be an issue, however.’
Having experienced, senior lawyers ‘in the loop’ to check any work produced using AI is vital, as they are more likely to spot anomalies and mistakes, including in materials provided by the other side, says Paul Marmor, Co-Chair of the IBA Law Firm Management Committee. But he adds that it’s also sensible to limit – at least initially – some of the tasks very junior members of the team can use AI for in a litigation context. For example, they might produce minutes of meetings and notes rather than assist with case files. Further, AI-generated outputs should be treated as ‘suggestions only’, he adds.
Marmor, who’s Head of Litigation at law firm Sherrards, also believes law firms should conduct training to make all employees aware of the risks associated with poor AI use. This is particularly the case for public AI tools, where law firms risk committing a data breach if they input information about a case into a tool such as ChatGPT, which could then make the data publicly available.
Melissa Stock, Member of the IBA Business Law International Editorial Board, says there are several lessons to be learnt from such cases. Firstly, she says, ‘just because judges appear to have been lenient so far does not mean that they will be in future, particularly if more and more cases come to light.’ Secondly, ‘there needs to be a recognition that lawyers are going to continue to use AI despite these failings.’
Stock, who’s a barrister at Millennium Chambers in London, says a way forward may be to require lawyers to disclose in their pleadings that they have used AI during their preparations and to show where. This would improve transparency in court, while also enabling lawyers to carry out case work more quickly and efficiently.
‘There needs to be a recognition that AI is going to be increasingly relied upon in the legal profession to perform a wider range of tasks,’ she says. ‘Just because the technology so far is imperfect does not mean people will stop adopting it. If lawyers at least disclose that they’ve used AI tools and show where they have been applied, it would allow more effective examination and save time.’