Digital strangers in litigation: does sharing with AI breach privilege?

Wednesday 29 October 2025

Amna Goraya

TransPerfect Legal, London

amna.goraya@transperfect.com

Al-Karim Makhani

TransPerfect Legal, London

amakhani@transperfect.com

Stefan Nigam

TransPerfect Legal, London

stefan.nigam@transperfect.com

Introduction

Artificial intelligence (AI) is transforming the way lawyers work. For litigators, the attraction is clear – faster review, sharper analysis and significant cost savings. Yet with these opportunities comes a sharper risk: that by entrusting client information to an AI ‘black box’ whose processes lack transparency, lawyers in England and Wales may unintentionally breach their most fundamental duties of confidentiality and privilege.

The challenge, then, is not whether AI will be used in litigation, but how. Public platforms, in particular, invite concern. They are not regulated by legal professional bodies, nor bound by any contractual obligations to the client or user. Information uploaded to them is effectively shared with an unknown entity: the AI platform itself, a ‘digital stranger’ in the lawyer–client relationship. Disclosing information to such systems may be no different from disclosure to an unregulated outsider, and so risks being treated as a waiver. Private systems appear safer, but the question remains whether they can or should benefit from an extension of the rules of privilege.

This article considers four issues in turn:

  • the ethical principles that frame this debate;
  • risks of public and private systems;
  • guidance issued by professional bodies; and
  • practical steps firms can take to embrace innovation without compromising client trust.

Ethics in practice

Fairness and integrity have always been cornerstones of the legal profession. At the core of this duty lies the protection of client interests and the information they share. In English law, that protection is upheld through the fundamental right of legal professional privilege (LPP).

LPP protects two branches of lawyer–client communications. Legal advice privilege applies to communications made for the purpose of giving or receiving legal advice, whether or not litigation is contemplated. Litigation privilege, with which this article is concerned, is broader in reach: it covers communications between the client, their lawyer and third parties where the dominant purpose is the conduct of litigation that is in progress, pending or reasonably in prospect.

To claim litigation privilege, the communication in question must be confidential. The common law duty of confidentiality requires that client information be kept confidential unless disclosure is required or permitted by law, or the client consents. Taken together, privilege and confidentiality demand that sensitive information remain within the lawyer–client relationship. Once that boundary is breached, especially through exposure to a ‘black box’ or digital stranger, the sanctity of privilege begins to collapse.

This challenge is not new. Lawyers have always had to weigh the risks of sharing information with third parties, from interpreters to expert witnesses. The law recognises extensions only where communications are made for the dominant purpose of litigation.[1]

Confidentiality, by contrast, is broader still. The ethical duty prohibits disclosure unless expressly authorised by the client or required by law. Inadvertent sharing can undermine trust and trigger liability, unless the lawyer can discharge the ‘heavy burden’ of demonstrating that there was no real risk of confidential information being unwittingly or inadvertently disclosed.[2]

Framing AI as the latest ‘third party’ makes clear that this is not a new puzzle, but a modern version of an old dilemma. Can entrusting client information to complex systems ever be reconciled with the legal profession’s commitment to confidentiality?

AI as a third party

In litigation, sharing information with a third party can be enough to destroy privilege, unless the communication falls within the ‘dominant purpose’ exception for litigation. That exception extends to ‘agents’ such as eDiscovery consultants and experts instructed alongside lawyers. Public AI platforms, such as Gemini and Claude, cannot benefit from it: the information loses confidentiality the moment it is uploaded, as their own terms and conditions confirm. Although such platforms offer opt-outs to limit data retention and model retraining, it is the absence of an enterprise contractual relationship – which would typically address these concerns – that ultimately harms client interests.

Accordingly, the litigation stakes are high. Protection rests on whether a clear, confidential framework for disclosure exists. Public AI systems offer no such assurances: disclosure to an AI provider with no role in the proceedings could readily be treated as a waiver, exposing sensitive litigation strategy or, worse, damaging documents. The interesting question now is whether private or firm-built AI platforms should be treated the same way.

Public vs private AI systems

In litigation, public AI systems carry the obvious risks highlighted above. In contrast, private or firm-built AI systems appear to offer a safer path.

Firms are pursuing two distinct models to keep data in-house – building proprietary tools or partnering with established providers. Hogan Lovells is an example of the former with ELTEMATE CRAIG, a firm-built generative AI suite originally designed for litigation support tasks. Developed and governed entirely in-house, ELTEMATE is deployed securely within the firm’s framework and has delivered measurable efficiency gains. In contrast, A&O Shearman demonstrates the partner route through its enterprise adoption of Harvey and the co-developed ContractMatrix platform for contract drafting, review and negotiation. These externally built systems are integrated into the firm’s workflows under a licence, with A&O Shearman now expanding their use into agentic AI for more complex tasks.[3]

The main differentiator between these private platforms and their public counterparts is a strict contractual relationship ensuring that confidentiality is preserved, data is segregated and information is not retained longer than necessary. Because such systems operate under the firm’s direct policies and supervision, they effectively act on the firm’s behalf rather than independently. Viewed through this lens, a closed system could operate more like an expert witness or interpreter, qualifying as an ‘agent’ of the firm rather than a third party.

Yet uncertainty persists. Even internal setups must strictly segregate workflows and data, and operate within secured, sandboxed environments to prevent accidental breaches. Confidentiality may still collapse even with anonymised datasets, as large models can detect patterns that re-identify sensitive information.[4] These concerns lead directly into the regulatory perspective: how do professional bodies expect lawyers to balance innovation with their core duties of confidentiality and privilege?
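
By way of illustration only, segregation of this kind can be enforced in software as well as by policy. The following is a minimal Python sketch under assumed names (the Document and MatterSandbox classes are ours, not any vendor’s): documents are vetted into a per-matter sandbox at ingestion, and only vetted material can ever reach the model.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Document:
        matter_id: str
        text: str

    class MatterSandbox:
        """Holds documents for a single matter and refuses cross-matter access."""

        def __init__(self, matter_id: str):
            self.matter_id = matter_id
            self._documents: list[Document] = []

        def add(self, doc: Document) -> None:
            # Reject documents belonging to another matter at ingestion time.
            if doc.matter_id != self.matter_id:
                raise PermissionError(
                    f"document from matter {doc.matter_id!r} cannot enter "
                    f"the sandbox for matter {self.matter_id!r}"
                )
            self._documents.append(doc)

        def context_for_prompt(self) -> list[str]:
            # Only material already vetted into this sandbox reaches the model.
            return [d.text for d in self._documents]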

Regulatory perspective

Regulators have been quick to underline that the duty to protect client privilege and confidentiality applies regardless of whether AI is public or private. The Bar Council urges barristers to be ‘extremely vigilant’ when inputting client data into public AI systems, cautioning that such disclosures could expose counsel to disciplinary action and/or legal liability.[5] The Information Commissioner’s Office has also reminded firms that client and personal data remain protected under the UK General Data Protection Regulation, publishing eight criteria for developers and users of private or public AI to test whether their use is necessary, proportionate and secure.[6]

Guidance on the use of private AI tools focuses on ensuring that appropriate checks are in place to safeguard privilege and confidentiality. Earlier this year, the Solicitors Regulation Authority authorised the first AI-powered law firm, Garfield.Law.[7] The litigation assistant met the authority’s strict criteria on quality-checking work and, most importantly, on maintaining client confidentiality. However, oversight and supervision are mandated, with liability for the system’s outputs resting with named solicitors.

Hence, the onus remains on lawyers to carry out due diligence before using these platforms. As Sir Geoffrey Vos has stressed, without a solid grasp of an AI model’s capabilities and limitations, legal professionals can neither advise clients effectively on emerging AI liability issues nor harness the efficiency and cost savings that AI promises.[8] What remains is to identify the practical steps that reconcile the push for innovation with keeping client interests at the forefront.

Risk mitigation

Responsible and ethical engagement with digital tools comes down to two things – the user and the tool itself. Public AI models must not be used for confidential or privileged materials because they sit outside the lawyer–client relationship. Proprietary or enterprise-licensed models, on the other hand, offer closed, contractually governed environments with built-in security.

Policies and technical mechanisms designed to safeguard ethical and legal boundaries – known as guardrails – are essential. Firms must implement accountability frameworks, data governance protocols, regular model testing and structured training so that employees understand the tools’ limitations.[9] Independent verification of outputs is imperative to reduce the risk of errors and fabricated results, or ‘hallucinations’.[10]
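
As a purely illustrative example of such a technical guardrail, the Python sketch below screens text for obvious markers of privileged material before it is allowed anywhere near an AI endpoint. The marker patterns and the function name are our own assumptions; a real deployment would rely on firm-approved classifiers and policies rather than a short list of regular expressions.

    import re

    # Illustrative markers only; a production guardrail would use
    # firm-approved classifiers, not a handful of phrases.
    PRIVILEGE_MARKERS = [
        r"\bwithout prejudice\b",
        r"\bprivileged (?:and|&) confidential\b",
        r"\bsubject to legal professional privilege\b",
    ]

    def may_submit(text: str) -> bool:
        """Return True if text may go to the model; False routes it to human review."""
        return not any(re.search(p, text, re.IGNORECASE) for p in PRIVILEGE_MARKERS)

    assert may_submit("What is the limitation period for a contract claim?")
    assert not may_submit("Settlement strategy note - privileged and confidential")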

Privilege is more complex. While confidentiality applies broadly, privilege can be lost the moment material is shared with an outsider who does not qualify as an agent for the litigation. To prevent such loss, engagement terms must explicitly treat AI-generated work as part of the firm’s work product.[11] Clear records must be kept of how and when privileged material is processed, while responsibility for human oversight and for monitoring the regulatory landscape should be expressly allocated.[12]
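
One way to keep such records, sketched below in Python under assumed field names (nothing here reflects any particular firm’s system), is an append-only audit log capturing each processing event together with the named solicitor responsible for oversight.

    import json
    from datetime import datetime, timezone

    def record_processing_event(path: str, document_ref: str, tool: str,
                                purpose: str, supervising_solicitor: str) -> None:
        """Append one processing event to a JSON-lines audit log."""
        event = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "document_ref": document_ref,  # which privileged material
            "tool": tool,                  # which AI system processed it
            "purpose": purpose,            # why it was processed
            "supervising_solicitor": supervising_solicitor,  # who is accountable
        }
        with open(path, "a", encoding="utf-8") as log:
            log.write(json.dumps(event) + "\n")

    record_processing_event(
        "ai_audit.jsonl", "matter-0042/witness-statement-3",
        "internal-llm", "first-pass chronology", "J. Smith",
    )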

It is these contractual safeguards and technical guardrails, taken together, that justify why and how AI use can remain necessary, proportionate and consistent with the duty to preserve privilege and confidentiality.

Conclusion

Regulators and judges have been clear – the responsibility does not shift with the technology: it stays with the lawyer. There have already been several instances of fake citations and inaccurate summaries being presented to the courts.[13] In such matters, judges have directed their criticism at the competence and conduct of the lawyers involved rather than at the technology itself.[14] Human oversight, therefore, remains vital.

As for the platforms themselves, public tools act like ‘digital strangers’ and carry obvious risks of waiver. They operate on their own terms, with no assurance that personal data will be kept private. Indeed, the CEO of the world’s most popular AI tool, with nearly 6 billion monthly visits as of August 2025,[15] has stated that conversations on his platform are not protected by privilege or confidentiality.[16] His company could even be legally required to produce those communications in a lawsuit.

Private AI systems, in contrast, offer greater protection of confidentiality. Designed with secure infrastructure and governed by firm policies, they provide a more defensible framework, guarding against inadvertent disclosure and helping legal professionals avoid breaches of privilege and confidentiality. Ultimately, private AI systems offer the only credible path to integrating innovation into litigation while preserving the fundamental duties of privilege and confidentiality on which the profession rests.


[1] Ahuja Investments Ltd v Victorygame Ltd and others [2021] EWHC 1543 (Ch).

[2] Prince Jefri Bolkiah v KPMG [1999] 2 A.C. 222.

[3] David Wakeling, James Webber and Filippo Crosara, ‘A&O Shearman and Harvey to roll out agentic AI agents targeting complex legal workflows’ (A&O Shearman, 6 April 2025), see www.aoshearman.com/en/news/ao-shearman-and-harvey-to-roll-out-agentic-ai-agents-targeting-complex-legal-workflows, accessed 15 September 2025.

[4] Jeff Johnson, ‘Best Practices for Confidentiality in the Age of AI-Powered Legal Tools’ (JD Supra, 8 September 2025), see www.jdsupra.com/legalnews/best-practices-for-confidentiality-in-1379976/, accessed 7 September 2025.

[5] The Information Technology Panel, ‘Considerations when using ChatGPT and generative artificial intelligence software based on large language models’ (The Bar Council, 30 January 2024), see www.barcouncilethics.co.uk/wp-content/uploads/2024/01/Considerations-when-using-ChatGPT-and-Generative-AI-Software-based-on-large-language-models-January-2024.pdf, accessed 7 September 2025.

[6] Stephen Almond, ‘Generative AI: eight questions that developers and users need to ask’ (Information Commissioner’s Office, 3 April 2023), see https://ico.org.uk/about-the-ico/media-centre/blog-generative-ai-eight-questions-that-developers-and-users-need-to-ask/, accessed 7 September 2025.

[7] ‘SRA approves first AI-driven law firm’ (Solicitors Regulation Authority, 6 May 2025), see www.sra.org.uk/news/news/press/garfield-ai-authorised/, accessed 10 September 2025.

[8] Sir Geoffrey Vos, ‘Speech by the Master of the Rolls at the LawtechUK Generative AI Event’ (Courts and Tribunal Judiciary, 5 February 2025), see www.judiciary.uk/speech-by-the-master-of-the-rolls-at-the-lawtechuk-generative-ai-event/, accessed 7 September 2025.

[9] ‘Voluntary AI Safety Standard: The 10 guardrails’ (Australian Government: Department of Industry, Science and Resources, 5 September 2024), see www.industry.gov.au/publications/voluntary-ai-safety-standard/10-guardrails, accessed 12 September 2025.

[10] ‘What are AI guardrails?’ (McKinsey & Company, 14 November 2024), see www.mckinsey.com/featured-insights/mckinsey-explainers/what-are-ai-guardrails, accessed 12 September 2025.

[11] ‘Signature and TransPerfect Legal – AI in Arbitration: Key Takeaways from our panel session at PAW 2025’ (Signature, 14 April 2025), see www.signaturelitigation.com/signature-and-transperfect-legal-ai-in-arbitration-key-takeaways-from-our-panel-session-at-paw-2025/, accessed 12 September 2025.

[12] Ibid.

[13] Ayinde v Haringey [2025] EWHC 1383 (Admin).

[14] Ibid.

[15] Sara Fischer, ‘ChatGPT is still by far the most popular AI chatbot’ (Axios, 6 September 2025), see www.axios.com/2025/09/06/ai-chatbot-popularity, accessed 10 September 2025.

[16] Jason Snyder, ‘OpenAI: ChatGPT Wants Legal Rights You Need The Right To Be Forgotten’ (Forbes, 27 June 2025), see www.forbes.com/sites/jasonsnyder/2025/07/27/openai-chatgpt-wants-legal-rights-you-need-the-right-to-be-forgotten/, accessed 10 September 2025.