Authenticating evidence in the age of AI
Tuesday 14 April 2026
Anne Merminod
Torys, Montréal
amerminod@torys.com
Gillian B Dingle
Torys, Toronto
gdingle@torys.com
Lauren Nickerson
Torys, Toronto
lnickerson@torys.com
*With research assistance from Tristan Montag
Litigators around the world are grappling with the transformational influence of artificial intelligence on the practice of law. In some respects, AI offers real promise: increasing efficiency by streamlining routine tasks and surfacing critical information. But it also poses serious risks for legal adjudication.
The first hurdle has been a surge in hallucinated case law. Since 2023, nearly a thousand reported cases worldwide have involved litigants or lawyers submitting incorrect or non-existent citations, reflecting overreliance on AI-generated research.[1] But this problem is easily avoided: verifying citations is straightforward (and expected) even in a pre-AI world. Indeed, many jurisdictions now require parties to take additional steps to verify their citations. Ontario, for example, requires lawyers to sign a certificate with their legal submissions confirming the authenticity of every authority cited.[2] Other jurisdictions (including the Federal Court and courts in Nova Scotia, Yukon, and Manitoba) require lawyers to disclose whether and how AI has been used.[3] A more challenging question looms: what happens when the foundation on which a case is built – the evidence itself – cannot be trusted?
What is deepfake evidence?
Deepfakes are AI-generated media that look or sound realistic but are manipulated to depict something that is not real or did not occur.[4] In the legal context, deepfakes can refer to tampered digital evidence such as AI-manipulated videos, fabricated images or documents, and voice cloning.[5]
What are the risks of deepfake evidence?
In R v Medow, Justice Brock Jones of the Ontario Court of Justice took judicial notice of the ‘widespread proliferation of AI technology capable of producing realistic deepfake videos.’ These deepfakes can be ‘highly deceptive, making it challenging to discern what is authentic and what is fiction,’ presenting a ‘potentially serious concern to the integrity of our justice system.’[6] In that case, the authenticity of audiovisual evidence of an altercation between the police and the accused was called into question. Justice Jones ultimately declined to infer that the video was digitally altered with the intention of falsely incriminating the accused.[7]
Similarly, in Breton v Ministry of Health and Social Services, the Commission d'accès à l'information du Québec acknowledged that AI-generated evidence can mislead courts and stakeholders, thereby compromising the integrity of the judicial process.[8] Despite this, the Commission admitted certain AI-generated documents, citing sufficient corroborating source documents, probative value, and flexibility in the evidentiary rules of administrative law.[9]
These cases show that the deepfake problem presents two sides of the same coin:
- Evidence could be convincingly altered or faked. Photographic and video evidence can be persuasive. But how do we know it is real? Deepfakes are only getting better and harder to detect. As we do not yet have reliable technology for identifying deepfakes, challenging authenticity can be expensive and time-consuming. This may result in false evidence slipping through the cracks and influencing legal decisions.
- Real evidence could be falsely alleged to be altered or faked. We know that, at least so far as the digital world goes, we cannot trust everything we see. But litigants may take advantage of this skepticism, casting doubt on genuine evidence that – if admitted – would hurt their case. This opportunism has been referred to as the ‘liar’s dividend.’[10] Resolving such allegations can result in additional costs and delay.
What are we doing to combat deepfake evidence?
Under section 31.1 of the Canada Evidence Act, a party seeking to introduce digital evidence bears the burden of persuading the court that the document is what it is ‘purported to be.’[11] Similar principles apply to civil matters across Canada.[12] Admissibility, however, is not an onerous standard. And although admission is not the final hurdle – the decision-maker must still assess the evidence’s reliability and the weight it should be afforded – there remains a risk that falsified evidence could influence legal outcomes, particularly where juries may be swayed by convincing deepfakes.
Courts have acknowledged this challenge. In R v Medow, Justice Jones cautioned that as falsified digital evidence becomes more convincing, ‘courts must ensure that the authentication voir dire required for digital evidence is not rendered meaningless.’[13] Similarly, Justice Nordheimer of the Ontario Court of Appeal noted in R v Aslami that certain types of digital evidence may require expert authentication, as there are ‘too many ways for an individual, who is of a mind to do so, to make electronic evidence appear to be something other than what it is.’[14]
Some jurisdictions are already taking steps to resist the proliferation of deepfakes. In Ontario, the AI subcommittee of the Civil Rules Committee is exploring amendments to Ontario’s Rules of Civil Procedure to provide litigants with tools for challenging the authenticity of evidence alleged to be generated or modified by AI.[15] In the United States, the National Center for State Courts has created bench cards to help judges and court staff ask the right questions when unacknowledged AI use is suspected.[16] Though these represent an important starting point, more may be needed to help the law adapt to ‘the stark realities of these ever-changing technologies and their capacity to negatively impact on the truth-seeking function’ of the litigation process.[17]
Notes
[1] Damien Charlotin, AI Hallucination Cases.
[2] See, for example, Ontario Rules of Civil Procedure, RRO 1990, Reg 194, Rule 4.06.1(2.1).
[3] See, for example, Provincial Court of Nova Scotia, ‘Use of Artificial Intelligence (AI) and Protecting the Integrity of Court Submissions in Provincial Court’ (October 27, 2023); Supreme Court of Yukon, ‘Practice Direction General-29, Use of Artificial Intelligence Tools’ (June 26, 2023); Court of King’s Bench of Manitoba, ‘Practice Direction Re: Use of Artificial Intelligence in Court Submissions’ (June 23, 2023); Federal Court, ‘Notice to the Parties and to the Profession: The Use of Artificial Intelligence in Court Proceedings’ (May 7, 2024).
[4] Adam Armstrong, Molly Reynolds, Lauren Nickerson, and Tristan Montag, ‘Deepfakes are on the rise – are you prepared?’ (2025).
[5] Clementina Salvi, ‘Deepfake Evidence in Criminal Proceedings’ (2024).
[6] R v Medow, 2025 ONCJ 661 at paras 54-55.
[7] R v Medow, 2025 ONCJ 661 at para 61. See also Head v John Doe, 2026 BCSC 184 at para 50.
[8] Breton v Ministry of Health and Social Services, 2025 QCCAI 280 at paras 18-20.
[9] Breton v Ministry of Health and Social Services, 2025 QCCAI 280 at paras 27-35.
[10] Robert Chesney and Danielle Keats Citron, ‘Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security’, SSRN Electronic Journal (2018).
[11] Canada Evidence Act, RSC 1985, c C-5, s 31.1.
[12] See, for example, Ontario Evidence Act, RSO 1990, c E.23, s 34.1; Alberta Evidence Act, RSA 2000, c A-18, ss 41.1-41.8; Manitoba Evidence Act, CCSM c E150, ss 51.1-51.8; Saskatchewan Evidence Act, SS 2006, c E-11.2, s 54; Nova Scotia Evidence Act, RSNS 1989, c 154, ss 23A-23H.
[13] R v Medow, 2025 ONCJ 661 at para 73.
[14] R v Aslami, 2021 ONCA 249 at para 30.
[15] Civil Rules Committee, ‘Consultation on proposals for Rules of Civil Procedure relating to evidence and Artificial Intelligence’ (2025).
[16] National Center for State Courts, Evaluating Unacknowledged AI-Generated Evidence.
[17] R v Medow, 2025 ONCJ 661 at para 73.