Portable integrity: authenticating cross-border digital evidence and defending AI review in commercial disputes
Tuesday 14 April 2026
Luciano Castelli
LCA Studio Legale, Milan
luciano.castelli@lcalex.it
Not long ago, the critical evidence in a commercial dispute was usually a signed contract, a board resolution, or perhaps a handwritten note. Today, the picture looks very different. Email threads stored on servers in multiple jurisdictions, collaborative documents with dozens of revision histories, messaging apps that auto-delete: these now form the evidentiary backbone of complex cases ¹. Disputes can turn on metadata timestamps; cases can collapse when key communications vanish from a cloud platform without warning. The challenge is no longer simply finding relevant documents; it is establishing their authenticity and reliability across legal systems that rarely speak the same language.
Practitioners seeking evidence across borders face a frustrating maze. International legal frameworks point in different directions; technology providers operate according to their own rules; and privacy regulations, GDPR foremost among them, impose constraints that vary dramatically by jurisdiction ². Consider a not-uncommon scenario: a party spends months navigating the gap between a US-style discovery request and European data protection requirements, only to find that the cloud provider's terms of service create yet another layer of complexity. The lesson is clear: mapping available routes requires patience, creativity, and a willingness to accept that the path forward is rarely straight.
Speed matters, often more than anything else. By the time a Hague Evidence Convention request winds its way through official channels, critical data may have been deleted, overwritten, or moved beyond reach ³. Experienced practitioners know to pursue provisional measures immediately, even while longer-term strategies are being developed. But anticipate resistance: requests for enterprise account access or contractual audit rights will almost certainly be met with jurisdictional objections and GDPR-based limitations. The key is demonstrating legitimate purpose and proportionate scope from the outset: vague or overreaching requests simply give opposing counsel ammunition to delay ⁴.
What makes digital evidence admissible? The scholarship converges on familiar criteria (relevance, authenticity, reliability), but applying these concepts to a spreadsheet recovered from a corrupted hard drive or a WhatsApp conversation exported as a PDF is far from straightforward ⁵. Courts want to see documented collection procedures, clear provenance records, and preservation indicators: metadata, timestamps, hash values. Chain of custody must be unbroken from collection to hearing. In practice, this means working closely with forensic experts from day one, not bringing them in as an afterthought when authenticity is challenged.
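The preservation indicators mentioned above (hash values, timestamps, metadata) are straightforward to capture at the moment of collection. The following sketch illustrates one way this could be done; the function name and record fields are illustrative choices, not a forensic standard.

```python
import hashlib
from datetime import datetime, timezone
from pathlib import Path

def collect_evidence_record(path: str) -> dict:
    """Compute a SHA-256 hash and capture basic filesystem metadata
    for a collected file, so both are recorded at acquisition time."""
    p = Path(path)
    digest = hashlib.sha256()
    with p.open("rb") as f:
        # Read in chunks so large files do not need to fit in memory.
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    stat = p.stat()
    return {
        "file": p.name,
        "sha256": digest.hexdigest(),
        "size_bytes": stat.st_size,
        "modified_utc": datetime.fromtimestamp(
            stat.st_mtime, timezone.utc).isoformat(),
        "collected_utc": datetime.now(timezone.utc).isoformat(),
    }
```

The point is not the particular tooling but the discipline: the hash and timestamps are fixed when the file is acquired, so any later alteration becomes demonstrable.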
Evidence handling protocols should be written before the dispute, not invented afterwards to justify what was already done. Data transfers between systems risk metadata loss; third-party processors may introduce steps that are invisible to the legal team; and questions about who controlled the data at each stage can become unanswerable. The solution is building what might be called an 'integrity narrative': a documented, auditable trail showing who obtained what, how, and when. Standards-based practices aligned with ISO/IEC frameworks help ⁶, but the real goal is creating a story that will hold up under cross-examination.
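One way to make such a trail tamper-evident is to chain its entries together, with each entry embedding a hash of the one before it. The sketch below assumes a simple in-memory log; the class and field names are hypothetical, and a production system would persist entries and sign them.

```python
import hashlib
import json
from datetime import datetime, timezone

class CustodyLog:
    """Append-only custody log: each entry embeds the hash of the
    previous entry, so later alteration breaks the chain detectably."""

    def __init__(self):
        self.entries = []

    def record(self, actor: str, action: str, item: str) -> dict:
        prev_hash = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        entry = {
            "actor": actor,
            "action": action,
            "item": item,
            "timestamp_utc": datetime.now(timezone.utc).isoformat(),
            "prev_hash": prev_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["entry_hash"] = hashlib.sha256(payload).hexdigest()
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute every hash; any edit to an earlier entry fails."""
        prev = "0" * 64
        for e in self.entries:
            if e["prev_hash"] != prev:
                return False
            body = {k: v for k, v in e.items() if k != "entry_hash"}
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != e["entry_hash"]:
                return False
            prev = e["entry_hash"]
        return True
```

A log of this shape answers the cross-examination questions directly: who obtained what, how, and when, with each answer cryptographically bound to the answers before it.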
Procedural fairness is easy to proclaim and difficult to achieve ⁷. Consider the asymmetry: one party controls the servers and can search its own records instantly; the other must navigate formal authorisation procedures that take months. One side can afford sophisticated AI-powered review tools; the other cannot. Critical metadata may sit on servers in jurisdictions with restrictive privacy regimes, effectively beyond reach. It is not uncommon for a party to know exactly what evidence exists yet find that proving its completeness and authenticity requires resources that nearly exceed the value of the claim itself. Equality of arms is not just a principle; it requires concrete mechanisms that level the playing field.
How should tribunals respond? Several tools are available, though underused. Cost-shifting can address resource imbalances; neutral experts can be appointed to collect or verify evidence; adverse inferences can be drawn where a party has frustrated legitimate discovery. Perhaps most importantly, tribunals should not hesitate to stage disclosure, requiring production in phases, with targeted requests following initial document review. The goal is not perfect symmetry, which is unattainable, but sufficient fairness to allow each side a meaningful opportunity to challenge the other's evidence.
Artificial intelligence is transforming evidence review, and there is no turning back. Deduplication, clustering, translation, entity extraction, predictive coding: these tools can process volumes that would take human reviewers months or years. But the enthusiasm should be tempered with caution ⁸. The question tribunals increasingly ask is not 'did you use AI?' but 'can you explain and defend how you used it?' This means documented procedures, verified inputs, transparent parameters, and quality-control results that can withstand scrutiny from opposing counsel.
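Of the techniques listed, deduplication is the most mechanical, and its transparency is a useful benchmark for the rest. A minimal sketch, grouping documents whose normalized text is identical (the function name and normalization choices are illustrative; real review platforms also perform near-duplicate detection):

```python
import hashlib

def deduplicate(documents: dict[str, str]) -> dict[str, list[str]]:
    """Group document IDs whose normalized text is byte-identical,
    a cheap first pass before costlier clustering or near-dup checks."""
    groups: dict[str, list[str]] = {}
    for doc_id, text in documents.items():
        # Collapse whitespace and case so trivial variants match.
        normalized = " ".join(text.split()).lower()
        key = hashlib.sha256(normalized.encode()).hexdigest()
        groups.setdefault(key, []).append(doc_id)
    return groups
```

Even here the defensibility question bites: the normalization step is a parameter choice that changes which documents are treated as 'the same', and it belongs in the documented procedures the tribunal will ask about.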
What does defensible AI review look like in practice? One useful approach is preparing what might be called a 'reproducibility dossier': a record documenting the tool used (including version), configuration settings, preprocessing steps, dataset boundaries, sampling methodology, and quality-control checks. When challenges arise, and they will, this dossier becomes the foundation for responding. Contestability also requires practical accommodations: allowing opposing experts access under confidentiality protections, agreeing on sampling protocols for validation, or commissioning neutral audits when stakes are high. The goal is not to prove that the AI was perfect, but to show that its use was reasonable and its outputs reliable in the specific context.
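The dossier itself can be a simple structured record. The sketch below captures the fields enumerated above in a serializable form; the class and field names are illustrative, not a standard schema.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class ReproducibilityDossier:
    """Minimal record of an AI-assisted review run, serializable so it
    can be disclosed to opposing experts or a neutral auditor."""
    tool: str                 # tool name, including version
    version: str
    configuration: dict       # model/review parameters as configured
    preprocessing_steps: list # e.g. deduplication, OCR, language filtering
    dataset_boundaries: str   # custodians, date ranges, sources covered
    sampling_method: str      # validation sampling protocol
    qc_results: dict = field(default_factory=dict)

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2, sort_keys=True)
```

The value lies less in the format than in the timing: a dossier assembled during the review, not reconstructed after a challenge, is what makes the process auditable.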
The threat landscape has shifted. A decade ago, the concern was missing or destroyed evidence; today, fabricated evidence poses an equal risk. Generative AI can produce convincing emails, chat logs, images, even audio recordings. Commercially available tools enable subtle alterations to authentic documents that are nearly impossible to detect visually. Authentication can no longer rely on surface plausibility. For critical materials, a three-pronged approach is advisable: provenance verification (where did this come from, and how was it acquired?), integrity verification (hash values, metadata analysis), and contextual corroboration (do independent sources confirm the content?). Where provenance is weak, tribunals should not hesitate to require additional corroboration or appoint technical experts.
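Of the three prongs, integrity verification is the most mechanical: recompute the document's hash and compare it with the value recorded at collection. A minimal sketch (the function name is an illustrative choice):

```python
import hashlib
from pathlib import Path

def verify_integrity(path: str, expected_sha256: str) -> bool:
    """Integrity prong: recompute the file's SHA-256 and compare it
    with the value recorded at collection time."""
    digest = hashlib.sha256()
    with Path(path).open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256.lower()
```

A match proves only that the file is unchanged since the hash was recorded; it says nothing about whether the content was genuine at that moment, which is precisely why the provenance and corroboration prongs remain necessary.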
Hacked and leaked materials present a particular dilemma. The evidence may be authentic and highly probative but obtained through illegal means. Should it be excluded on principle, or admitted despite its tainted origins? Arbitral practice has generally avoided bright-line rules, favouring instead a proportionality analysis: where did the evidence come from, and is the source credible? Can reliability be verified through integrity checks or independent corroboration? And critically, what prejudice would admission cause, and can protective measures mitigate it ⁹? These are difficult questions without easy answers, but a structured framework at least ensures they are asked systematically.
Experienced arbitrators understand that admissibility, weight, and remedies are distinct questions. A tribunal may admit evidence while imposing confidentiality restrictions; it may allow limited disclosure solely for authentication purposes; it may hear the evidence but ultimately give it little weight if provenance remains doubtful. Outright exclusion is reserved for cases where the evidence was unlawfully obtained and admission would cause genuine unfairness: a high bar, but one that preserves the tribunal's integrity. The art lies in calibrating these tools to the circumstances of each case.
We are still in the early chapters of this story. Cross-border digital evidence and AI-assisted review are now routine features of complex commercial disputes, yet the procedural frameworks governing them remain underdeveloped. The divergence between legal systems, the opacity of technology providers, and the rapid evolution of AI tools all challenge traditional assumptions about how fact-finding should work. Rather than waiting for perfect solutions, practitioners and tribunals would do well to focus on building portable integrity protocols − structured, documented, defensible processes that can travel across borders and withstand challenge. The future of evidence will be shaped not by those who resist change, but by those who learn to navigate it.
Notes
- Abraha, H. (2020). Law enforcement access to electronic evidence across borders: mapping policy approaches and emerging reform initiatives. International Journal of Law and Information Technology, 29, 118-153
- Rojszczak, M. (2022). e-Evidence Cooperation in Criminal Matters from an EU Perspective. The Modern Law Review.
- P, S., & W, B. (2021). Obtaining Evidence across National Boundaries. In Ristau's International Judicial Assistance
- Examples include Regulation (EU) 2023/1543 on European Production and Preservation Orders, and the Second Additional Protocol to the Budapest Convention adopted by the Council of Europe, which advances cooperation and data sharing in digital evidence.
- Stoykova, R. (2021). Digital evidence: Unaddressed threats to fairness and the presumption of innocence. Computer Law & Security Review, 42.
- ISO/IEC 27037:2012; ISO/IEC 27050-1:2019; Pestana, G., Antunes, W., & Carvalho, J. (2023). Digital Chain of Custody Operational Framework. IEEE TechDefense 2023, 417-422.
- Stoykova, R. (2023). The right to a fair trial and digital evidence: Reconsidering the place of forensic evidence in criminal investigations. Computer Law & Security Review, 49.
- Durán, J., van der Vloed, D., & Ruifrok, A. (2024). Computational reliability and AI-driven forensic outputs. Forensic Science International: Synergy, 9, 100554. Grimm, P., Grossman, M., & Cormack, G. (2021). Artificial Intelligence as Evidence. Northwestern Journal of Technology and Intellectual Property, 19(1), 9-83.
- Ferreira, F., & Gromova, E. (2023). Digital evidence in international arbitration: Admissibility of leaked and hacked materials. International Journal for the Semiotics of Law, 37, 903-922.