Actual intelligence v artificial intelligence – the dangers of AI-generated evidence and its ethical implications
Tuesday 14 April 2026
Aishwarya Kaushiq
BTG Advaya, New Delhi
aishwarya.kaushiq@btgadvaya.com
Forms of AI usage in generating evidence
Evidence has long been the currency of justice, traditionally consisting of human testimony, documentary proof, forensic material, and expert reasoning used to establish facts before courts. Artificial intelligence now represents a new entrant into this evidentiary landscape and is increasingly embedded in modern evidentiary processes across courts and tribunals.
Recently, a constitutional court in India prompted AI to identify comparative global policy trends in an ongoing public interest litigation.1 Similarly, plaintiffs in a suit have used AI-generated outputs to establish brand reputation and distinctiveness in intellectual property disputes.2 Prosecutors have relied on AI software to interpret DNA mixtures as evidence of guilt.3 Experts use AI to corroborate their original testimonies4 and cross-check calculations.5 Fact witnesses use AI outputs to discredit opposing evidence.6 Lawyers have even used AI to justify their hourly billing rates in actions for attorney costs and fees.7
At the same time, courts have encountered clear instances of misuse. In a software authorship dispute, generative AI was used to fabricate historical digital records to establish ideation.8 In another case involving a constitutional challenge to deepfake regulations, an expert relied on AI-generated material that cited non-existent academic sources.9
While instances of ethical utilisation can be contrasted with those of misuse, AI’s role in evidence is more complex than this simple binary. Mary D.M. Fan introduces an interesting spectrum that ranges from AI-enhanced evidence, ie AI used to improve the quality of evidence by 'clarifying hard-to-discern details or words…eliminating noise; auto-transcription…', to AI-generated evidence, ie fabricated materials and deepfakes.10 While the former facilitates justice, the latter sabotages it.
Risks associated with leading AI-generated evidence
The primary concern with AI-generated evidence lies not merely in the possibility of misuse, but in its inherent unreliability.
First, AI outputs are inherently prompt-sensitive. As an Indian court remarked, they are moulded by the 'nature and structure of the query'11 – a problem further aggravated when the user who relies on AI-generated evidence fails to recall the input or prompt fed into the AI system.12
Second, output inconsistency raises serious concerns regarding the reliability of methods employed by AI. A New York Surrogate Court observed that a question soliciting investment calculations fed into the same AI chatbot prompted three different responses through three different devices.13
Third, and most critically, AI systems lack transparency of sources and methodology. Unlike human sources, which can be readily traced, AI-generated evidence lacks identifiable lineage and is the output of several complex, unverifiable processes.14 ChatGPT-generated evidence has been rejected by courts for failing to reveal whether it considered a 'real and relevant data point'.15 For example, a UK court contrasted ChatGPT with a human expert witness who 'would be required to explain their expertise, the sources that they rely upon and the methodology that they applied…'16
An Indian court similarly remarked that AI 'does not possess consciousness, moral reasoning, or the capacity to weigh evidence', and expressed its preference for 'actual intelligence' over 'artificial intelligence'.17 A New York Surrogate Court likewise rejected evidence generated by Microsoft Copilot because it had 'no objective understanding as to how Copilot works…'18
Worse still, AI-generated expert evidence may rely on non-existent sources. This risk is not hypothetical; it has already materialised. In a constitutional challenge before an American court, for example, an 'expert on the dangers of AI and misinformation' submitted a declaration generated by GPT-4o that used fake citations, in a case ironically revolving around the dangers of AI.19
The Delhi High Court has also urged caution towards the 'possibilities of incorrect responses…imaginative data etc. generated by AI chatbots'.20 This warning becomes even more important with recent instances of suits being instituted based on deepfake evidence.21
The opacity of AI-generated evidence is compounded by the fact that AI systems themselves disclaim reliability and require human supervision. Courts have noted such disclaimers, treating them as undermining the independent evidentiary value of AI outputs.22
Consequences of misleading AI-generated evidence
The risks of AI-generated evidence translate directly into procedural and substantive consequences.
A useful distinction may be drawn between acknowledged AI evidence, which is openly disclosed as having been generated by AI, and unacknowledged AI evidence, which is generated by AI but presented as uninfluenced by it.23 The latter poses a significantly greater threat, as courts may unknowingly rely on fabricated material. An Indian court once remarked that the use of AI-generated material compels opposing parties to 'expend time and resources in exposing the falsity…' and diverts judicial time from adjudicatory functions.24
Courtroom exposure of AI misuse can have severe consequences, including total exclusion of the evidence, reputational damage and potential perjury-related sanctions. These consequences may follow even where the underlying claims presented through AI-generated evidence are otherwise accurate.25
Gradual but sceptical acceptance of acknowledged AI-generated evidence
Although courts assign 'little weight' to AI-generated evidence and urge the cultivation of 'healthy scepticism'26 towards it, there has been judicial consensus in favour of specific AI technologies. For example, American courts have affirmed the admissibility of DNA mixture interpretation evidence generated by an AI-driven system called TrueAllele.27 Indian courts, too, have not hesitated to expressly refer to and reproduce ChatGPT results to study international policy28 and to conduct jurisprudential analyses of international bail standards29 and of plaintiffs’ entitlement to specific remedies.30
While courts currently view AI with scepticism, they are hopeful that technological advancement will better position AI for evidentiary reliability. The Delhi High Court does not believe that the 'present stage of technological development' allows AI to substitute human intelligence, but recognises the role it can play in 'preliminary understanding' and 'preliminary research'.31 Courts in New York likewise recognise AI's 'potential to revolutionise legal practice for the better', but caution that experts should not 'abdicate their independent judgment and critical thinking skills in favour of ready-made AI-generated standards'.32
Conclusion
AI is set to become an increasingly integral feature of litigation in India, particularly as courts deal with growing volumes of digital evidence in commercial disputes. However, the evidentiary value of AI-generated material will depend on demonstrable reliability, transparency of methodology, and human verification. Until such safeguards are clearly established, Indian courts, operating within the framework of the Bharatiya Sakshya Adhiniyam, 2023 (which replaced the Indian Evidence Act, 1872), are likely to treat AI-generated evidence with caution.
At the same time, AI offers meaningful opportunities to strengthen evidentiary processes. Tools for handwriting recognition, translation across multiple Indian languages, restoration of damaged documents, recovery of deleted files, noise reduction in audio recordings, and enhancement of video evidence can significantly improve the accessibility and quality of evidence in complex litigation.
Moreover, the use of AI software can be instrumental to enforcement agencies in unearthing large-scale illegal rackets, as recently demonstrated by the Income Tax Department in India, which exposed a tax evasion racket of more than USD 8.3 billion.33 Thus, there is an emerging need for courts to be open-minded towards genuine AI-assisted evidence that substantially advances a cause of action.
To best utilise these capabilities while preserving evidentiary integrity, courts and authorities will need to frame and enforce standards governing the use, verification, and disclosure of AI-enhanced or AI-generated evidence. The effectiveness of AI in litigation will ultimately depend not on the technology itself, but on the robustness of the frameworks that regulate its use.
Notes
1 Dr. S. Ganapathy v. Union of India, Kerala High Court, 2025 KER 10546.
2 Christian Louboutin Sas v. M/s The Shoe Boutique, Delhi High Court, 2023 SCC OnLine Del 5295.
3 People v. Wakefield, Court of Appeals of New York, 38 N.Y.3d 367.
4 Joseph Ferlito v. Harbor Freight Tools USA, Inc., United States District Court, E.D. New York, Civil Action No. 20-5615 (GRB) (SIL).
5 In re Weber, New York Surrogate Court, 2024 N.Y. Slip Op. 24258.
6 Oakley v. Information Commissioner, First-tier Tribunal General Regulatory Chamber Information Rights, [2024] UKFTT 315 (GRC) and ED Surridge v. Information Commissioner, First-tier Tribunal General Regulatory Chamber Information Rights, [2024] UKFTT 00597 (GRC).
7 J.G. v. N.Y.C. Dep’t of Education, United States District Court, S.D. New York, 23 Civ. 959 (PAE).
8 Crypto Open Patent Alliance v. Dr. Craig Steven Wright, Business and Property Courts of England, [2024] EWHC 1198 (Ch).
9 Christopher Kohls v. Keith Ellison, United States District Court, District of Minnesota, Case No. 24-cv-3754 (LMP/DLM).
10 Mary D.M. Fan, AI-Enhanced Evidence, Boston University Law Review (forthcoming).
11 Christian Louboutin (supra), para 28.
12 In re Weber (supra).
13 In re Weber (supra).
14 Evelina Gentry, The Challenges of Integrating AI-Generated Evidence Into the Legal System, Akerman, 12 June 2024.
15 J.G. v. N.Y.C (supra).
16 Oakley v. Information Commissioner (supra).
17 Gummadi Usha Rani v. Sure Mallikarjuna Rao, Andhra Pradesh High Court, Civil Revision Petition No. 2487 of 2025.
18 In re Weber (supra).
19 Kohls (supra).
20 Christian Louboutin (supra), para 28.
21 Mendones v. Cushman & Wakefield, Inc., Superior Court of California, County of Alameda, No. 23CV028772.
22 In re Weber (supra).
23 Natalie Runyon, Deepfakes on trial: How judges are navigating AI evidence authentication, Thomson Reuters Institute, 8 May 2025.
24 Gummadi (supra).
25 Kohls (supra).
26 Lawrence Aponte v. Portfolio Recovery Associates, LLC, United States District Court, Eastern District of Arkansas, No. 4:24-cv-1053-DPM.
27 People v. Wakefield (supra).
28 Dr. S. Ganapathy v. Union of India (supra).
29 Jaswinder Singh v. State of Punjab, Punjab and Haryana High Court, 2023 PHHC 44541.
30 Subhash Chakraborty v. Sandhya Deb, Tripura High Court, 2024 SCC Online Tri 500.
31 Christian Louboutin (supra), para 28.
32 Joseph Ferlito (supra).
33 Economic Times, Rs 7,00,00,00,00,00,000 and counting: How taxmen filtered 60,00,00,00,00,00,000 bytes biryani bill data to detect mother of all GST scams, 19 February 2026.