AI-generated evidence: The Brazilian landscape

Tuesday 14 April 2026

Priscilla Villa Nova de Oliveira
Veirano Advogados, São Paulo
priscilla.villanova@veirano.com.br

Catarina de Figueiredo Ramos
Veirano Advogados, São Paulo
catarina.ramos@veirano.com.br

The use of AI by and with the Brazilian judiciary

Within the Brazilian justice system, the use of AI has become increasingly prevalent.

According to a study conducted by the National Justice Council (Conselho Nacional de Justiça – CNJ), the body established by the Federal Constitution of Brazil to oversee the Judiciary throughout the national territory, Brazilian courts have been incorporating AI tools to streamline procedural flow, automate repetitive tasks and assist in analysis and decision-making. The study, which included the participation of 92 courts and councils across the country, indicates that 45.8 per cent of these institutions report employing some form of generative AI system in their activities.1

This movement is reflected in various operational initiatives within the judicial context. One of the most notable is the Victor System, employed by the Supremo Tribunal Federal (Supreme Federal Court) – the Brazilian Supreme Court responsible for adjudicating constitutional matters – to support the screening of extraordinary appeals and the identification of cases linked to issues of general repercussion, which are matters of national relevance that may also serve as precedents for decisions in similar proceedings.

Other examples include the Athos and Logos Systems, developed by the Superior Tribunal de Justiça (Superior Court of Justice) – the Brazilian Superior Court responsible for standardising the interpretation of infra-constitutional legislation throughout the country. These tools automate the analysis of appeal admissibility, assisting in the verification of compliance with procedural requirements, given that the Superior Court of Justice’s jurisdiction is limited and does not extend to the re-examination of evidence or facts already adjudicated by lower courts.

Cognisant of these transformations and the challenges arising from the use of such technology, the CNJ published, in March 2025, Resolution No 615/2025, which establishes guidelines for the development, implementation and use of AI systems within the Brazilian judiciary. The resolution is structured into 12 chapters and is predicated on the acknowledgment of AI’s innovative impact, while simultaneously seeking to preserve the integrity of judicial activity through the definition of fundamental principles that must guide its application.

In this context, the resolution expressly provides that AI shall not replace the role of judges but must operate as a tool to support judicial activity. To ensure the responsible implementation of this technology, the resolution establishes a risk categorisation system for AI use, stipulating that courts conduct evaluations of AI solutions to determine their risk level.

The Risk Classification Annex of the resolution divides applications into two categories: high risk and low risk. Low-risk applications correspond to the use of AI for auxiliary functions, such as information extraction, case management and other administrative activities, and are subject to less stringent regulatory requirements. This category reflects the most common applications of AI within the Brazilian judiciary today, acknowledging that, although they present some risks, such risks are manageable through mitigation measures.

On the other hand, systems classified as high risk are those capable of directly influencing the analysis or adjudication of judicial proceedings, such as tools designed for the identification of behavioural patterns or the analysis of evidence, and, for this reason, are subject to more rigorous control mechanisms, including technical audits and mitigation measures aimed at preventing discriminatory biases.

The Resolution also establishes the creation of the National Committee on Artificial Intelligence in the Judiciary (Comitê Nacional de Inteligência Artificial do Judiciário – CNIAJ), which functions as a strategic governance body for these technologies within the Brazilian justice system. Composed of judges, technology experts and representatives of civil society, the Committee is tasked with supervising the implementation of AI policy in the Judiciary, assessing risks, promoting updates to regulatory guidelines and ensuring that the adopted solutions comply with appropriate ethical, legal, and technical standards.

In the legislative context, Bill No 2,338/2023 is currently under review in the Brazilian National Congress. This bill establishes general, nationwide rules for the development, implementation and responsible use of AI systems. The proposed legislation seeks to reconcile an approach centred on fundamental rights, including due process and the right to adversarial proceedings, with instruments for governance and oversight.

The bill also institutes a risk classification system associated with the use of these technologies, distinguishing between excessive-risk and high-risk applications. The bill sets forth specific criteria for each classification while allowing for the possibility of updating this list should new circumstances arise that require dedicated regulatory discipline.

Applications classified as excessive risk, as provided in Article 14, correspond to three scenarios in which the use of AI shall be prohibited in Brazil. These include, among others, the use of systems intended to induce or manipulate natural persons to engage in behaviours that are harmful or dangerous to their health or safety, as well as the use of technology by public authorities for the exploitation, classification or evaluation of citizens through automated mechanisms.

High-risk applications, on the other hand, are addressed in Article 17, which enumerates 14 hypotheses. Such systems may be developed and employed provided that their purposes and contexts comply with the provisions of the law, as well as the governance and risk-mitigation mechanisms established therein. Among the enumerated hypotheses, subsection VII of Article 17 expressly contemplates the use of AI in the administration of justice, including systems designed to assist judicial authorities in the investigation of facts and the application of the law.

Brazilian legislation, however, already regulates the use of AI, albeit indirectly and in a fragmented manner. Such provisions can be found in the Brazilian Internet Bill of Rights (Marco Civil da Internet – Law No 12,965/2014), which establishes principles, guarantees, rights and duties for internet use in the country, as well as in the General Data Protection Law (Lei Geral de Proteção de Dados Pessoais – Law No 13,709/2018), which regulates the processing of personal data and imposes significant limits on the use of technologies based on automated information processing.

By the same token, Law No 12,737/2012, known as the Cybercrime Law, defines a number of online offences, and Article 218-C of the Brazilian Penal Code criminalises the disclosure or production of intimate or sexual content without consent – a provision that may cover certain forms of digital manipulation, such as pornographic deepfakes.

Another relevant debate within the Brazilian regulatory landscape involves Bill No 2,630/2020, known as the ‘Fake News Bill’, which seeks to establish rules for transparency and accountability of digital platforms in the circulation of content on the internet. Although the bill does not address AI exclusively, it is directly related to the subject as it tackles issues such as the automated dissemination of disinformation, the use of algorithms for content amplification and the digital manipulation of information – practices frequently enhanced by AI systems.

Despite these regulatory initiatives, Brazilian courts face ongoing practical challenges concerning the use of AI in judicial proceedings. It is not uncommon for courts to encounter AI-generated content in concrete cases.

One example is Supreme Federal Court Claim No 78,890. The case challenged a decision that upheld an administrative sanction imposed on a public servant despite a final acquittal in a related criminal proceeding. The claimant argued that this decision violated precedents of the Supreme Federal Court.

During the examination of the claim, however, it was discovered that the petition submitted by the claimant cited nonexistent precedents, allegedly generated by AI. In this regard, the Supreme Federal Court recognised that, although the use of AI by the parties is lawful, its improper use in judicial proceedings, such as inventing binding precedents, constitutes a reprehensible act and litigation in bad faith. Consequently, the court ordered the claimant to pay double court fees and directed that the matter be reported to the Federal Council of the Brazilian Bar Association (Ordem dos Advogados do Brasil).

Moreover, the use of AI in electoral campaigns has raised serious concerns at the Superior Electoral Court, leading to the publication of Resolution No 23,732/2024, which requires campaign materials to disclose the use of AI and prohibits the use of deepfakes.

This increasing presence of AI-generated digital content demonstrates that the Brazilian judiciary is already confronted with evidence whose authenticity and reliability are difficult to ascertain. This complexity raises fundamental questions, including how to ensure that AI-produced evidence submitted in court accurately reflects real-world facts.

AI-generated evidence as a threat to evidentiary reliability

With technological advancements, evidence has ceased to be tied exclusively to the materiality of paper and has also assumed a digital form. Legally relevant facts are increasingly recorded through information systems. In this context, the notion of digital evidence has been consolidated.2 

Among the most common examples of digital evidence in Brazilian judicial proceedings are electronically signed documents, screenshots of messages exchanged via applications such as WhatsApp, emails, recordings, images and similar materials. Although digital evidence is not expressly provided for in the Brazilian legal system, legal scholarship recognises that, provided the principles of legality and lawfulness are observed, such evidence may be incorporated into proceedings and weighed under the judge’s free and reasoned evaluation of the evidence.3

In this context, Brazilian scholars identify two fundamental requirements for the validity of digital evidence: authenticity and integrity. Authenticity ensures that the recorded facts correspond to reality and were produced by the indicated authors, while integrity guarantees that the content has not been altered since its creation.4

Historically, in Brazil, assurance regarding the authorship and integrity of evidence presented in court was provided by notary offices. When a party wished to prove the authenticity of a signature in court, recourse could be made to a notary, an agent vested with public faith (fé pública). Similarly, the authentication of copies certified that the documentary evidence presented corresponded faithfully to the original.

At present, one method of verifying potentially manipulated digital content is forensic expert examination. Although this practice is not yet widespread in Brazilian proceedings, it involves analysing elements of the file itself – such as metadata, compression patterns and pixel inconsistencies – that may indicate alterations to the original material. In more complex cases, AI mechanisms may themselves be employed to detect content that has been manipulated or generated by AI.5
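
By way of illustration only, the minimal Python sketch below shows the kind of first-pass inspection such an examination might begin with: computing a cryptographic fingerprint of the file and listing whatever camera metadata it carries. It assumes the Pillow imaging library and a hypothetical file name (exhibit_001.jpg), and it is not a forensic tool; the absence of camera metadata, for instance, is merely one signal an expert would weigh alongside compression and pixel-level analysis.

# A minimal first-pass inspection of an image exhibit (illustrative only).
# Assumes the Pillow library; the file name below is hypothetical.
import hashlib
from PIL import Image, ExifTags

def first_pass_check(path: str) -> dict:
    """Fingerprint the file and list whatever EXIF metadata it carries."""
    with open(path, "rb") as f:
        # Fixed reference point for later integrity comparisons.
        digest = hashlib.sha256(f.read()).hexdigest()

    exif = {}
    with Image.open(path) as img:
        # Often empty for AI-generated or re-encoded images.
        for tag_id, value in img.getexif().items():
            exif[ExifTags.TAGS.get(tag_id, tag_id)] = value

    return {
        "sha256": digest,
        "exif": exif,
        # Missing camera metadata is not proof of manipulation,
        # only one signal to be weighed among others.
        "has_camera_metadata": any(k in exif for k in ("Make", "Model", "DateTime")),
    }

if __name__ == "__main__":
    print(first_pass_check("exhibit_001.jpg"))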

Additionally, technological mechanisms have been reinventing the manner in which the authenticity and integrity of digital evidence are ensured, assuming a function analogous to traditional certification. Signature platforms record technical information associated with electronic signatures – including the signatory’s identification, date, time and audit trail – while dedicated tools assist in the collection and preservation of auditable digital evidence. These instruments function as alternatives to notarial certification and are recognised by Brazilian courts as suitable means of evidentiary reinforcement in the digital context.

An example is the Gov.br platform, developed by the Brazilian Federal Government, which offers citizens the possibility of signing digital documents through a centralised authentication system. The initiative seeks to simplify access to public services and facilitate the formalisation of electronic acts.

This movement is supported by Brazilian legislation. Law No 14,063/2020 establishes rules for the use of electronic signatures in interactions with public entities, in corporate acts and in health matters, as well as in relation to software licences developed by public entities, classifying electronic signatures into different levels of security and recognising their legal validity.

However, the coexistence of different types of electronic signatures means that their acceptance in judicial practice is not always uniform. In several cases, courts tend to attribute a higher degree of reliability to qualified electronic signatures, which are linked to a digital certificate issued by a Certifying Authority accredited under the Brazilian Public Key Infrastructure (ICP-Brasil), revealing that the probative value of other digital signature mechanisms remains a subject of debate in Brazilian jurisprudence.

It is in this context, already marked by challenges regarding the reliability of digital evidence, that AI intensifies evidentiary risks in an unprecedented manner. AI systems are capable of generating complete content – messages, audio recordings, images, videos and documents – that appears truthful but may not correspond to actual facts.

This capability, combined with the opacity of algorithms and the occurrence of so-called ‘hallucinations’,6 significantly exacerbates the difficulty of verifying the truthfulness of material presented in court.7 

The risk is evident: AI-produced digital evidence may enter judicial proceedings appearing truthful, but devoid of any factual basis. Recent cases demonstrate that courts have already encountered attempts to use AI-generated content as evidentiary elements.

In Interlocutory Appeal No 5040183-11.2025.8.24.0000/SC, Judge Alexandre Morais da Rosa of the Court of Justice of Santa Catarina denied a request for a preliminary injunction submitted by a candidate for the Public Officer Training Course of the Military Police, who sought to demonstrate her physical aptitude via a video edited with the assistance of ChatGPT.

In his decision, the judge highlighted that the video did not meet minimum documentation requirements capable of ensuring its authenticity, integrity and auditability, and that the mere use of a digital stopwatch overlaid on the images, without a technical report, editing logs, digital certifications or other traceability evidence, did not confer legal probative value.

Furthermore, he emphasised that the technical standards of the Brazilian Association of Technical Standards (Associação Brasileira de Normas Técnicas – ABNT), the national standardisation body, should be observed in the production, recording and validation of digital and AI-generated content. He also highlighted the need for AI-generated evidence to be accompanied by a technical authenticity report, digital certification and a legal opinion regarding its admissibility as evidence.

Similarly, the Superior Court of Justice, in a case involving WhatsApp message screenshots extracted from mobile devices as evidence, held that the reliability of digital evidence requires complete documentation of the stages of data acquisition, ensuring authenticity, integrity and context of the information. Without such precautions, the court noted that evidentiary value could be drastically reduced or even nullified.8

The ruling emphasised that the probative reliability of digital evidence does not depend solely on the existence of the evidence itself but also on the ability to securely reconstruct all stages of its collection, analysis and submission to court. The use of digital evidence without a proper demonstration of the chain of custody, understood as the set of procedures adopted to guarantee the maintenance of integrity and completeness of the evidence, is inadmissible.
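
A minimal sketch of how such a chain of custody might be documented appears below, written in Python using only the standard library; the stage names, actors and file name are hypothetical. The point is simply that each handling stage records who acted, when and the cryptographic hash of the material at that moment, so that any later alteration becomes detectable by comparing hashes.

# A minimal, hypothetical chain-of-custody log for a digital exhibit (illustrative only).
import hashlib
import json
from datetime import datetime, timezone

def sha256_of(path: str) -> str:
    """Cryptographic fingerprint of the exhibit at a given moment."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def record_stage(log: list, path: str, stage: str, actor: str) -> None:
    """Append one custody entry: who handled the exhibit, when, and its hash at that point."""
    log.append({
        "stage": stage,
        "actor": actor,
        "timestamp_utc": datetime.now(timezone.utc).isoformat(),
        "sha256": sha256_of(path),
    })

def integrity_preserved(log: list) -> bool:
    """True if the exhibit's hash never changed between recorded stages."""
    return len({entry["sha256"] for entry in log}) == 1

if __name__ == "__main__":
    custody_log = []
    exhibit = "exhibit_video.mp4"  # hypothetical file name
    record_stage(custody_log, exhibit, "collection", "forensic examiner")
    record_stage(custody_log, exhibit, "analysis", "technical expert")
    record_stage(custody_log, exhibit, "submission to court", "counsel")
    print(json.dumps(custody_log, indent=2))
    print("integrity preserved:", integrity_preserved(custody_log))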

These precedents demonstrate that digital evidence, particularly AI-generated evidence, is already a reality in the Brazilian judiciary. Courts will need to continually adopt new techniques to ensure transparency, traceability and integrity of such evidence, enabling judges to exercise their evaluation in a reasoned and secure manner while preserving due process, adversarial rights and equality of arms between the parties.

Conclusion

The incorporation of new technologies into the Brazilian judiciary is inevitable and can contribute significantly to the efficiency and quality of proceedings. However, it is essential that their use be guided by clear rules and governance mechanisms, ensuring that fundamental principles – such as the adversarial system, the right to a broad defence and the integrity of evidence – are preserved. Recent experience demonstrates that AI-generated content can compromise evidentiary reliability if not properly verified and audited. Therefore, it is imperative that the Judiciary and legislators establish specific guidelines for the production, analysis and use of digital evidence generated by AI, thereby ensuring traceability, authenticity and legal certainty without prejudicing the rights of the parties.

Notes

1. Conselho Nacional de Justiça, ‘Painel de Pesquisa sobre Inteligência Artificial 2024’ https://paineisanalytics.cnj.jus.br/single/?appid=51977be5-96d0-4362-98ff-ed3eb3337781&sheet=maXvpqE&theme=horizon&lang=pt- accessed 10 April 2026.
2. It is worth noting that the concept of digital evidence has different meanings in legal scholarship. Among them, the definition proposed by Thamay and Tamer stands out. They argue that digital evidence may have two meanings: ‘A first, according to which digital evidence can be understood as the demonstration of a fact that occurred in digital environments, that is, a fact supported by the use of a digital medium. And a second, in which, although the fact itself did not occur in a digital environment, the demonstration of its occurrence may be made through digital means.’ Rennan Thamay and Maurício Tamer, Provas no Direito Digital: conceito da prova digital, procedimentos e provas digitais em espécie (São Paulo: Thomson Reuters Brasil, 2022), 32.
3. Diego Fontenele Lemos, Larissa Homsi Cavalcante and Rafael Gonçalves Mota, ‘A prova digital no direito processual brasileiro’ (2021) 13(1) Revista Acadêmica Escola Superior do Ministério Público do Ceará 13.
4. Lemos, Cavalcante and Mota (n 3).
5. Andres Vera, ‘Mais que deepfake: a perícia forense está preparada para a inteligência artificial generativa?’ JOTA (8 May 2023) https://www.jota.info/justica/mais-que-deepfake-a-pericia-forense-esta-preparada-para-a-inteligencia-artificial-generativa accessed 10 April 2026.
6. ‘AI hallucination usually occurs due to adversarial examples such as varied input data that confound the AI systems into misclassifying and misinterpreting them resulting in inappropriate and hallucinating output. AI hallucination is a problem because it hampers a user’s trust in the AI system, negatively impacts decision making, and may give rise to several ethical and legal problems.’ Athaluri SA, Manthena SV, Kesapragada VSRKM, Yarlagadda V, Dave T, Duddumpudi RTS, ‘Exploring the Boundaries of Reality: Investigating the Phenomenon of Artificial Intelligence Hallucination in Scientific Writing Through ChatGPT References’ (2023) 15(4) Cureus e37432, www.cureus.com/articles/148687-exploring-the-boundaries-of-reality-investigating-the-phenomenon-of-artificial-intelligence-hallucination-in-scientific-writing-through-chatgpt-references#!/ accessed 10 April 2026.
7. The literature emphasises that the lack of transparency in AI systems constitutes the so-called ‘black box problem’, characterised by the inability to fully comprehend an AI’s decision-making process and to predict its outcomes. Yavar Bathaee, ‘The Artificial Intelligence Black Box and the Failure of Intent and Causation’ (2018) 31(2) Harvard Journal of Law & Technology.
8. AgRg no HC 828054/RN (STJ, Fifth Panel, 23 April 2024) https://processo.stj.jus.br/processo/julgamento/eletronico/documento/mediado/?documento_tipo=integra&documento_sequencial=242041837&registro_numero=202301896150&peticao_numero=202300906480&publicacao_data=20240429&formato=PDF accessed 10 April 2026.