AI as a tool to facilitate the presentation of evidence in proceedings?
Tuesday 14 April 2026
Dr Christoph von Burgsdorff
Luther, Hamburg
christoph.von.burgsdorff@luther-lawfirm.com
AI is no longer a new concept in the world of justice. Many law firms already have internal AI tools, and the major legal databases now offer AI features of their own. The German ministries of justice are also pushing ahead with the digitisation of civil proceedings, as demonstrated by a resolution of the Conference of Ministers of Justice in June 2025 and the adoption of a ‘Joint Declaration on the Use of Artificial Intelligence in the Justice System’.1
AI applications can contribute to the effective and rapid processing of legal cases in the context of legal practice.2 For example, AI tools can be used as analysis and document management systems. Modern AI systems can analyse large quantities of documents in a very short time and extract relevant information.
The use of AI in document analysis appears unproblematic in principle. However, every AI tool has its own risks and limitations in legal practice, and not all tools are suitable for every case. This is illustrated by the following example, which is based on the use of one of the currently available AI tools.
AI to support evidence presentation – case example
One current case concerns a claim relating to several thousand damaged items. The client has documented, with photographs, the damage caused to each item in thousands of individual ‘damage reports’.
The question arises as to how evidence of such a large number of damaged items should be presented in legal proceedings. Under the German rules on the burden of proof, the claimant would in principle have to prove precisely every instance of damage to each individual item. In turn, however, the court would also have to examine the relevant evidence in detail in order to reach a judgment. In view of the sheer volume of damaged items, this proves extremely time-consuming and inefficient in practice, making the use of AI an obvious choice.
The damage reports can be uploaded to the AI platform internally, ie without data protection concerns. Questions about the uploaded documents can then be put to the AI. In principle, the AI is able to scan all files and present the information in collated form. However, it is noticeable that not all the information contained in the documents can always be extracted. For example, each document lists the damaged items with their identification numbers (in supposedly identical formatting). Yet if the AI is asked, for example, for the total number of items listed in the damage reports under a given identification number, it can extract this information for only a small number of items, even though the information is actually contained in every document. Incidentally, in addition to the list of damaged items with their identification numbers, each document also states the total number of items recorded in the damage report.
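Where the formatting of the reports really is identical, the aggregation task described above is deterministic text processing that needs no AI at all. A minimal sketch, assuming the reports have been exported to plain text and that item lines follow a hypothetical pattern such as `ID: <number> - Qty: <count>` (the formats and values below are invented for illustration):

```python
import re

# Hypothetical plain-text excerpts of two damage reports.
reports = [
    "Damage report 001\nID: 4711 - Qty: 12\nID: 4712 - Qty: 3\nTotal items: 15",
    "Damage report 002\nID: 4711 - Qty: 5\nTotal items: 5",
]

# Matches the assumed item-line format: identification number and quantity.
pattern = re.compile(r"ID:\s*(\d+)\s*-\s*Qty:\s*(\d+)")

# Aggregate the quantity per identification number across all reports.
totals = {}
for text in reports:
    for item_id, qty in pattern.findall(text):
        totals[item_id] = totals.get(item_id, 0) + int(qty)

print(totals)  # aggregated quantity per identification number
```

Such a deterministic script extracts the figure for every item in every report, which is precisely where the generative tool in the case example proved unreliable.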
The AI also reaches its limits when it comes to image analysis. The damage reports contain the photographs that visually document the damage to the individual items. According to the AI, however, these images are only available as graphics embedded in the PDF files. As a text extractor, the AI can only recognise text elements, not image content, in such a ‘mixed’ PDF file consisting of text and images. It can therefore only analyse or categorise the damage on the basis of a separate table which again lists all damaged items together with a description of the type of damage.
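The underlying reason is structural: in a PDF, visible text is stored as text operators that an extractor can read, whereas photographs sit in separate image objects whose pixel data is opaque to any text extractor and would need OCR or dedicated image analysis. A rough diagnostic sketch on raw PDF bytes (the fragments below are hypothetical, not a complete PDF file):

```python
import re

# Hypothetical fragment of a 'mixed' PDF page: one text block (BT ... ET)
# followed by an embedded photograph stored as an image XObject.
pdf_bytes = (
    b"BT /F1 12 Tf 72 720 Td (Item 4711: damaged casing) Tj ET\n"
    b"<< /Type /XObject /Subtype /Image /Width 800 /Height 600 "
    b"/Filter /DCTDecode >>\nstream\n...JPEG data...\nendstream"
)

# Text shown on the page lives between BT/ET operators and is readable...
text_blocks = re.findall(rb"BT(.*?)ET", pdf_bytes, re.S)

# ...whereas photographs are image XObjects: only their presence, not
# their content, is visible to a text extractor.
image_count = len(re.findall(rb"/Subtype\s*/Image", pdf_bytes))

print(len(text_blocks), image_count)
```

This is why the tool could report that images exist but could not say what they show: the damage itself is only recoverable from the pixels, not from the extractable text layer.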
German statutory framework
In view of the increasing relevance of AI for legal practice, the German Federal Bar Association (‘BRAK’) published a corresponding AI guide with information on the use of artificial intelligence in December 2024.3 In these guidelines, the BRAK emphasises the need for lawyers themselves to carefully review the results generated by AI in order to avoid errors and the resulting liability consequences.
Section 43 sentence 1 of the German Federal Lawyers' Act (‘BRAO’) codifies the obligation of a German Rechtsanwalt (lawyer) to exercise their profession conscientiously. Particularly relevant here is the principle of highly personal service provision: a lawyer must perform their work independently and, in case of doubt, personally in accordance with Section 613 of the German Civil Code (‘BGB’). Consequently, AI systems may not replace the work of a lawyer, but may only support it. It is undisputed that the lawyer must independently review and retain final control over the AI's results.
The specific duties of care when dealing with AI increase with the degree of automation and the intended use. For example, a higher standard of care applies if AI tools are used in relation to clients (eg in automated communication with clients, auto-responders, or chatbots for client intake) than if they merely support internal workflows.
Section 43a (2) BRAO codifies the prohibition on disclosing confidential information. Consequently, confidential client information must also be kept secret when using AI tools; it may only be disclosed to providers of AI tools under the strict conditions set out in Section 43e BRAO.
Section 43e BRAO serves as a special provision for maintaining attorney-client privilege when outsourcing services. Lawyers may only grant IT service providers, and thus also providers of AI solutions, access to confidential information if this is necessary for the use of the service. In the case of cloud services, it must be carefully examined whether providers actually need access to client information (‘need-to-know principle’). Where the service can be used without transmitting confidential client information, such transmission is not necessary (and disclosure would entail incalculable risks for clients). According to Section 43e (2) and (3) BRAO, the lawyer is obliged, after careful selection of the service provider, to conclude a contract with it in written form containing the minimum content specified in Section 43e (3) Nos. 1 to 3 BRAO. This includes an obligation of confidentiality, with instruction on the criminal consequences of a breach, and an obligation to adhere to the principle of purpose limitation when acquiring knowledge. If this is not (or no longer) guaranteed, the cooperation must be terminated immediately in accordance with Section 43e (2) sentence 2 BRAO.
Implications for the courts
With the increasing use of AI in legal practice, the question arises for the courts as to the extent to which judges can trust an AI mechanism in their evaluations.
In recent years, various automated programmes have already been tested and used in numerous German courts, primarily to provide support in mass proceedings.4 Examples include the OberLandesGerichts-Assistent (‘OLGA’), the Massenverfahrens-Assistenz durch Künstliche Intelligenz (‘MAKI’) and Codefy.5
When using such AI tools, constitutionally established frameworks in particular act as limits, ie the preservation of judicial independence enshrined in the German Constitution (Grundgesetz, GG) under Articles 97 and 92 GG, the right to a lawful judge under Article 101(1) sentence 2 GG, and the right to a fair hearing under Article 103 (1) GG.6
In view of the difficulties that arise when assessing the admissibility of AI tools, it seems sensible from the perspective of legal certainty for the legislator to include specific provisions on permissible areas of application in the Code of Civil Procedure, taking into account fundamental procedural rights.7
It would be fitting to apply the principles of expert evidence (‘Sachverständigengutachten’).8 The judge must be able to understand how the AI mechanism works in order to rely on its results without reviewing the documents themselves. The main problem is that neither the functioning nor the results of the relevant systems are comprehensible to users: the system works like a ‘black box’ that is fed with data and ultimately produces a result, without it being possible to see what happens in between. If the process cannot be comprehensively understood from the outside (‘black box AI’), but its use is nevertheless necessary, the expert's duties are limited to the design, control and comprehensible presentation of the process and its prerequisites and consequences, so that the court can fulfil its task of evaluating and classifying the expert opinion and, in this respect, assess the evidence.9 Conversely, the use of AI below this threshold is generally permissible, but may then require disclosure and transparency.10
It should also be in line with general quality standards to check results obtained with the help of AI systems for accuracy, or at least plausibility. Here, too, no universally applicable standards have yet been developed. But insofar as it is part of good scientific practice to explain how results were obtained and thus make them comprehensible, this should apply all the more to the use of AI, given the risk of ‘hallucination’ in large language models.11
Conclusion
Despite the existing challenges, AI offers considerable potential for processing tasks more quickly and effectively in the context of legal practice. Nevertheless, further developments are still needed, and professionals are always obliged to review and critically question the results produced by AI. In view of recent developments, it is to be expected that the initial problems with AI mentioned above will be resolved in the future and that risks will be further minimised as developments in the field of artificial intelligence progress.
Notes
1. Francken/Wörl, Artificial Intelligence in Labour Court Proceedings of the Future, NZA 2026, 1 (1).
2. Kania, The Use of AI in Legal Practice from the Perspective of the Judiciary, Dt. Anwaltsblatt, 27 November 2024.
3. https://www.brak.de/fileadmin/service/publikationen/Handlungshinweise/BRAK_Leitfaden_mit_Hinweisen_zum_KI-Einsatz_Stand_12_2024.pdf.
4. Vanetta/Vogt, Artificial Intelligence in Civil Justice – Perspectives and Challenges, Der Betrieb 2025 (Issue 35), 2148 (2148).
5. Mielke, Artificial Intelligence in the Judiciary – an Update, Legal-Tech.de magazine, 11 March 2025.
6. Vanetta/Vogt, Artificial Intelligence in Civil Justice – Perspectives and Challenges, Der Betrieb 2025 (Issue 35), 2148 (2149); cf. Maddaloni, Argument Mining in Civil Proceedings: Technical Potential and Regulatory Action Required for Artificial Intelligence in the Judiciary, LTZ 2025, 309 (310).
7. Vanetta/Vogt, Artificial Intelligence in Civil Justice – Perspectives and Challenges, Der Betrieb 2025 (Issue 35), 2148 (2151).
8. See Huber/Giesecke, in: Ebers/Heinze/Krügel/Steinrötter, Artificial Intelligence and Robotics, 1st ed. 2020, § 19 AI in civil proceedings, para. 45.
9. Adelberger/Franke, RDi 2025, 557 (561 ff.).
10. Thönissen/Scheuch, in: Vorwerk/Wolf, BeckOK ZPO, 59th ed. 01.12.2025, § 407a para. 13b.
11. Schaub, Use of Artificial Intelligence in the Preparation of Expert Reports, DS 2025, 38 (41).