First principles when reviewing an AI-assisted judicial ruling in Mexico

Tuesday 14 April 2026

Galo Marquez
Tec de Monterrey; Creel, Garcia-Cuellar, Aiza y Enriquez

Marco Clavel
Universidad Iberoamericana; WhiteBox Project on AI Ethics & Governance

AI is breaking into courts around the world, and Mexico is no exception. On August 22, 2025, a Collegiate Circuit Court resolved a complaint in which, without a request from either party, it used language models such as ChatGPT, Grok, and Gemini to set the amount for a procedural financial security. The decision gave rise to the first thesis in our country establishing guidelines for the use of AI in the judicial function. Although the effort is undeniably pioneering, it also opens an urgent debate: what does it mean for Mexico's judiciary to begin self-regulating its interaction with algorithmic systems without a prior legal framework, without clear technical standards, and without robust methodological scrutiny?

This article examines that ruling to understand its implications on three levels: (1) the procedural context that gave rise to the ruling; (2) the soundness – or fragility – of the normative analysis on AI conducted by the Court; and (3) the technical and legal appropriateness of how AI was applied to the specific case. Our intention is to provide a balanced reflection: both authors have experience in the regulatory study of AI, but we reach divergent conclusions about how a Mexican judge should apply these principles in the absence of institutional parameters.

The complaint that gave rise to the first AI ruling

The appeal would have seemed relatively straightforward even if it had not been resolved through AI. It arose from an amparo lawsuit filed against a procedural omission in a usucapión (adverse possession) proceeding in the State of Mexico. The complainant alleged that it should have been called to participate in the proceedings because the properties in dispute formed part of its estate. To register the lawsuit with the Registry Function Institute, the trial judge required the complainant to post a bond whose amount was neither legally grounded nor reasoned. The other party challenged this omission, and the matter came before the Collegiate Circuit Court solely to determine the appropriate amount of the bond. Up to that point, there was nothing novel in procedural or substantive terms.

Subsequently, the Magistrate of the Tribunal decided, without consulting the parties, to use AI to determine the amount. He did so on the premise that judges may use technological tools to improve the accuracy of their decisions. His methodology consisted of three steps:

1.    Conducting a brief comparative law exercise on the international regulation of AI.
2.    Establishing a technical procedure based on large language models (LLMs).
3.    Requesting that ChatGPT, Grok, and Gemini calculate the bond amount following a formula based on Supreme Court precedents.

The three systems produced different figures: MXN $60,081.00, $64,655.34, and $59,864.98. Rather than examining why they differed or verifying whether they had relied on official sources, the Magistrate averaged the figures and set the bond at $60,000. While the decision was transparent, it was also technically inconsistent.
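The arithmetic is easy to verify. A minimal check, using the three figures exactly as reported in the ruling, shows that their simple average is roughly MXN $61,533.77 – more than $1,500 above the $60,000 the Tribunal ultimately set – and that the models disagree with one another by nearly $4,800:

```python
from statistics import fmean

# Figures reported in the ruling (MXN), one per model
figures = {
    "ChatGPT": 60_081.00,
    "Grok": 64_655.34,
    "Gemini": 59_864.98,
}

mean = fmean(figures.values())                      # simple arithmetic average
spread = max(figures.values()) - min(figures.values())  # gap between extremes

print(f"average: {mean:,.2f}")    # average: 61,533.77
print(f"spread:  {spread:,.2f}")  # spread:  4,790.36
```

A spread of almost $4,800 between systems given the same instruction is precisely the kind of divergence the ruling leaves unexamined, and the final figure of $60,000 is not the average of the three outputs but a discretionary rounding below it.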

This ruling gave rise to a thesis (a non-binding precedent) establishing guiding principles for the judicial use of AI: proportionality, transparency, data protection, applicability, and human oversight. This case constitutes Mexico's first precedent on artificial intelligence in the judiciary.

Can a judge self-regulate the use of AI?

The first fundamental question is not technical but one of public policy: is it the judge's role – rather than the Judiciary as a whole, or the legislature – to issue rules on the use of AI in Mexico?

Mexico, a civil law jurisdiction, currently lacks a general law on artificial intelligence. There is no regulation on the use of AI in jurisdictional functions, nor a minimum standard of professional practice binding judges and lawyers regarding algorithmic tools. Judicial self-regulation, as occurred in this case, arises in a normative vacuum.

Those who defend the decision argue that the Judiciary may issue internal rules through general agreements of the Federal Judiciary Council (CJF), based on Articles 100 of the Federal Constitution and 81–82 of the Organic Law of the Federal Judiciary. If the use of AI is an administrative-jurisdictional support tool, the CJF could regulate it, and a court interpreting it could develop a precedent. From this perspective, allowing the Tribunal to establish criteria does not invade legislative powers but rather fills a gap to resolve a specific case.

However, this position has limits. The principle of iura novit curia authorizes judges to apply the law even if the parties do not invoke it, but it does not grant powers to create a de facto regulatory regime for algorithmic systems. The Tribunal's thesis is not limited to the specific case, since it formulates general guidelines on AI that go beyond the strictly procedural scope.

The question, then, is whether this type of self-regulation could compromise the institutional legitimacy of the Judiciary by legislating through the facts.

The Tribunal’s normative analysis

In its ruling, the Tribunal grounded the validity of AI use in international instruments such as the UNESCO Recommendation on the Ethics of Artificial Intelligence and the Ethical Guidelines of the European Commission. Although citing these documents is a valuable effort, their use may present two problems.

First, these are non-binding instruments designed for institutional contexts very different from Mexico's. The ruling mentions the principles but does not explain how they translate to the Mexican judicial process or how they restrict or enable the concrete uses of AI.

Second, they do not reflect the Latin American institutional ecosystem or regional efforts in this area. National regulators in Mexico (on data protection and telecommunications), as well as the OECD and the OAS in Latin America, have issued specific guidelines on algorithmic governance, data protection, transparency, and automated systems in public functions. There are also specialized reports, such as The Strategic and Responsible Use of AI in the Public Sector in Latin America and the Caribbean, that offer frameworks relevant to the region.

In other words, the Tribunal resorted to international sources but left out those generated internally in the country and regional criteria that may be more relevant to the reality of the Mexican system.

How AI was applied to the specific case

The central question is whether, beyond the normative framework, the use of AI in the specific case was appropriate. Here, both authors maintain a divergence.

For one of us, the use of AI was justifiable: the quantification of the bond could benefit from tools that systematize financial, jurisprudential, or statistical information. Nothing prevents a judge from using an advanced calculator, a spreadsheet, or a language model as technical assistance, as long as they maintain control of the decision.

For the other author, the problem is not that AI was used, but how it was used. Setting a bond involves elementary arithmetic calculations. Delegating that function to a language model may lack technical justification.

Even more concerning is that the three models produced different figures even though the Magistrate instructed them to use official data from the Bank of Mexico. That inconsistency should have raised methodological alarms. Unfortunately, the ruling does not explain:

•    what variables the systems used, 
•    why their results differ, 
•    whether any used inaccurate or outdated information, 
•    whether the legal formula was correctly interpreted by each model.

The decision to average the results, and additionally round down, has no legal or statistical basis. An adjudicatory function that requires rational justification ended up depending on an improvised calculation. Mexico's first judicial AI thesis cannot be built on such fragile methodological foundations.

Conclusion

The use of AI in Mexico's judiciary is inevitable. No modern judicial system can ignore the reality that algorithmic tools allow processing large volumes of information, identifying patterns, accelerating workloads, and, if used correctly, making justice more accessible. But technological enthusiasm is no substitute for appropriate legal reasoning.

Mexico's judiciary must assume institutional, not individual, leadership to regulate and supervise the use of AI in judicial proceedings. That task should not fall on a single judge, nor on an isolated thesis, but on a collective effort that combines technique, comparative law, data science, and constitutional guarantees.