
    How Artificial Intelligence is Transforming Judicial Processes and the Need for New Principles

    The use of Artificial Intelligence (AI) by judges in judicial processes is already a reality. For instance, ChatGPT is used to facilitate the drafting of resolutions, and jurisdictions such as Brazil and Argentina are incorporating AIs designed specifically to assist judges. This use of AI in judicial procedures has great potential to shorten the time it takes to resolve cases, allowing judges to use their time more efficiently, whether by supporting the analysis of documentation or the drafting of resolutions. However, it also brings new challenges and paradigm shifts, given the profound implications it has for the rights of those who use the justice system.

    As Cristian León Coronado aptly states: “The accelerated and intensive use of artificial intelligence (AI) technologies, especially in Latin American countries where there are insufficient safeguards, carries serious risks with multiple impacts on human rights. The application of AI without control, without critical perspectives, nor understanding of its scope, shows the rush to see ‘who gets there first’ in leveraging technology” (1).

    From the perspective of the judiciary, the use of AI in judicial processes, in my view, generates three major issues: i) the need to provide transparency to justice users about which procedural acts were carried out partially or entirely by AI, ii) the crisis of legitimacy that comes with procedural acts being completely managed and resolved by AI, and iii) the challenge to the duty of grounding judicial resolutions when they are produced partially or entirely by AI.

    The need to provide transparency to justice users regarding procedural acts involving AI arises, first of all, from the need to avoid reverting to medieval processes where opacity was the norm, and the parties involved only received a sentence without any understanding of the legal and investigative machinery behind it. Ensuring that individuals subjected to a judicial process can easily understand what happens in said process is one of the pillars for guaranteeing respect for fundamental rights and preventing arbitrary use of power. This duty of transparency has so far meant that the judge had to justify their decisions by grounding their reasoning and interpretations and that the process had to maintain a clear structure to guarantee the exercise of defense. However, the advent of AI is changing this.

    The need for transparency in the use of AI in judicial processes stems from the fact that AI is fundamentally different from human actions. Therefore, for individuals involved in a process to exercise their right to defense optimally, they must be clear about when and how AI intervened in a resolution.

    The second issue is the legitimacy crisis derived from an AI completely carrying out a procedural act, which raises a philosophical facet: What gives an AI the legitimacy to judge a human being? Judges are vested with the authority to judge the actions of others. This investiture is backed by the fact that they are legal professionals, meet the aptitude and knowledge criteria designated by the judiciary, and are neutral individuals who use their social and historical understanding to contextualize the disputes and processes they must resolve. Additionally, the judge is also a person, which means that access to justice is intermediated by another human being, ensuring a minimum of social trust to delegate intervention in one’s own problems.

    When an AI performs procedural acts, the question arises of its social legitimacy to do so. I am not referring to legal legitimacy, since a legal and constitutional reform would suffice for it to hold the powers of a judge, but to a meta-legal legitimacy: the social legitimacy of this dehumanization of justice. Jurisdiction would no longer depend on another human being. It could be argued that this crisis arises only if the AI handles substantive resolutions, and that if it is reserved for mere procedural orders, these doubts about legitimacy are avoided while processes become more efficient. To some extent I agree that the legitimacy crisis is minimized, but it does not disappear. As any litigation practitioner knows, the final outcome of a process largely depends on those mere procedural orders; every fork in the road of a process can have a critical impact on the final judgment. Moreover, it implies that a stage of the process is outside human control, with AI effectively controlling the future of the parties involved.

    The third issue pertains to the challenge of grounding judicial resolutions when they are produced partially or entirely by Artificial Intelligence. The duty to ground judicial resolutions is now a basic standard, but it took centuries of struggles to achieve this requirement, which aims to check the arbitrary use of the power conferred on judges. Arbitrary use of the power to judge can be done by both an Artificial Intelligence and a human being, but it is much more complex to confront such arbitrariness with AI.

    When facing a resolution one considers inadequately grounded by a judge, one can appeal by questioning the reasoning used, either at the same instance or at a higher level, and that reasoning will be examined to determine whether it meets the principles of sound rational criticism. There is a basic parity with the judge whose reasoning is questioned: they are a person like the parties themselves, so we largely understand how they think. With an Artificial Intelligence, however, whose neural networks are increasingly complex, the possibility of questioning the reasoning behind a resolution runs into a new barrier: understanding how the system is programmed, which paths in the neural network were activated, and why.

    The average person already had a significant difficulty understanding the complexities and legal language involved in a process, and now adding the need for understanding programming and computer science can create a judicial process so opaque and confusing that it would make Kafka’s “The Trial” seem like a utopia rather than a warning.

    Given these issues, our current design of processes does not have clear or effective tools to address the inclusion of Artificial Intelligence by the judiciary. In this scenario, I propose the inclusion of a new principle of procedural law to confront the new reality we face as legal operators. The principles of procedural law must permeate all actions and stages of a judicial procedure, being the starting point for how processes should be conducted. Including a new principle would more easily assimilate the changes that will have to be made in the structure of these procedures.

    Why do I consider a principle of procedural law the best way to address the challenges outlined in the previous section? This becomes clear by understanding what these principles are. Sergio Artavia Barrantes and Carlos Picado Vargas develop it exceptionally well: “Procedural principles are those maximum premises or fundamental ideas that serve as the backbone of all institutions of procedural law. They are axiological sources that become formal sources in the absence of a norm – art. 5 LOPJ-. Therefore, through the principles, the fundamental guiding lines are drawn that must be respected for the procedural system to function coherently with human rights and the principle of procedural legality. If what is desired is to regulate the way the process should unfold (due process), understood as a peaceful method of dialectical debate between two antagonists on equal terms before a third party who will heterocompose (sic) the dispute, formulating the necessary principles to achieve this implies nothing less than drawing the fundamental guiding lines that must be indispensably respected to achieve the minimum coherence that any system implies. That is, procedural principles demarcate the indispensable conditions – sine qua non – by which due process is governed.” (2).

    The procedural law principle I propose is called the “Principle of Humanity,” and it aims to address the three issues previously mentioned. The inclusion of this principle would imply considerable changes in how processes are carried out, similar to when the principle of orality was introduced, which involved an adaptation period but significantly improved the efficiency and transparency of judicial acts. At this point, I believe that the evolution of procedural law in the face of technological advances is urgent and critical, as failing to adapt could lead to abuses against justice users, potentially endangering the social fabric in the long term. This urgency to intervene in procedural law is well expressed by Fernando Martín Diz: “Today, the irruption of artificial intelligence in the legal world is unstoppable. And in the specific field of procedural law, perhaps due to its significant repercussions on fundamental rights, there is still much to do and develop for an optimal and guaranteed utilization of the options and possibilities that artificial intelligence offers. We see certain advances (tools for case law research, document and case preparation, and processing) as lights. We perceive gaps, shadows regarding adequate implementation that guarantees full respect for fundamental procedural rights, and there is ample room for speculation and conjecture, especially in determining in the short and medium term whether legal artificial intelligence should be implemented with assistive or auxiliary functions for those tasked with administering justice or whether it could even assume full decision-making functions in the context of judicial or extrajudicial dispute resolution.” (3).

    The Principle of Humanity would encompass three general rules that should apply to any type of judicial procedure in which artificial intelligence is used by the judge or as a replacement for the judge. These rules are: i) It must be explicitly stated if AI was used in a particular act and, if so, how it was used and what its scope was; ii) in cases where an AI performs a judicial act entirely, even if it is a procedural act, the parties can request that a human review what the AI did; and iii) when AI is used in a substantive resolution, the reasoning must include a satisfactory explanation of how the information was processed to reach the conclusion.

    The first rule, requiring an explicit mention of whether AI was used in any aspect of the resolution or order, would provide security and transparency to the parties on how to analyze each resolution. Moreover, I believe such transparency would help maintain the social legitimacy discussed earlier. This duty would not be limited to a vague mention of AI use; it should state explicitly and concretely which parts were handled by AI and what information was provided to it.
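    A disclosure of this kind could be modeled as structured data attached to each resolution. The following is a minimal sketch in Python of what such a record might contain; every field name and value here is hypothetical, not drawn from any existing court system:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AIDisclosure:
    """Hypothetical record of AI involvement in a single judicial act (rule i)."""
    act_id: str                                            # identifier of the resolution or order
    ai_used: bool                                          # was AI used at all?
    tasks: List[str] = field(default_factory=list)         # which parts AI handled
    inputs_provided: List[str] = field(default_factory=list)  # what information was given to it

    def summary(self) -> str:
        # A concrete, human-readable statement of AI involvement, as the rule demands.
        if not self.ai_used:
            return f"Act {self.act_id}: no AI involvement."
        return (f"Act {self.act_id}: AI used for {', '.join(self.tasks)}; "
                f"inputs provided: {', '.join(self.inputs_provided)}.")

d = AIDisclosure("RES-2024-001", True,
                 tasks=["draft of factual summary"],
                 inputs_provided=["case file", "party briefs"])
print(d.summary())
```

    The point of the sketch is only that the disclosure is explicit and itemized, rather than a vague note that "AI was used."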

    The second rule would create an automatic appealability for judicial acts issued by AI. This possibility of appealing such acts would cover cases where normally the act would not have any type of appeal, as often happens with procedural acts. In cases where there is already the possibility of appealing the act, the appeal must be resolved by a human and not by AI. This rule also implies that AI cannot be the last instance in any hierarchical appeal process. In this way, we ensure that there are no spaces where a human does not have effective control over the judicial procedure and also avoid a social legitimacy crisis by granting so much power to AI. It could be argued that increasing the number of appeals that the parties can file would significantly slow down the process, but I believe this would be compensated by the speed that AI will bring to the processes. Moreover, celerity is not an end in itself, and it should not sacrifice the right to defense and the quality of the resolutions issued.
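    The logic of this second rule reduces to two simple conditions. The sketch below, in Python with invented names, is only an illustration of the rule's structure, not a proposal for actual court software:

```python
def can_appeal(issued_by_ai: bool, act_already_appealable: bool) -> bool:
    """Rule ii: AI authorship alone creates a right of appeal,
    even for acts that would ordinarily have none."""
    return issued_by_ai or act_already_appealable

def valid_appeal_chain(chain: list) -> bool:
    """An appeal chain is valid only if its final instance is human:
    AI can never be the last word."""
    return len(chain) > 0 and chain[-1] == "human"

# A procedural order with no ordinary appeal becomes appealable once AI issues it.
print(can_appeal(issued_by_ai=True, act_already_appealable=False))  # True
print(valid_appeal_chain(["ai", "human"]))                          # True
print(valid_appeal_chain(["human", "ai"]))                          # False: AI as last instance
```

    Framed this way, the rule guarantees that every AI-issued act has at least one human checkpoint above it.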

    The third rule obliges that, when AI is used in a substantive resolution, the reasoning include a satisfactory explanation of how the information was processed to reach the conclusion. This would, in my opinion, solve two problems: one practical and one philosophical. The practical problem is that it would allow the parties to question and analyze resolutions more deeply, because the analysis of a resolution issued by a human differs greatly from one issued by AI. In a human-issued resolution, from a litigator’s point of view, one must analyze the normative interpretation made, how the existing evidence was weighed, and generally how the judge’s considerations were woven together from the elements at their disposal. With AI, this same analysis must be done with an extra layer of complexity: understanding how the system processes information, for example, how its neural network ranks pieces of evidence when they contradict one another. Providing this explanation would allow the parties to understand the new complexities they face and thus exercise their right to defense more effectively. Without a clear explanation and understanding of the AI’s processing, any attempt to question the issued resolution would be extremely difficult, leading to a considerable deterioration in the right of access to justice.
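    To make concrete what an inspectable explanation might look like, here is a deliberately transparent toy model in Python: each piece of evidence carries an explicit weight (contradictory items with opposite signs), and the explanation lists every contribution. All names and weights are hypothetical, and real neural networks are far more opaque than this, which is precisely the problem the rule targets:

```python
# Hypothetical evidence items with assigned weights; the contradictory
# testimony carries the opposite sign of the testimony it disputes.
evidence = {
    "witness testimony A": 0.40,
    "expert report":       0.35,
    "witness testimony B": -0.25,  # contradicts testimony A
}

def explain(evidence: dict) -> list:
    """Return the per-item contributions and the aggregate score, so the
    path from inputs to conclusion is inspectable rather than opaque."""
    lines = [f"{name}: contribution {w:+.2f}" for name, w in evidence.items()]
    lines.append(f"aggregate score: {sum(evidence.values()):+.2f}")
    return lines

for line in explain(evidence):
    print(line)
```

    An explanation of this kind lets a party see exactly which element tipped the scale and contest that specific weighting on appeal.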

    This brings me to the philosophical aspect of this last rule: preventing judicial processes from becoming incomprehensible to the people they affect, and additionally preventing our rights as individuals from being, metaphorically speaking, in the hands of Artificial Intelligences that we do not understand and cannot explain. Although AI is a powerful tool, it is not infallible and can easily replicate social and racial biases, reinforce economic barriers, and amplify all the current societal issues. The lack of an explanation behind the AI’s information processing that produced the resolution would significantly impede detecting how it might be reinforcing the worst parts of the society that created it, all under the veil of a false sense of objectivity, making it even more difficult to critically examine the acts issued.

    These three rules that make up the procedural law principle of Humanity certainly require deeper conceptualization and, more importantly, careful implementation. However, I believe they offer a solid initial proposal for addressing the dynamics that AI will generate in judicial processes, seen from a litigator’s perspective. I designed this principle so that the three rules reinforce one another and remain simple, and therefore flexible enough to cover as many scenarios as possible.

    Procedural law is just one of countless areas experiencing a radical change due to AI, but it is a particularly sensitive area since the rights of all individuals depend on these judicial processes. The misuse of this technology would cause catastrophic damage and enable systemic violations of fundamental rights on scales we do not fully understand. For this reason, I believe there is a real urgency to modify our vision of judicial processes to ensure that the use of this technology will not lead to a significant regression in the right to defense and judicial transparency. Our interest in procedural speed and efficiency must not blind us to the existing risks, which must be addressed and resolved.


    Bibliography:

    1. León Coronado, C. (2023). “La carrera por la regulación de la inteligencia artificial.” Revista Latinoamericana de Economía y Sociedad Digital. Bolivia.
    2. Artavia Barrantes, S., y Picado Vargas, C. (2016). “Curso de Derecho Procesal Masterlex: Principios Procesales.” Punto Jurídico. Costa Rica.
    3. Martín Diz, F. Curso Superior en Derecho Inteligencia Artificial: Modulo Inteligencia Artificial y Derechos Fundamentales Procesales. DoinGlobal. España.
