A recent publication by Jus Mundi and Stanford CodeX explores how artificial intelligence (AI) is already being applied in international arbitration, whether through the analysis of large datasets, the identification of patterns in decisions, or direct support in the management of complex cases.

According to the study, AI has the potential to profoundly transform arbitral practice, from precedent research to the drafting of procedural timetables, and its adoption is expected to make proceedings faster, less costly, and potentially more transparent.

But is the picture really that simple?

The authors warn of significant risks: automated analyses may be superficial, databases remain fragmented, and there are inevitable tensions between efficiency and the core pillars of arbitration, such as confidentiality, ethics, and the arbitrator’s authority. Moreover, the use of AI requires rigorous security and compliance protocols, as arbitration often involves highly sensitive information.

In this context, it is important to emphasize that the use of AI does not, and should not, eliminate the role of arbitrators and lawyers. Whatever its advantages, it remains for these professionals to critically assess its outputs, adapting them to the specificities of the case, the ethical principles of their profession, and the strategic nuances that only human judgment can capture.

The question that remains is: how far should we open the door to AI? Are we ready to entrust algorithms with analyses that require not only technical expertise, but also sensitivity and human judgment? Perhaps the challenge is not to reject technology, but to integrate it critically and responsibly, with clear standards of quality and ethics that preserve the essence of arbitral justice.

Access the White Paper: https://dailyjus.com/reports/2025/08/stanford-codex-x-jus-mundi-whitepaper