TMT focus
The COVID-19 pandemic has had a substantial effect on the use of technology in arbitration. Prior to the pandemic, hearings tended to be conducted in person at the seat of arbitration or another physical venue and frequently involved the production and transportation of voluminous hard-copy bundles of documents. Now, many hearings are conducted through video-conferencing administered by specialised transcription companies. Indeed, a 2021 Queen Mary survey found a dramatic increase in the use of virtual hearing rooms, with 72% of respondents indicating that they use them sometimes, frequently, or always. This is a stark contrast to the results of the 2018 survey, in which 64% of respondents reported that they had “never” used virtual hearing rooms.
However, this rapid evolution in the use of technology in arbitration has not extended to the same degree to artificial intelligence (AI). According to the 2021 Queen Mary survey, 35% of respondents indicated that they have never used AI, and just 15% of respondents indicated that they frequently or always use it. Among those respondents who do use AI, its use in arbitration was reported to be almost exclusively limited to technology-assisted document review.
This article explores other areas in which AI might be used in arbitration – namely, as a decision-maker or as a tool in the decision-making process – and why its use may not be growing as quickly as the use of other forms of technology in arbitration.
Discussions on the use of AI in arbitration inevitably lead to the question of whether AI will replace arbitrators. A 2013 report on the susceptibility of jobs to computerisation estimated that judges and magistrates face a 40.1% risk of having their jobs automated in the future. And a 2017 report by McKinsey suggested that 23% of legal work was already being automated. Today, AI adjudicators are being used in smart contracts and blockchain applications. In international commercial arbitration, however, AI arbitrators have yet to feature. This is not surprising. Some arbitration laws expressly provide that arbitrators must be natural persons. For example, the Uniform Arbitration Act (UAA) enacted by the Organisation for the Harmonisation of Business Law in Africa (OHADA) provides that “only a natural person may be an arbitrator”.
Moreover, the appointment of an AI arbitrator might still be a step too far for many parties in circumstances where party agreement to the composition of an arbitral tribunal is fundamental. An award may be unenforceable or annulled under the Convention on the Recognition and Enforcement of Foreign Arbitral Awards (New York Convention) or the Washington Convention on the Settlement of Investment Disputes between States and Nationals of Other States (ICSID Convention) if the composition of the tribunal is not in accordance with the parties’ agreement. Depending on the precise wording of the arbitration clause or treaty, as the case may be, it remains an open question whether parties may be deemed to have consented to the appointment of an AI arbitrator or arbitral tribunal.
...is it just a matter of time before an AI arbitrator becomes a reality?
Furthermore, issues of public policy remain to be considered, even if party agreement could be established. A reasoned decision is one of the fundamentals of arbitration. In its current form, AI does not have the capacity to provide adequate reasoning in a manner that emulates the human thought process. Reasoning and thinking involve consciousness, sentience, discernment, judgment, empathy, and intuition, all of which AI systems lack. An arbitral award typically must be substantiated by an explanation of the reasoning and the process by which the arbitrators arrived at their final decision. In contrast, AI-generated results are achieved through probabilistic inferences made from the available data. Accordingly, an AI program generally cannot explain the conclusions it reaches. It therefore cannot provide a reasoned award that offers the losing party an opportunity to understand why it lost, identifies the factors that led to a particular outcome, and enables parties (whether parties to the dispute or, if the award is published, third parties) in similar situations to adapt their behaviour in the future.
It is arguable, therefore, that to avoid the risk of non-enforceability, the permissibility of AI arbitrator appointments should be clarified.
If there are inherent difficulties in AI constituting the decision-maker, can AI still be used to assist tribunals with their decision-making functions?
Despite the inherent limitations of AI noted earlier, tribunals might use AI programs and other analytical algorithms to test the validity and soundness of their decisions. This approach was implemented in a judicial context in the United States in State v Loomis, 881 N.W.2d 749 (Wis. 2016), where the sentencing court relied in part on an AI-based risk assessment tool. In that case, risk assessment software known as “COMPAS” was used to determine the defendant’s risk of re-offending. The software works by using a proprietary algorithm that takes into account factors such as gender. A challenge to the use of the AI program was unsuccessful. The court found that other factors, including the defendant’s criminal record, were relied on as well, such that while the AI program was used as a tool in the decision-making process, it was not wholly determinative of the outcome.
However, this use of AI may not be as feasible in the context of international commercial arbitration. Data-driven AI programs require access to large quantities of data: the larger the sample data, the more accurate the program’s output. Accordingly, areas of law with a substantial body of public decisions are more suitable for AI. The broad spectrum of legal and factual issues at play in international commercial arbitration, combined with the confidentiality of many arbitral awards, will likely leave the data available for AI analysis inadequate in many (if not most) instances.
While it is likely to become technically possible in time for AI to function as an arbitrator, the legal framework will require adaptation to accommodate such a revolutionary step. AI is more likely to ‘augment’ decision-making in international commercial arbitration than to replace it entirely.
For further information, please contact Thomas R. Snider or John Gaffney.
Published in June 2022