Using deep learning to predict outcomes of legal appeals better than human experts: A study with data from Brazilian federal courts

PLoS One. 2022 Jul 28;17(7):e0272287. doi: 10.1371/journal.pone.0272287. eCollection 2022.

Abstract

Legal scholars have long tried to predict the outcomes of trials. In recent years, researchers have harnessed advances in machine learning to predict the behavior of natural and social processes. At the same time, the Brazilian judiciary faces a daunting volume of new cases every year, creating a need to improve the throughput of the justice system. On those premises, we trained three deep learning architectures, ULMFiT, BERT, and Big Bird, on 612,961 Federal Small Claims Courts appeals within the Brazilian 5th Regional Federal Court to predict their outcomes. We compared the models' predictive performance against the predictions of 22 highly skilled experts. All models outperformed the human experts, with the best one achieving a Matthews Correlation Coefficient of 0.3688 compared to 0.1253 for the human experts. Our results demonstrate that natural language processing and machine learning techniques provide a promising approach for predicting legal outcomes. We also release the Brazilian Courts Appeal Dataset for the 5th Regional Federal Court (BrCAD-5), containing data from 765,602 appeals, to promote further developments in this area.
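The Matthews Correlation Coefficient used to compare models and experts is computed from the four confusion-matrix counts of a binary classifier. A minimal sketch follows; the counts shown are hypothetical and not taken from the study, which reports only the final scores (0.3688 and 0.1253).

```python
import math

def matthews_corrcoef(tp: int, tn: int, fp: int, fn: int) -> float:
    """MCC from confusion-matrix counts (true/false positives/negatives).

    Ranges from -1 (total disagreement) through 0 (chance-level)
    to +1 (perfect prediction).
    """
    numerator = tp * tn - fp * fn
    denominator = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # Convention: MCC is defined as 0 when any marginal sum is zero.
    return numerator / denominator if denominator else 0.0

# Hypothetical counts for a binary appeal-outcome predictor
print(round(matthews_corrcoef(tp=60, tn=55, fp=45, fn=40), 4))
```

Unlike raw accuracy, MCC stays near zero for a classifier that merely exploits class imbalance, which makes it a fair yardstick when one appeal outcome dominates the dataset.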

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Brazil
  • Deep Learning*
  • Humans
  • Law Enforcement
  • Natural Language Processing

Grants and funding

This study was funded by the National Council for Scientific and Technological Development (CNPq) through a scholarship to Elias Jacob de Menezes-Neto (302668/2020-9). The Brazilian Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) financed the fee to publish this article (finance code 001). Funders had no role in the study design, data collection and analysis, or the decision to prepare and publish the manuscript.