Merging public health and automated approaches to address online hate speech

AI Ethics. 2023 Apr 12:1-10. doi: 10.1007/s43681-023-00281-w. Online ahead of print.

Abstract

The COVID-19 pandemic sparked a rise in misinformation across various media sources, which contributed to the heightened severity of hate speech. The upsurge of hate speech online has translated devastatingly into real-life hate crimes, which rose by 32% in 2020 in the United States alone (U.S. Department of Justice 2022). In this paper, I explore the current effects of hate speech and argue that it should be widely recognized as a public health issue. I also discuss current artificial intelligence (AI) and machine learning (ML) strategies for mitigating hate speech, along with the ethical concerns raised by these technologies. Future considerations for improving AI/ML are also examined. Through analyzing these two contrasting methodologies (public health versus AI/ML), I argue that neither approach applied by itself is efficient or sustainable. Therefore, I propose a third approach that combines AI/ML with public health. Under this proposed approach, the reactive side of AI/ML and the preventative nature of public health measures are united to address hate speech effectively.

Keywords: Artificial intelligence; Cyberbullying; Hate crimes; Hate speech; Machine learning; Public health.