The COVID-19 pandemic sparked a rise in misinformation across media sources, which contributed to the heightened severity of hate speech. The surge of hate speech online has translated into real-life hate crimes, which rose by 32% in 2020 in the United States alone (U.S. Department of Justice 2022). In this paper, I explore the current effects of hate speech and argue that hate speech should be widely recognized as a public health issue. I also discuss current artificial intelligence (AI) and machine learning (ML) strategies for mitigating hate speech, along with the ethical concerns of using these technologies, and examine future considerations for improving AI/ML. Through analyzing these two contrasting methodologies (public health versus AI/ML), I argue that neither approach applied by itself is effective or sustainable. I therefore propose a third approach that combines AI/ML and public health measures, uniting the reactive capabilities of AI/ML with the preventative nature of public health to address hate speech effectively.
Keywords: Artificial intelligence; Cyberbullying; Hate crimes; Hate speech; Machine learning; Public health.
© The Author(s), under exclusive licence to Springer Nature Switzerland AG 2023. Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.