Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead
- PMID: 35603010
- PMCID: PMC9122117
- DOI: 10.1038/s42256-019-0048-x
Abstract
Black box machine learning models are currently being used for high-stakes decision-making throughout society, causing problems in healthcare, criminal justice, and other domains. People have hoped that creating methods for explaining these black box models will alleviate some of these problems, but trying to explain black box models, rather than creating models that are interpretable in the first place, is likely to perpetuate bad practices and can potentially cause catastrophic harm to society. There is a way forward: design models that are inherently interpretable. This manuscript clarifies the chasm between explaining black boxes and using inherently interpretable models, outlines several key reasons why explainable black boxes should be avoided in high-stakes decisions, identifies challenges to interpretable machine learning, and provides several example applications where interpretable models could potentially replace black box models in criminal justice, healthcare, and computer vision.
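The contrast the abstract draws can be made concrete in code. The sketch below (not from the paper; it uses scikit-learn on synthetic data) trains an opaque ensemble, which would require a separate post-hoc explainer to justify its decisions, alongside a sparse linear model whose coefficients are themselves the model and need no approximation to be read.

```python
# Minimal sketch (assumption: scikit-learn available; data is synthetic,
# standing in for a tabular high-stakes task such as risk scoring).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, n_informative=4,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Black-box route: an opaque model whose reasoning would have to be
# approximated after the fact by an explanation method.
black_box = RandomForestClassifier(n_estimators=200, random_state=0)
black_box.fit(X_train, y_train)

# Interpretable route: an L1-penalized logistic regression; the handful of
# nonzero coefficients can be inspected and audited directly.
interpretable = LogisticRegression(penalty="l1", C=0.1, solver="liblinear")
interpretable.fit(X_train, y_train)

print("black box accuracy:    ", black_box.score(X_test, y_test))
print("interpretable accuracy:", interpretable.score(X_test, y_test))
print("sparse coefficients:   ", np.round(interpretable.coef_, 2))
```

On many tabular problems of this kind the accuracy gap between the two routes is small, which is part of the paper's argument for preferring the inherently interpretable model in high-stakes settings.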
Similar articles
- Opening the Black Box: The Promise and Limitations of Explainable Machine Learning in Cardiology. Can J Cardiol. 2022 Feb;38(2):204-213. doi: 10.1016/j.cjca.2021.09.004. Epub 2021 Sep 14. PMID: 34534619. Review.
- Explainable, trustworthy, and ethical machine learning for healthcare: A survey. Comput Biol Med. 2022 Oct;149:106043. doi: 10.1016/j.compbiomed.2022.106043. Epub 2022 Sep 7. PMID: 36115302. Review.
- Interpretable machine learning models for hospital readmission prediction: a two-step extracted regression tree approach. BMC Med Inform Decis Mak. 2023 Jun 5;23(1):104. doi: 10.1186/s12911-023-02193-5. PMID: 37277767. Free PMC article.
- Explainable Machine Learning Framework for Image Classification Problems: Case Study on Glioma Cancer Prediction. J Imaging. 2020 May 28;6(6):37. doi: 10.3390/jimaging6060037. PMID: 34460583. Free PMC article.
- Open your black box classifier. Healthc Technol Lett. 2023 Aug 29;11(4):210-212. doi: 10.1049/htl2.12050. eCollection 2024 Aug. PMID: 39100500. Free PMC article.
Cited by
- Early Detection of Sepsis With Machine Learning Techniques: A Brief Clinical Perspective. Front Med (Lausanne). 2021 Feb 12;8:617486. doi: 10.3389/fmed.2021.617486. eCollection 2021. PMID: 33644097. Free PMC article.
- Machine learning and artificial intelligence in physiologically based pharmacokinetic modeling. Toxicol Sci. 2023 Jan 31;191(1):1-14. doi: 10.1093/toxsci/kfac101. PMID: 36156156. Free PMC article.
- Forecasting Key Retail Performance Indicators Using Interpretable Regression. Sensors (Basel). 2021 Mar 8;21(5):1874. doi: 10.3390/s21051874. PMID: 33800166. Free PMC article.
- Transfer learning guided discovery of efficient perovskite oxide for alkaline water oxidation. Nat Commun. 2024 Jul 26;15(1):6301. doi: 10.1038/s41467-024-50605-5. PMID: 39060252. Free PMC article.
- Enhancing Autonomous Vehicle Decision-Making at Intersections in Mixed-Autonomy Traffic: A Comparative Study Using an Explainable Classifier. Sensors (Basel). 2024 Jun 14;24(12):3859. doi: 10.3390/s24123859. PMID: 38931644. Free PMC article.