Tackling prediction uncertainty in machine learning for healthcare

Nat Biomed Eng. 2023 Jun;7(6):711-718. doi: 10.1038/s41551-022-00988-x. Epub 2022 Dec 29.

Abstract

Predictive machine-learning systems often do not convey the degree of confidence in the correctness of their outputs. To prevent unsafe prediction failures from machine-learning models, users of such systems should be aware of the general accuracy of the model and understand the degree of confidence in each individual prediction. In this Perspective, we convey the need for prediction-uncertainty metrics in healthcare applications, with a focus on radiology. We outline the sources of prediction uncertainty, discuss how prediction-uncertainty metrics can be implemented in applications that require zero tolerance for errors as well as in applications that are error-tolerant, and provide a concise framework for understanding prediction uncertainty in healthcare contexts. For machine-learning-enabled automation to substantially impact healthcare, machine-learning models with zero tolerance for false-positive or false-negative errors must be developed intentionally.
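To make the idea of a prediction-uncertainty metric concrete, the following is a minimal illustrative sketch (not taken from the article): it scores a classifier's predicted class distribution with Shannon entropy and applies a hypothetical deferral rule, acting on confident predictions and deferring uncertain ones to a clinician. The function names and the `threshold` operating point are assumptions for illustration only.

```python
import math

def predictive_entropy(probs):
    """Shannon entropy (in nats) of a predicted class distribution.
    Higher entropy means a less confident prediction."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def triage(probs, threshold=0.2):
    """Illustrative deferral rule (hypothetical, not from the article):
    act on the prediction only when its entropy is below `threshold`;
    otherwise defer the case to a human expert."""
    uncertainty = predictive_entropy(probs)
    label = max(range(len(probs)), key=lambda i: probs[i])
    if uncertainty < threshold:
        return ("predict", label)
    return ("defer", None)

# A confident prediction is acted on; an ambiguous one is deferred.
print(triage([0.98, 0.01, 0.01]))  # low entropy -> ("predict", 0)
print(triage([0.40, 0.35, 0.25]))  # high entropy -> ("defer", None)
```

In an error-tolerant application the threshold could be set permissively; in a zero-tolerance setting it would be tightened so that only near-certain predictions are automated and all other cases are deferred.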

Publication types

  • Review

MeSH terms

  • Machine Learning*
  • Uncertainty