Deriving the expected utility of a predictive model when the utilities are uncertain

AMIA Annu Symp Proc. 2005;2005:161-5.


Predictive models are often constructed from clinical databases with the goal of eventually helping make better clinical decisions. Evaluating models using decision theory is therefore natural. When constructing a model using statistical and machine learning methods, however, we are often uncertain about precisely how the model will be used. Thus, decision-independent measures of classification performance, such as the area under an ROC curve, are popular. As a complementary method of evaluation, we investigate techniques for deriving the expected utility of a model under uncertainty about the model's utilities. We demonstrate an example of the application of this approach to the evaluation of two models that diagnose coronary artery disease.
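The abstract's core idea — averaging a model's expected utility over a distribution of possible utilities rather than fixing them — can be sketched with a small Monte Carlo example. Everything below is an illustration, not the paper's actual method: the utility priors, operating points, and prevalence are hypothetical, and the paper may derive the expectation analytically rather than by sampling.

```python
import random

def expected_utility(sens, spec, prevalence, u_tp, u_fp, u_tn, u_fn):
    """Expected utility of a diagnostic model at a fixed operating point,
    given specific utilities for the four classification outcomes."""
    p = prevalence
    return (p * sens * u_tp                 # true positives
            + p * (1 - sens) * u_fn         # false negatives
            + (1 - p) * spec * u_tn         # true negatives
            + (1 - p) * (1 - spec) * u_fp)  # false positives

def mean_utility_under_uncertainty(sens, spec, prevalence,
                                   n_samples=10_000, seed=0):
    """Average the expected utility over samples drawn from (assumed)
    priors on the outcome utilities, reflecting uncertainty about how
    the model will be used."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        # Hypothetical uniform priors on a 0-1 utility scale.
        u_tp = rng.uniform(0.7, 1.0)  # disease correctly treated
        u_tn = rng.uniform(0.9, 1.0)  # healthy patient correctly left alone
        u_fp = rng.uniform(0.4, 0.9)  # unnecessary workup/treatment
        u_fn = rng.uniform(0.0, 0.3)  # missed disease
        total += expected_utility(sens, spec, prevalence,
                                  u_tp, u_fp, u_tn, u_fn)
    return total / n_samples

# Compare two hypothetical coronary artery disease models at their
# operating points; the more useful model depends on the utility priors.
model_a = mean_utility_under_uncertainty(sens=0.85, spec=0.75, prevalence=0.3)
model_b = mean_utility_under_uncertainty(sens=0.70, spec=0.90, prevalence=0.3)
```

Unlike a decision-independent measure such as ROC area, this comparison can rank the two models differently as the assumed utility priors change, which is exactly the sensitivity the paper's approach is meant to expose.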

Publication types

  • Research Support, N.I.H., Extramural
  • Research Support, U.S. Gov't, Non-P.H.S.

MeSH terms

  • Artificial Intelligence
  • Bayes Theorem
  • Decision Making
  • Decision Support Techniques*
  • Decision Trees
  • Evaluation Studies as Topic
  • Models, Statistical*
  • ROC Curve