Risk prediction models have been widely applied to predict the long-term incidence of disease. Several parameters have been identified, and estimators developed, to quantify the predictive ability of such models and to compare new models with traditional ones. These estimators have generally not accounted for the censoring present in the survival data normally used to fit the models. This paper remedies that problem. The primary parameters considered are the net reclassification improvement (NRI) and the integrated discrimination improvement (IDI). In previous work we similarly treated a primary measure of concordance, the area under the ROC curve (AUC), also called the c-statistic. We also consider here the population attributable risk (PAR) and the ratio of predicted risk in the top quintile to that in the bottom quintile. We evaluated estimators of these parameters both in simulation studies and in application to a prospective study of coronary heart disease (CHD). Our simulation studies showed that our estimators generally had little bias, and had smaller bias and variance than the traditional estimators. We applied our methods to assess the improvement in risk prediction from each traditional CHD risk factor compared with a model omitting that factor. These traditional risk factors are considered valuable, yet when any one of them is added to a model that omits it, the improvement is generally small for all of the parameters. This experience should prepare us not to expect large values of these risk prediction improvement parameters for any newly discovered risk factor.
Copyright © 2010 John Wiley & Sons, Ltd.
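To fix ideas, the standard (uncensored) definitions of the two primary parameters can be sketched as follows. This is a hypothetical illustration of the classical NRI and IDI formulas on complete data; the paper's contribution is estimators that account for censoring, which this toy sketch does not implement, and the risk cutoffs used here are an arbitrary assumed choice.

```python
def nri(old_risk, new_risk, events, cutoffs=(0.1, 0.2)):
    """Categorical net reclassification improvement.

    old_risk, new_risk: predicted risks under the old and new models.
    events: 1 if the subject experienced the event, else 0.
    cutoffs: risk-category boundaries (hypothetical values here).
    """
    def category(r):
        # Number of cutoffs at or below r gives the risk category.
        return sum(r >= c for c in cutoffs)

    up_e = down_e = up_n = down_n = n_e = n_n = 0
    for o, n, e in zip(old_risk, new_risk, events):
        move = category(n) - category(o)
        if e:
            n_e += 1
            up_e += move > 0    # events reclassified upward (good)
            down_e += move < 0  # events reclassified downward (bad)
        else:
            n_n += 1
            up_n += move > 0    # non-events reclassified upward (bad)
            down_n += move < 0  # non-events reclassified downward (good)
    # NRI = net upward movement among events
    #     + net downward movement among non-events.
    return (up_e - down_e) / n_e + (down_n - up_n) / n_n


def idi(old_risk, new_risk, events):
    """Integrated discrimination improvement: the mean rise in predicted
    risk (new minus old) among events, minus the same mean difference
    among non-events."""
    diff_e = [n - o for o, n, e in zip(old_risk, new_risk, events) if e]
    diff_n = [n - o for o, n, e in zip(old_risk, new_risk, events) if not e]
    return sum(diff_e) / len(diff_e) - sum(diff_n) / len(diff_n)
```

With censored follow-up, the event status of some subjects at the time horizon is unknown, so the simple averages above are biased; the estimators developed in the paper replace them with censoring-adjusted quantities.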