Criticality of predictors in multiple regression

Br J Math Stat Psychol. 2001 Nov;54(Pt 2):201-25. doi: 10.1348/000711001159483.

Abstract

A new method is proposed for comparing all predictors in a multiple regression model. The method generates a measure of predictor criticality, which is distinct from, and has several advantages over, traditional indices of predictor importance. Bootstrapping (resampling with replacement) is used to draw a large number of samples from a given data set containing one response variable and p predictors. For each sample, all 2^p - 1 subset regression models are fitted and the best subset model is selected. This yields the (multinomial) distribution of the probability that each of the 2^p - 1 subsets is 'the best' model for the data set. A predictor's criticality is defined as a function of the probabilities associated with the models that include that predictor: a predictor that appears in many highly probable models is critical to the identification of the best-fitting regression model and, therefore, to the prediction of the response variable. The procedure can be applied to fixed and random regression models and can use any measure of goodness of fit (e.g., adjusted R^2, C_p, AIC) to identify the best model. Several criticality measures can be defined by combining the probabilities of the best-fitting models in different ways, and asymptotic confidence intervals for each variable's criticality can be derived. The procedure is illustrated with several examples.
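The abstract specifies the resampling loop completely enough to sketch in code. Below is a minimal Python illustration; all names (criticality, adjusted_r2) and parameter choices are ours, not the authors', and the criticality measure used is the simplest one consistent with the abstract's definition: the bootstrap probability that the best subset model, chosen here by adjusted R^2, includes the predictor, i.e. the sum of the estimated probabilities of all best-fitting subsets containing it.

```python
from itertools import combinations

import numpy as np


def adjusted_r2(y, X):
    """Adjusted R^2 of an OLS fit of y on X (an intercept is added here)."""
    n, k = X.shape
    Xd = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
    return 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)


def criticality(y, X, n_boot=1000, seed=0):
    """Bootstrap criticality of each predictor: the estimated probability
    that the best-fitting subset model includes that predictor."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    # All 2^p - 1 non-empty predictor subsets, enumerated once.
    subsets = [list(c) for r in range(1, p + 1)
               for c in combinations(range(p), r)]
    counts = np.zeros(p)
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)      # resample rows with replacement
        yb, Xb = y[idx], X[idx]
        # Pick the subset maximizing the goodness-of-fit criterion.
        best = max(subsets, key=lambda s: adjusted_r2(yb, Xb[:, s]))
        counts[best] += 1                     # tally inclusion in the winner
    return counts / n_boot                    # criticality estimates


# Toy check (our own simulated data): y depends on the first two of
# three predictors, so their criticality estimates should be near 1.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
y = 2 * X[:, 0] + X[:, 1] + rng.normal(size=100)
print(criticality(y, X, n_boot=200))
```

Any other criterion the abstract mentions (C_p, AIC) could be swapped in for adjusted_r2, and other criticality measures follow by weighting or combining the tallied subset probabilities differently.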

MeSH terms

  • Humans
  • Models, Statistical*
  • Psychometrics*
  • Regression Analysis*