Background: A valid and practical measure of comorbid illness burden in dialysis populations is greatly needed to enable unbiased comparisons of clinical outcomes. We compare the discriminatory accuracy of 1-year mortality predictions derived from four comorbidity instruments in a large, representative US dialysis population.
Methods: Comorbidity information was collected using the Index of Coexistent Diseases (ICED) in 1779 haemodialysis patients of a national dialysis provider between 1997 and 2000. Comorbidity was also scored according to the Charlson Comorbidity Index (CCI) and the Wright-Khan and Davies indices. Relationships of the instrument scores with 1-year mortality were assessed in separate logistic regression analyses. Discriminatory ability was compared using the area under the receiver-operating characteristic curve (AUC), based on the predictions of each regression model.
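As an illustrative sketch only (not the study's analysis code, and using invented data rather than patient records), the AUC comparison described above amounts to asking, for each model's predicted probabilities, how often a randomly chosen patient who died was assigned a higher predicted risk than a randomly chosen survivor. A minimal pure-Python version of that rank-based calculation:

```python
# Hypothetical illustration of the AUC (c-statistic) comparison described
# in Methods. Outcomes and predicted risks below are invented examples,
# not data from the study.

def auc(labels, probs):
    """Rank-based AUC: the probability that a randomly chosen positive
    case (death, label 1) receives a higher predicted risk than a
    randomly chosen negative case (survivor, label 0); ties count 0.5."""
    pos = [p for y, p in zip(labels, probs) if y == 1]
    neg = [p for y, p in zip(labels, probs) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Invented 1-year outcomes (1 = died) and predicted risks from two
# hypothetical comorbidity-based logistic models.
died    = [1, 0, 0, 1, 0, 1, 0, 0]
model_a = [0.8, 0.2, 0.4, 0.7, 0.1, 0.35, 0.3, 0.2]  # e.g. ICED-based
model_b = [0.5, 0.4, 0.6, 0.7, 0.2, 0.4, 0.3, 0.1]   # e.g. CCI-based

print(auc(died, model_a))
print(auc(died, model_b))
```

In practice such analyses are run with standard statistical software; the point here is only that the AUC has a direct interpretation as a pairwise discrimination probability, which is what makes it a natural metric for comparing the four instruments.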
Results: When mortality was predicted using comorbidity and age, the ICED discriminated better between survivors and those who died (AUC 0.72) than the CCI (0.67), Wright-Khan (0.68) and Davies (0.68) indices. Upon addition of race and serum albumin, the predictive accuracy of each model improved further (AUCs: ICED, 0.77; CCI, 0.75; Wright-Khan Index, 0.75; Davies Index, 0.74).
Conclusions: The ICED had greater discriminatory ability than the CCI, Davies and Wright-Khan indices when age and a comorbidity index alone were used to predict 1-year mortality; however, the differences among instruments diminished once serum albumin, race and the cause of ESRD were accounted for. None of the currently available comorbidity instruments tested in this study discriminated mortality outcomes particularly well, and assessing comorbidity with the ICED is considerably more time-consuming than with the other instruments. Identifying the key prognostic comorbid conditions and weighting them according to outcomes in a dialysis population should increase accuracy and, with restriction to a finite number of items, provide a practical means for widespread comorbidity assessment.