Background: Composite indices of healthcare performance aggregate underlying individual performance measures. They are increasingly being used to rank healthcare organizations. Although composite indicators present the "big picture" in a way that is easy to interpret, misleading conclusions may be drawn if attention is not paid to key methodological issues in their construction.
Objectives: We examine variability in performance measures in the context of the construction and use of composite measures. We illustrate how variability in the underlying data, and hence in the resulting composite, may undermine the robustness of performance measures in health care, and how variation in the methodological rules used to aggregate the individual indicators can have an important impact on composite scores.
Methods: We use data for 117 English acute hospitals to illustrate the generic methodological issues. The variance in performance measures is partitioned into "controllable" and "uncontrollable" elements. We create a composite index from the underlying performance indicators and use Monte Carlo simulations to examine the robustness of the composite.
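The Monte Carlo approach described above can be sketched as follows. This is an illustrative simulation only: the hospital scores, the number of indicators, the equal-weight aggregation, and the noise standard deviation are all assumptions for demonstration, not the study's actual data or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 117 hospitals measured on 5 standardized indicators.
# (The study's real indicators and error variances are not reproduced here.)
n_hospitals, n_indicators = 117, 5
scores = rng.normal(size=(n_hospitals, n_indicators))

# Equal weights are an assumption; the composite is a weighted sum.
weights = np.full(n_indicators, 1.0 / n_indicators)
composite = scores @ weights
baseline_rank = np.argsort(np.argsort(-composite))  # rank 0 = top of league table

# Monte Carlo: perturb each indicator with random ("uncontrollable") noise,
# rebuild the composite, and record how each hospital's rank varies.
n_sims, noise_sd = 1000, 0.5
ranks = np.empty((n_sims, n_hospitals), dtype=int)
for s in range(n_sims):
    noisy = scores + rng.normal(scale=noise_sd, size=scores.shape)
    ranks[s] = np.argsort(np.argsort(-(noisy @ weights)))

# Width of each hospital's 95% rank interval: a summary of rank uncertainty.
lo, hi = np.percentile(ranks, [2.5, 97.5], axis=0)
interval_width = hi - lo
```

Wide rank intervals under this kind of resimulation are what signal that a league-table position is not robust to chance variation.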
Results: Random variation beyond the control of organizations gives rise to considerable uncertainty in hospital scores. Composites are also sensitive to changes made to the weighting system and to the aggregation rules. Some hospitals can move almost half the length of the league table as a result of subtle changes.
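The sensitivity to the weighting system can be illustrated with a small sketch. The data, the weighted-sum aggregation rule, and the specific weight change below are hypothetical stand-ins, not the study's actual specification.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical standardized scores: 117 hospitals, 5 indicators.
scores = rng.normal(size=(117, 5))

def league_rank(weights):
    """Rank hospitals (0 = best) on a weighted-sum composite."""
    w = np.asarray(weights, dtype=float)
    composite = scores @ (w / w.sum())
    return np.argsort(np.argsort(-composite))

equal = league_rank([1, 1, 1, 1, 1])
# A seemingly subtle change: double the weight on a single indicator.
tilted = league_rank([2, 1, 1, 1, 1])

# Largest change in league-table position caused by the reweighting.
max_jump = int(np.abs(equal - tilted).max())
```

Comparing rankings under alternative weight vectors in this way shows how a modest reweighting can reorder a substantial part of the table.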
Conclusions: Great care is warranted in interpreting the results of composite performance measures. We offer suggestions for their future development.