Recent changes in individual units are often of interest when monitoring and assessing the performance of healthcare providers. We consider three high-profile examples: (a) annual teenage pregnancy rates in English local authorities, (b) quarterly rates of the hospital-acquired infection Clostridium difficile in National Health Service (NHS) Trusts and (c) annual mortality rates following heart surgery in New York State hospitals. Increasingly, government targets call for continual improvements, in each individual provider as well as overall. Owing to the well-known statistical phenomenon of regression to the mean, observed changes between just two measurements are potentially misleading. This problem has received much attention in other areas, but there is a need for guidelines within performance monitoring. In this paper we show, theoretically and with worked examples, that a simple random-effects predictive distribution can be used to 'correct' for the potentially undesirable consequences of regression to the mean in a test for individual change. We discuss connections to the literature in other fields and build upon it, in particular by examining the effect of the correction on the power to detect genuine changes. We demonstrate that a gain in average power can be expected, but that this gain is only very slight if the providers differ greatly from one another, for example because of poor risk adjustment. Further, the power of the corrected test depends on the provider's baseline rate: although large gains can be expected for some providers, this comes at the cost of some power to detect real changes in others.
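The idea behind the corrected test can be sketched under a simple normal random-effects model. This is an illustrative stand-in, not the paper's actual formulation: the function name, parameters and the assumption of known variances are all hypothetical. A provider's true rate theta is drawn from N(mu, tau2), and each observed rate adds measurement error with variance sigma2. Conditioning the prediction of the second measurement on a shrunken estimate of the first, rather than on the raw first measurement, removes the regression-to-the-mean artefact:

```python
import math
import random

def corrected_change_test(y1, y2, mu, tau2, sigma2, alpha=0.05):
    """Test for real change in one provider, correcting for regression
    to the mean under an assumed normal random-effects model:
        true rate  theta ~ N(mu, tau2)
        observed   y     = theta + N(0, sigma2)
    Given y1 and *no real change*, the predictive distribution of y2 has
        mean = B*mu + (1 - B)*y1,  with shrinkage weight B = sigma2/(sigma2 + tau2)
        var  = sigma2 + B*tau2     (posterior var of theta plus new error)
    Returns the z statistic and a two-sided rejection indicator.
    """
    B = sigma2 / (sigma2 + tau2)          # shrinkage toward the overall mean mu
    pred_mean = B * mu + (1 - B) * y1     # shrunken prediction for y2
    pred_var = sigma2 + B * tau2          # B*tau2 = posterior variance of theta
    z = (y2 - pred_mean) / math.sqrt(pred_var)
    zcrit = 1.959963984540054             # two-sided 5% point of N(0, 1)
    return z, abs(z) > zcrit

if __name__ == "__main__":
    # Simulated check: with no real change, the corrected test rejects
    # at roughly its nominal 5% level across providers.
    random.seed(1)
    mu, tau2, sigma2 = 10.0, 4.0, 1.0
    n, rejections = 20000, 0
    for _ in range(n):
        theta = random.gauss(mu, math.sqrt(tau2))
        y1 = theta + random.gauss(0.0, math.sqrt(sigma2))
        y2 = theta + random.gauss(0.0, math.sqrt(sigma2))  # no real change
        _, reject = corrected_change_test(y1, y2, mu, tau2, sigma2)
        rejections += reject
    print(f"empirical rejection rate: {rejections / n:.3f}")
```

Note how the naive comparison of y2 against y1 would, for a provider with an extreme first measurement, expect the second measurement to be equally extreme; the shrunken predictive mean instead pulls the expectation toward mu, which is the essence of the correction.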
(c) 2009 John Wiley & Sons, Ltd.