Background: The importance and utility of routine, externally reported assessments of the quality of health care delivered in managed care organizations and hospitals have become widely accepted. Because externally reported measures of quality are intended to inform or lead to action, those who propose such measures have a responsibility to ensure that their results are meaningful, scientifically sound, and interpretable.
Criteria for selecting meaningful assessment areas: A clinical performance measure chosen to distinguish among health plans should address a condition that has a significant impact on morbidity and/or mortality; the link between the measured processes and outcomes of care should have been established empirically; current quality in the area should be variable or substandard; and health plans and/or providers should be able to take clinically sensible actions to improve performance on the measure.
Criteria for assessing scientific soundness: Scientific soundness, the likelihood that a clinical performance measure will produce consistent and credible results when implemented, involves precision of specifications, adaptability, and adequacy of risk adjustment.
Interpretability of results: Interpretability is affected by the content of the measure and the audience. Measures that are clinically detailed and specific may be presented more generally to a consumer audience and in full detail to a clinical audience, but measures that are general by nature cannot be made more clinically detailed. Interpretability entails statistical analysis, calibration of measures, modeling, and presentation of information.
Conclusions: Increased standardization of both the expectations for publicly released quality measures and the criteria by which such measures are evaluated should contribute to improvements in the larger field of quality assessment.