Objective: We review the uses of electronic health care data algorithms, measures of their accuracy, and reasons for prioritizing one measure of accuracy over another.
Study design and setting: We use real studies to illustrate the variety of uses of automated health care data in epidemiologic and health services research. Hypothetical examples show the impact of different types of misclassification when algorithms are used to ascertain exposure and outcome.
Results: High algorithm sensitivity is important for reducing the costs and burdens associated with the use of a more accurate measurement tool, for enhancing study inclusiveness, and for ascertaining common exposures. High specificity is important for classifying outcomes. High positive predictive value is important when identifying a cohort of persons with a condition of interest that need not be representative of, or include, everyone with that condition. Finally, high negative predictive value is important for reducing the likelihood that study subjects have an exclusionary condition.
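The four measures above follow from a standard 2×2 validation table comparing algorithm classification against a gold standard (e.g., chart review). A minimal sketch, using hypothetical counts chosen only for illustration:

```python
# Hypothetical 2x2 validation table: an administrative-data algorithm
# versus a gold-standard chart review (illustrative counts only).
tp, fp = 90, 30   # algorithm-positive: condition truly present / absent
fn, tn = 10, 870  # algorithm-negative: condition truly present / absent

sensitivity = tp / (tp + fn)  # share of true cases the algorithm detects
specificity = tn / (tn + fp)  # share of non-cases correctly ruled out
ppv = tp / (tp + fp)          # probability a flagged person truly has the condition
npv = tn / (tn + fn)          # probability an unflagged person truly lacks it

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} "
      f"ppv={ppv:.2f} npv={npv:.2f}")
```

Note that sensitivity and specificity are properties of the algorithm itself, whereas PPV and NPV also depend on the prevalence of the condition in the study population, which is why the choice of measure to prioritize depends on the study objective.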
Conclusion: Epidemiologists must often prioritize one measure of accuracy over another when generating an algorithm for use in their study. We recommend that researchers publish all tested algorithms, including those without acceptable accuracy levels, to help future studies refine and apply algorithms that are well suited to their objectives.
Copyright © 2012 Elsevier Inc. All rights reserved.