Validity and coverage of estimates of relative accuracy

Ann Epidemiol. 2000 May;10(4):251-60. doi: 10.1016/s1047-2797(00)00043-0.

Purpose: Studies comparing test accuracy often restrict the confirmation procedure to subjects classified as positive by either test. Relative sensitivity (RSN) and relative false-positive rate (RFP) are two estimable comparative measures of accuracy. This article evaluates the influence of sample size, disease prevalence, and test accuracy on the validity of point estimates of RSN and RFP, and on the coverage of their confidence intervals (CI).
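Under a verification design restricted to test-positive subjects, the absolute sensitivity and false-positive rate of each test are not estimable (both-negative subjects are never verified), but their ratios are: RSN is the ratio of verified diseased subjects positive on test 1 to those positive on test 2, and RFP is the analogous ratio among verified nondiseased subjects. A minimal sketch, with illustrative counts that are not taken from any study in the abstract:

```python
def relative_accuracy(d_pp, d_pn, d_np, nd_pp, nd_pn, nd_np):
    """Point estimates of RSN and RFP from verified subjects only.

    Arguments are counts cross-classified by the two tests
    (p = positive, n = negative; first letter = test 1):
      d_*  : verified diseased subjects
      nd_* : verified nondiseased subjects
    Both-negative subjects are unverified, so they never enter
    these counts -- the ratios remain estimable even though the
    individual sensitivities and false-positive rates do not.
    """
    s1 = d_pp + d_pn        # diseased positive on test 1
    s2 = d_pp + d_np        # diseased positive on test 2
    f1 = nd_pp + nd_pn      # nondiseased positive on test 1
    f2 = nd_pp + nd_np      # nondiseased positive on test 2
    return s1 / s2, f1 / f2

# Hypothetical example: 20 diseased positive on both tests,
# 10 on test 1 only, 5 on test 2 only; 40/80/20 for nondiseased.
rsn, rfp = relative_accuracy(20, 10, 5, 40, 80, 20)
# RSN = 30/25 = 1.2, RFP = 120/60 = 2.0
```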

Methods: For each combination of sample size, disease prevalence, test accuracy, and interdependence between tests, 1,000 samples were generated using computer simulations. Percent bias in the RSN and RFP estimates was measured by comparing the mean of the 1,000 log-transformed values computed in each simulation with the corresponding theoretical value. Coverage of the estimated CI was measured by computing the proportion that actually included the theoretical values. Application of these methods was illustrated with data from a study comparing mammography and physical examination in screening for breast cancer.
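The bias-and-coverage procedure described above can be sketched as a small Monte Carlo for RSN alone. The joint probabilities for the paired test results among diseased subjects, the sample sizes, and the large-sample variance approximation for log RSN (discordant-pair based, Var(ln RSN) ≈ (b + c)/(s1·s2)) are assumptions for illustration, not the article's exact implementation:

```python
import math
import random

def simulate_rsn(n_d=150, p11=0.60, p10=0.15, p01=0.10,
                 reps=1000, seed=1):
    """Monte Carlo check of log-RSN bias and 95% CI coverage (sketch).

    p11/p10/p01: P(T1+,T2+), P(T1+,T2-), P(T1-,T2+) among diseased.
    Returns (percent bias of the mean log RSN, CI coverage proportion).
    """
    rng = random.Random(seed)
    sn1, sn2 = p11 + p10, p11 + p01          # marginal sensitivities
    true_log_rsn = math.log(sn1 / sn2)
    logs, covered, kept = [], 0, 0
    for _ in range(reps):
        n11 = n10 = n01 = 0
        for _ in range(n_d):                 # draw one diseased sample
            u = rng.random()
            if u < p11:
                n11 += 1
            elif u < p11 + p10:
                n10 += 1
            elif u < p11 + p10 + p01:
                n01 += 1
        s1, s2 = n11 + n10, n11 + n01
        if s1 == 0 or s2 == 0:
            continue                         # RSN undefined; skip sample
        kept += 1
        log_rsn = math.log(s1 / s2)
        logs.append(log_rsn)
        # Assumed large-sample variance of log RSN from discordant pairs.
        se = math.sqrt((n10 + n01) / (s1 * s2))
        if log_rsn - 1.96 * se <= true_log_rsn <= log_rsn + 1.96 * se:
            covered += 1
    mean_log = sum(logs) / kept
    pct_bias = 100 * (mean_log - true_log_rsn) / true_log_rsn
    return pct_bias, covered / kept
```

Comparing the mean of the simulated log estimates with the theoretical log RSN gives the percent bias; the fraction of intervals containing the true value gives the empirical coverage, exactly the two quantities the Methods paragraph tracks.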

Results: RSN estimates were valid if the true number of diseased cases exceeded 30, and RFP estimates were valid if the number of nondiseased subjects exceeded 200. When the numbers of diseased and nondiseased subjects exceeded 150 each, the 95% CI of RSN and RFP provided adequate coverage of the parameters (95 ± 2%).

Conclusion: Sample size is the most important variable for the validity and coverage of RSN and RFP estimates. For small samples, validity and coverage of RSN and RFP also depend on the accuracy of each test and on the degree of interdependence between the tests.

MeSH terms

  • Adult
  • Breast Neoplasms / epidemiology
  • Confidence Intervals
  • Epidemiologic Methods*
  • False Positive Reactions
  • Female
  • Humans
  • Mass Screening
  • Middle Aged
  • Models, Statistical*
  • Predictive Value of Tests
  • Reproducibility of Results
  • Sensitivity and Specificity