Background: Postmarketing surveillance is routinely conducted to monitor the performance of pharmaceuticals and testing devices in the marketplace. However, such surveillance is often performed retrospectively and, as a result, is not designed to detect performance issues in real time.
Methods and findings: Using HIV antibody screening test data from New York City STD clinics, we developed a formal statistical method for prospectively detecting temporal clusters of poor screening test performance. From 2005 to 2008, New York City, along with several states, observed unexpectedly high false-positive (FP) rates for an oral fluid-based rapid test used in HIV screening. We formally assessed whether the performance of this HIV screening test deviated statistically from both local expectation and the manufacturer's claim for the test. Results indicate two significant temporal clusters in the FP rate of the oral HIV test, both of which exceeded the upper limit of the manufacturer's 95% confidence interval for the product. Furthermore, the FP rate of the test varied significantly by STD clinic and by test lot, but not by test operator.
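The abstract does not name the specific statistic used to detect temporal clusters, so the following is only a minimal illustrative sketch of one common approach: a binomial likelihood-ratio prospective temporal scan over daily FP counts, with the baseline rate p0 taken from the manufacturer's claimed specificity and significance assessed by Monte Carlo simulation. The daily counts, p0, and function names here are hypothetical, not the study's actual data or implementation.

```python
# Hypothetical sketch of a prospective temporal scan for clusters of elevated
# false-positive (FP) rates; not the paper's actual method or data.
import numpy as np

def binom_llr(fp, n, p0):
    """One-sided log-likelihood ratio of an elevated FP rate fp/n versus baseline p0."""
    if n == 0 or fp / n <= p0:
        return 0.0
    p_hat = fp / n
    llr = fp * np.log(p_hat / p0)
    if fp < n:
        llr += (n - fp) * np.log((1 - p_hat) / (1 - p0))
    return llr

def prospective_scan(n_tests, n_fp, p0, n_sim=999, rng=None):
    """Scan all windows ending 'today' for an excess of false positives.

    n_tests, n_fp : daily test counts and FP counts (oldest first)
    p0            : expected FP rate (e.g., 1 - claimed specificity)
    Returns the most likely window start, its LLR, and a Monte Carlo p-value.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    n_tests, n_fp = np.asarray(n_tests), np.asarray(n_fp)
    T = len(n_tests)

    def best_llr(fp_series):
        # Evaluate every window that ends at the most recent day.
        best, best_start = 0.0, T
        for start in range(T - 1, -1, -1):
            n = n_tests[start:].sum()
            fp = fp_series[start:].sum()
            llr = binom_llr(fp, n, p0)
            if llr > best:
                best, best_start = llr, start
        return best, best_start

    obs_llr, obs_start = best_llr(n_fp)

    # Monte Carlo replicates under H0: daily FP counts ~ Binomial(n_tests, p0).
    exceed = 0
    for _ in range(n_sim):
        sim_fp = rng.binomial(n_tests, p0)
        if best_llr(sim_fp)[0] >= obs_llr:
            exceed += 1
    p_value = (exceed + 1) / (n_sim + 1)
    return obs_start, obs_llr, p_value

if __name__ == "__main__":
    # Made-up daily counts: the last 10 days have an inflated FP rate.
    rng = np.random.default_rng(42)
    tests = rng.integers(80, 120, size=60)
    fp = rng.binomial(tests, 0.002)             # baseline days
    fp[-10:] = rng.binomial(tests[-10:], 0.02)  # elevated window
    start, llr, p = prospective_scan(tests, fp, p0=0.002, rng=rng)
    print(f"Most likely cluster starts at day {start}, LLR={llr:.2f}, p={p:.3f}")
```

In a prospective setting, a scan of this kind would be rerun as each new day of screening data arrives, flagging a signal whenever the p-value for the best window ending on the current day falls below a preset threshold.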
Conclusions: Continuous monitoring of surveillance data provides ongoing information on test performance, and when conducted in real time it enables programs to investigate reasons for poor test performance soon after it occurs. The techniques used in this study could be a valuable addition to postmarketing surveillance of test performance and may become particularly important as rapid testing methods become more widespread.