Objective: The aim of this study was to determine the effect that the computer interpretation (CI) of electrocardiograms (EKGs) has on the accuracy of resident (noncardiologist) physicians reading EKGs.
Design: A randomized, controlled trial was conducted in a laboratory setting from February through June 2001, using a two-period crossover design with matched pairs of subjects randomly assigned to sequencing groups.
Measurements: Subjects' interpretive accuracy on discrete, cardiologist-determined EKG findings was measured, as judged by a board-certified internist.
Results: Without the CI, subjects interpreted 48.9% (95% confidence interval, 45.0% to 52.8%) of the findings correctly; with the CI, they interpreted 55.4% (51.9% to 58.9%) correctly (p < 0.0001). When the CIs that agreed with the gold standard (Correct CIs) were withheld, subjects interpreted 53.1% (47.7% to 58.5%) of the findings correctly; when the Correct CI was provided, accuracy increased to 68.1% (63.2% to 72.7%; p < 0.0001). When computer advice that disagreed with the gold standard (Incorrect CI) was withheld, subjects interpreted 56.7% (48.5% to 64.5%) of findings correctly; when the incorrect advice was provided, accuracy dropped to 48.3% (40.4% to 56.4%; p = 0.131). Subjects erroneously agreed with the Incorrect CI more often when it accompanied the EKG (67.7%; 57.2% to 76.7%) than when it did not (34.6%; 23.8% to 47.3%; p < 0.0001).
Conclusions: Computer decision support systems can generally improve the interpretive accuracy of internal medicine residents reading EKGs. However, subjects were significantly influenced by incorrect advice, which tempers the overall usefulness of computer-generated advice in this and perhaps other areas.