Background: Computer aids can affect decisions in complex ways, potentially even making them worse; common assessment methods may miss these effects. We developed a method for estimating the quality of decisions, as well as how computer aids affect it, and applied it to computer-aided detection (CAD) of cancer, reanalyzing data from a published study in which 50 professionals ("readers") interpreted 180 mammograms, both with and without computer support.
Method: We used stepwise regression to estimate how CAD affected the probability of a reader making a correct screening decision on a patient with cancer (sensitivity), taking into account the effects of the cancer's difficulty (the proportion of readers who missed it) and of the reader's discriminating ability (Youden's index). Using the regression estimates, we obtained thresholds for classifying, a posteriori, the cases by difficulty and the readers by discriminating ability.
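For concreteness, the kind of analysis described above could be sketched as follows. This is an illustrative reconstruction, not the authors' code: the long-format data frame df and its columns (reader_id, case_id, with_cad, has_cancer, recalled) are hypothetical, and a single logistic model with interaction terms stands in for the stepwise selection procedure.

```python
import pandas as pd
import statsmodels.formula.api as smf

def youden_index(reader_df):
    # Youden's index J = sensitivity + specificity - 1, from unaided readings only.
    unaided = reader_df[reader_df["with_cad"] == 0]
    sens = unaided.loc[unaided["has_cancer"] == 1, "recalled"].mean()
    spec = 1.0 - unaided.loc[unaided["has_cancer"] == 0, "recalled"].mean()
    return sens + spec - 1.0

def fit_cad_effect(df):
    # Restrict to cancer cases: the outcome is a correct recall decision (sensitivity).
    cancers = df[df["has_cancer"] == 1].copy()

    # Case difficulty: proportion of readers who missed the cancer without CAD.
    difficulty = (
        cancers[cancers["with_cad"] == 0]
        .groupby("case_id")["recalled"]
        .mean()
        .rsub(1.0)
    )
    cancers["difficulty"] = cancers["case_id"].map(difficulty)

    # Reader discrimination: Youden's index per reader, from unaided readings.
    discrimination = df.groupby("reader_id").apply(youden_index)
    cancers["discrimination"] = cancers["reader_id"].map(discrimination)

    # Logistic regression of P(correct decision on a cancer case) on CAD use,
    # case difficulty, reader discrimination, and their interactions.
    model = smf.logit("recalled ~ with_cad * difficulty * discrimination",
                      data=cancers)
    return model.fit()
```

In such a model, the interaction coefficients would indicate whether the effect of CAD varies with case difficulty and reader discrimination, which is the kind of differential effect reported in the Results.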
Results: Use of CAD was associated with a 0.016 increase in sensitivity (95% confidence interval [CI], 0.003-0.028) for the 44 least discriminating radiologists on the 45 relatively easy, mostly CAD-detected cancers. However, for the 6 most discriminating radiologists, sensitivity with CAD decreased by 0.145 (95% CI, 0.034-0.257) on the 15 relatively difficult cancers.
Conclusions: Our exploratory analysis method reveals unexpected effects. It indicates that, despite the original study detecting no significant average effect, CAD helped the less discriminating readers but hindered the more discriminating readers. Such differential effects, although subtle, may be clinically significant and important for improving both computer algorithms and protocols for their use. They should be assessed when evaluating CAD and similar warning systems.