Radiology. 2020 Apr 27:201491. doi: 10.1148/radiol.2020201491. Online ahead of print.

AI Augmentation of Radiologist Performance in Distinguishing COVID-19 From Pneumonia of Other Etiology on Chest CT

Harrison X Bai et al. Radiology. 2020.

Abstract

Background: COVID-19 and pneumonia of other etiology share similar CT characteristics, contributing to the challenge of differentiating them with high accuracy.

Purpose: To establish and evaluate an artificial intelligence (AI) system for differentiating COVID-19 from other pneumonia on chest CT and to assess radiologist performance without and with AI assistance.

Methods: 521 patients with positive RT-PCR for COVID-19 and abnormal chest CT findings were retrospectively identified from ten hospitals from January 2020 to April 2020. 665 patients with non-COVID-19 pneumonia and definite evidence of pneumonia on chest CT were retrospectively selected from three hospitals between 2017 and 2019. To classify COVID-19 versus other pneumonia for each patient, abnormal CT slices were input into the EfficientNet-B4 deep neural network architecture after lung segmentation, followed by a two-layer fully connected neural network that pooled the slice-level outputs into a patient-level prediction. Our final cohort of 1,186 patients (132,583 CT slices) was divided into training, validation, and test sets in a 7:2:1 ratio. Independent testing was performed by evaluating model performance on data from separate hospitals. Studies were blindly reviewed by six radiologists, first without and then with AI assistance.

Results: Our final model achieved a test accuracy of 96% (95% CI: 90-98%), a sensitivity of 95% (95% CI: 83-100%), and a specificity of 96% (95% CI: 88-99%), with a receiver operating characteristic (ROC) AUC of 0.95 and a precision-recall (PR) AUC of 0.90. On independent testing, our model achieved an accuracy of 87% (95% CI: 82-90%), a sensitivity of 89% (95% CI: 81-94%), and a specificity of 86% (95% CI: 80-90%), with an ROC AUC of 0.90 and a PR AUC of 0.87. Assisted by the model's probabilities, the radiologists achieved higher average test accuracy (90% vs. 85%, Δ=5, p<0.001), sensitivity (88% vs. 79%, Δ=9, p<0.001), and specificity (91% vs. 88%, Δ=3, p=0.001).

Conclusion: AI assistance improved radiologists' performance in distinguishing COVID-19 from non-COVID-19 pneumonia on chest CT.
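For readers who want to see the shape of the pipeline described in Methods, the following is a minimal sketch in PyTorch, not the authors' code: it assumes the timm implementation of EfficientNet-B4, a fixed 64 abnormal slices per patient, and a hidden width of 128 in the pooling head, none of which are specified in the abstract.

```python
# Minimal sketch of the two-stage classifier described in Methods:
# an EfficientNet-B4 backbone scores each abnormal slice, and a
# two-layer fully connected network pools the slice scores into one
# patient-level COVID-19 vs. other-pneumonia prediction.
# Assumptions (not from the paper): timm backbone, 64 slices per
# patient, hidden width 128.
import torch
import torch.nn as nn
import timm


class SliceClassifier(nn.Module):
    """Per-slice binary logit from an EfficientNet-B4 backbone."""

    def __init__(self):
        super().__init__()
        # num_classes=1 gives a single logit per slice; pretrained
        # ImageNet weights would be a typical starting point.
        self.backbone = timm.create_model(
            "efficientnet_b4", pretrained=False, num_classes=1
        )

    def forward(self, slices: torch.Tensor) -> torch.Tensor:
        # slices: (num_slices, 3, H, W) -> (num_slices,) slice logits
        return self.backbone(slices).squeeze(-1)


class PatientPooler(nn.Module):
    """Two-layer fully connected network pooling slice logits."""

    def __init__(self, num_slices: int = 64, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_slices, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, slice_logits: torch.Tensor) -> torch.Tensor:
        # slice_logits: (num_slices,) -> scalar patient-level logit
        return self.net(slice_logits.unsqueeze(0)).squeeze()


if __name__ == "__main__":
    with torch.no_grad():
        slices = torch.randn(64, 3, 380, 380)  # one patient's segmented slices (toy)
        patient_logit = PatientPooler()(SliceClassifier()(slices))
        print(torch.sigmoid(patient_logit))    # P(COVID-19) for this patient
```

In practice the number of abnormal slices varies per patient, so a real system would need to pad, sample, or otherwise normalize the slice count before pooling; the abstract does not say how the authors handled this.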

Figures

Figure 1. Diagram illustrating patient inclusion and exclusion. Abbreviations: RIH, Rhode Island Hospital; HUP, Hospital of the University of Pennsylvania; AI, artificial intelligence; RT-PCR, reverse transcriptase polymerase chain reaction.
Figure 2. Flow diagram illustrating our AI model for distinguishing COVID-19 from non-COVID-19 pneumonia. Abbreviations: ROC AUC, receiver operating characteristic area under the curve; PR AUC, precision-recall area under the curve.
Figure 3. COVID-19 classification neural network model.
Figure 4. ROC curve of the deep neural network on the test set compared with radiologist performance. ROC = receiver operating characteristic.
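As a companion to Figure 4, this sketch shows how the reported test-set metrics (accuracy, sensitivity, specificity, ROC AUC, PR AUC) can be computed from patient-level probabilities with scikit-learn. The labels and probabilities below are synthetic placeholders, not study data, and the 0.5 decision threshold is an assumption.

```python
# Computing the metrics reported in Results from patient-level
# probabilities with scikit-learn. y_true/y_prob are synthetic
# placeholders, not the study's data.
import numpy as np
from sklearn.metrics import (
    roc_auc_score,
    average_precision_score,
    confusion_matrix,
)

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=119)  # 1 = COVID-19, 0 = other pneumonia
y_prob = np.clip(y_true * 0.7 + rng.random(119) * 0.5, 0, 1)  # toy probabilities

y_pred = (y_prob >= 0.5).astype(int)   # assumed threshold on model probability
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)           # true-positive rate
specificity = tn / (tn + fp)           # true-negative rate
roc_auc = roc_auc_score(y_true, y_prob)
pr_auc = average_precision_score(y_true, y_prob)  # PR AUC (average precision)

print(f"acc={accuracy:.2f} sens={sensitivity:.2f} spec={specificity:.2f} "
      f"ROC AUC={roc_auc:.2f} PR AUC={pr_auc:.2f}")
```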
Figure 5. Representative slices corresponding to Grad-CAM images on the test set.
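The images in Figure 5 come from Grad-CAM, which localizes the image regions most responsible for a prediction by weighting the last convolutional feature maps with their average gradients. Below is a minimal sketch of the technique applied to a slice-level EfficientNet-B4; hooking timm's conv_head layer and the random stand-in slice are illustrative assumptions, not the authors' implementation.

```python
# Minimal Grad-CAM sketch for a slice-level EfficientNet-B4.
# Assumptions (not from the paper): timm model, hooking its last
# convolutional layer `conv_head`, and a random stand-in slice.
import torch
import torch.nn.functional as F
import timm

model = timm.create_model("efficientnet_b4", pretrained=False, num_classes=1)
model.eval()

activations, gradients = {}, {}

def fwd_hook(module, inputs, output):
    activations["a"] = output        # feature maps, shape (1, C, h, w)

def bwd_hook(module, grad_input, grad_output):
    gradients["g"] = grad_output[0]  # d(logit)/d(feature maps)

model.conv_head.register_forward_hook(fwd_hook)
model.conv_head.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 380, 380)      # stand-in for one preprocessed CT slice
logit = model(x)[0, 0]               # scalar slice-level logit
logit.backward()

# Grad-CAM: weight each feature map by its average gradient, sum,
# rectify, upsample to input size, and normalize to [0, 1].
weights = gradients["g"].mean(dim=(2, 3), keepdim=True)    # (1, C, 1, 1)
cam = F.relu((weights * activations["a"]).sum(dim=1))      # (1, h, w)
cam = F.interpolate(cam.unsqueeze(1), size=x.shape[2:],
                    mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # heatmap to overlay
```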
Figure 6. Representative cases that the majority of radiologists misclassified. A-C (top row, left to right): COVID-19 pneumonia. Our model correctly classified all three cases. A. 4/6 radiologists (radiologists 3-6) said it was non-COVID-19. With AI assistance, 2/6 radiologists (radiologists 5 and 6) continued to say it was non-COVID-19. B. 4/6 radiologists (radiologists 3-6) said it was non-COVID-19. With AI assistance, 3/6 radiologists (radiologists 3-5) continued to say it was non-COVID-19. C. 4/6 radiologists (radiologists 2 and 4-6) said it was non-COVID-19. With AI assistance, 1/6 radiologists (radiologist 2) continued to say it was non-COVID-19. D-F (bottom row, left to right): non-COVID-19 pneumonia. Our model correctly classified D and E. D. 5/6 radiologists (radiologists 1-5) said it was COVID-19. With AI assistance, all five continued to say it was COVID-19. E. 4/6 radiologists (radiologists 1, 2, 4, and 6) said it was COVID-19. With AI assistance, 3/6 radiologists (radiologists 1, 2, and 4) continued to say it was COVID-19. F. 4/6 radiologists (radiologists 1-3 and 6) said it was COVID-19. With AI assistance, 5/6 radiologists (radiologists 1-4 and 6) said it was COVID-19.
