PLoS One. 2020 Nov 9;15(11):e0242013. doi: 10.1371/journal.pone.0242013. eCollection 2020.

CheXLocNet: Automatic localization of pneumothorax in chest radiographs using deep convolutional neural networks


Hongyu Wang et al. PLoS One.

Abstract

Background: Pneumothorax can lead to a life-threatening emergency. Experienced radiologists can offer a precise diagnosis from chest radiographs. Localizing pneumothorax lesions helps speed up diagnosis, which benefits patients in underdeveloped areas that lack experienced radiologists. In recent years, with the development of large neural network architectures and medical imaging datasets, deep learning methods have become a methodology of choice for analyzing medical images. The objective of this study was to construct convolutional neural networks that localize pneumothorax lesions in chest radiographs.

Methods and findings: We developed a convolutional neural network, called CheXLocNet, for the segmentation of pneumothorax lesions. The SIIM-ACR Pneumothorax Segmentation dataset was used to train and validate CheXLocNets. The training set contained 2079 radiographs with annotated lesion areas. We trained six CheXLocNets with various hyperparameters. Another 300 annotated radiographs served as the validation set for selecting the parameters of these CheXLocNets. We determined the optimal parameters by AP50 (average precision at an intersection over union (IoU) of 0.50), a segmentation evaluation metric used by several well-known competitions. The CheXLocNets were then evaluated on a test set (1082 normal radiographs and 290 disease radiographs) using classification metrics: area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and positive predictive value (PPV); and segmentation metrics: IoU and Dice score. For classification, the CheXLocNet with the best sensitivity produced an AUC of 0.87, sensitivity of 0.78 (95% CI 0.73-0.83), and specificity of 0.78 (95% CI 0.76-0.81). The CheXLocNet with the best specificity produced an AUC of 0.79, sensitivity of 0.46 (95% CI 0.40-0.52), and specificity of 0.92 (95% CI 0.90-0.94). For segmentation, the CheXLocNet with the best sensitivity produced an IoU of 0.69 and a Dice score of 0.72. The CheXLocNet with the best specificity produced an IoU of 0.77 and a Dice score of 0.79. We combined these two networks to form an ensemble CheXLocNet, which produced an IoU of 0.81 and a Dice score of 0.82. Our CheXLocNet succeeded in automatically detecting pneumothorax lesions without any human guidance.
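The IoU and Dice metrics used above can be sketched in a few lines. This is an illustrative implementation, not the authors' code: a binary segmentation mask is represented here as a set of (row, col) pixel coordinates.

```python
# Illustrative sketch of the two segmentation metrics (IoU and Dice).
# A mask is a set of (row, col) pixels marked as lesion.

def iou(pred, target):
    """Intersection over union of two pixel sets."""
    inter = len(pred & target)
    union = len(pred | target)
    return inter / union if union else 1.0

def dice(pred, target):
    """Dice score: 2*|A & B| / (|A| + |B|)."""
    inter = len(pred & target)
    total = len(pred) + len(target)
    return 2 * inter / total if total else 1.0

pred = {(0, 0), (0, 1), (1, 0)}
target = {(0, 0), (0, 1), (1, 1)}
print(iou(pred, target))   # 2 shared pixels / 4 total = 0.5
print(dice(pred, target))  # 2*2 / (3+3) = 0.666...
```

AP50, the metric used for model selection, counts a predicted lesion as correct when this IoU reaches at least 0.50.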

Conclusions: In this study, we proposed a deep learning network, called CheXLocNet, for the automatic segmentation of chest radiographs to detect pneumothorax. Our CheXLocNets generated accurate classification results and high-quality segmentation masks for pneumothorax at the same time. This technology has the potential to improve healthcare delivery and increase access to chest radiograph expertise for the detection of diseases. Furthermore, the segmentation results offer comprehensive geometric information about lesions, which can benefit monitoring their sequential development with high accuracy. Thus, CheXLocNets can be further extended into a reliable clinical decision support tool. Although we used transfer learning in training CheXLocNet, the number of parameters was still large for the radiograph dataset. Further work is necessary to prune CheXLocNet to a size suitable for the radiograph dataset.


Conflict of interest statement

The authors have declared that no competing interests exist.

Figures

Fig 1
Fig 1. The framework of CheXLocNet.
Features were extracted from the original radiographs by the backbone network. RoIs were screened out by the RPN, which produced two losses, L^R_CLS and L^R_REG, during training. The classification network and the mask network produced their losses and predictions, respectively. A softmax output the probability of each RoI being a lesion area, and a per-pixel sigmoid output the mask. RoI, rectangular region of interest; RPN, region proposal network.
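The two output activations named in the caption can be sketched directly. This is an illustrative example with made-up logits, not the authors' code: a softmax turns the RoI class scores into a lesion probability, and an element-wise sigmoid turns mask logits into per-pixel probabilities.

```python
# Sketch of the two heads' output activations described in Fig 1,
# using hypothetical logit values.
import math

def softmax(scores):
    # Subtract the max score for numerical stability.
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

roi_scores = [1.2, -0.3]           # hypothetical [lesion, background] logits
p_lesion = softmax(roi_scores)[0]  # probability that this RoI is a lesion area

mask_logits = [[2.0, -1.0], [0.5, -3.0]]  # hypothetical 2x2 mask logits
mask_probs = [[sigmoid(v) for v in row] for row in mask_logits]
```

The softmax normalizes scores across classes (the probabilities sum to 1), while the sigmoid treats every mask pixel independently, which is why it is applied per pixel.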
Fig 2
Fig 2. AP50 of CheXLocNets.
IoU, intersection over union; AP50, average precision at IoU = 0.50.
Fig 3
Fig 3. ROC curves of CheXLocNets on validation set.
Each plot illustrates the ROC curves of CheXLocNets on the validation set. An ROC curve is generated by varying the discrimination threshold used to convert output probabilities into binary predictions. ROC, receiver operating characteristic.
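The threshold sweep that traces an ROC curve can be sketched as follows. This is an illustrative implementation on made-up scores, not the paper's evaluation code: each candidate threshold yields one (false positive rate, true positive rate) point.

```python
# Sketch of ROC curve generation: sweep the discrimination threshold
# over the predicted probabilities and record (FPR, TPR) at each step.

def roc_points(probs, labels):
    """Return (FPR, TPR) pairs for every distinct threshold, high to low."""
    pos = sum(labels)
    neg = len(labels) - pos
    points = []
    for t in sorted(set(probs), reverse=True):
        preds = [p >= t for p in probs]
        tp = sum(1 for pr, y in zip(preds, labels) if pr and y)
        fp = sum(1 for pr, y in zip(preds, labels) if pr and not y)
        points.append((fp / neg, tp / pos))
    return points

# Hypothetical probabilities and ground-truth labels (1 = pneumothorax).
print(roc_points([0.9, 0.8, 0.4, 0.3], [1, 0, 1, 0]))
# [(0.0, 0.5), (0.5, 0.5), (0.5, 1.0), (1.0, 1.0)]
```

The AUC reported in the abstract is the area under exactly this curve.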
Fig 4
Fig 4. The working procedure of six CheXLocNets.
We first trained and evaluated six CheXLocNets separately. Then we selected the two CheXLocNets with the best sensitivity or the best specificity to join together forming an ensemble model.
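The caption does not specify how the two selected networks are joined. One common ensemble rule, shown here purely as an assumption, is to average the two models' per-pixel probabilities and threshold the result.

```python
# Hypothetical ensemble rule (an assumption, not the authors' method):
# average two models' per-pixel mask probabilities, then threshold.

def ensemble_mask(probs_a, probs_b, threshold=0.5):
    """Combine two probability maps into one binary mask."""
    return [[(a + b) / 2 >= threshold for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(probs_a, probs_b)]

model_a = [[0.9, 0.2]]  # hypothetical per-pixel probabilities
model_b = [[0.7, 0.6]]
print(ensemble_mask(model_a, model_b))  # [[True, False]]
```

Averaging lets a confident prediction from either model survive while suppressing pixels that only one model weakly favors, which is consistent with the ensemble outperforming both individual networks on IoU and Dice.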
Fig 5
Fig 5. ROC curves of CheXLocNets on testing set.
Each plot illustrates the ROC curves of CheXLocNets on the testing set. An ROC curve is generated by varying the discrimination threshold used to convert output probabilities into binary predictions. ROC, receiver operating characteristic.
Fig 6
Fig 6. An example chest radiograph report.
The location of the pneumothorax lesion is highlighted in the chest radiograph (left). The segmentation probabilities output by CheXLocNet are shown in varying shades of red (right). CheXLocNet correctly detected the pneumothorax and roughly masked the lesion area.


Grants and funding

This study was funded by the National Natural Science Foundation of China (grant numbers 61633006, received by PQ, and 81872247, received by PQ and HW; URL: http://www.nsfc.gov.cn/). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.