Automatic classification of medical image modality and anatomical location using convolutional neural network

PLoS One. 2021 Jun 11;16(6):e0253205. doi: 10.1371/journal.pone.0253205. eCollection 2021.

Abstract

Modern radiologic images comply with the DICOM (Digital Imaging and Communications in Medicine) standard; upon conversion to other image formats, the image detail and information that DICOM carries, such as patient demographics and image modality, are lost. As there is growing interest in using large volumes of image data for research, and acquisition of large numbers of medical images is now standard practice in the clinical setting, efficient handling and storage of image data are important in both clinical and research settings. In this study, four classes of images were created: CT (computed tomography) of the abdomen, CT of the brain, MRI (magnetic resonance imaging) of the brain, and MRI of the spine. After converting these images into JPEG (Joint Photographic Experts Group) format, our proposed CNN architecture could automatically classify these four groups of medical images by both image modality and anatomic location. We achieved excellent overall classification accuracy in both the validation and test sets (> 99.5%), as well as specificity and F1 score above 99% in each category of this dataset, which contained both diseased and normal images. Our study shows that using a CNN for medical image classification is a promising methodology that can work on non-DICOM images, potentially saving image processing time and storage space.
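To illustrate the general approach described above (a CNN classifying JPEG images into four modality/anatomy classes), the following is a minimal sketch in Python with Keras. It is not the authors' published architecture; the class names, input resolution, directory layout, and layer sizes are illustrative assumptions.

```python
# Minimal sketch of a four-class CNN classifier for JPEG medical images.
# NOTE: illustrative only; not the architecture reported in the paper.
import tensorflow as tf
from tensorflow.keras import layers, models

CLASS_NAMES = ["ct_abdomen", "ct_brain", "mri_brain", "mri_spine"]  # assumed labels
IMG_SIZE = (224, 224)  # assumed input resolution


def build_model(num_classes: int = len(CLASS_NAMES)) -> tf.keras.Model:
    """Small convolutional network ending in a 4-way softmax."""
    model = models.Sequential([
        layers.Input(shape=(*IMG_SIZE, 3)),
        layers.Rescaling(1.0 / 255),              # normalize JPEG pixel values to [0, 1]
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model


if __name__ == "__main__":
    # Assumes JPEGs organized as data/<class_name>/*.jpg (hypothetical layout).
    train_ds = tf.keras.utils.image_dataset_from_directory(
        "data", image_size=IMG_SIZE, validation_split=0.2,
        subset="training", seed=42)
    val_ds = tf.keras.utils.image_dataset_from_directory(
        "data", image_size=IMG_SIZE, validation_split=0.2,
        subset="validation", seed=42)
    model = build_model()
    model.fit(train_ds, validation_data=val_ds, epochs=10)
```

Per-class specificity and F1 score, as reported in the abstract, could then be computed from the model's predictions on a held-out test set using a standard confusion-matrix utility.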

MeSH terms

  • Abdomen / diagnostic imaging
  • Automation / methods
  • Brain / diagnostic imaging
  • Humans
  • Image Interpretation, Computer-Assisted*
  • Magnetic Resonance Imaging / methods
  • Neural Networks, Computer*
  • Neuroimaging / methods
  • Reproducibility of Results
  • Sensitivity and Specificity
  • Spine / diagnostic imaging
  • Tomography, X-Ray Computed / methods

Grants and funding

The author(s) received no specific funding for this work.