Review

Deep Learning in Medical Image Analysis

Heang-Ping Chan et al. Adv Exp Med Biol. 2020;1213:3-21.
doi: 10.1007/978-3-030-33128-3_1.
Free PMC article

Abstract

Deep learning is the state-of-the-art machine learning approach. Its success in many pattern recognition applications has brought excitement and high expectations that deep learning, or artificial intelligence (AI), can bring revolutionary changes in health care. Early studies of deep learning applied to lesion detection or classification have reported performance superior to that of conventional techniques, and even to that of radiologists in some tasks. The potential of applying deep-learning-based medical image analysis to computer-aided diagnosis (CAD), thus providing decision support to clinicians and improving the accuracy and efficiency of various diagnostic and treatment processes, has spurred new research and development efforts in CAD. Despite the optimism in this new era of machine learning, the development and implementation of CAD or AI tools in clinical practice face many challenges. In this chapter, we will discuss some of these issues and the efforts needed to develop robust deep-learning-based CAD tools, integrate them into the clinical workflow, and thereby advance toward the goal of providing reliable intelligent aids for patient care.

Keywords: Artificial intelligence; Big data; Computer-aided diagnosis; Deep learning; Interpretable AI; Machine learning; Medical imaging; Quality assurance; Transfer learning; Validation.


Conflict of interest statement

The authors have no conflicts to disclose.

Figures

Fig. 1. Literature search for publications in peer-reviewed journals in Web of Science from 1900 to 2019 using the key words: ((imaging OR images) AND (medical OR diagnostic)) AND (machine learning OR deep learning OR neural network OR deep neural network OR convolutional neural network OR computer aid OR computer assist OR computer-aided diagnosis OR automated detection OR computerized detection OR computer-aided detection OR automated classification OR computerized classification OR decision support OR radiomic) NOT (pathology OR slide OR genomics OR molecule OR genetic OR cell OR protein OR review OR survey).
Fig. 2. The effect of freezing different numbers of DCNN layers during transfer learning of an ImageNet-pretrained AlexNet to classify malignant and benign masses on mammograms. The area under the receiver operating characteristic curve (AUC) for the test ROIs was plotted as box-and-whisker plots of 10 repeated experiments under each condition. The training set and the test set consist of 12,360 and 7,272 ROIs after augmentation, respectively. C0 denotes that no layer was frozen, i.e., the pretrained weights in all layers were allowed to be updated; C1 denotes that the first convolutional layer was frozen; C1-Ci (i = 2, 3, 4, 5) denotes that the C1 to Ci convolutional layers were frozen during transfer training. The results show that C1-frozen training provided the best test AUC for this task. (Reprinted with permission [49])
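To make the freezing conditions in Fig. 2 concrete, below is a minimal PyTorch sketch of C1-frozen transfer learning with an ImageNet-pretrained AlexNet. It illustrates the technique only and is not the implementation used in [49]; the two-class head, optimizer, and learning rate are assumptions.

```python
import torch
import torchvision.models as models

# Load an ImageNet-pretrained AlexNet (torchvision >= 0.13 weights API).
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)

# AlexNet's five convolutional layers (C1..C5) sit at these indices
# of the model.features Sequential container.
CONV_IDX = [0, 3, 6, 8, 10]

def freeze_first_convs(model, n):
    """Freeze C1..Cn so their pretrained weights are not updated,
    matching the C0 / C1 / C1-Ci conditions in Fig. 2 (n=0 is C0)."""
    for idx in CONV_IDX[:n]:
        for p in model.features[idx].parameters():
            p.requires_grad = False

freeze_first_convs(model, n=1)  # the C1-frozen condition

# Replace the 1000-class ImageNet head with a malignant/benign head.
model.classifier[6] = torch.nn.Linear(model.classifier[6].in_features, 2)

# Optimize only the parameters left trainable (hyperparameters assumed).
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4, momentum=0.9
)
```

Repeating such a run 10 times per freezing condition yields the box-and-whisker distributions of test AUC shown in the figure.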
Fig. 3. Dependence of test AUC on mammography training sample size for transfer training strategy (A). The varied training sample size was simulated by randomly drawing, by case, a percentage (ranging from 1% to 100%) of the entire set of 19,632 mammography ROIs. Shown is the ROI-based AUC for classifying the 9,120 DBT training ROIs (serving as a test set at this stage) for three transfer networks at Stage 1. Each data point and its upper and lower range show the mean and standard deviation of the test AUC over ten random samplings of a training set of the given size from the original set. (Reprinted with permission [49])
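The "random drawing by case" in Fig. 3 can be sketched as follows: ROIs are grouped by source case, and a percentage of cases, not of individual ROIs, is drawn, so augmented ROIs from one case never straddle the subset boundary. The (case_id, roi) pairing below is a hypothetical data layout assumed for illustration.

```python
import random
from collections import defaultdict

def sample_rois_by_case(rois, fraction, seed=0):
    """Draw a training subset at the case level: all ROIs belonging
    to a sampled case are kept together. `rois` is assumed to be a
    list of (case_id, roi) pairs."""
    by_case = defaultdict(list)
    for case_id, roi in rois:
        by_case[case_id].append(roi)
    case_ids = sorted(by_case)
    n_drawn = max(1, round(fraction * len(case_ids)))
    drawn = random.Random(seed).sample(case_ids, n_drawn)
    return [roi for cid in drawn for roi in by_case[cid]]
```

Repeating the draw with ten different seeds per fraction gives the mean and standard deviation plotted at each training set size.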
Fig. 4. ROI-based AUC on the DBT test set while varying the mammography sample size available for transfer training. Each data point and its upper and lower range show the mean and standard deviation of the test AUC over ten random samplings of a training set of the given size from the original set. “A. Stage 1 (MAM:C1)” denotes single-stage training using mammography data with the C1 layer frozen during transfer learning and no Stage 2. “B. Stage 2 (DBT:C1)” denotes Stage 2 C1-frozen transfer learning at a fixed (100%) DBT training set size after Stage 1 transfer learning (curve A). “C. Stage 2 (DBT:C1-F4)” denotes Stage 2 C1-to-F4-frozen transfer learning at a fixed (100%) DBT training set size after Stage 1 transfer learning (curve A). (Reprinted with permission [49])
Fig. 5. ROI-based AUC on the DBT test set while varying the simulated DBT sample size available for transfer training. Each data point and its upper and lower range show the mean and standard deviation of the test AUC over ten random samplings of a training set of the given size from the original set. “D. Stage 1 (DBT:C1)” denotes single-stage training using DBT data with the C1 layer frozen during transfer learning and no Stage 2. “B. Stage 2 (DBT:C1)” denotes Stage 2 C1-frozen transfer learning after Stage 1 transfer learning with a fixed (100%) mammography training set. “C. Stage 2 (DBT:C1-F4)” denotes Stage 2 C1-to-F4-frozen transfer learning after Stage 1 transfer learning with a fixed (100%) mammography training set. (Reprinted with permission [49])
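A schematic of the multi-stage protocol behind curves A-D in Figs. 4 and 5, as a hedged PyTorch sketch: Stage 1 fine-tunes the ImageNet-pretrained network on mammography ROIs with C1 frozen, and Stage 2 continues on DBT ROIs with either C1 frozen or everything frozen up through a deeper C1-F4 cutoff. The data loaders are placeholders, and the exact C1-F4 boundary is approximated here; see [49] for the authors' configuration.

```python
import torch
import torchvision.models as models

def set_frozen(model, frozen_modules):
    """Freeze exactly the listed modules; all other parameters train."""
    for p in model.parameters():
        p.requires_grad = True
    for m in frozen_modules:
        for p in m.parameters():
            p.requires_grad = False

def fine_tune(model, loader, epochs=5, lr=1e-4):
    """Minimal fine-tuning loop for one transfer stage (assumed
    hyperparameters; cross-entropy on the two-class mass labels)."""
    opt = torch.optim.SGD((p for p in model.parameters() if p.requires_grad),
                          lr=lr, momentum=0.9)
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
model.classifier[6] = torch.nn.Linear(4096, 2)  # malignant/benign head

c1_only = [model.features[0]]                      # first conv layer only
c1_to_f4 = [model.features, model.classifier[:5]]  # approx. "C1-F4" cutoff

# Stage 1: transfer ImageNet -> mammography with C1 frozen (curve A).
set_frozen(model, c1_only)
# fine_tune(model, mammo_loader)   # mammo_loader: hypothetical DataLoader

# Stage 2, condition B: continue mammography -> DBT, still only C1 frozen.
set_frozen(model, c1_only)
# fine_tune(model, dbt_loader)     # dbt_loader: hypothetical DataLoader

# Stage 2, condition C: freeze through the deeper C1-F4 cutoff, so only
# the final fully connected layer adapts to DBT.
set_frozen(model, c1_to_f4)
# fine_tune(model, dbt_loader)
```

Condition D in Fig. 5 corresponds to skipping the mammography stage entirely and running the C1-frozen fine-tuning directly on the DBT data.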


References

    1. Winsberg F, Elkin M, Macy J, Bordaz V, Weymouth W. Detection of radiographic abnormalities in mammograms by means of optical scanning and computer analysis. Radiology. 1967;89:211–5.
    2. Kimme C, O’Laughlin BJ, Sklansky J. Automatic detection of suspicious abnormalities in breast radiographs. In: Data Structures, Computer Graphics and Pattern Recognition. New York: Academic Press; 1977.
    3. Spiesberger W. Mammogram inspection by computer. IEEE Trans Biomed Eng. 1979;26:213–9.
    4. Semmlow JL, Shadagopappan A, Ackerman LV, Hand W, Alcorn FS. A fully automated system for screening mammograms. Comput Biomed Res. 1980;13:350–62.
    5. Doi K. Historical overview. In: Li Q, Nishikawa RM, editors. Computer-Aided Detection and Diagnosis in Medical Imaging. Boca Raton, FL: Taylor & Francis Group, LLC, CRC Press; 2015. p. 1–17.