Plant Phenomics. 2019 Mar 26;2019:9237136. doi: 10.34133/2019/9237136. eCollection 2019.

How Convolutional Neural Networks Diagnose Plant Disease

Yosuke Toda et al.

Abstract

Deep learning with convolutional neural networks (CNNs) has achieved great success in the classification of various plant diseases. However, only a limited number of studies have elucidated the process of inference, leaving it as an impenetrable black box. Making the features a CNN has learned visible in an interpretable form not only ensures its reliability but also enables human validation of the model and the training dataset. In this study, a variety of neuron-wise and layer-wise visualization methods were applied to a CNN trained on a publicly available plant disease image dataset. We show that the network captures the colors and textures of lesions specific to each disease upon diagnosis, resembling human decision-making. While several visualization methods could be used as is, others had to be optimized to target the specific layer that fully captures the features needed to generate consequential outputs. Moreover, by interpreting the generated attention maps, we identified several layers that did not contribute to inference and removed them from the network, decreasing the number of parameters by 75% without affecting classification accuracy. The results provide an impetus for users of CNN black boxes in plant science to better understand the diagnosis process and to apply deep learning to plant disease diagnosis more efficiently.


Conflict of interest statement

The authors declare that there are no conflicts of interest regarding the publication of this article.

Figures

Figure 1
Image-based disease diagnosis training using convolutional neural networks. (a) The PlantVillage image dataset used in this study, containing 38 categories of diseased or healthy leaf images. See Figure S2 for the species and disease names assigned to each label. (b) The InceptionV3-based convolutional neural network (CNN) architecture used in this study. Conv, convolutional layer; Mixed, inception module. (c) Accuracy, precision, recall, and mean F1 scores on the training, validation, and test data using the trained weights. (d) Confusion matrix for the test dataset. See Figure S2 for an enlarged view.
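The metrics in panel (c) can be derived directly from a confusion matrix like the one in panel (d). A minimal NumPy sketch, assuming a small hypothetical 3-class matrix for illustration (the paper's matrix has 38 classes):

```python
import numpy as np

# Hypothetical confusion matrix: rows = true class, columns = predicted class.
cm = np.array([[50,  2,  0],
               [ 3, 45,  1],
               [ 0,  4, 48]])

tp = np.diag(cm).astype(float)            # true positives per class
precision = tp / cm.sum(axis=0)           # per-class precision
recall = tp / cm.sum(axis=1)              # per-class recall
f1 = 2 * precision * recall / (precision + recall)

accuracy = tp.sum() / cm.sum()            # overall accuracy
mean_f1 = f1.mean()                       # macro-averaged (mean) F1

print(f"accuracy={accuracy:.3f}  mean F1={mean_f1:.3f}")
```

Macro-averaging the per-class F1 scores, as sketched here, treats all classes equally regardless of how many test images each contains.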
Figure 2
Visualization of intermediate outputs generated by the trained CNN. An image of a tomato leaf infected with early blight (label 29) was fed to the network, and the intermediate output values of representative layers were visualized. Layer or inception module names and their output array sizes are given above each intermediate output.
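Intermediate outputs like these are typically rendered by min-max normalizing each channel of a layer's activation tensor into an 8-bit grayscale tile. A framework-agnostic sketch of that normalization step; the activation array here is random stand-in data, not an actual layer output from the paper's network:

```python
import numpy as np

def channels_to_images(activation):
    """Scale each channel of an (H, W, C) activation array to uint8 [0, 255]."""
    imgs = []
    for c in range(activation.shape[-1]):
        ch = activation[..., c].astype(float)
        lo, hi = ch.min(), ch.max()
        if hi > lo:                            # avoid division by zero on flat channels
            ch = (ch - lo) / (hi - lo)
        else:
            ch = np.zeros_like(ch)
        imgs.append((ch * 255).astype(np.uint8))
    return imgs

rng = np.random.default_rng(0)
act = rng.standard_normal((147, 147, 32))      # stand-in for an early Conv layer's output
tiles = channels_to_images(act)
print(len(tiles), tiles[0].shape, tiles[0].dtype)
```

Each resulting tile can then be arranged in a grid, which is how figures of this kind are usually assembled.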
Figure 3
Feature visualization. Four neurons were randomly selected from the indicated layers, and feature visualization was performed to obtain a visual interpretation of what each neuron has learned. Neurons trained on (a) ImageNet or (b) PlantVillage were visualized.
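Feature visualization of this kind is usually done by gradient ascent: starting from random noise, the input is repeatedly nudged in the direction that increases a chosen neuron's activation. A toy sketch using an analytically differentiable stand-in for the activation (a real run would instead backpropagate through the trained CNN to get the gradient):

```python
import numpy as np

rng = np.random.default_rng(1)
target = rng.standard_normal((8, 8))      # the pattern our stand-in "neuron" prefers

def activation(x):
    return -np.sum((x - target) ** 2)     # maximal when x matches the pattern

def grad(x):
    return -2 * (x - target)              # analytic gradient of the activation

x = rng.standard_normal((8, 8))           # start from random noise
for _ in range(200):                      # gradient ascent on the input "image"
    x += 0.05 * grad(x)

print(f"final activation: {activation(x):.6f}")
```

After enough steps the input converges toward the pattern the neuron responds to most strongly, which is exactly what the visualized textures in this figure represent.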
Figure 4
Semantic dictionary generated using the intermediate outputs of the global average pooling (GAP) layer. (a) The mean GAP output of 200 tomato early blight images from the test dataset was multiplied by the weights of the CNN and sorted by value. (b) The neurons corresponding to the top five output values were selected, and feature visualization was applied to each. (c) Representative images of disease symptoms selected from the indicated class. The bottom row is a magnified view of the red inset in the top row.
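The selection in panels (a) and (b) amounts to weighting the mean GAP vector by the output-layer weights for one class and sorting. A sketch with random stand-in values (2048 GAP neurons, as in InceptionV3; `class_idx = 29` follows the paper's tomato early blight label):

```python
import numpy as np

rng = np.random.default_rng(2)
n_neurons, n_classes = 2048, 38
gap_mean = rng.random(n_neurons)                 # mean GAP output over 200 test images
W = rng.standard_normal((n_neurons, n_classes))  # dense output-layer weights

class_idx = 29                                   # tomato early blight label
contribution = gap_mean * W[:, class_idx]        # per-neuron contribution to the class logit
top5 = np.argsort(contribution)[::-1][:5]        # the neurons to feature-visualize

print("top-5 neuron indices:", top5)
```

The five selected neurons are then run through feature visualization, pairing each high-contribution neuron with an image of what it has learned to detect.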
Figure 5
Evaluation of attention-map-generating algorithms. (a) Input images from the three classes used for evaluation (top row); numbers in parentheses indicate the class label in the dataset. Lesions in each image were manually annotated (bottom row). (b)-(e) Attention-map-generating methods applied to each image and displayed as a heatmap over the input; see Materials and Methods for details. (b) Perturbation-based visualization. (c) Gradient-based visualization. (d) Grad-CAM visualization. (e) Reference-based visualization. For the Grad-CAM and explanation maps, the layers whose gradients and intermediate outputs were used are indicated.
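Of the methods in (b)-(e), Grad-CAM (d) is the most compact to sketch: the class-score gradients at a convolutional layer are global-average-pooled into per-channel weights, the layer's feature maps are combined using those weights, and the result is passed through a ReLU. With random stand-in feature maps and gradients (a real run would pull both from the trained network):

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """feature_maps, gradients: (H, W, C) arrays taken at the target conv layer."""
    weights = gradients.mean(axis=(0, 1))          # GAP of gradients -> (C,) channel weights
    cam = np.tensordot(feature_maps, weights, axes=([2], [0]))  # weighted channel sum
    cam = np.maximum(cam, 0)                       # ReLU: keep only positive evidence
    if cam.max() > 0:
        cam /= cam.max()                           # normalize to [0, 1] for display
    return cam

rng = np.random.default_rng(3)
fmap = rng.random((35, 35, 288))                   # stand-in activation, e.g. 35x35x288
grads = rng.standard_normal((35, 35, 288))         # stand-in class-score gradients
heatmap = grad_cam(fmap, grads)
print(heatmap.shape, heatmap.min(), heatmap.max())
```

The normalized map is then upsampled to the input resolution and overlaid as the heatmap shown in the figure.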
Figure 6
Application of attention-map-generating algorithms to images misclassified by the CNN. From left to right: (1) images randomly selected from the dataset that were misclassified by the CNN, with the correct label displayed above each image; (2) the top three inferences by the CNN; (3) Grad-CAM-based visualization targeting the Mixed0 layer; (4) guided-backpropagation-based visualization.
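Guided backpropagation, used in the rightmost column, modifies the ReLU backward pass: a gradient is propagated only where the forward-pass input was positive and the incoming gradient itself is positive. The rule for a single ReLU layer, sketched in NumPy:

```python
import numpy as np

def guided_relu_backward(forward_input, incoming_grad):
    """Pass gradient only where both the ReLU input and the gradient are positive."""
    return incoming_grad * (forward_input > 0) * (incoming_grad > 0)

x = np.array([[-1.0, 2.0], [3.0, -4.0]])      # ReLU inputs seen on the forward pass
g = np.array([[5.0, -6.0], [7.0, 8.0]])       # gradients arriving from the layer above
print(guided_relu_backward(x, g))
```

Applying this rule at every ReLU during backpropagation suppresses negative influences, which is why guided backpropagation produces the sharp, edge-like saliency images seen here.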
Figure 7
Effect of feature-extraction layer shaving. (a) Accuracy and loss on the test dataset for CNNs in which the layers posterior to the indicated layer were removed. Transfer learning was performed with newly prepared global average pooling and output layers. Since the Mixed5-truncated CNN showed classification performance equivalent to the original model, further analysis was not performed. (b) Number of network parameters required to run each CNN.
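The roughly 75% parameter reduction reported for the Mixed5 truncation can be sanity-checked by comparing cumulative parameter counts before and after shaving. The per-block counts below are illustrative stand-ins, not the paper's exact figures; InceptionV3's feature-extraction parameters are concentrated in the later Mixed modules:

```python
# Hypothetical cumulative parameter counts (in millions) up to each block.
# Illustrative only; actual InceptionV3 counts differ per implementation.
params_m = {
    "Conv5":   0.3,
    "Mixed2":  1.3,
    "Mixed5":  5.5,
    "Mixed7":  9.0,
    "Mixed10": 21.8,   # full feature extractor
}

full = params_m["Mixed10"]
for block, p in params_m.items():
    saved = 100 * (1 - p / full)
    print(f"truncate after {block}: {p:5.1f}M params ({saved:.0f}% saved)")
```

Under these stand-in numbers, cutting the network after Mixed5 discards about three quarters of the feature-extraction parameters, consistent with the reduction the figure reports.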
