Explainability of deep neural networks for MRI analysis of brain tumors

Int J Comput Assist Radiol Surg. 2022 Sep;17(9):1673-1683. doi: 10.1007/s11548-022-02619-x. Epub 2022 Apr 23.

Abstract

Purpose: Artificial intelligence (AI), in particular deep neural networks, has achieved remarkable results for medical image analysis in several applications. Yet the lack of explainability of deep neural models is considered the principal barrier to applying these methods in clinical practice.

Methods: In this study, we propose NeuroXAI, an explainable AI framework for deep learning networks, intended to increase the trust of medical experts. NeuroXAI implements seven state-of-the-art explanation methods that produce visualization maps, helping to make deep learning models transparent.
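To illustrate the kind of explanation method NeuroXAI implements, the sketch below shows a minimal Grad-CAM heatmap in TensorFlow/Keras. This is a generic illustration, not NeuroXAI's actual API; the model, layer name, and class index are assumptions supplied by the caller.

```python
# Minimal Grad-CAM sketch (illustrative only; not NeuroXAI's API).
# Assumes a Keras classification model whose output is (batch, classes).
import numpy as np
import tensorflow as tf

def grad_cam(model, image, conv_layer_name, class_index):
    """Return a [0, 1] heatmap of regions supporting `class_index`."""
    # Model mapping the input to the chosen conv layer's feature maps
    # and to the final class predictions.
    grad_model = tf.keras.Model(
        inputs=model.inputs,
        outputs=[model.get_layer(conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[None, ...])  # add batch dim
        score = preds[:, class_index]
    grads = tape.gradient(score, conv_out)          # d(score)/d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))    # global-average-pool grads
    cam = tf.reduce_sum(conv_out * weights[:, None, None, :], axis=-1)
    cam = tf.nn.relu(cam)[0]                        # keep positive evidence only
    cam /= tf.reduce_max(cam) + 1e-8                # normalize to [0, 1]
    return cam.numpy()
```

The resulting heatmap can be upsampled to the input resolution and overlaid on the MR slice to produce a visual attention map of the kind compared in the Results.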

Results: NeuroXAI has been applied to two of the most widely investigated problems in brain imaging analysis, namely image classification and segmentation of magnetic resonance (MR) images. Visual attention maps from multiple XAI methods were generated and compared for both applications. A further experiment demonstrated that NeuroXAI can also visualize the information flow through the internal layers of a segmentation CNN.
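One way to inspect information flow through internal layers, sketched below under the assumption of a Keras segmentation model, is to read out intermediate feature maps by name; the layer names in the usage comment are hypothetical placeholders.

```python
# Sketch of internal-layer inspection for a segmentation CNN
# (assumptions: a Keras functional model; layer names are placeholders).
import tensorflow as tf

def layer_activations(model, image, layer_names):
    """Return {layer_name: feature maps} for a single input image."""
    probe = tf.keras.Model(
        inputs=model.inputs,
        outputs=[model.get_layer(name).output for name in layer_names],
    )
    maps = probe(image[None, ...])  # add a batch dimension
    return dict(zip(layer_names, [m.numpy()[0] for m in maps]))

# Example: collapse each layer's channels into one 2-D map for display.
# acts = layer_activations(unet, mr_slice, ["encoder_block2", "decoder_block2"])
# flow_maps = {name: a.mean(axis=-1) for name, a in acts.items()}
```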

Conclusion: Due to its open architecture, ease of implementation, and scalability to new XAI methods, NeuroXAI could be utilized to assist radiologists and medical professionals in the detection and diagnosis of brain tumors in the routine clinical care of cancer patients. The code of NeuroXAI is publicly available at https://github.com/razeineldin/NeuroXAI.

Keywords: Brain glioma; Computer-aided diagnosis; Convolutional neural networks; Explainable AI.

MeSH terms

  • Artificial Intelligence*
  • Brain Neoplasms* / diagnostic imaging
  • Brain Neoplasms* / pathology
  • Humans
  • Image Processing, Computer-Assisted / methods
  • Magnetic Resonance Imaging / methods
  • Neural Networks, Computer