An Explainable and Robust Deep Learning Approach for Automated Electroencephalography-based Schizophrenia Diagnosis

bioRxiv [Preprint]. 2023 Oct 30:2023.05.27.542592. doi: 10.1101/2023.05.27.542592.

Abstract

Schizophrenia (SZ) is a neuropsychiatric disorder that affects millions of people globally. Current diagnosis of SZ is symptom-based, which is challenging given the variability of symptoms across patients. To this end, many recent studies have developed deep learning methods for automated SZ diagnosis, especially using raw electroencephalography (EEG), which provides high temporal precision. For such methods to be deployed in practice, they must be both explainable and robust. Explainable models are essential for identifying biomarkers of SZ, and robust models are critical for learning generalizable patterns, especially amidst changes in the deployment environment. One common example is channel loss during EEG recording, which can be detrimental to classifier performance. In this study, we developed a novel channel dropout (CD) approach to make explainable deep learning models trained on EEG data for SZ diagnosis more robust to channel loss. We developed a baseline convolutional neural network (CNN) architecture and implemented our approach as a CD layer added to the baseline (CNN-CD). We then applied two explainability approaches to both models to gain insight into the learned spatial and spectral features, and we show that applying CD decreases model sensitivity to channel loss. The CNN and CNN-CD achieved accuracies of 81.9% and 80.9% on the test data, respectively. Furthermore, both models heavily prioritized the parietal electrodes and the α-band, which is consistent with the existing literature. We hope that this study motivates the further development of explainable and robust models and helps bridge the transition from research to application in a clinical decision support role.
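
The abstract does not specify the exact mechanics of the CD layer; below is a minimal sketch of one plausible implementation, assuming CD zeroes out entire EEG channels at random during training to simulate channel loss. The class name ChannelDropout, the drop_prob parameter, the 19-channel input, and the toy CNN that follows are all illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class ChannelDropout(nn.Module):
    """Randomly zeroes whole EEG channels during training to simulate
    channel loss. Illustrative sketch; not the authors' exact layer."""

    def __init__(self, drop_prob: float = 0.1):
        super().__init__()
        self.drop_prob = drop_prob

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (batch, channels, time): raw EEG segments
        if not self.training or self.drop_prob == 0.0:
            return x  # identity at inference time
        # Sample an independent keep/drop decision per channel per example
        keep = (torch.rand(x.shape[0], x.shape[1], 1, device=x.device)
                >= self.drop_prob).to(x.dtype)
        return x * keep

# Hypothetical usage: prepend the CD layer to a baseline 1-D CNN
# (19 channels assumes a standard 10-20 electrode montage)
model = nn.Sequential(
    ChannelDropout(drop_prob=0.1),
    nn.Conv1d(in_channels=19, out_channels=32, kernel_size=7, padding=3),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(32, 2),  # SZ vs. control logits
)
```

Note that, unlike standard dropout, this sketch does not rescale the surviving channels by 1/(1 - p): a dropped electrode at deployment time simply produces a zeroed signal, so leaving the remaining channels unscaled more closely matches the channel-loss condition the layer is meant to simulate.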

Keywords: deep learning; explainable AI; model robustness; schizophrenia.

Publication types

  • Preprint