IEEE Trans Neural Syst Rehabil Eng. 2022;30:2283-2291.
doi: 10.1109/TNSRE.2022.3198041. Epub 2022 Aug 19.

On the Deep Learning Models for EEG-Based Brain-Computer Interface Using Motor Imagery

Hao Zhu et al. IEEE Trans Neural Syst Rehabil Eng. 2022.

Abstract

Motor imagery (MI)-based brain-computer interface (BCI) is an important BCI paradigm that requires powerful classifiers. The recent development of deep learning technology has prompted considerable interest in using it for classification and has resulted in multiple models. Finding the best-performing models among them would be beneficial for designing better BCI systems and classifiers going forward. However, it is difficult to compare the performance of the various models directly from the original publications, since the datasets used to test them differ from each other, are too small, or are not publicly available. In this work, we selected five recently proposed deep learning models for MI-EEG classification: EEGNet, Shallow & Deep ConvNet, MB3D, and ParaAtt, and tested them on two large, publicly available databases with 42 and 62 human subjects. Our results show that the models performed similarly on one dataset, while EEGNet performed best on the second with a relatively small training cost, using the parameters that we evaluated.

Figures

Fig. 1.
Trial structure of the two datasets. A trial starts with a relax stage, shown as a blank screen. A rectangular target then appears on one side of the screen, indicating to the subject the direction in which to perform motor imagery. At the feedback stage, a circular cursor appears at the center of the screen and moves toward either side based on the subject's motor imagery. After the cursor reaches or misses the target, or the time limit is exceeded, the cursor is frozen during the post-feedback stage. The length of each stage is summarized in the table.
Fig. 2.
Box plots of classification accuracies of the deep learning models and of online performance. Lower and upper box boundaries denote the 25th and 75th percentiles, respectively. Lines inside the boxes denote the medians. The whiskers extend to points that lie within 1.5 interquartile ranges (IQRs) of the lower and upper quartiles; observations that fall outside this range are displayed individually. Red dashed lines denote the chance level. Stars denote statistically significant differences between model pairs (P values from the Wilcoxon signed-rank test; **: P < 0.01, ***: P < 0.001). All P values were adjusted for the false discovery rate.
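As a rough sketch of the statistical procedure described in the caption (pairwise Wilcoxon signed-rank tests across subjects, followed by false-discovery-rate adjustment), the Python snippet below uses SciPy and statsmodels; the per-subject accuracy arrays, subject count, and model names are hypothetical placeholders, not the authors' data or code.

from itertools import combinations
import numpy as np
from scipy.stats import wilcoxon
from statsmodels.stats.multitest import multipletests

# Hypothetical per-subject accuracies for each model (42 subjects, placeholder values).
rng = np.random.default_rng(0)
accs = {name: rng.uniform(0.5, 0.9, size=42)
        for name in ["EEGNet", "ShallowConvNet", "DeepConvNet", "MB3D", "ParaAtt"]}

# Paired, non-parametric comparison of every model pair across subjects.
pairs = list(combinations(accs, 2))
raw_p = [wilcoxon(accs[a], accs[b]).pvalue for a, b in pairs]

# Benjamini-Hochberg false-discovery-rate adjustment across all pairwise tests.
reject, adj_p, _, _ = multipletests(raw_p, alpha=0.05, method="fdr_bh")
for (a, b), p, sig in zip(pairs, adj_p, reject):
    print(f"{a} vs {b}: adjusted p = {p:.4f}{' *' if sig else ''}")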
Fig. 3.
Distribution of model accuracies across subjects. Each black point represents the accuracy of a single subject. The violin-plot outlines illustrate the density of accuracies, i.e., the width of the colored area represents the proportion of subjects achieving accuracies at that level. Red dashed lines denote the chance level.
Fig. 4.
Comparison of models trained on preprocessed data (using exponential moving standardization) versus the original data. Red dashed lines mark the online accuracies. There is almost no improvement on the MBT-42 dataset, but a significant improvement for all models on the Med-62 dataset (P values from the Wilcoxon signed-rank test; ***: P < 0.001).
Fig. 5.
Box plots of cross-subject classification accuracies of the deep learning models over all subjects on the Med-62 dataset. Lower and upper box boundaries denote the 25th and 75th percentiles, respectively. Lines inside the boxes denote the medians. The whiskers extend to points that lie within 1.5 interquartile ranges (IQRs) of the lower and upper quartiles; observations that fall outside this range are displayed individually. Red dashed lines denote the chance level. Stars denote statistically significant differences between model pairs (P values from the Wilcoxon signed-rank test; ***: P < 0.001). All P values were adjusted for the false discovery rate.
Fig. 6.
Example of different preprocessing methods applied to a data snippet. The scale of the highpass-filtered data (1 Hz cutoff frequency) is almost the same as that of the original data. The scales of the data after exponential moving standardization and after normal standardization are similar to each other and far smaller than that of the original data, which may benefit deep model training.
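For readers unfamiliar with exponential moving standardization, the sketch below shows one common formulation (a per-channel exponentially weighted running mean and variance, in the style popularized by the braindecode toolbox); the decay factor and epsilon are illustrative assumptions, not necessarily the settings used in this paper.

import numpy as np

def exponential_moving_standardize(data, factor_new=1e-3, eps=1e-4):
    # data: (n_channels, n_times) EEG array; factor_new and eps are assumed defaults.
    standardized = np.empty(data.shape, dtype=float)
    mean = data[:, 0].astype(float)   # running mean, initialized with the first sample
    var = np.ones(data.shape[0])      # running variance, initialized to one
    for t in range(data.shape[1]):
        x = data[:, t].astype(float)
        mean = factor_new * x + (1 - factor_new) * mean
        var = factor_new * (x - mean) ** 2 + (1 - factor_new) * var
        standardized[:, t] = (x - mean) / np.maximum(np.sqrt(var), eps)
    return standardized

# Example: a hypothetical 62-channel, 1000-sample snippet on a microvolt scale.
snippet = np.random.default_rng(0).normal(scale=50.0, size=(62, 1000))
standardized_snippet = exponential_moving_standardize(snippet)

The output stays near unit scale regardless of the raw amplitude, which is the property the caption points to as potentially helpful for deep model training.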
Fig. 7.
Visualization of zero-centered accuracy vectors on two datasets using t-SNE. No clear subject clusters can be found.
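As a sketch of the kind of embedding shown in Fig. 7, the snippet below projects zero-centered per-subject accuracy vectors to two dimensions with scikit-learn's t-SNE; the matrix shape and perplexity are assumptions for illustration, not the authors' settings.

import numpy as np
from sklearn.manifold import TSNE

# Hypothetical matrix: one row per subject, one accuracy per model/condition.
rng = np.random.default_rng(0)
acc_vectors = rng.uniform(0.5, 0.9, size=(62, 5))

# Zero-center each subject's vector so only the relative pattern across models remains.
centered = acc_vectors - acc_vectors.mean(axis=1, keepdims=True)

# Project to 2-D; perplexity must be smaller than the number of subjects.
embedding = TSNE(n_components=2, perplexity=15, random_state=0).fit_transform(centered)
print(embedding.shape)  # (62, 2)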
