Evaluating EEG-to-text models through noise-based performance analysis

Sci Rep. 2025 Dec 1;16(1):350. doi: 10.1038/s41598-025-29587-x.

Abstract

Brain-computer interfaces (BCIs) have the potential to revolutionize communication for individuals with severe disabilities. EEG-to-text models, which translate brain signals into written language, offer a promising avenue for restoring communication abilities. Recent advancements in machine learning have improved the accuracy and speed of these models, but their true capabilities remain unclear due to limitations in evaluation methodologies. This study critically examines the performance of EEG-to-text models, focusing on their ability to learn from EEG signals rather than simply memorizing patterns. We introduce a novel methodology that compares model performance on EEG data with that on noise inputs. Our findings reveal that many EEG-to-text models perform similarly or even better on noise, suggesting that they may be memorizing patterns rather than truly learning from EEG signals. These results highlight the need for more rigorous benchmarking and evaluation practices in the field of EEG-to-text translation. By addressing the limitations of current methodologies, we can develop more reliable and trustworthy systems that truly harness the potential of brain-computer interfaces for communication.
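The noise-comparison idea described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the helper names (`noise_control_inputs`, `compare_on_noise`) and the moment-matched Gaussian noise are assumptions introduced here, and `score_fn` stands in for whatever decoding metric (e.g. BLEU or ROUGE) a given study uses.

```python
import numpy as np

def noise_control_inputs(eeg_batch, seed=None):
    """Hypothetical helper: replace EEG features with Gaussian noise
    matched to the batch in shape, mean, and standard deviation."""
    rng = np.random.default_rng(seed)
    return rng.normal(eeg_batch.mean(), eeg_batch.std(),
                      size=eeg_batch.shape)

def compare_on_noise(score_fn, eeg_batch, references, seed=0):
    """Score the same model on real EEG and on matched noise inputs.
    A model that genuinely decodes EEG should score markedly higher on
    EEG than on noise; a near-zero (or negative) gap suggests the model
    is reproducing memorized text patterns instead."""
    eeg_score = score_fn(eeg_batch, references)
    noise_score = score_fn(noise_control_inputs(eeg_batch, seed), references)
    return {"eeg": eeg_score, "noise": noise_score,
            "gap": eeg_score - noise_score}
```

In practice `score_fn` would run the trained EEG-to-text model on the inputs and compute a text-similarity metric against the reference sentences; the "gap" between the two conditions is the quantity of interest.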

MeSH terms

  • Brain / physiology
  • Brain-Computer Interfaces*
  • Electroencephalography* / methods
  • Humans
  • Machine Learning
  • Signal Processing, Computer-Assisted
  • Signal-To-Noise Ratio