Low-Frequency Cortical Entrainment to Speech Reflects Phoneme-Level Processing

Curr Biol. 2015 Oct 5;25(19):2457-65. doi: 10.1016/j.cub.2015.08.030. Epub 2015 Sep 24.


Abstract

The human ability to understand speech is underpinned by a hierarchical auditory system whose successive stages process increasingly complex attributes of the acoustic input. It has been suggested that to produce categorical speech perception, this system must elicit consistent neural responses to speech tokens (e.g., phonemes) despite variations in their acoustics. Here, using electroencephalography (EEG), we provide evidence for this categorical phoneme-level speech processing by showing that the relationship between continuous speech and neural activity is best described when that speech is represented using both low-level spectrotemporal information and categorical labeling of phonetic features. Furthermore, the mapping between phonemes and EEG becomes more discriminative for phonetic features at longer latencies, in line with what one might expect from a hierarchical system. Importantly, these effects are not seen for time-reversed speech. These findings may form the basis for future research on natural language processing in specific cohorts of interest and for broader insights into how brains transform acoustic input into meaning.
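The model comparison described above — asking whether adding categorical phonetic-feature labels to a spectrotemporal representation improves the prediction of neural activity — is in the spirit of a forward encoding (temporal response function) analysis. The sketch below is a minimal, illustrative version using synthetic data, not the authors' actual pipeline: feature dimensions, lags, and the regularization value are all assumptions, and a time-lagged ridge regression stands in for the full method.

```python
import numpy as np

def lagged_design(stim, lags):
    """Build a time-lagged design matrix from stimulus features.
    stim: (T, F) array of features over time; lags: causal sample lags."""
    T, F = stim.shape
    X = np.zeros((T, F * len(lags)))
    for i, lag in enumerate(lags):
        shifted = np.roll(stim, lag, axis=0)
        if lag > 0:
            shifted[:lag] = 0  # zero-pad so lagged samples stay causal
        X[:, i * F:(i + 1) * F] = shifted
    return X

def fit_ridge(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X'X + lam*I)^-1 X'y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

rng = np.random.default_rng(0)
T = 2000
spec = rng.standard_normal((T, 4))                # stand-in spectrogram bands
phon = (rng.random((T, 3)) > 0.8).astype(float)   # stand-in binary phonetic features
lags = range(8)

# Synthetic "EEG" driven by both feature sets plus noise (assumption for the demo)
eeg = (spec @ rng.standard_normal(4)
       + phon @ rng.standard_normal(3)
       + 0.5 * rng.standard_normal(T))

# Train on the first half, evaluate prediction accuracy (r) on the second half
half = T // 2
results = {}
for name, feats in [("S", spec), ("FS", np.hstack([spec, phon]))]:
    X = lagged_design(feats, lags)
    w = fit_ridge(X[:half], eeg[:half])
    pred = X[half:] @ w
    results[name] = np.corrcoef(pred, eeg[half:])[0, 1]
    print(f"{name}: r = {results[name]:.3f}")
```

Because the synthetic signal genuinely depends on both feature sets, the combined representation ("FS") predicts held-out activity better than the spectrogram alone ("S") — the same logic by which the study argues for phoneme-level processing in real EEG.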

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Auditory Cortex / physiology*
  • Electroencephalography / methods*
  • Humans
  • Male
  • Phonetics
  • Sound
  • Speech / physiology*