Mechanisms of enhancing visual-speech recognition by prior auditory information

Neuroimage. 2013 Jan 15;65:109-18. doi: 10.1016/j.neuroimage.2012.09.047. Epub 2012 Sep 27.

Abstract

Speech recognition from visual-only faces is difficult, but can be improved by prior information about what is said. Here, we investigated how the human brain uses prior information from auditory speech to improve visual-speech recognition. In a functional magnetic resonance imaging study, participants performed a visual-speech recognition task, indicating whether the word spoken in visual-only videos matched the preceding auditory-only speech, and a control task (face-identity recognition) containing exactly the same stimuli. We localized a visual-speech processing network by contrasting activity during visual-speech recognition with the control task. Within this network, the left posterior superior temporal sulcus (STS) showed increased activity and interacted with auditory-speech areas if prior information from auditory speech did not match the visual speech. This mismatch-related activity and the functional connectivity to auditory-speech areas were specific for speech, i.e., they were not present in the control task. The mismatch-related activity correlated positively with performance, indicating that posterior STS was behaviorally relevant for visual-speech recognition. In line with predictive coding frameworks, these findings suggest that prediction error signals are produced if visually presented speech does not match the prediction from preceding auditory speech, and that this mechanism plays a role in optimizing visual-speech recognition by prior information.
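
To make the proposed mechanism concrete, the following is a minimal, hypothetical Python sketch of the prediction-error idea the abstract invokes: a preceding auditory word sets up a prediction, and a mismatching visual word yields a large error signal. The word set, feature vectors, and distance measure are illustrative assumptions, not the authors' analysis or a fitted computational model; the study itself reports fMRI activity, not code.

    import numpy as np

    def prediction_error(predicted: np.ndarray, observed: np.ndarray) -> float:
        """Magnitude of the mismatch between predicted and observed features."""
        return float(np.linalg.norm(observed - predicted))

    # Toy feature vectors standing in for visual-speech representations of
    # spoken words (purely illustrative values).
    features = {
        "ball": np.array([1.0, 0.2, 0.1]),
        "moon": np.array([0.1, 0.8, 0.9]),
    }

    prior_auditory_word = "ball"            # what the participant just heard
    prediction = features[prior_auditory_word]

    for visual_word in ("ball", "moon"):
        err = prediction_error(prediction, features[visual_word])
        label = "match" if visual_word == prior_auditory_word else "mismatch"
        print(f"{label}: visual '{visual_word}' -> prediction error {err:.2f}")

    # Expected output: near-zero error on the matching trial and a larger
    # error on the mismatch trial -- the pattern of mismatch-related activity
    # the abstract attributes to the left posterior STS.

In this toy setup, the error is zero when the visual word matches the auditory prediction and grows with the mismatch, mirroring (under these assumptions) the increased posterior STS activity reported for non-matching trials.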

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Acoustic Stimulation
  • Adult
  • Brain / physiology*
  • Brain Mapping*
  • Female
  • Humans
  • Magnetic Resonance Imaging
  • Male
  • Photic Stimulation
  • Recognition, Psychology / physiology*
  • Speech Perception / physiology*
  • Visual Perception / physiology*
  • Young Adult