Visual phonetic processing localized using speech and nonspeech face gestures in video and point-light displays

Hum Brain Mapp. 2011 Oct;32(10):1660-76. doi: 10.1002/hbm.21139. Epub 2010 Sep 17.

Abstract

The talking face affords multiple types of information. To isolate cortical sites responsible for integrating linguistically relevant visual speech cues, speech and nonspeech face gestures were presented in natural video and point-light displays during fMRI scanning at 3.0 T. Participants with normal hearing viewed the stimuli and also viewed localizers for the fusiform face area (FFA), the lateral occipital complex (LOC), and the visual motion (V5/MT) regions of interest (ROIs). The FFA, the LOC, and V5/MT were significantly less activated for speech than for nonspeech and control stimuli. Group analyses showed distinct activation of the posterior superior temporal sulcus and the adjacent middle temporal gyrus in response to speech, independent of display medium (video or point-light). Individual analyses showed that speech and nonspeech stimuli were associated with adjacent but distinct activations, with the speech activations more anterior. We suggest that this speech activation area is the temporal visual speech area (TVSA), and that it can be localized with the combination of stimuli used in this study.

Publication types

  • Research Support, N.I.H., Extramural
  • Research Support, U.S. Gov't, Non-P.H.S.

MeSH terms

  • Acoustic Stimulation
  • Adult
  • Analysis of Variance
  • Brain / blood supply
  • Brain / physiology*
  • Brain Mapping*
  • Face*
  • Female
  • Functional Laterality
  • Gestures*
  • Humans
  • Image Processing, Computer-Assisted
  • Magnetic Resonance Imaging / methods
  • Male
  • Motion Perception
  • Oxygen / blood
  • Pattern Recognition, Visual / physiology
  • Phonetics*
  • Photic Stimulation / methods
  • Reaction Time / physiology
  • Speech / physiology*
  • Young Adult

Substances

  • Oxygen