Interaction of face and voice areas during speaker recognition

J Cogn Neurosci. 2005 Mar;17(3):367-76. doi: 10.1162/0898929053279577.

Abstract

Face and voice processing contribute to person recognition, but it remains unclear how the segregated, specialized cortical modules interact. Using functional neuroimaging, we observed cross-modal responses to voices of familiar persons in the fusiform face area, as localized separately using visual stimuli. Voices of familiar persons activated the face area only during a task that emphasized speaker recognition over recognition of verbal content. Analyses of functional connectivity between cortical territories show that the fusiform face region is coupled with the superior temporal sulcus voice region during familiar speaker recognition, but not with any of the other cortical regions normally active in person recognition or in other tasks involving voices. These findings are relevant for models of the cognitive processes and neural circuitry involved in speaker recognition. They reveal that, in the context of speaker recognition, the assessment of person familiarity does not necessarily engage supramodal cortical substrates but can result from the direct sharing of information between auditory voice and visual face regions.
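
The abstract does not specify how functional connectivity between the face and voice regions was computed; a common approach is to correlate the mean BOLD time series extracted from two independently localized regions of interest. The sketch below illustrates that generic idea on simulated data; the region labels, variable names, and signal model are illustrative assumptions, not details taken from the study.

```python
# Minimal sketch of seed-based functional connectivity between two ROIs,
# assuming mean BOLD time series have already been extracted from an FFA
# localizer and an STS voice localizer. All data here are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_scans = 200  # hypothetical number of fMRI volumes

# Simulated ROI time series with a shared component (stand-ins for the
# fusiform face area and the superior temporal sulcus voice region).
shared = rng.standard_normal(n_scans)
ffa_ts = shared + 0.5 * rng.standard_normal(n_scans)
sts_ts = shared + 0.5 * rng.standard_normal(n_scans)

# Functional connectivity estimated as the Pearson correlation of the
# two ROI time series (condition-specific coupling would restrict this
# to scans acquired during the speaker-recognition task).
r, p = stats.pearsonr(ffa_ts, sts_ts)
print(f"FFA-STS coupling: r = {r:.2f}, p = {p:.3g}")
```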

Publication types

  • Comparative Study

MeSH terms

  • Acoustic Stimulation / methods
  • Adult
  • Analysis of Variance
  • Auditory Perception / physiology
  • Brain Mapping
  • Cerebral Cortex / anatomy & histology
  • Cerebral Cortex / blood supply
  • Cerebral Cortex / physiology*
  • Face*
  • Female
  • Functional Laterality
  • Humans
  • Image Processing, Computer-Assisted
  • Magnetic Resonance Imaging / methods
  • Male
  • Neural Networks, Computer
  • Oxygen / blood
  • Reaction Time / physiology
  • Recognition, Psychology / physiology*
  • Verbal Behavior / physiology*
  • Visual Perception / physiology
  • Voice / physiology*

Substances

  • Oxygen