Crossmodal binding of fear in voice and face

Proc Natl Acad Sci U S A. 2001 Aug 14;98(17):10006-10. doi: 10.1073/pnas.171288598. Epub 2001 Aug 7.

Abstract

In social environments, multiple sensory channels are simultaneously engaged in the service of communication. In this experiment, we were concerned with defining the neuronal mechanisms for a perceptual bias in processing simultaneously presented emotional voices and faces. Specifically, we were interested in how bimodal presentation of a fearful voice facilitates recognition of a fearful facial expression. Using event-related functional MRI in a design that crossed sensory modality (visual or auditory) with emotional expression (fearful or happy), we show that perceptual facilitation during face fear processing is expressed through modulation of neuronal responses in the amygdala and the fusiform cortex. These data suggest that the amygdala is important for emotional crossmodal sensory convergence, with the associated perceptual bias during fear processing being mediated by task-related modulation of face-processing regions of fusiform cortex.

Publication types

  • Comparative Study
  • Research Support, Non-U.S. Gov't

MeSH terms

  • Adult
  • Amygdala / blood supply
  • Amygdala / physiology*
  • Auditory Perception / physiology*
  • Brain Mapping*
  • Cerebrovascular Circulation
  • Cues*
  • Dominance, Cerebral
  • Facial Expression*
  • Fear*
  • Female
  • Happiness
  • Humans
  • Magnetic Resonance Imaging
  • Male
  • Netherlands
  • Parietal Lobe / blood supply
  • Parietal Lobe / physiology*
  • Pattern Recognition, Visual / physiology*
  • Voice*