Crossmodal integration of emotional information from face and voice in the infant brain

Dev Sci. 2006 May;9(3):309-15. doi: 10.1111/j.1467-7687.2006.00494.x.

Abstract

We examined 7-month-old infants' processing of emotionally congruent and incongruent face-voice pairs using event-related potential (ERP) measures. Infants watched facial expressions (happy or angry) and, after a delay of 400 ms, heard a word spoken with a prosody that was either emotionally congruent or incongruent with the face being presented. The ERP data revealed that the amplitude of a negative component and a subsequent positive component in infants' ERPs varied as a function of crossmodal emotional congruity. An emotionally incongruent prosody elicited a larger negative component in infants' ERPs than did an emotionally congruent prosody. Conversely, the amplitude of infants' positive component was larger to emotionally congruent than to incongruent prosody. Previous work has shown that an attenuation of the negative component and an enhancement of the later positive component in infants' ERPs reflect the recognition of an item. Thus, the current findings suggest that 7-month-olds integrate emotional information across modalities and recognize common affect in the face and voice.
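To make the paradigm concrete, the sketch below lays out the trial structure the abstract describes: a happy or angry face appears first, and 400 ms later a word is spoken with prosody that either matches the face (congruent) or does not (incongruent). The trial counts, randomization seed, and the assumption that the face remains visible while the word plays are illustrative choices, not details taken from the paper; Python is used only for convenience.

```python
import itertools
import random

# Emotional categories used in the paradigm, per the abstract:
# faces and vocal prosodies are happy or angry, and a pairing is
# congruent when the two emotions match, incongruent otherwise.
EMOTIONS = ("happy", "angry")

def make_trials(n_per_cell=10, seed=0):
    """Build a randomized trial list crossing face and voice emotion.
    n_per_cell and seed are illustrative assumptions, not study values."""
    rng = random.Random(seed)
    trials = []
    for face, voice in itertools.product(EMOTIONS, EMOTIONS):
        condition = "congruent" if face == voice else "incongruent"
        trials.extend({"face": face, "voice": voice,
                       "condition": condition} for _ in range(n_per_cell))
    rng.shuffle(trials)
    return trials

def trial_events(trial):
    """Within-trial event schedule, per the abstract: face onset at
    0 ms, word onset 400 ms later (face assumed to stay on screen)."""
    return [(0, f"show {trial['face']} face"),
            (400, f"play word with {trial['voice']} prosody "
                  f"({trial['condition']})")]

# Print one trial per cell to show the four face-voice pairings.
for trial in make_trials(n_per_cell=1):
    for t_ms, event in trial_events(trial):
        print(f"{t_ms:4d} ms  {event}")
```

In the study's logic, ERPs time-locked to the word onset would then be averaged separately for the congruent and incongruent conditions to compare the negative and positive component amplitudes reported above.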

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Brain / physiology*
  • Emotions*
  • Evoked Potentials / physiology
  • Facial Expression*
  • Female
  • Humans
  • Infant
  • Male
  • Photic Stimulation
  • Voice*