Voice and emotion processing in the human neonatal brain

J Cogn Neurosci. 2012 Jun;24(6):1411-9. doi: 10.1162/jocn_a_00214. Epub 2012 Feb 23.

Abstract

Although the voice-sensitive neural system emerges very early in development, it has yet to be demonstrated whether the neonatal brain is sensitive to the human voice. We measured the EEG mismatch response (MMR) elicited by emotionally spoken syllables ("dada") and by correspondingly synthesized nonvocal sounds with matched fundamental frequency contours in 98 full-term newborns aged 1-5 days. In Experiment 1, happy syllables, relative to nonvocal sounds, elicited an MMR lateralized to the right hemisphere. In Experiment 2, fearful syllables elicited larger MMR amplitudes than happy or neutral syllables, with no sex differences. In Experiment 3, angry versus happy syllables elicited an MMR, whereas their corresponding nonvocal sounds did not. Here, we show that affective discrimination is selectively driven by voice processing per se rather than by low-level acoustic features and that cerebral specialization for human voice and emotion processing emerges over the right hemisphere during the first days of life.
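The abstract quantifies discrimination via the mismatch response, which is conventionally computed as a deviant-minus-standard difference wave averaged over a post-stimulus window. The sketch below is illustrative only and not the authors' analysis pipeline; the array shapes, sampling rate, and 100-300 ms window are assumptions.

```python
import numpy as np

def mmr_amplitude(standard_epochs, deviant_epochs, sfreq=250.0,
                  tmin=-0.1, window=(0.10, 0.30)):
    """standard_epochs, deviant_epochs: (n_trials, n_samples) arrays for one EEG channel."""
    # Average across trials to obtain event-related potentials (ERPs)
    erp_standard = standard_epochs.mean(axis=0)
    erp_deviant = deviant_epochs.mean(axis=0)

    # The mismatch response is the deviant-minus-standard difference wave
    difference_wave = erp_deviant - erp_standard

    # Convert the analysis window (in seconds) to sample indices
    start = int((window[0] - tmin) * sfreq)
    stop = int((window[1] - tmin) * sfreq)

    # Mean amplitude of the difference wave within the window
    return difference_wave[start:stop].mean()

# Example with random data standing in for single-channel epochs
rng = np.random.default_rng(0)
standard = rng.normal(size=(200, 150))  # 200 standard trials, 150 samples each
deviant = rng.normal(size=(50, 150))    # 50 deviant trials
print(mmr_amplitude(standard, deviant))
```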

Publication types

  • Randomized Controlled Trial
  • Research Support, Non-U.S. Gov't

MeSH terms

  • Acoustic Stimulation / methods*
  • Adult
  • Brain / growth & development*
  • Discrimination Learning / physiology
  • Emotions / physiology*
  • Female
  • Humans
  • Infant, Newborn
  • Male
  • Speech Perception / physiology*
  • Voice*