Uncovering mental representations of smiled speech using reverse correlation

J Acoust Soc Am. 2018 Jan;143(1):EL19. doi: 10.1121/1.5020989.

Abstract

Which spectral cues underlie the perceptual processing of smiles in speech? Here, the question was addressed using reverse correlation in the case of the isolated vowel [a]. Listeners were presented with hundreds of pairs of utterances with randomly manipulated spectral characteristics and were asked to indicate, in each pair, which one sounded the most smiling. The analyses revealed that they relied on robust spectral representations that specifically encoded the vowel's formants. These findings demonstrate the causal role played by formants in the perception of smiles. Overall, this paper suggests a general method to estimate the spectral bases of high-level (e.g., emotional/social/paralinguistic) speech representations.
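The reverse-correlation logic described in the abstract can be illustrated with a minimal classification-image computation: a first-order spectral kernel obtained by averaging the random band gains of the stimuli chosen as "more smiling" and subtracting those of the non-chosen stimuli. The sketch below is not the authors' analysis pipeline; it uses simulated trials and hypothetical parameters (n_trials, n_bands, true_kernel) purely to show the shape of the computation.

```python
import numpy as np

# Minimal reverse-correlation (classification image) sketch, with simulated data.
rng = np.random.default_rng(0)

n_trials = 500   # hypothetical number of trial pairs
n_bands = 25     # hypothetical number of frequency bands in the random spectral filter

# gains[t, i, b]: random gain (in dB) applied to band b of interval i (0 or 1) on trial t
gains = rng.normal(0.0, 5.0, size=(n_trials, 2, n_bands))

# Simulate a listener whose internal "smile" template weights a mid/high-frequency region
true_kernel = np.zeros(n_bands)
true_kernel[12:18] = 1.0                              # hypothetical sensitive bands
scores = gains @ true_kernel                          # internal response to each interval
choices = (scores[:, 1] > scores[:, 0]).astype(int)   # interval judged "more smiling"

# Classification image: mean gains of chosen intervals minus mean gains of rejected ones
chosen = gains[np.arange(n_trials), choices]
rejected = gains[np.arange(n_trials), 1 - choices]
kernel = chosen.mean(axis=0) - rejected.mean(axis=0)

print(np.round(kernel, 2))  # peaks in the bands the simulated listener relied on
```

With real data, the per-trial gains would come from the recorded filter parameters of each presented utterance pair, and the recovered kernel would be interpreted against the vowel's formant frequencies, as in the paper's conclusions.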

Publication types

  • Research Support, Non-U.S. Gov't
  • Video-Audio Media

MeSH terms

  • Acoustic Stimulation
  • Adolescent
  • Adult
  • Cues*
  • Female
  • Humans
  • Male
  • Smiling*
  • Sound Spectrography
  • Speech Acoustics*
  • Speech Perception*
  • Voice Quality*
  • Young Adult