The time-course of the cross-modal semantic modulation of visual picture processing by naturalistic sounds and spoken words

Multisens Res. 2013;26(4):371-86.

Abstract

The time-course of cross-modal semantic interactions between pictures and either naturalistic sounds or spoken words was compared. Participants performed a speeded picture categorization task while hearing a task-irrelevant auditory stimulus presented at various stimulus onset asynchronies (SOAs) with respect to the visual picture. Both naturalistic sounds and spoken words gave rise to cross-modal semantic congruency effects (i.e., facilitation by semantically congruent sounds and inhibition by semantically incongruent sounds, as compared to a baseline noise condition) when the onset of the sound led that of the picture by 240 ms or more. Both naturalistic sounds and spoken words also gave rise to inhibition, irrespective of their semantic congruency, when presented within 106 ms of the onset of the picture. The peak of this cross-modal inhibitory effect occurred earlier for spoken words than for naturalistic sounds. These results therefore demonstrate that the semantic priming of visual picture categorization by auditory stimuli occurs only when the onset of the sound precedes that of the visual stimulus. The different time-courses observed for naturalistic sounds and spoken words likely reflect differences in the processing pathways by which the relevant semantic representations are accessed.

Publication types

  • Comparative Study

MeSH terms

  • Acoustic Stimulation / methods
  • Adult
  • Female
  • Humans
  • Male
  • Pattern Recognition, Visual / physiology*
  • Photic Stimulation / methods
  • Reaction Time / physiology*
  • Semantics*
  • Speech Perception / physiology*
  • Young Adult