Both stimulus-specific and configurational features of multiple visual stimuli shape the spatial ventriloquism effect

Eur J Neurosci. 2024 Apr;59(7):1770-1788. doi: 10.1111/ejn.16251. Epub 2024 Jan 17.

Abstract

Studies on multisensory perception often focus on simplistic conditions in which a single stimulus is presented per modality. Yet, in everyday life, we usually encounter multiple signals per modality. To understand how multiple signals within and across the senses are combined, we extended the classical audio-visual spatial ventriloquism paradigm to combine two visual stimuli with one sound. The individual visual stimuli presented in the same trial differed in their relative timing and spatial offsets to the sound, allowing us to contrast their individual and combined influence on sound localization judgements. We find that the ventriloquism bias is not dominated by a single visual stimulus but rather is shaped by the collective multisensory evidence. In particular, the contribution of an individual visual stimulus to the ventriloquism bias depends not only on its own relative spatio-temporal alignment to the sound but also on the spatio-temporal alignment of the other visual stimulus. We propose that this pattern of multi-stimulus multisensory integration reflects the evolution of evidence for sensory causal relations during individual trials, highlighting the need to extend established models of multisensory causal inference to more naturalistic conditions. Our data also suggest that this pattern of multisensory interactions extends to the ventriloquism aftereffect, a bias in sound localization observed in unisensory judgements following a multisensory stimulus.

Keywords: audiovisual; cross-modal; multisensory; multisensory causal inference; temporal asynchrony.

MeSH terms

  • Acoustic Stimulation
  • Auditory Perception*
  • Humans
  • Photic Stimulation
  • Sound Localization*
  • Visual Perception