We propose that cross-sensory stimuli presenting a positive attributable source of an aversive sound can modulate negative reactions to that sound. In Experiment 1, participants rated original video sources (OVS) of eight aversive sounds (e.g., nails scratching a chalkboard) as more aversive than eight positive attributable video sources (PAVS) of those same sounds (e.g., someone playing a flute) when these videos were presented silently. In Experiment 2, new participants were presented with those eight aversive sounds in three blocks. In Blocks 1 and 3, the sounds were presented alone; in Block 2, four of the sounds were randomly presented concurrently with their corresponding OVS videos, and the other four with their corresponding PAVS videos. Participants rated each sound, presented with or without video, on three scales: discomfort, unpleasantness, and bodily sensations. We found that the concurrent presentation of videos robustly modulated participants' reactions to the sounds: compared to the sounds alone (Block 1), concurrent presentation of PAVS videos significantly reduced negative reactions to the sounds, and concurrent presentation of OVS videos significantly increased negative reactions, across all three scales. These effects, however, did not persist into Block 3, when the sounds were presented alone again. Our results provide novel evidence that negative reactions to aversive sounds can be modulated through cross-sensory temporal syncing with a positive attributable video source. Although this research was conducted with a neurotypical population, we argue that our findings have implications for the treatment of misophonia.
Keywords: Cross-modal attenuation; audition; emotion; multisensory integration; vision.