F1000Res. 2013 Feb 7;2:34. doi: 10.12688/f1000research.2-34.v2. eCollection 2013.

Verbal and novel multisensory associative learning in adults


Joanne M Fifer et al. F1000Res.

Abstract

To date, few studies have focused on the behavioural differences between the learning of multisensory auditory-visual and intra-modal associations. More specifically, the relative benefits of novel auditory-visual and verbal-visual associations for learning have not been directly compared. In Experiment 1, 20 adult volunteers completed three paired associate learning tasks: non-verbal novel auditory-visual (novel-AV), verbal-visual (verbal-AV; using pseudowords), and visual-visual (shape-VV). Participants were directed to make a motor response to matching novel and arbitrarily related stimulus pairs. Feedback was provided to facilitate trial and error learning. The results of Signal Detection Theory analyses suggested a multisensory enhancement of learning, with significantly higher discriminability measures (d-prime) in both the novel-AV and verbal-AV tasks than the shape-VV task. Motor reaction times were also significantly faster during the verbal-AV task than during the non-verbal learning tasks. Experiment 2 (n = 12) used a forced-choice discrimination paradigm to assess whether a difference in unisensory stimulus discriminability could account for the learning trends in Experiment 1. Participants were significantly slower at discriminating unisensory pseudowords than the novel sounds and visual shapes, which was notable given that these stimuli produced superior learning. Together the findings suggest that verbal information has an added enhancing effect on multisensory associative learning in adults.
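The discriminability measure (d-prime) reported in the abstract comes from Signal Detection Theory: it is the difference between the z-transformed hit rate and false-alarm rate. As a minimal sketch of how such a measure is computed, the function below assumes a yes/no matching paradigm with counts of hits, misses, false alarms, and correct rejections; the function name and the log-linear correction for extreme rates are illustrative choices, not details taken from the paper.

```python
from statistics import NormalDist

def d_prime(hits: int, misses: int, false_alarms: int, correct_rejections: int) -> float:
    """Signal-detection discriminability: z(hit rate) - z(false-alarm rate).

    A log-linear correction (add 0.5 to each cell, 1 to each denominator)
    keeps rates away from exactly 0 or 1, where the z-transform is infinite.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)
```

Under this convention, chance performance (hit rate equal to false-alarm rate) yields a d-prime of zero, and better discrimination of matching from non-matching pairs yields larger positive values.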


Conflict of interest statement

Competing interests: No competing interests have been declared.

Figures

Figure 1. Auditory and visual stimuli of associative learning tasks.
Auditory and visual stimuli for the novel sound-visual (novel-AV), the verbal-visual (verbal-AV), and the visual-visual (shape-VV) associative learning tasks.

Figure 2. Example trial of the associative learning task.
Temporal sequence of a single experimental paired associate learning trial.

Figure 3. Percentage accuracy for the associative learning tasks.
Moving average of mean percentage accuracy for trials on the novel-sound auditory-visual (novel-AV), the verbal-visual (verbal-AV), and the visual-visual with red shapes (shape-VV) learning tasks. Shaded areas depict SEM along the moving average.

Figure 4. Discriminability measures for associative learning tasks.
A. Overall mean discriminability (d-prime) (± SEM) for the novel-sound auditory-visual (novel-AV), the verbal-visual (verbal-AV), and the visual-visual with red shapes (shape-VV) learning tasks. B. Discriminability (d-prime) for the first 10 blocks of trials on the novel-AV, the verbal-AV, and the shape-VV learning tasks. Each block comprised 5 consecutive trials.

Figure 5. Reaction times for associative learning tasks.
Average motor reaction times (MRTs) (± SEM) for the learning phase (Trials 1–20) and the learnt phase (Trials 41–60) in the novel-sound auditory-visual (novel-AV), verbal-visual (verbal-AV), and visual-visual (shape-VV) learning tasks.

Figure 6. Reaction times for unisensory stimulus discrimination.
Mean MRTs (± SEM) for each of the discrimination tasks: novel auditory sounds (novel-A), verbal auditory sounds (verbal-A), visual red shapes (shape-V), and the black visual symbol sets (BS1, BS2, and BS3). * p < 0.05.


Grants and funding

We wish to thank Neville and Di Bertalli for their financial support of this study. The Bionics Institute acknowledges the support it receives from the Victorian Government through its Operational Infrastructure Support Program.
