Comparative Study

Auditory cortex activation to natural speech and simulated cochlear implant speech measured with functional near-infrared spectroscopy

Luca Pollonini et al. Hear Res. 2014 Mar;309:84-93. doi: 10.1016/j.heares.2013.11.007. Epub 2013 Dec 14.

Free PMC article
Abstract

The primary goal of most cochlear implant procedures is to improve a patient's ability to discriminate speech. To accomplish this, cochlear implants are programmed so as to maximize speech understanding. However, programming a cochlear implant can be an iterative, labor-intensive process that takes place over months. In this study, we sought to determine whether functional near-infrared spectroscopy (fNIRS), a non-invasive neuroimaging method that is safe to use repeatedly and for extended periods of time, can provide an objective measure of whether a subject is hearing normal speech or distorted speech. We used a 140-channel fNIRS system to measure activation within the auditory cortex in 19 normal-hearing subjects while they listened to speech with different levels of intelligibility. Custom software was developed to analyze the data and compute topographic maps from the measured changes in oxyhemoglobin and deoxyhemoglobin concentration. Normal speech reliably evoked the strongest responses within the auditory cortex. Distorted speech produced less region-specific cortical activation. Environmental sounds were used as a control, and they produced the least cortical activation. These data collected using fNIRS are consistent with the fMRI literature and thus demonstrate the feasibility of using this technique to objectively detect differences in cortical responses to speech of different intelligibility.
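The abstract mentions software that converts measured optical signals into oxyhemoglobin (HbO) and deoxyhemoglobin (HbR) concentration changes. A standard way to do this from two-wavelength fNIRS intensities is the modified Beer-Lambert law; the sketch below illustrates the idea only. The extinction coefficients, wavelengths, source-detector separation, and differential pathlength factors (DPF) here are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Approximate extinction coefficients [HbO, HbR] in 1/(mM*cm), one row per
# wavelength (~690 nm and ~830 nm). Illustrative values only; substitute
# the table appropriate to your instrument's wavelengths.
E = np.array([[0.35, 2.10],   # ~690 nm
              [1.06, 0.78]])  # ~830 nm

def mbll(intensity, baseline, separation_cm=3.0, dpf=(6.0, 6.0)):
    """Modified Beer-Lambert law: raw intensities -> (dHbO, dHbR) in mM.

    intensity: array of shape (n_samples, 2), one column per wavelength.
    baseline:  array of shape (2,), mean resting intensity per wavelength.
    """
    # Change in optical density at each wavelength
    dod = -np.log10(intensity / baseline)            # (n_samples, 2)
    # Correct for the effective photon pathlength (separation * DPF)
    dod_per_cm = dod / (separation_cm * np.asarray(dpf))
    # Invert the extinction matrix to solve for concentration changes
    dhb = dod_per_cm @ np.linalg.inv(E).T            # columns: [dHbO, dHbR]
    return dhb[:, 0], dhb[:, 1]
```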


Figures

Figure 1. Headset layout
(a) Layout of optical sources (empty circles) and photodetectors (filled circles) of the right-hemisphere probe. The optical source located in the middle of the bottom horizontal line of the probe grid was aligned with position T4 of the 10–20 international system. The left-hemisphere probe had an identical layout. Layout of (b) shallow and (c) deep channels, each located at the mid-point between a source-detector pair.
Figure 2. Representative data from two channels with different scalp coupling indices
(a, b) Raw signals for both wavelengths of transmitted light. (c, d) The calculated HbO signals. The plots on the left (a, c) come from a channel with good scalp contact; the plots on the right (b, d) come from a channel with poor scalp contact. The grayed region indicates the time during which the normal speech stimulus was presented.
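The scalp coupling index referenced in this caption quantifies optode-scalp contact quality. One common formulation, sketched below under assumed parameters, band-pass filters both wavelengths' raw signals to the cardiac band and correlates them: a well-coupled channel carries the heartbeat at both wavelengths, a poorly coupled one does not. The pass-band and filter order here are assumptions, not the study's stated settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def scalp_coupling_index(sig_wl1, sig_wl2, fs, band=(0.5, 2.5)):
    """Zero-lag correlation of the two wavelength signals after band-pass
    filtering to the cardiac band (assumed 0.5-2.5 Hz).

    sig_wl1, sig_wl2: raw intensity time series for the two wavelengths.
    fs: sampling rate in Hz.
    """
    nyq = fs / 2.0
    b, a = butter(2, [band[0] / nyq, band[1] / nyq], btype="band")
    f1 = filtfilt(b, a, sig_wl1)
    f2 = filtfilt(b, a, sig_wl2)
    # Normalize to zero mean and unit variance, then correlate at zero lag
    f1 = (f1 - f1.mean()) / f1.std()
    f2 = (f2 - f2.mean()) / f2.std()
    return float(np.mean(f1 * f2))  # in [-1, 1]; near 1 = good contact
```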
Figure 3. HbO and HbR data from one channel of a representative subject
The time course of the response for (a) HbO and (b) HbR to five repetitions of the normal speech stimulus. The dashed lines are the predicted hemodynamic responses. The grayed regions indicate the time periods when the stimulus was presented.
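Predicted hemodynamic responses like the dashed lines in Figure 3 are typically built by convolving a boxcar of the stimulus timing with a canonical hemodynamic response function (HRF). The sketch below uses an SPM-style double-gamma HRF; the caption does not state the exact model or parameters used in the study, so these are assumptions.

```python
import numpy as np
from scipy.stats import gamma

def canonical_hrf(fs, duration=30.0):
    """Double-gamma canonical HRF (SPM-style parameters, assumed here)."""
    t = np.arange(0, duration, 1.0 / fs)
    peak = gamma.pdf(t, 6)           # positive lobe peaking around ~5 s
    undershoot = gamma.pdf(t, 16)    # late undershoot around ~15 s
    hrf = peak - undershoot / 6.0
    return hrf / hrf.max()

def predicted_response(stim_onsets_s, stim_dur_s, n_samples, fs):
    """Convolve a boxcar stimulus timeline with the HRF to obtain the
    predicted hemodynamic response for a block design."""
    boxcar = np.zeros(n_samples)
    for onset in stim_onsets_s:
        i0 = int(onset * fs)
        i1 = min(int((onset + stim_dur_s) * fs), n_samples)
        boxcar[i0:i1] = 1.0
    return np.convolve(boxcar, canonical_hrf(fs))[:n_samples]
```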
Figure 4. Statistical parametric maps of a representative subject
The color-coded maps show the responses to normal speech (N), channelized speech (C), scrambled speech (S), and environmental sounds (E). Each map marks the location of the center of mass (*) and the peak of activity (o) of the active areas. The scalp orientation of each plot is shown at the bottom right.
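Statistical parametric maps of this kind are commonly derived by fitting each channel's signal with a general linear model whose regressor is the predicted hemodynamic response, then mapping the resulting t-statistics over the probe. A minimal per-channel sketch, not the paper's exact pipeline, follows.

```python
import numpy as np

def channel_t_statistic(y, x):
    """t-statistic for regressing a measured channel signal y on the
    predicted hemodynamic response x (simple GLM with an intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
    dof = len(y) - X.shape[1]
    # Residual variance; lstsq returns the residual sum of squares
    sigma2 = (res[0] if res.size else np.sum((y - X @ beta) ** 2)) / dof
    # Variance of the slope estimate from (X^T X)^-1
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])
```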
Figure 5. Grand average of statistical parametric maps
The color-coded maps show the group average of the responses to normal speech (N), channelized speech (C), scrambled speech (S), and environmental sounds (E). The scalp orientation of each plot is shown at the bottom right.
Figure 6. Group average area of activation
Group means (N=19) and standard errors of the active areas derived from the deep maps. Normal speech (N), channelized speech (C), scrambled speech (S), and environmental sounds (E). (*) p<0.05, (**) p<0.01, (***) p<0.001.
Figure 7. Individual responses normalized to normal speech
Activation areas for HbO and HbR for the left and right hemispheres. Areas are normalized to the response area measured with normal speech. Channelized speech (C), scrambled speech (S), and environmental sounds (E).
Figure 8. fMRI activations of a representative subject
(a) BOLD fMRI responses to sound stimuli. The orange-to-yellow color scale shows voxels that were significantly activated by sound stimuli (sound > resting baseline); the blue color scale shows voxels that were significantly deactivated by sound stimuli (sound < resting baseline). A lateral view of a reconstructed cortical surface model is shown for the left hemisphere (left column) and right hemisphere (right column) for normal speech, channelized speech, scrambled speech, and environmental sounds. (b) Volume of active cortex for each sound stimulus in the left hemisphere (left plot) and right hemisphere (right plot): the volume of voxels exceeding the significance threshold (t > 3.7, q < 0.01) for each stimulus category after a cluster analysis. Results show only the largest cluster in each hemisphere, consisting of the left and right auditory cortex. Normal speech (N), channelized speech (C), scrambled speech (S), and environmental sounds (E).
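The cluster step described in (b), keeping only the largest cluster of voxels that exceed the t-threshold, can be sketched as a connected-components pass over the thresholded t-map. The sketch below assumes the threshold (t > 3.7, corresponding to q < 0.01) has already been determined and omits the false-discovery-rate computation itself.

```python
import numpy as np
from scipy import ndimage

def largest_suprathreshold_cluster(t_map, t_thresh=3.7):
    """Boolean mask of the largest connected cluster of voxels with
    t > t_thresh in a 3-D t-statistic map."""
    mask = t_map > t_thresh
    labels, n = ndimage.label(mask)   # label connected components
    if n == 0:
        return np.zeros_like(mask)
    # Size of each cluster, then keep only the largest one
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)
```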
