Comparative Study

Nat Neurosci. 2019 Jul;22(7):1057-1060. doi: 10.1038/s41593-019-0410-7. Epub 2019 Jun 10.

Divergence in the functional organization of human and macaque auditory cortex revealed by fMRI responses to harmonic tones

Sam V Norman-Haignere et al. Nat Neurosci. 2019 Jul.

Abstract

We report a difference between humans and macaque monkeys in the functional organization of cortical regions implicated in pitch perception. Humans but not macaques showed regions with a strong preference for harmonic sounds compared to noise, measured with both synthetic tones and macaque vocalizations. In contrast, frequency-selective tonotopic maps were similar between the two species. This species difference may be driven by the unique demands of speech and music perception in humans.


Conflict of interest statement

Competing interests

The authors declare no competing financial or non-financial interests in relation to the work described in this paper.

Figures

Fig 1. Assessing tonotopy and selectivity for harmonic tones vs. noise.
a, 5×2 factorial design: harmonic tones (harmonics 3–6 of the F0) and spectrally matched Gaussian noise, each presented in five frequency ranges. Plots show estimated cochlear response magnitudes vs. frequency for example notes from each condition. Noise notes had slightly higher intensity (73 vs. 68 dB) to approximately equate perceived loudness in humans. b, Stimuli from the same condition were presented in a block. Scanning and stimulus presentation alternated to prevent scanner noise from interfering with stimulus presentation. Each stimulus comprised several notes. The F0 and frequency range were jittered from note to note to minimize adaptation. Cochleagrams (plotting energy vs. time and frequency) are shown for a mid-frequency harmonic tone stimulus (left) and a spectrally matched noise stimulus (right). Noise was used to mask distortion products. c, Voxels showing greater responses to low frequencies (blue, black outlines) versus high frequencies (yellow, white outlines), collapsing across tone and noise conditions (number of blocks per low/high-frequency condition: M1=504, M2=408, H1=32, H2=32). d, Voxels showing greater responses to harmonic tones (yellow) vs. noise (blue), collapsing across frequency (number of blocks per tone/noise condition: M1=630, M2=510, H1=40, H2=40). Maps are shown for the two human and two macaque subjects with the highest response reliability. Maps plot uncorrected voxel-wise significance values (two-sided p < 0.01 via a permutation test across conditions; Supplementary Fig 1 plots cluster-corrected maps).
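The voxel-wise significance values in panels c and d come from a permutation test across conditions. The sketch below illustrates one way such a test can be run, assuming a hypothetical (blocks × voxels) response matrix and boolean condition labels; the variable names, the choice of test statistic (difference of condition means), and the shuffling of block labels are illustrative assumptions, not the paper's exact implementation.

```python
# Sketch of a voxel-wise permutation test for the tone-vs-noise contrast.
# `responses`: hypothetical (n_blocks x n_voxels) array of block-wise fMRI responses.
# `is_tone`: boolean array marking which blocks contained harmonic tones.
import numpy as np

def permutation_contrast(responses, is_tone, n_perm=10000, seed=0):
    rng = np.random.default_rng(seed)
    # Observed statistic: mean tone response minus mean noise response, per voxel
    observed = responses[is_tone].mean(0) - responses[~is_tone].mean(0)
    null = np.empty((n_perm, responses.shape[1]))
    for i in range(n_perm):
        perm = rng.permutation(is_tone)  # shuffle condition labels across blocks
        null[i] = responses[perm].mean(0) - responses[~perm].mean(0)
    # Two-sided p-value: fraction of permuted statistics at least as extreme
    p = (np.abs(null) >= np.abs(observed)).mean(0)
    return observed, p
```

Voxels with p < 0.01 (two-sided) would correspond to the uncorrected maps; the cluster correction used in Supplementary Fig 1 would require an additional step not shown here.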
Fig 2. ROI analyses controlling for data reliability.
a, Test-retest response reliability (Pearson correlation) vs. data quantity. Blue lines show the number of blocks in each human needed to approximately match the response reliability of one monkey. Error bars show one standard deviation across subsampled sets of runs. b, The average response reliability of the human and macaque data, and subsampled human data (dots represent subjects). c, d, ROI analyses applied to reliability-matched data. For each subject, we selected the top 5% of sound-driven voxels with the most significant response preference for low vs. high frequencies (c) or tones vs. noise (d). A standard selectivity metric was applied to the average response of the selected voxels (measured in independent data). e, f, Same as panels (c, d) but varying the ROI size (percent of voxels selected) and showing data from individual subjects in addition to group-averaged data. Error bars here and elsewhere plot one standard error of the bootstrapped sampling distribution (median and central 68%). Bootstrapping was performed across runs for individual subjects, and across both subjects and runs for group data (each stimulus condition was presented once per run; see ROI Statistics in Methods).
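Panels c and d describe selecting the most contrast-preferring voxels in one split of the data and measuring selectivity in an independent split. A minimal sketch of that logic is shown below, under simple assumptions: `select_half` and `measure_half` are hypothetical (runs × conditions × voxels) arrays from independent runs, voxels are ranked by a raw contrast rather than a significance value, and the selectivity metric (difference over sum of ROI-averaged responses) is a common choice that may differ from the paper's exact metric.

```python
# Sketch of the split-data ROI analysis (Fig 2c,d). A sound-driven voxel mask,
# assumed to have been applied already, is omitted for brevity.
import numpy as np

def roi_selectivity(select_half, measure_half, tone_idx, noise_idx, top_pct=5.0):
    # Rank voxels by their tone-vs-noise preference in the selection data
    sel = select_half.mean(0)                           # conditions x voxels
    contrast = sel[tone_idx].mean(0) - sel[noise_idx].mean(0)
    n_top = max(1, int(round(contrast.size * top_pct / 100)))
    roi = np.argsort(contrast)[::-1][:n_top]            # top `top_pct`% of voxels

    # Measure selectivity in the held-out data, averaged across ROI voxels
    cond_means = measure_half.mean(0)[:, roi].mean(1)   # one value per condition
    tone = cond_means[tone_idx].mean()
    noise = cond_means[noise_idx].mean()
    return (tone - noise) / (tone + noise)               # illustrative metric
```

The bootstrapped error bars in Fig 2 would correspond to resampling runs (and, for group data, subjects) with replacement and repeating this procedure.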
Fig 3. Control experiments.
a, Experiment IB. Maps of tone vs. noise responses averaged across frequency and three matched sound intensities (70, 75, and 80 dB) in two macaques. Conventions and statistics the same as Fig 1d (number of blocks per tone/noise condition: M4=1395, M5=1380). b, ROI analyses for the same tone vs. noise contrast. Human data from Experiment IA (with non-matched sound intensities) was used for comparison. Conventions and error bars the same as Fig 2f. c, ROI responses broken down by sound intensity for a fixed ROI size (top 1% of sound-driven voxels) (error bars the same as panel b / Fig 2f). d, Experiment II. Cochleagrams showing the stimulus conditions: voiced macaque vocalizations, containing harmonics, and noise-vocoded controls, which lack harmonics but have the same spectrotemporal envelope. e, Maps of responses to voiced vs. noise-vocoded macaque calls, in two humans (left) and two macaque monkeys (right). Maps plot uncorrected voxel-wise significance values (two-sided p < 0.01; Supplementary Fig 10 plots uncorrected and cluster-corrected maps from all subjects). Conventions and statistics the same as Fig 1d (number of blocks per condition being compared: M1=288, M4=414, H4=24, H5=22). f, ROI analyses for the same voiced vs. noise-vocoded contrast. Conventions and error bars the same as Fig 2f.
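The noise-vocoded controls in panel d preserve a vocalization's spectrotemporal envelope while removing its harmonic structure. A minimal noise-vocoder sketch follows, assuming a log-spaced Butterworth filterbank and Hilbert-envelope extraction; the band count, band edges, filter order, and normalization are illustrative assumptions and are not taken from the paper.

```python
# Sketch of noise vocoding (Fig 3d): decompose a vocalization into frequency
# bands, extract each band's envelope, and impose it on band-limited noise.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(x, fs, n_bands=16, f_lo=80.0, f_hi=8000.0, seed=0):
    rng = np.random.default_rng(seed)
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)    # log-spaced band edges
    noise = rng.standard_normal(len(x))
    out = np.zeros(len(x))
    for f1, f2 in zip(edges[:-1], edges[1:]):
        sos = butter(4, [f1, f2], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)                    # band-limited vocalization
        env = np.abs(hilbert(band))                   # its temporal envelope
        carrier = sosfiltfilt(sos, noise)             # band-limited noise carrier
        out += env * carrier                          # envelope-modulated noise
    return out * (np.abs(x).max() / (np.abs(out).max() + 1e-12))  # rescale
```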

