PLoS One. 2010 Apr 28;5(4):e10396. doi: 10.1371/journal.pone.0010396.

A Dominance Hierarchy of Auditory Spatial Cues in Barn Owls

Ilana B Witten et al. PLoS One. 2010.

Abstract

Background: Barn owls integrate spatial information across frequency channels to localize sounds in space.

Methodology/principal findings: We presented barn owls with synchronous sounds that contained different bands of frequencies (3-5 kHz and 7-9 kHz) from different locations in space. When the owls were confronted with the conflicting localization cues from two synchronous sounds of equal level, their orienting responses were dominated by one of the sounds: they oriented toward the location of the low frequency sound when the sources were separated in azimuth; in contrast, they oriented toward the location of the high frequency sound when the sources were separated in elevation. We identified neural correlates of this behavioral effect in the optic tectum (OT, superior colliculus in mammals), which contains a map of auditory space and is involved in generating orienting movements to sounds. We found that low frequency cues dominate the representation of sound azimuth in the OT space map, whereas high frequency cues dominate the representation of sound elevation.

Significance: We argue that the dominance hierarchy of localization cues reflects several factors: 1) the relative amplitude of the sound providing the cue, 2) the resolution with which the auditory system measures the value of a cue, and 3) the spatial ambiguity in interpreting the cue. These same factors may contribute to the relative weighting of sound localization cues in other species, including humans.

Conflict of interest statement

Competing Interests: The authors have declared that no competing interests exist.

Figures

Figure 1
Figure 1. Endpoints of head orienting movements in response to two spatially separated sounds.
In A and C, acoustic stimuli were separated in azimuth; in B and D, stimuli were separated in elevation. A. Data from one owl (Owl L) when sounds were presented either alone or both sounds were presented together, separated in azimuth by 30° and at an elevation of +20°. Top: Azimuth of the high frequency sound was L15°; azimuth of the low frequency sound was R15°. Bottom: Azimuth of the high frequency sound was R15°; azimuth of the low frequency sound was L15°. Blue asterisks represent endpoints of head movements towards the low frequency sound alone; red crosses towards the high frequency sound alone; black circles towards both sounds together. The black cross represents the position of the zeroing visual stimulus; the colored crosses represent the position of the low (blue) and high (red) frequency sound. B. Data from Owl L when sounds were presented either alone or both sounds were presented together, separated in elevation by 30° and at an azimuth of 0°. Top: elevation of the high frequency sound was −15°; elevation of the low frequency sound was +15°. Bottom: elevation of the high frequency sound was +15°; elevation of the low frequency sound was −15°. C. Average endpoints of the head orienting movements for each of the 3 owls towards each sound alone, and towards both sounds together when the sounds were separated by 30° in azimuth. Sound elevation was either −20°, 0°, or +20° (all conditions randomly interleaved and all data are included in this plot). In the plot, the low frequency sound location is represented by R15° (blue dashed line), and the high frequency location by L15° (red dashed line), although in the experiments both relative positions were tested with equal frequency. Data from the different relative stimulus locations were combined because no statistical difference was observed in responses to sounds either to the right or left of the midline (two-tailed t-test, p>.05). Error bars represent STD. 0° corresponds to the position of the visual target for the initial fixation. Each symbol represents a different owl: ○ is Owl B; □ is Owl D; ▵ is Owl L. D. Average endpoints for each of the 3 owls towards each sound alone, and towards both sounds together when the sounds were separated by 30° in elevation. Sound azimuth was either −20°, 0°, or +20° (all conditions randomly interleaved). The low frequency sound location is represented by +15° (blue dashed line), and the high frequency location by −15° (red dashed line), although in the experiments both relative positions were tested with equal frequency. Error bars represent STD. Data from the different relative stimulus locations were combined because no statistical difference was observed in responses to sounds either above or below the visual plane (two-tailed t-test, p>.05). Number of head movements reported for each owl: Owl B (200), Owl D (404), Owl L (480).
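As a guide to the pooling step described in panels C and D, here is a minimal sketch, assuming the mirror-image stimulus configurations are first mapped into a common reference frame; the function and array names are hypothetical and are not the authors' code.

```python
# Minimal sketch (not the published analysis): combine head-movement endpoints
# from two mirror-image configurations only if a two-tailed t-test finds no
# difference between them, then report the mean endpoint and STD (Fig. 1C-D).
import numpy as np
from scipy import stats

def pool_endpoints(endpoints_right, endpoints_left, alpha=0.05):
    """endpoints_*: 1-D arrays of endpoint angles (deg), signs already flipped
    into a common frame for the two mirror-image configurations (assumption)."""
    t, p = stats.ttest_ind(endpoints_right, endpoints_left)  # two-tailed by default
    if p > alpha:  # no detectable difference -> pool the two configurations
        pooled = np.concatenate([endpoints_right, endpoints_left])
        return pooled.mean(), pooled.std()
    return None  # otherwise keep the configurations separate
```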
Figure 2
Figure 2. Effect of relative sound level on head orienting movements to simultaneous sounds separated in azimuth.
Histograms of endpoints of head orienting movements for Owl D (left column) and Owl L (right column), representing response to all sound elevations. Either the sounds were equal level (1st row), or the level of the high frequency sound was greater than that of the low frequency sound by 10 dB (2nd row), 20 dB (3rd row), or 30 dB (4th row). In the plot, the low frequency sound location is represented by R15° (blue solid line), and the high frequency location is represented by L15° (red solid line), although in the experiments both relative positions were tested equally often. The dashed lines represent the average orientation to each sound presented alone (blue to the low frequency and red to the high frequency). Downward arrows indicate population averages.
Figure 3
Figure 3. Neural responses at a sample site to individual sounds alone and to paired sounds separated in azimuth.
The stimuli were positioned relative to the broadband RF center. A. Raster plots representing responses to the low frequency sound alone (top raster), the high frequency sound alone (middle raster), or both sounds presented together (bottom raster). When both sounds were presented together, the high and low frequency sounds were separated in virtual space by 30°. The position of the low frequency sound is written in blue type on the y-axis, and the position of the high frequency sound is in red type on the y-axis. The values of the low and the high stimuli differed by 30°, reflecting the spatial displacement between the two stimuli. Sound onset = 0 ms; upward arrow indicates 20 ms, the demarcation between early and late responses. Red arrowheads indicate high frequency center; blue arrowheads, low frequency center. B. Same data as in A, but displayed as tuning curves representing average response rates plotted separately for the early (top; 0–20 ms) and late (bottom; 20–50 ms) time period. Red curves represent responses to the high frequency sound alone, blue curves to the low frequency sound alone, and black curves to both sounds together. The position of the low frequency sound is written in blue type on the x-axis, and the position of the high frequency sound is in red type on the x-axis.
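The tuning curves in panel B are average rates computed in an early (0–20 ms) and a late (20–50 ms) window after sound onset. The sketch below, under the assumption that spike times are stored per stimulus position, shows one way to compute such curves; `spike_times` and the helper are hypothetical.

```python
# Minimal sketch (assumed analysis): early/late tuning curves from raster data,
# as displayed in Figure 3B. `spike_times` is a hypothetical dict mapping
# stimulus position (deg) -> list of per-trial spike-time arrays (ms re onset).
import numpy as np

def tuning_curve(spike_times, t_start, t_stop):
    """Mean spike count in [t_start, t_stop) ms for each stimulus position."""
    positions = sorted(spike_times)
    rates = [np.mean([np.sum((trial >= t_start) & (trial < t_stop))
                      for trial in spike_times[pos]])
             for pos in positions]
    return np.array(positions), np.array(rates)

# pos, early_curve = tuning_curve(spike_times, 0, 20)   # early window
# pos, late_curve  = tuning_curve(spike_times, 20, 50)  # late window
```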
Figure 4
Figure 4. Summary of neural responses to individual sounds alone and paired sounds separated in azimuth.
A. Population averaged responses as a function of stimulus azimuth and time post-stimulus onset (n = 46). Upper panel: responses to the low frequency sound (3–5 kHz); middle panel: responses to the high frequency sound (7–9 kHz); lower panel: responses to both sounds together. Arrow on time axis demarcates the division between the early and late time period used in panels D and E. B. Normalized post-stimulus time histogram, averaged across all stimulus conditions plotted in A. C. Weighted average of responses to both sounds together as a function of time (from bottom panel of A). Solid circles: significantly shifted from 0 (p<.05; bootstrapped t-test); open circles: not significantly shifted from 0 (p>.05; bootstrapped t-test). Leftward shifts: weighted average favors the high frequency location; rightward shifts: weighted average favors the low frequency location. D. Same data as 4A, but displayed as tuning curves for the early time period (0–20 ms). E. Same data as 4A, but displayed as tuning curves for the late time period (20–50 ms). F. Histogram of the shift of the weighted average of the responses to both sounds together, relative to the weighted average of the additive prediction (the sum of the responses to each sound alone) for each recorded site for the early time period (0–20 ms). Positive values of shift represent a shift towards the low frequency location. G. Same as F, but for the late time period (20–50 ms).
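The shift measure in panels F and G compares the response-weighted centroid of the paired-sound tuning curve with that of the additive prediction. The following is a minimal sketch of that comparison, assuming tuning curves sampled at a common set of azimuths; whether positive values point toward the low frequency location depends on the azimuth sign convention, and the names are hypothetical.

```python
# Minimal sketch (not the authors' exact method): weighted-average shift of the
# paired-sound response relative to the additive prediction (Fig. 4F-G).
import numpy as np

def weighted_average(azimuths, responses):
    """Response-weighted mean azimuth (deg) of a tuning curve."""
    w = np.clip(responses, 0, None)              # ignore negative rates
    return np.sum(azimuths * w) / np.sum(w)

def shift_vs_additive(azimuths, resp_low, resp_high, resp_both):
    """Shift (deg) of the paired-sound centroid from the additive prediction.
    The sign that corresponds to 'toward the low frequency location' depends on
    the coordinate convention (assumption)."""
    additive = resp_low + resp_high              # sum of single-sound responses
    return weighted_average(azimuths, resp_both) - weighted_average(azimuths, additive)
```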
Figure 5
Figure 5. Neural response as a function of average binaural level for the low and high frequency sound.
Population averaged neural responses (n = 31) to the low (blue) or high (red) frequency sound for the late time period (20–50 ms). Responses at each site were aligned relative to the threshold for each sound and normalized by the maximum response to either sound.
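One way to perform the alignment and normalization described here is sketched below; the threshold criterion (first level whose response exceeds a fixed fraction of the site's maximum) is an assumption for illustration, not the paper's stated definition, and all names are hypothetical.

```python
# Minimal sketch (assumed threshold criterion): align a site's rate-level
# function to its threshold and normalize by the maximum response to either
# sound, as in the population averages of Figure 5.
import numpy as np

def align_and_normalize(levels_db, rate, max_rate_either_sound, criterion=0.1):
    """Return levels re threshold (dB) and normalized rates for one site."""
    thresh_idx = np.argmax(rate > criterion * rate.max())  # first supra-threshold level
    aligned_levels = levels_db - levels_db[thresh_idx]     # 0 dB = threshold
    return aligned_levels, rate / max_rate_either_sound
```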
Figure 6
Figure 6. Summary of neural responses to individual sounds alone and to paired sounds separated in elevation.
A. Population averaged responses as a function of stimulus elevation and time post-stimulus onset to each sound by itself and to both sounds together (n = 23). Arrow on time axis demarcates the division between the early and late time period used in panels C and D. B. Weighted average of responses to both sounds together as a function of time (from bottom panel of A). Solid circles: statistically shifted from 0 toward the high frequency location (p<.05; bootstrapped t-test); open circles: not significantly shifted from 0 (p>.05; bootstrapped t-test). C. Same data as 6A, but displayed as tuning curves for the early time period (0–20 ms). D. Same data as in 6A, but displayed as tuning curves for the late time period (20–50 ms).
Figure 7
Figure 7. The effect of relative sound level on the representation of sound location.
Population averages of the early (top row) and late (bottom row) responses to each sound alone and both sounds together for sounds separated by 30° in azimuth. Red curves: responses to the high frequency sound; blue curves: responses to the low frequency sound; black curves: responses to both sounds together. The relative sound levels for the high and low frequency sounds are indicated above each corresponding row of tuning curves. Error bars indicate standard errors. Neural responses were normalized by the maximum response to either sound alone during the depicted time range, rather than across the entire time range as in Figs. 4 and 6 (Early: 0–20 ms; Late: 20–50 ms).
Figure 8
Figure 8. Acoustic spatial cues generated by the low and high frequency sounds.
ILD (left) and IPD (right), averaged across 8 owls, for the low frequency (3–5 kHz) and high frequency (7–9 kHz) sounds as a function of the elevation and azimuth of the speaker. IPD and ILD were averaged across the frequency range of each sound.
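For readers unfamiliar with these cues, the sketch below shows one way to compute band-averaged ILD and IPD, assuming left- and right-ear recordings of the stimulus are available as time-domain signals; the sign convention (right relative to left), the circular averaging of IPD, and all variable names are assumptions for illustration.

```python
# Minimal sketch (assumed computation): band-averaged interaural level
# difference (ILD) and interaural phase difference (IPD) for one speaker
# location, as plotted in Figure 8. `left`/`right` are hypothetical ear
# signals; `fs` is the sample rate in Hz.
import numpy as np

def band_averaged_ild_ipd(left, right, fs, f_lo, f_hi):
    freqs = np.fft.rfftfreq(len(left), 1.0 / fs)
    L, R = np.fft.rfft(left), np.fft.rfft(right)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    ild_db = 20 * np.log10(np.abs(R[band]) / np.abs(L[band]))  # per-frequency ILD (right re left)
    ipd = np.angle(R[band] * np.conj(L[band]))                  # per-frequency IPD (rad)
    # arithmetic mean for ILD; circular mean for IPD across the band
    return ild_db.mean(), np.angle(np.mean(np.exp(1j * ipd)))

# Example: ILD/IPD for the low (3-5 kHz) and high (7-9 kHz) frequency bands
# ild_lo, ipd_lo = band_averaged_ild_ipd(left, right, fs, 3000, 5000)
# ild_hi, ipd_hi = band_averaged_ild_ipd(left, right, fs, 7000, 9000)
```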

