Comparative Study

J Neurosci. 2006 Jan 4;26(1):73-85.
doi: 10.1523/JNEUROSCI.2356-05.2006.

Visual and nonvisual contributions to three-dimensional heading selectivity in the medial superior temporal area

Yong Gu et al.

Abstract

Robust perception of self-motion requires integration of visual motion signals with nonvisual cues. Neurons in the dorsal subdivision of the medial superior temporal area (MSTd) may be involved in this sensory integration, because they respond selectively to global patterns of optic flow, as well as translational motion in darkness. Using a virtual-reality system, we have characterized the three-dimensional (3D) tuning of MSTd neurons to heading directions defined by optic flow alone, inertial motion alone, and congruent combinations of the two cues. Among 255 MSTd neurons, 98% exhibited significant 3D heading tuning in response to optic flow, whereas 64% were selective for heading defined by inertial motion. Heading preferences for visual and inertial motion could be aligned but were just as frequently opposite. Moreover, heading selectivity in response to congruent visual/vestibular stimulation was typically weaker than that obtained using optic flow alone, and heading preferences under congruent stimulation were dominated by the visual input. Thus, MSTd neurons generally did not integrate visual and nonvisual cues to achieve better heading selectivity. A simple two-layer neural network, which received eye-centered visual inputs and head-centered vestibular inputs, reproduced the major features of the MSTd data. The network was trained to compute heading in a head-centered reference frame under all stimulus conditions, such that it performed a selective reference-frame transformation of visual, but not vestibular, signals. The similarity between network hidden units and MSTd neurons suggests that MSTd may be an early stage of sensory convergence involved in transforming optic flow information into a (head-centered) reference frame that facilitates integration with vestibular signals.
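The model described above is a two-layer feedforward network with sigmoidal hidden units and linear output units (150 hidden units, per Figure 9). As a minimal sketch of that architecture, the following uses randomly initialized weights in place of the trained network; the input dimensions and the 3D-unit-vector output code are illustrative assumptions, not the paper's exact encoding.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions are assumptions for illustration; the paper's model used
# 150 sigmoidal hidden units and linear output units.
N_VISUAL = 26      # eye-centered optic-flow inputs (hypothetical encoding)
N_VESTIBULAR = 26  # head-centered inertial-motion inputs
N_EYE = 2          # horizontal/vertical eye-position signals
N_HIDDEN = 150
N_OUTPUT = 3       # head-centered heading as a 3D vector (assumed output code)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Random weights stand in for the weights learned during training.
W_hidden = rng.normal(scale=0.1, size=(N_HIDDEN, N_VISUAL + N_VESTIBULAR + N_EYE))
b_hidden = np.zeros(N_HIDDEN)
W_out = rng.normal(scale=0.1, size=(N_OUTPUT, N_HIDDEN))
b_out = np.zeros(N_OUTPUT)

def forward(visual, vestibular, eye_pos):
    """One forward pass: sigmoidal hidden layer, linear output layer."""
    x = np.concatenate([visual, vestibular, eye_pos])
    h = sigmoid(W_hidden @ x + b_hidden)  # hidden units analogous to MSTd neurons
    return W_out @ h + b_out              # linear read-out of head-centered heading

heading = forward(rng.normal(size=N_VISUAL),
                  rng.normal(size=N_VESTIBULAR),
                  np.array([0.0, 0.0]))
```

Because the visual inputs are eye-centered while the target output is head-centered, training such a network to report heading across eye positions forces the hidden layer to perform a reference-frame transformation of the visual, but not the vestibular, signals.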


Figures

Figure 1.
Experimental setup and heading stimuli. A, Schematic illustration of the virtual-reality apparatus. The monkey, eye-movement monitoring system (field coil), and projector sit on top of a motion platform with six degrees of freedom. B, Illustration of the 26 movement vectors used to measure 3D heading tuning curves. C, Normalized population responses to visual and vestibular stimuli (gray curves) are superimposed on the stimulus velocity and acceleration profiles (solid and dashed black lines). The dotted vertical lines illustrate the 1 s analysis interval used to calculate mean firing rates.
Figure 2.
Examples of 3D heading tuning functions for three MSTd neurons. Color contour maps show the mean firing rate as a function of azimuth and elevation angles. Each contour map shows the Lambert cylindrical equal-area projection of the original spherical data (see Materials and Methods) (Snyder, 1987). In this projection, the ordinate is a sinusoidally transformed version of elevation angle. Tuning curves along the margins of each color map illustrate mean ± SEM firing rates plotted as a function of either elevation or azimuth (averaged across azimuth or elevation, respectively). Data from the vestibular, visual, and combined stimulus conditions are shown from left to right. A, Data from a neuron with congruent tuning for heading defined by visual and vestibular cues. B, Data from a neuron with opposite heading preferences for visual and vestibular stimuli. C, Data from a neuron with strong tuning for heading defined by optic flow but no vestibular tuning. D, Definitions of azimuth and elevation angles used to define heading stimuli in 3D.
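In the Lambert cylindrical equal-area projection used for these maps, the abscissa is azimuth and the ordinate is the sine of elevation, so equal areas on the sphere map to equal areas on the plane. A minimal sketch of that transform (standard projection formula; function name is mine):

```python
import numpy as np

def lambert_cylindrical(azimuth_deg, elevation_deg):
    """Lambert cylindrical equal-area projection of spherical data.

    Abscissa: azimuth unchanged; ordinate: sine of elevation, which
    preserves area when mapping the sphere to the plane (Snyder, 1987).
    """
    x = np.asarray(azimuth_deg, dtype=float)
    y = np.sin(np.radians(elevation_deg))
    return x, y

# Elevation 0 maps to y = 0, 30 degrees to y = 0.5, 90 degrees to y = 1.
x, y = lambert_cylindrical([0.0, 90.0, 180.0], [0.0, 30.0, 90.0])
```

This sinusoidal compression of the ordinate is why uniformly spaced elevations appear bunched toward the poles of each color map.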
Figure 3.
Summary of heading tuning in response to inertial motion in the vestibular condition versus complete darkness. A, Distribution of the differences in preferred heading for 14 neurons tested under the standard vestibular condition, with a fixation target, and in complete darkness with no requirement to fixate. The difference in preferred heading was binned according to the cosine of the angle (in accordance with the spherical nature of the data) (Snyder, 1987). B, Scatter plot of HTI values for the same 14 cells tested under both conditions. C, Scatter plot of the maximum response amplitude (Rmax) under both conditions.
Figure 4.
Relationship between the HTI for the vestibular condition and recording location within MSTd. Recording location was estimated from the polar angle of the underlying MT receptive field, with 90°/–90° corresponding to the upper/lower vertical meridian and 0° to the horizontal meridian in the contralateral visual field. Thus, moving from left to right along the abscissa corresponds approximately to moving from posteromedial (PM) to anterolateral (AL) within MSTd. The thick lines through the data illustrate the running median, using a bin width of 30° and a resolution of 5°. Data are shown separately for the right (filled symbols and black line; n = 153) and left (open symbols and gray line; n = 62) hemispheres.
Figure 5.
Distributions of 3D heading preferences of MSTd neurons for the vestibular condition (A) and the visual condition (B). Each data point in the scatter plot corresponds to the preferred azimuth (abscissa) and elevation (ordinate) of a single neuron with significant heading tuning (A, n = 162; B, n = 251). The data are plotted on Cartesian axes that represent the Lambert cylindrical equal-area projection of the spherical stimulus space. Histograms along the top and right sides of each scatter plot show the marginal distributions.
Figure 6.
Comparison of heading selectivity (A–C) and tuning preferences (D–F) of MSTd neurons across stimulus conditions. A, The HTI for the visual condition plotted against the HTI for the vestibular condition. B, HTI for the combined condition versus HTI for the vestibular condition. C, HTI for the combined condition versus HTI for the visual condition. Filled and open circles, Cells with and without significantly different HTI values for the two conditions, respectively (bootstrap; n = 1000; p < 0.05). n = 255 cells. The solid lines indicate the unity-slope diagonal. D–F, Distribution of the difference in preferred heading, Δ Preferred Heading, between the following: D, the visual and vestibular conditions (n = 160); E, the combined versus vestibular conditions (n = 156); F, the combined versus visual conditions (n = 239). Note that bins were computed according to the cosine of the angle (in accordance with the spherical nature of the data) (Snyder, 1987). Only neurons with significant heading tuning in each pair of conditions have been included.
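The difference in preferred heading between two conditions is the angle between two points on the sphere, which is why the histogram bins are spaced by its cosine. A sketch of that computation under the standard azimuth/elevation convention (function names are mine):

```python
import numpy as np

def heading_vector(azimuth_deg, elevation_deg):
    """Unit 3D vector for a heading given by azimuth and elevation (degrees)."""
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    return np.array([np.cos(el) * np.cos(az),
                     np.cos(el) * np.sin(az),
                     np.sin(el)])

def delta_preferred_heading(az1, el1, az2, el2):
    """Absolute angle (degrees) between two preferred headings on the sphere."""
    # Dot product of unit vectors gives the cosine of the angle between them;
    # clip guards against floating-point values just outside [-1, 1].
    c = np.clip(np.dot(heading_vector(az1, el1),
                       heading_vector(az2, el2)), -1.0, 1.0)
    return np.degrees(np.arccos(c))

# Headings 90 degrees apart in azimuth at zero elevation differ by 90 degrees.
delta = delta_preferred_heading(0.0, 0.0, 90.0, 0.0)
```

Binning by the cosine of this angle, rather than the angle itself, gives bins of equal solid angle on the sphere, so a uniform distribution of heading differences would produce a flat histogram.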
Figure 7.
Scatter plot of the difference in HTI between the combined and visual conditions plotted against the difference in preferred heading, |Δ Preferred Heading|, between the vestibular and visual conditions. Filled and open symbols, Cells with and without significantly different HTI values for the combined and visual conditions, respectively (bootstrap; n = 1000; p < 0.05; from Fig. 6C). Solid line, Best linear fit through all data (both open and filled symbols). Gray area highlights neurons with vestibular and visual heading preferences matched to within 45°. Only neurons with significant tuning in both the vestibular and visual conditions are included (n = 160).
Figure 8.
Quantification of vestibular contribution to the combined response. The vestibular gain, a (Eq. 5), for all 255 MSTd neurons is plotted as a function of the ratio between HTI values for the vestibular and visual conditions. Filled and open symbols denote neurons with and without significant vestibular tuning, respectively. Solid line, Linear regression through all data points.
Figure 9.
Schematic diagram of a simple two-layer, feedforward neural network model that was trained to compute the head-centered direction of heading from eye-centered visual inputs, head-centered vestibular inputs, and eye-position signals. Hidden units (n = 150) have sigmoidal activation functions, whereas output units are linear.
Figure 10.
A, B, Example of 3D heading tuning functions for a network hidden unit tested at three horizontal eye positions (from top to bottom, 40° left, 0°, and 40° right) under the vestibular (A) and visual (B) conditions. The format is similar to Figure 2. C, Shift ratio distributions for all 150 hidden units under the two single-cue conditions.
Figure 11.
Comparison of heading selectivity for hidden layer units across different conditions. Scatter plots of HTI for the visual versus vestibular conditions (A), the combined versus vestibular conditions (B), and the combined versus visual conditions (C). The format is the same as in Figure 6A–C. D, Average ± SEM results from five training sessions. Filled circles, Visual versus vestibular; open circles, combined versus vestibular; triangles, combined versus visual conditions. The stars illustrate the means corresponding to the data in A–C. The lines illustrate the diagonals (unity slope).
Figure 12.
Distribution of the absolute difference in preferred heading, |Δ Preferred Heading|, for the hidden layer units between the visual and vestibular conditions (A), the combined and vestibular conditions (B), and the combined and visual conditions (C). Data are means ± SD from five training sessions. The format is the same as in Figure 6D–F.
