PLoS Comput Biol. 2021 Sep 27;17(9):e1009434. doi: 10.1371/journal.pcbi.1009434. eCollection 2021 Sep.

A model of head direction and landmark coding in complex environments


Yijia Yan et al. PLoS Comput Biol. 2021.

Abstract

Environmental information is required to stabilize estimates of head direction (HD) based on angular path integration. However, it is unclear how this happens in real-world (visually complex) environments. We present a computational model of how visual feedback can stabilize HD information in environments that contain multiple cues of varying stability and directional specificity. We show how combinations of feature-specific visual inputs can generate a stable unimodal landmark bearing signal, even in the presence of multiple cues and ambiguous directional specificity. This signal is associated with the retrosplenial HD signal (inherited from thalamic HD cells) and conveys feedback to the subcortical HD circuitry. The model predicts neurons with a unimodal encoding of the egocentric orientation of the array of landmarks, rather than of any one particular landmark. The relationship between these abstract landmark bearing neurons and head direction cells is reminiscent of the relationship between place cells and grid cells. Their unimodal encoding is formed from visual inputs via a modified version of Oja's Subspace Algorithm. The rule allows the landmark bearing signal to disconnect from directionally unstable or ephemeral cues, to incorporate newly added stable cues, and to support orientation across many different environments (high memory capacity); it is also consistent with recent empirical findings on bidirectional HD firing reported in the retrosplenial cortex. Our account of visual feedback for HD stabilization provides a novel perspective on neural mechanisms of spatial navigation within richer sensory environments, and makes experimentally testable predictions.
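The learning rule at the heart of the model is a modified version of Oja's Subspace Algorithm (mOSA). The paper's modification is not reproduced here; as a point of reference, the following is a minimal NumPy sketch of the classical subspace rule, ΔW = η(yxᵀ − yyᵀW) with output y = Wx, whose dimensions and learning rate are illustrative assumptions:

```python
import numpy as np

def osa_step(W, x, eta=0.01):
    """One step of Oja's Subspace Algorithm (classical form, not the
    paper's modified rule). W is the (m, n) feed-forward weight matrix,
    x an (n,) input sample, y = W @ x the output activity. The update
    dW = eta * (y x^T - y y^T W) drives the rows of W toward an
    orthonormal basis of the m-dimensional principal subspace of the
    input distribution."""
    y = W @ x
    return W + eta * (np.outer(y, x) - np.outer(y, y) @ W)

# Demo: 5-D inputs whose variance is concentrated on the first two axes.
rng = np.random.default_rng(0)
stds = np.array([2.0, 1.0, 0.3, 0.2, 0.1])
W = 0.1 * rng.standard_normal((2, 5))
for _ in range(8000):
    W = osa_step(W, stds * rng.standard_normal(5))

# After learning, the rows of W are close to orthonormal (W W^T ~ I)
# and lie almost entirely within the span of the two high-variance axes.
print(np.round(W @ W.T, 2))
```

The self-stabilizing decay term −η yyᵀW is what bounds the weights without explicit normalization; the paper's modification additionally lets the learned signal depress connections to unstable cues.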


Conflict of interest statement

The authors have declared that no competing interests exist.

Figures

Fig 1
Fig 1. The basic structure of the HD system with two-stage visual landmark processing.
(A) The systems-level network structure of the HD system, which contains two pathways, processing visual landmarks (purple) and HD self-motion inputs (orange) respectively. The two pathways meet in dRSC (pink), where the integrated signal is generated and projected back to the HD attractor in order to correct for drift and keep a global real-time sense of direction. (B) Visual layers consisting of neurons tuned to specific features at specific egocentric directions. Different colors (red, blue, and green) stand for different features. Left: a circular environment; N, E, S, and W represent the allocentric directional frame. Middle: numerical degrees represent the egocentric directional frame (with 0 indicating ahead), here shown for the red and blue feature-sensitive neurons. Right: Firing-rate responses across populations of feature-specific neurons. (C) The agent’s HD in the simulation is taken from data recorded from a foraging rat. Top: Allocentric trajectory at the 11th minute of a 20-minute trajectory. Bottom: The corresponding angular velocity at the 11th minute. (D) The unimodal encoding of aLB cells when the agent is facing east (top) and then west (bottom). Two different aLB cells encode sceneries from different facing directions despite a large overlap in perceptual features (red cues). Darker filled circles indicate stronger activation of aLB cells, whilst darker lines indicate stronger feed-forward connections and inhibitory connections among aLB cells. The dashed line (bottom) indicates an example of a connection that has been depressed when the agent was facing east, so that the updated position of the cue in the visual field will not drive a previously active aLB neuron. Abbreviations: HD: head-direction(al); aLB: abstract landmark bearing; dRSC/gRSC: dysgranular/granular retrosplenial cortex; Vis.: visual; mOSA: modified Oja’s Subspace Algorithm; LI: lateral self-inhibition; Vel.: (angular) velocity.
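The feature-specific egocentric tuning illustrated in (B) can be sketched as a population of von Mises tuning curves; the tuning width (kappa) and unit peak rate below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def egocentric_rates(pref_dirs_deg, cue_bearing_deg, kappa=4.0):
    """Population response of feature-specific visual neurons, each tuned
    (von Mises curve, peak rate 1) to a preferred egocentric bearing of
    its cue: rate = exp(kappa * (cos(pref - bearing) - 1))."""
    delta = np.deg2rad(np.asarray(pref_dirs_deg) - cue_bearing_deg)
    return np.exp(kappa * (np.cos(delta) - 1.0))

prefs = np.arange(-180, 180, 10)        # preferred egocentric bearings
rates = egocentric_rates(prefs, 30.0)   # a cue 30 deg to the agent's right
print(prefs[np.argmax(rates)])          # prints 30: that neuron fires most
```

As the agent turns, the cue's egocentric bearing shifts and the activity bump sweeps across the population, which is the signal the aLB layer learns from.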
Fig 2
Fig 2. aLB cells express a unimodal representation of landmark bearing via modified OSA.
(A) The landmark configuration and feature-specific V1 input signals (same as Fig 1B) for this simulation. Each curve corresponds to the visual cue with the same ‘color’. (B) Sorted weights to aLB cells from visual cortical cells responding to the ‘red’ feature (left) or the ‘blue’ feature (right). The left figure indicates that aLB cells learn a bimodal profile with ‘red’ visual cells due to the conflicting ‘red’ cue. Warmer colors represent stronger synaptic connections. V1 cells are labelled by their own preferred egocentric directions. aLB cells are sorted according to which V1 cells they receive the maximum weight from. (C) The convergence of synaptic weights during learning. The asterisk stands for the two-norm (Euclidean distance) of the difference, while the circular marks stand for the Pearson correlation coefficient, between the vectors of current and final weight matrices. (D) The global representation of aLB cells (labelled by positive numbers on the y-axis) shows sparsity and unimodality during the testing phase (left). Sorting aLB cells according to the HD with maximum firing rate correlation (right) shows that aLB cells have a unimodal firing profile and cover the entire range of egocentric bearings, despite not following any one visual cue alone. Warmer colors represent higher firing rates. Abbreviations: HD: head direction; aLB: abstract landmark bearing; ego.: egocentric; vis.: visual; f: firing rate.
Fig 3
Fig 3. aLB cells exhibit robustness against unstable or ephemeral cues, and can incorporate novel cues.
(A) Single environment (same as Fig 1B) with the ‘blue’ cue moving along the cylinder at 90 deg/s anticlockwise during training (left). Global representation of aLB cells tested on the unstable scenery with the ‘blue’ cue moving at 90 deg/s anticlockwise (middle, εaLB = 0) and the stabilised scenery with the ‘blue’ cue fixed after learning has stopped (right, εaLB = 0.5). (B) Single environment (same as Fig 1B) with the ‘blue’ cue teleporting to a random position on the cylinder every 10 seconds (left). Global representation of aLB cells tested on the unstable scenery with the ‘blue’ cue teleporting every 10 seconds (middle, εaLB = 0) and the stabilised scenery with the ‘blue’ cue fixed after learning has stopped (right, εaLB = 0.5). (C) Two sceneries (within the same environment): a ‘red-blue’ scenery (Sc. I) and a ‘red-blue-green’ scenery (Sc. II). The agent stays in each scenery for 400 seconds (once from Sc. I to Sc. II and back to Sc. I), with feature-specific visual input signals in Sc. II (bottom). (D) Following the learning phase, the model is tested on the ‘red-blue’ scenery (i.e. Sc. I, top), the ‘red-blue-green’ scenery (i.e. Sc. II, middle), and the ‘green’ scenery (i.e. Sc. II with the ‘red-blue’ scenery excluded, bottom). Stable unimodal aLB representations emerge in all cases (also see S3 Fig), with different sets of active cells in Sc. I vs. Sc. II (S4 Fig). Warmer colors represent higher firing rates of aLB cells. Abbreviations: HD: head direction; aLB: abstract landmark bearing; ego.: egocentric.
Fig 4
Fig 4. aLB cells show high encoding capacity across multiple environments.
(A) The agent is exposed to 10 environments sequentially (top) with feature-specific visual input signals due East in Env. I (bottom). Sceneries contain ‘red’ and ‘blue’ cues, common to all environments, and a ‘green’ cue, at an orientation relative to the other two cues that changes between environments. (B) Global representations of aLB cells with local weights tested on corresponding sceneries. Titles for each plot refer to the corresponding environment in (A), along with the total number of highly activated aLB cells in brackets. The ‘intermediate’ plots provide aLB cell activity based on weights after the first exposure in each individual environment. The ‘final’ plots provide aLB cell activity based on weights after the learning in all 10 environments is complete. See Fig 2D for illustrations. Warmer colors represent higher firing rates. (C) IoU similarity maps measuring the similarity of two sets of aLB cell firing patterns when tested on specific sceneries. Axes refer to environments, with 1 as the earliest (i.e. Env. I). The left column provides IoU maps based on ‘intermediate’ weights. The middle column provides an IoU map based on ‘final’ weights. The right column provides the IoU index (y-axis) between the first exposure and the end of learning, tested on each scenery (x-axis). Green colors represent higher IoU, of which the maximum is 1. Abbreviations: HD: head direction; Env.: Environment; Ego.: Egocentric; aLB: abstract landmark bearing; IoU: Intersection over Union; f: firing rate.
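The IoU index in (C) compares the sets of highly active aLB cells across two test conditions; a minimal sketch, in which the activity threshold is an illustrative assumption rather than the paper's criterion:

```python
import numpy as np

def iou(rates_a, rates_b, thresh=0.5):
    """Intersection over Union of the sets of cells whose peak firing
    rate exceeds `thresh` in each condition. 1 means identical sets of
    active cells; 0 means fully disjoint sets."""
    a = set(np.flatnonzero(np.asarray(rates_a) > thresh))
    b = set(np.flatnonzero(np.asarray(rates_b) > thresh))
    return len(a & b) / len(a | b) if (a | b) else 1.0

# Cells 0 and 1 active in condition A; cells 1 and 2 in condition B:
print(iou(np.array([0.9, 0.8, 0.1]), np.array([0.0, 0.7, 0.9])))  # 1/3
```

A high IoU between ‘intermediate’ and ‘final’ weights, as in (C, right), indicates that later learning did not overwrite which cells code for an earlier scenery.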
Fig 5
Fig 5. Mirrored environments and the resulting simulated dRSC signals.
(A) Illustration of the connected environments with mirrored and rotated complex sceneries. The agent repeatedly crosses from one environment to the other, spending 60 seconds in each environment at a time, for the duration of the whole 20-minute learning phase. (B) The learnt connection weights form the bimodality of dRSC responses after the whole learning phase. (C) Global representations of aLB cells (left, with a single cell example) and dRSC cells (right, with a single cell example) after learning, tested in alternating environments (top row; 10 seconds in each environment) or in a single environment (Env. I, bottom row). Note that aLB cells are unimodal within a given environment (bottom left), contrary to WC-BD cells. aLB cells are labelled as positive numbers, whilst other cells are labelled from -180 to 179 according to their own initial preferred directions (y-axis). The x-axis stands for HD. Abbreviations: aLB: abstract landmark bearing; gRSC/dRSC: granular/dysgranular retrosplenial cortex; HD: head direction; f: firing rate.
Fig 6
Fig 6. dRSC signals stabilize global subcortical HD signals for the simulation in Fig 5.
(A) Simulation in darkness (left; no visual inputs, and hence no feedback from dRSC) shows systematic drift. The HD representation (left, red curves) and its angular difference from actual HD (right, black curves; actual HD subtracted from the HD representation) are given over the whole 20-minute learning time. Red dashed lines indicate an ideal error-free HD representation. (B) Simulations of alternating exploration over the multi-environment case in Fig 5A showing a stable HD representation (left), as well as global representations of gRSC and HD cells after alternating exploration showing no firing field drift (right). Red dashed lines stand for the center of initial HD firing fields. See the caption of Fig 5 for details. Abbreviations: HD: head-directional/head direction; Accu.: accumulated; Rep.: representation; Diff.: difference; gRSC: granular retrosplenial cortex; f: firing rate.
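Panel (A) illustrates the drift that accumulates when HD is maintained by angular path integration alone. A toy sketch of that failure mode, in which the timestep, noise level, and duration are illustrative assumptions rather than the paper's simulation parameters:

```python
import numpy as np

def hd_drift(ang_vel, dt=0.02, noise_sd=0.5, seed=0):
    """Integrate angular velocity (deg/s) into a head-direction estimate.
    Each velocity sample is corrupted by Gaussian noise, so without
    visual feedback the integrated estimate performs a random walk away
    from the true HD. Returns the accumulated error in degrees."""
    rng = np.random.default_rng(seed)
    noisy = ang_vel + rng.normal(0.0, noise_sd, size=ang_vel.shape)
    true_hd = np.cumsum(ang_vel) * dt
    est_hd = np.cumsum(noisy) * dt
    return est_hd - true_hd

# 20 minutes at 50 Hz with the agent sitting still: the error still grows.
err = hd_drift(np.zeros(60000))
```

Because the error is a random walk, its typical magnitude grows roughly as the square root of time, which is why the dRSC feedback loop in (B) is needed to keep the representation anchored.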
Fig 7
Fig 7. Testing scenery-based HD encoding at a sensory level with the two-environment scheme.
(A) Sceneries in the first environment (Env. I) are all the same as Fig 2A, yet are rotated by 120° anticlockwise in the second environment (Env. II). Simulations are conducted within 20 minutes, assuming the agent explores each environment for 10 minutes. (B) Global representations of dRSC cells (top) and HD cells (bottom) tested on Env. II. Simulations are conducted via our two-stage model with aLB cells. (C) Global representations of dRSC cells (top) and HD cells (bottom) tested on Env. II. Simulations are conducted via the alternative model, which shows the opposite direction of firing-field drift from (B). This model does not contain aLB cells but rather a direct visual-to-dRSC connection (subject to classical Hebbian learning). (D) Simulations over the whole 20-minute learning time via our two-stage model with aLB cells (left) and the alternative model without aLB cells (right), showing opposite directions of HD drift. Abbreviations: HD: head direction; Diff.: difference; dRSC: dysgranular retrosplenial cortex; f: firing rate.
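The alternative model in (C) replaces the aLB stage with a direct visual-to-dRSC connection trained by classical Hebbian learning. A minimal sketch of one unnormalized Hebbian update (the shapes and learning rate are illustrative):

```python
import numpy as np

def hebbian_step(W, pre, post, eta=0.01):
    """Classical Hebbian update: each weight grows in proportion to the
    coactivity of its presynaptic (visual) and postsynaptic (dRSC)
    units. Unlike the mOSA rule, nothing here normalizes or depresses
    weights, so conflicting cue configurations simply superimpose."""
    return W + eta * np.outer(post, pre)

W = np.zeros((2, 3))
W = hebbian_step(W, pre=np.array([1.0, 0.0, 1.0]), post=np.array([0.0, 1.0]))
print(W)  # only row 1 (the active postsynaptic unit) gains weight
```

The absence of any competitive or decay term is the key contrast with the two-stage model: weights only ever accumulate, so a rotated scenery pulls the dRSC fields in a different direction than the aLB-based model predicts.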
Fig 8
Fig 8. The retrieval utility of aLB cells under multi-environmental exploration similar to Fig 4.
Simulations are conducted via our two-stage model (left, with aLB cells) and the alternative model (right, without aLB cells but rather a direct visual to dRSC Hebbian connection). (A) Global representations of dRSC cells (top) and HD cells (bottom) tested on the scenery in the final environment (Env. X). Red dashed lines stand for the center of initial firing fields. See the caption of Fig 4 for more details. (B) The difference between the actual HD and the HD representation over the whole 20-minute learning time. (C) Global representations of dRSC cells (top) and HD cells (bottom) tested on the sceneries in the first environment (Env. I), referring to new firing fields of dRSC and HD cells for retrieving the earliest scenery after learning across environments is complete. Abbreviations: Env.: Environment; dRSC: dysgranular retrosplenial cortex; HD: head direction; Diff.: difference; f: firing rate.
