Learning a sparse code for temporal sequences using STDP and sequence compression

Neural Comput. 2011 Oct;23(10):2567-98. doi: 10.1162/NECO_a_00184. Epub 2011 Jul 6.

Abstract

A spiking neural network that learns temporal sequences is described. A sparse code, in which individual neurons represent sequences and subsequences, enables multiple sequences to be stored without interference. The network is founded on a model of sequence compression in the hippocampus that is robust to variation in the duration of sequence elements and well suited to learning sequences through spike-timing-dependent plasticity (STDP). Three additions to the sequence compression model underlie the sparse representation: synapses between the network's neurons that are subject to STDP; a competitive plasticity rule, so that neurons specialize to individual sequences; and lingering neural depolarization after spiking, so that neurons carry a memory of recent activity. The response to a new sequence element is determined by the neurons that responded to the preceding subsequence, according to the competitively learned synaptic connections. Numerical simulations show that the model can learn sets of intersecting sequences presented at widely differing frequencies and with elements of varying duration.
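
Illustrative sketch

The three additions named in the abstract can be illustrated with a short simulation. The Python sketch below is an assumption-laden stand-in, not the paper's model: all parameter values are hypothetical, the competition is approximated by a k-winners-take-all step rather than the paper's competitive plasticity rule, and the post-spike depolarization is modeled as a simple exponentially decaying trace. It shows the three ingredients in miniature: recurrent synapses shaped by pairwise STDP, competition so that only a few neurons respond to each element, and a decaying depolarization that lets the response to a new element depend on which neurons responded to the preceding subsequence.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative parameters -- hypothetical values chosen for this sketch,
    # not taken from the paper.
    N_INPUT = 20       # one input line per possible sequence element
    N_NET = 50         # network neurons that form the sparse code
    TAU_STDP = 20.0    # STDP time constant (ms)
    A_PLUS = 0.01      # potentiation amplitude (pre fires before post)
    A_MINUS = 0.012    # depression amplitude (post fires before pre)
    TAU_DEPOL = 100.0  # decay constant of post-spike depolarization (ms)
    K_WINNERS = 3      # stand-in competition: k neurons fire per element

    W_in = rng.uniform(0.0, 0.2, (N_NET, N_INPUT))  # feedforward weights
    W_rec = rng.uniform(0.0, 0.1, (N_NET, N_NET))   # recurrent weights, shaped by STDP
    np.fill_diagonal(W_rec, 0.0)                    # no self-connections

    depol = np.zeros(N_NET)               # lingering depolarization: the "memory"
    last_spike = np.full(N_NET, -np.inf)  # time of each neuron's last spike

    def present(t, element, dt=10.0):
        """Present one sequence element at time t (ms); return the spike pattern."""
        global depol
        x = np.zeros(N_INPUT)
        x[element] = 1.0
        depol *= np.exp(-dt / TAU_DEPOL)            # memory decays between elements
        # Recurrent input from recently active neurons biases the response, so the
        # reaction to a new element depends on the preceding subsequence.
        drive = W_in @ x + W_rec @ depol
        winners = np.argsort(drive)[-K_WINNERS:]    # k-winners-take-all competition
        spikes = np.zeros(N_NET, dtype=bool)
        spikes[winners] = True
        # Pairwise STDP on the recurrent synapses:
        #   pre before post -> potentiate by A_PLUS  * exp(-dt / TAU_STDP)
        #   post before pre -> depress   by A_MINUS * exp(-dt / TAU_STDP)
        since = t - last_spike                      # time since each neuron last fired
        trace = np.exp(-since / TAU_STDP)           # 0 for neurons that never fired
        for j in np.flatnonzero(spikes):
            W_rec[j, :] += A_PLUS * trace           # inputs to j from earlier spikers
            W_rec[:, j] -= A_MINUS * trace          # outputs of j to earlier spikers
        np.clip(W_rec, 0.0, 1.0, out=W_rec)
        np.fill_diagonal(W_rec, 0.0)
        last_spike[spikes] = t
        depol[spikes] = 1.0                         # spiking refreshes the memory trace
        return spikes

    # Train on two intersecting sequences (they share the subsequence 1, 2),
    # presented repeatedly.
    sequences = [[0, 1, 2, 3], [4, 1, 2, 5]]
    t = 0.0
    for epoch in range(50):
        for seq in sequences:
            for elem in seq:
                present(t, elem)
                t += 10.0

Because the depolarization trace carries context from earlier elements into the recurrent drive, the neurons that win for the shared subsequence (1, 2) can differ depending on which sequence it occurs in, which is the intuition behind the sparse, interference-free code the abstract describes.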

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Hippocampus / physiology*
  • Models, Neurological*
  • Neural Networks, Computer*
  • Neurons / physiology*