Sequential Bayesian decoding with a population of neurons

Neural Comput. 2003 May;15(5):993-1012. doi: 10.1162/089976603765202631.

Abstract

Population coding is a simplified model of distributed information processing in the brain. This study investigates the performance and implementation of a sequential Bayesian decoding (SBD) paradigm in the framework of population coding. In the first step of decoding, when no prior knowledge is available, maximum likelihood inference is used; the result forms the prior knowledge of the stimulus for the second step of decoding. Estimates are then propagated sequentially by maximum a posteriori (MAP) decoding, in which the prior knowledge at each step is taken from the estimate of the previous step. Not only do we analyze the performance of SBD, obtaining the optimal form of prior knowledge that achieves the best estimation result, but we also investigate its possible biological realization, in the sense that all operations are performed by the dynamics of a recurrent network. A crucial point in implementing MAP is to identify a mechanism that propagates prior knowledge. We find that this can be achieved by short-term adaptation of network weights according to the Hebbian learning rule. Simulation results on both constant and time-varying stimuli support the analysis.
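
For concreteness, the following is a minimal sketch of the SBD paradigm at the statistical level (not the recurrent-network realization described in the paper): a population of Poisson neurons with Gaussian tuning curves, maximum likelihood at the first step, and MAP with a Gaussian prior centered on the previous estimate at later steps. The population size, tuning width, firing rate, time window, and prior width are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 64                                   # number of neurons (assumed)
centers = np.linspace(-np.pi, np.pi, N)  # preferred stimuli
a = 2.0                                  # tuning width (assumed)
r_max = 20.0                             # peak firing rate (assumed)
dt = 0.1                                 # decoding time window (assumed)
sigma_prior = 0.3                        # width of propagated Gaussian prior (assumed)

def tuning(x):
    """Mean firing rates of the population for stimulus x (Gaussian tuning curves)."""
    return r_max * np.exp(-(x - centers) ** 2 / (2 * a ** 2))

def log_likelihood(x_grid, spikes):
    """Poisson log-likelihood of the observed spike counts on a stimulus grid."""
    rates = tuning(x_grid[:, None]) * dt           # (grid, N) expected counts
    return (spikes * np.log(rates) - rates).sum(axis=1)

x_grid = np.linspace(-np.pi, np.pi, 1001)
x_true = 0.5                                       # constant stimulus (example)
x_hat = None                                       # no prior before the first step

for step in range(10):
    spikes = rng.poisson(tuning(x_true) * dt)      # population response at this step
    ll = log_likelihood(x_grid, spikes)
    if x_hat is None:
        # First step: maximum likelihood, since no prior knowledge is available.
        log_post = ll
    else:
        # Later steps: MAP with a Gaussian prior centered on the previous estimate.
        log_prior = -(x_grid - x_hat) ** 2 / (2 * sigma_prior ** 2)
        log_post = ll + log_prior
    x_hat = x_grid[np.argmax(log_post)]
    print(f"step {step + 1}: estimate = {x_hat:.3f}")
```

Over successive steps the propagated prior accumulates evidence, so the MAP estimates fluctuate less than single-step maximum likelihood; the paper's contribution is to realize this propagation within recurrent network dynamics via Hebbian short-term weight adaptation rather than by explicit computation as above.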

MeSH terms

  • Bayes Theorem
  • Brain / cytology
  • Brain / physiology*
  • Models, Neurological*
  • Neurons / physiology*