A Unifying Probabilistic View of Associative Learning

PLoS Comput Biol. 2015 Nov 4;11(11):e1004567. doi: 10.1371/journal.pcbi.1004567. eCollection 2015 Nov.

Abstract

Two important ideas about associative learning have emerged in recent decades: (1) Animals are Bayesian learners, tracking their uncertainty about associations; and (2) animals acquire long-term reward predictions through reinforcement learning. Both of these ideas are normative, in the sense that they are derived from rational design principles. They are also descriptive, capturing a wide range of empirical phenomena that troubled earlier theories. This article describes a unifying framework encompassing Bayesian and reinforcement learning theories of associative learning. Each perspective captures a different aspect of associative learning, and their synthesis offers insight into phenomena that neither perspective can explain on its own.
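
For concreteness, the sketch below shows one way the two ingredients mentioned in the abstract can be combined: a Kalman temporal-difference learner that tracks both a point estimate and the uncertainty of association weights (the Bayesian ingredient) while predicting discounted future reward (the reinforcement-learning ingredient). It is a minimal illustration in Python/NumPy under stated assumptions, not the paper's implementation; the function name, parameter values, and demo stimuli are assumptions chosen for clarity.

    import numpy as np

    def kalman_td(features, rewards, gamma=0.98, noise_var=1.0,
                  diffusion_var=0.01, prior_var=1.0):
        """Illustrative Kalman temporal-difference learner.

        Setting gamma = 0 reduces it to a Kalman-filter (purely Bayesian)
        associative learner; freezing the covariance at a fixed scalar
        recovers standard temporal-difference learning with a constant
        learning rate. All parameter values here are placeholders.
        """
        T, D = features.shape
        w = np.zeros(D)                  # posterior mean of association weights
        Sigma = prior_var * np.eye(D)    # posterior covariance (uncertainty)
        history = []
        for t in range(T):
            x_next = features[t + 1] if t + 1 < T else np.zeros(D)
            h = features[t] - gamma * x_next           # discounted feature derivative
            Sigma = Sigma + diffusion_var * np.eye(D)  # weights may drift between trials
            delta = rewards[t] - h @ w                 # prediction error
            k = Sigma @ h / (h @ Sigma @ h + noise_var)  # Kalman gain: per-cue learning rates
            w = w + k * delta                          # update the mean
            Sigma = Sigma - np.outer(k, h @ Sigma)     # uncertainty shrinks where evidence arrived
            history.append(w.copy())
        return np.array(history), Sigma

    # Hypothetical demo: cues A and B reinforced in compound, then A extinguished alone.
    # gamma = 0 here treats trials independently, i.e., the Kalman-filter special case.
    if __name__ == "__main__":
        X = np.array([[1, 1]] * 20 + [[1, 0]] * 20, dtype=float)
        r = np.array([1.0] * 20 + [0.0] * 20)
        W, Sigma = kalman_td(X, r, gamma=0.0)
        print("final weights (A, B):", W[-1])

In this toy run, the two cues initially share credit for the reward, and extinction of cue A alone drives its weight toward zero while cue B's weight is largely retained, the qualitative pattern a covariance-tracking learner produces.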

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Algorithms
  • Animals
  • Association Learning / physiology*
  • Bayes Theorem
  • Computational Biology / methods*
  • Humans
  • Models, Neurological*

Grants and funding

This research was supported by startup funds from Harvard University. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.