Mutual benefits: Combining reinforcement learning with sequential sampling models

Neuropsychologia. 2020 Jan;136:107261. doi: 10.1016/j.neuropsychologia.2019.107261. Epub 2019 Nov 14.

Abstract

Reinforcement learning models of error-driven learning and sequential-sampling models of decision making have provided significant insight into the neural basis of a variety of cognitive processes. Until recently, model-based cognitive neuroscience research using the two frameworks evolved largely independently. Recent efforts have illustrated the complementary nature of both modelling traditions and shown how they can be integrated into a unified theoretical framework, explaining trial-by-trial dependencies in choice behavior as well as response time distributions. Here, we review the theoretical background of integrating the two classes of models and survey recent empirical efforts towards this goal. We furthermore argue that the integration of both modelling traditions provides mutual benefits for both fields, and highlight the promise of this approach for cognitive modelling and model-based cognitive neuroscience.
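The class of hybrid models the abstract describes can be illustrated with a minimal sketch: a delta-rule (Q-learning) update tracks action values across trials, and on each trial the value difference sets the drift rate of a two-boundary diffusion process, so the model jointly produces a choice and a response time. This is an illustrative toy, not the specific model of any study reviewed here; the reward probabilities, learning rate, and drift-scaling parameter below are hypothetical.

```python
import numpy as np

def simulate_rl_ddm(n_trials=200, alpha=0.1, scale=2.0, bound=1.0,
                    noise=1.0, dt=0.001, seed=None):
    """Toy RL-DDM hybrid: Q-learning sets the trial-wise drift rate of a
    diffusion-to-bound process, yielding choices and response times."""
    rng = np.random.default_rng(seed)
    p_reward = np.array([0.2, 0.8])  # hypothetical reward probabilities per option
    q = np.zeros(2)                  # learned action values
    choices, rts = [], []
    for _ in range(n_trials):
        drift = scale * (q[1] - q[0])    # value difference drives evidence accumulation
        x, t = 0.0, 0.0
        while abs(x) < bound:            # Euler-Maruyama walk until a boundary is hit
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        choice = int(x > 0)              # upper boundary -> option 1, lower -> option 0
        reward = float(rng.random() < p_reward[choice])
        q[choice] += alpha * (reward - q[choice])  # delta-rule (prediction-error) update
        choices.append(choice)
        rts.append(t)
    return q, np.array(choices), np.array(rts)
```

Because the drift rate grows with the learned value difference, simulated responses become both more accurate and faster as learning progresses, which is exactly the trial-by-trial coupling of choice and response time distributions that a joint RL plus sequential-sampling account captures.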

Keywords: Decision-making; Instrumental learning; Reinforcement learning; Sequential sampling models.

Publication types

  • Research Support, Non-U.S. Gov't
  • Review

MeSH terms

  • Cognitive Neuroscience*
  • Decision Making*
  • Humans
  • Models, Biological*
  • Reinforcement, Psychology*