The drift diffusion model as the choice rule in reinforcement learning

Psychon Bull Rev. 2017 Aug;24(4):1234-1251. doi: 10.3758/s13423-016-1199-y.


Abstract

Current reinforcement-learning models often assume simplified decision processes that do not fully reflect the dynamic complexities of choice processes. Conversely, sequential-sampling models of decision making account for both choice accuracy and response time, but assume that decisions are based on static decision values. To combine these two computational models of decision making and learning, we implemented reinforcement-learning models in which the drift diffusion model describes the choice process, thereby capturing both within- and across-trial dynamics. To exemplify the utility of this approach, we quantitatively fit data from a common reinforcement-learning paradigm using hierarchical Bayesian parameter estimation, and compared model variants to determine whether they could capture the effects of stimulant medication in adult patients with attention-deficit hyperactivity disorder (ADHD). The model with the best relative fit provided a good description of the learning process, choices, and response times. A parameter recovery experiment showed that the hierarchical Bayesian modeling approach enabled accurate estimation of the model parameters. The model approach described here, using simultaneous estimation of reinforcement-learning and drift diffusion model parameters, shows promise for revealing new insights into the cognitive and neural mechanisms of learning and decision making, as well as the alteration of such processes in clinical groups.
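The combination described above can be sketched as a generative simulation: a delta-rule (Q-learning) update tracks option values across trials, and on each trial the difference between the two Q-values sets the drift rate of a diffusion process that produces both the choice and the response time. This is a minimal illustrative sketch, not the paper's fitted model; the parameter names and values (learning rate `alpha`, drift-scaling coefficient `m`, boundary separation `a`, non-decision time `t0`, reward probabilities) are assumptions chosen for demonstration.

```python
import numpy as np

def simulate_rl_ddm(n_trials=200, alpha=0.1, m=2.0, a=1.5, t0=0.3,
                    p_reward=(0.8, 0.2), dt=0.001, seed=0):
    """Simulate a two-armed bandit with an RL-DDM (illustrative parameters).

    alpha: learning rate; m: drift scaling; a: boundary separation;
    t0: non-decision time; p_reward: reward probability per option.
    """
    rng = np.random.default_rng(seed)
    q = np.zeros(2)                      # learned Q-values for the two options
    choices, rts = [], []
    for _ in range(n_trials):
        v = m * (q[0] - q[1])            # drift rate from the value difference
        x, t = 0.0, 0.0                  # evidence starts midway between bounds
        while abs(x) < a / 2:            # Euler-Maruyama diffusion simulation
            x += v * dt + np.sqrt(dt) * rng.normal()
            t += dt
        choice = 0 if x > 0 else 1       # upper bound -> option 0, lower -> 1
        reward = float(rng.random() < p_reward[choice])
        q[choice] += alpha * (reward - q[choice])   # delta-rule update
        choices.append(choice)
        rts.append(t + t0)               # decision time plus non-decision time
    return q, np.array(choices), np.array(rts)
```

Because the drift rate grows with the learned value difference, choices become faster and more consistent as learning progresses, which is the within- and across-trial coupling the abstract refers to. In the paper this generative process is inverted via hierarchical Bayesian estimation to recover the parameters from observed choices and response times.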

Keywords: Bayesian modeling; Decision making; Mathematical models; Reinforcement learning.

MeSH terms

  • Adult
  • Attention Deficit Disorder with Hyperactivity / drug therapy
  • Attention Deficit Disorder with Hyperactivity / physiopathology
  • Bayes Theorem
  • Choice Behavior / physiology*
  • Humans
  • Models, Theoretical*
  • Reinforcement, Psychology*