Forgetting in Reinforcement Learning Links Sustained Dopamine Signals to Motivation

PLoS Comput Biol. 2016 Oct 13;12(10):e1005145. doi: 10.1371/journal.pcbi.1005145. eCollection 2016 Oct.


It has been suggested that dopamine (DA) represents the reward-prediction error (RPE) defined in reinforcement learning, and that DA therefore responds to unpredicted but not predicted reward. However, recent studies have found sustained DA responses to predictable rewards in tasks involving self-paced behavior, and have suggested that this response represents a motivational signal. We have previously shown that RPE can be sustained if learned values decay (are forgotten), which can be implemented as decay of the synaptic strengths that store learned values. This account, however, did not explain the suggested link between tonic/sustained DA and motivation. In the present work, we explored the motivational effects of value decay in self-paced approach behavior, modeled as a series of 'Go' or 'No-Go' selections toward a goal. Through simulations, we found that, counterintuitively, value decay can enhance motivation, specifically by facilitating fast goal-reaching. Mathematical analyses revealed two potential underlying mechanisms: (1) decay-induced sustained RPE creates a gradient of 'Go' values toward the goal, and (2) value contrasts between 'Go' and 'No-Go' emerge because chosen values are continually updated while unchosen values simply decay. Our model provides potential explanations for the key experimental findings that suggest DA's roles in motivation: (i) slowdown of behavior by post-training blockade of DA signaling, (ii) observations that DA blockade severely impairs effortful actions to obtain rewards while largely sparing seeking of easily obtainable rewards, and (iii) relationships among the reward amount, the level of motivation reflected in the speed of behavior, and the average level of DA. These results indicate that reinforcement learning with value decay, or forgetting, provides a parsimonious mechanistic account of DA's roles in value learning and motivation.
Our results also suggest that when biological systems for value learning remain active even after learning has apparently converged, those systems may be in a state of dynamic equilibrium in which learning and forgetting are balanced.
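The two mechanisms described in the abstract can be sketched as a toy simulation. The code below is a minimal illustration, not the paper's actual model: the chain length, learning rate, decay rate, and softmax temperature are all hypothetical, and time discounting is switched off (GAMMA = 1) so that any value gradient toward the goal arises purely from the decay.

```python
import math
import random

# Minimal sketch of Q-learning with value decay ("forgetting") in a
# self-paced Go/No-Go chain toward a goal.  All parameter values are
# illustrative only, not taken from the paper.
ALPHA = 0.5    # learning rate
GAMMA = 1.0    # no time discounting: any value gradient comes from decay
PHI = 0.02     # per-step decay (forgetting) rate of all learned values
BETA = 5.0     # softmax inverse temperature
N_STATES = 7   # states 0..6; reward is delivered on reaching state 6
REWARD = 1.0

def choose_go(q_go, q_nogo):
    """Softmax (logistic) choice between 'Go' and 'No-Go'."""
    p_go = 1.0 / (1.0 + math.exp(-BETA * (q_go - q_nogo)))
    return random.random() < p_go

def run(n_trials=500, seed=0):
    random.seed(seed)
    Q = {(s, a): 0.0 for s in range(N_STATES - 1) for a in ('go', 'nogo')}
    for _ in range(n_trials):
        s = 0
        while s < N_STATES - 1:
            a = 'go' if choose_go(Q[(s, 'go')], Q[(s, 'nogo')]) else 'nogo'
            s_next = s + 1 if a == 'go' else s    # 'No-Go' stays in place
            r = REWARD if s_next == N_STATES - 1 else 0.0
            v_next = 0.0 if s_next == N_STATES - 1 else max(
                Q[(s_next, 'go')], Q[(s_next, 'nogo')])
            rpe = r + GAMMA * v_next - Q[(s, a)]  # reward-prediction error
            Q[(s, a)] += ALPHA * rpe
            # Every learned value decays a little at every step.  Only the
            # chosen (state, action) pair was just refreshed, so unchosen
            # values shrink, producing the Go/No-Go value contrast.
            for k in Q:
                Q[k] *= 1.0 - PHI
            s = s_next
    return Q
```

After training, the 'Go' values form an increasing gradient toward the goal and exceed the decayed 'No-Go' values at each state, even without discounting; setting PHI = 0 instead lets all 'Go' values converge to the reward amount, flattening the gradient and driving RPE to zero.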

MeSH terms

  • Computer Simulation
  • Corpus Striatum / physiology*
  • Decision Making / physiology
  • Dopamine / metabolism*
  • Dopaminergic Neurons / physiology
  • Humans
  • Mental Recall / physiology*
  • Models, Neurological*
  • Motivation / physiology*
  • Reinforcement, Psychology*


Substances

  • Dopamine

Grant support

This work was supported by Grants-in-Aid for Scientific Research (No. 15H05876 and No. 26120710) from the Ministry of Education, Culture, Sports, Science and Technology of Japan, and by the Strategic Japanese-German Cooperative Programme on "Computational Neuroscience" (project title: neural circuit mechanisms of reinforcement learning) of the Japan Agency for Medical Research and Development, to KM. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.