Model-Based Reasoning in Humans Becomes Automatic with Training

PLoS Comput Biol. 2015 Sep 17;11(9):e1004463. doi: 10.1371/journal.pcbi.1004463. eCollection 2015 Sep.

Abstract

Model-based and model-free reinforcement learning (RL) have been suggested as algorithmic realizations of goal-directed and habitual action strategies. Model-based RL is more flexible than model-free but requires sophisticated calculations using a learnt model of the world. This has led model-based RL to be identified with slow, deliberative processing, and model-free RL with fast, automatic processing. In support of this distinction, it has recently been shown that model-based reasoning is impaired by placing subjects under cognitive load, a hallmark of non-automaticity. Here, using the same task, we show that cognitive load does not impair model-based reasoning if subjects receive prior training on the task. This finding is replicated across two studies and a variety of analysis methods. Thus, task familiarity permits use of model-based reasoning in parallel with other cognitive demands. The ability to deploy model-based reasoning in an automatic, parallelizable fashion has widespread theoretical implications, particularly for the learning and execution of complex behaviors. It also suggests a range of important failure modes in psychiatric disorders.
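To make the algorithmic distinction in the abstract concrete, here is a minimal illustrative sketch (not taken from the paper; all function names, parameters, and the toy two-stage structure are assumptions). It contrasts a model-free temporal-difference update, which caches values directly from experienced reward, with model-based evaluation, which prospectively combines a learnt transition model with second-stage values:

```python
ALPHA = 0.1   # learning rate (illustrative value)
GAMMA = 1.0   # discount factor

def model_free_update(q, state, action, reward, q_next_max):
    """Model-free RL: one temporal-difference update.

    Values are cached from experienced outcomes; no knowledge of
    the task's transition structure is used.
    """
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + ALPHA * (reward + GAMMA * q_next_max - old)

def model_based_value(transitions, q_stage2, state, action):
    """Model-based RL: evaluate an action prospectively.

    Averages the best second-stage value over the learnt transition
    model; this is the kind of computation the abstract calls a
    'sophisticated calculation using a learnt model of the world'.
    """
    return sum(
        prob * max(q_stage2.get((s2, a), 0.0) for a in (0, 1))
        for s2, prob in transitions[(state, action)].items()
    )
```

The sketch is deliberately stripped down: in the actual two-stage task used in this literature, choices are typically modeled as a weighted mixture of these two value estimates, with the weight indexing the balance between habitual and goal-directed control.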

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Adolescent
  • Adult
  • Algorithms
  • Computational Biology
  • Decision Making / physiology*
  • Female
  • Humans
  • Male
  • Models, Biological
  • Reinforcement, Psychology*
  • Reward*
  • Task Performance and Analysis
  • Young Adult