In everyday life, humans often encounter complex environments in which multiple sources of information can influence their decisions. We propose that in such situations, people select and apply different strategies, each representing a different cognitive model of the decision problem. Learning proceeds by evaluating the success of the currently applied strategy and, when warranted, by switching to another. To test our strategy selection model, we investigated how humans solve a dynamic learning task with complex auditory and visual information, and assessed the underlying neural mechanisms with functional magnetic resonance imaging. Using the model, we were able to capture participants' choices and to successfully attribute expected values and reward prediction errors to activations in the dopaminoceptive system (e.g., ventral striatum [VS]), as well as decision conflict to signals in the anterior cingulate cortex. The model outperformed an alternative approach that updated the relevance of the individual information sources rather than the decision strategies themselves. Activation of sensory areas depended on whether the selected strategy made use of the respective source of information. Selection of a strategy also determined how value-related information influenced effective connectivity between sensory systems and the VS. Our results suggest that humans can structure their search for and use of relevant information by adaptively selecting between decision strategies.
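The learning scheme described above can be sketched computationally. The following is a minimal, hypothetical illustration (not the authors' actual model): each candidate strategy carries an expected value that is updated with a reward prediction error after each trial, and strategies are selected probabilistically via a softmax rule. The learning rate `alpha` and inverse temperature `beta` are illustrative free parameters assumed here, not values from the study.

```python
import math
import random

def softmax(values, beta):
    """Softmax choice probabilities with inverse temperature beta."""
    exps = [math.exp(beta * v) for v in values]
    total = sum(exps)
    return [e / total for e in exps]

class StrategySelector:
    """Toy strategy-selection learner (illustrative sketch only):
    each decision strategy carries an expected value, updated by a
    reward prediction error (RPE) after every trial; strategies are
    chosen stochastically, so successful ones come to dominate."""

    def __init__(self, n_strategies, alpha=0.2, beta=3.0):
        self.values = [0.0] * n_strategies  # expected value per strategy
        self.alpha = alpha                  # learning rate (assumed value)
        self.beta = beta                    # softmax temperature (assumed)

    def choose(self, rng=random):
        """Select a strategy; higher-valued strategies are more likely."""
        probs = softmax(self.values, self.beta)
        return rng.choices(range(len(self.values)), weights=probs)[0]

    def update(self, strategy, reward):
        """Delta-rule update: V <- V + alpha * (reward - V)."""
        rpe = reward - self.values[strategy]  # reward prediction error
        self.values[strategy] += self.alpha * rpe
        return rpe
```

In this sketch, repeated reward for one strategy raises its expected value, which in turn raises its selection probability, so the learner gradually "switches" toward the more successful strategy, mirroring the selection-and-evaluation cycle described in the abstract.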
Keywords: model-based fMRI; reinforcement learning; reward; ventral striatum.
© The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: firstname.lastname@example.org.