Dynamic response-by-response models of matching behavior in rhesus monkeys

J Exp Anal Behav. 2005 Nov;84(3):555-79. doi: 10.1901/jeab.2005.110-04.

Abstract

We studied the choice behavior of 2 monkeys in a discrete-trial task with reinforcement contingencies similar to those Herrnstein (1961) used when he described the matching law. In each session, the monkeys experienced blocks of discrete trials at different relative-reinforcer frequencies or magnitudes with unsignaled transitions between the blocks. Steady-state data following adjustment to each transition were well characterized by the generalized matching law; response ratios undermatched reinforcer frequency ratios but matched reinforcer magnitude ratios. We modeled response-by-response behavior with linear models that used past reinforcers as well as past choices to predict the monkeys' choices on each trial. We found that more recently obtained reinforcers more strongly influenced choice behavior. Perhaps surprisingly, we also found that the monkeys' actions were influenced by the pattern of their own past choices. It was necessary to incorporate both past reinforcers and past choices in order to accurately capture steady-state behavior as well as the fluctuations during block transitions and the response-by-response patterns of behavior. Our results suggest that simple reinforcement learning models must account for the effects of past choices to accurately characterize behavior in this task, and that models with these properties provide a conceptual tool for studying how both past reinforcers and past choices are integrated by the neural systems that generate behavior.
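For reference, the generalized matching law mentioned in the abstract is conventionally written as

    \log\left(\frac{B_1}{B_2}\right) = a \, \log\left(\frac{R_1}{R_2}\right) + \log b

where B1 and B2 are the response (choice) frequencies for the two options, R1 and R2 are the obtained reinforcer frequencies or magnitudes, a is the sensitivity exponent (a < 1 corresponds to the undermatching reported for reinforcer frequency, a near 1 to the matching reported for reinforcer magnitude), and b is a bias term.

The response-by-response models described in the abstract predict each choice from weighted combinations of past reinforcers and past choices. The Python sketch below illustrates that general class of model under a logistic (log-odds) link; the variable names, history length, kernel weights, and reinforcement probabilities are illustrative assumptions, not the authors' fitted parameters or code.

    # Minimal sketch (not the authors' implementation): a response-by-response
    # choice model in which the log odds of choosing option 1 on each trial is
    # a weighted sum of recent reinforcers and recent choices. All numerical
    # settings below are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_choices(n_trials=1000, history=5,
                         reinf_weights=None, choice_weights=None, bias=0.0):
        """Simulate choices from a linear model over reinforcer and choice history.

        reinf_weights / choice_weights: arrays of length `history`, most recent
        trial first. Positive reinforcer weights push choice toward the option
        that recently paid off; negative choice weights produce alternation.
        """
        if reinf_weights is None:
            # Recency-weighted kernel: recent reinforcers count most strongly.
            reinf_weights = 1.5 * 0.6 ** np.arange(history)
        if choice_weights is None:
            # Small negative weights on past choices -> tendency to switch.
            choice_weights = -0.3 * 0.6 ** np.arange(history)

        choices = np.zeros(n_trials, dtype=int)   # +1 = option 1, -1 = option 2
        rewards = np.zeros(n_trials)               # signed reward history

        for t in range(n_trials):
            past_r = rewards[max(0, t - history):t][::-1]   # most recent first
            past_c = choices[max(0, t - history):t][::-1]
            drive = (bias
                     + np.dot(reinf_weights[:len(past_r)], past_r)
                     + np.dot(choice_weights[:len(past_c)], past_c))
            p_choose_1 = 1.0 / (1.0 + np.exp(-drive))        # logistic link
            c = 1 if rng.random() < p_choose_1 else -1
            choices[t] = c
            # Example relative reinforcer frequencies (0.7 vs 0.3); the actual
            # schedules used in the study are not reproduced here.
            p_reward = 0.7 if c == 1 else 0.3
            rewards[t] = c if rng.random() < p_reward else 0.0

        return choices, rewards

    choices, rewards = simulate_choices()
    print("Proportion of option-1 choices:", np.mean(choices == 1))

By construction, this sketch encodes the two qualitative features highlighted in the abstract: recently obtained reinforcers carry more weight than older ones, and each response also depends on the animal's own recent choices.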

Publication types

  • Research Support, N.I.H., Extramural
  • Research Support, Non-U.S. Gov't
  • Research Support, U.S. Gov't, Non-P.H.S.

MeSH terms

  • Animals
  • Attention
  • Choice Behavior*
  • Color Perception
  • Conditioning, Operant
  • Discrimination Learning*
  • Electrooculography
  • Eye Movements
  • Fixation, Ocular
  • Macaca mulatta
  • Male
  • Motivation*
  • Probability Learning
  • Reinforcement Schedule*
  • Retention, Psychology
  • Signal Processing, Computer-Assisted
  • Visual Fields