Modeling VI and VDRL feedback functions: Searching normative rules through computational simulation

J Exp Anal Behav. 2023 Mar;119(2):324-336. doi: 10.1002/jeab.826. Epub 2023 Feb 2.

Abstract

We present a mathematical description of the feedback functions of variable-interval and variable differential-reinforcement-of-low-rates schedules as functions of schedule size only. These results were obtained using an R script named Beak, which was built to simulate rates of behavior interacting with simple schedules of reinforcement. Using Beak, we simulated data that allow an assessment of different reinforcement feedback functions. This was achieved with unparalleled precision, as simulations provide very large samples of data and, more importantly, simulated behavior is not changed by the reinforcement it produces; response rates can therefore be varied systematically. We compared different reinforcement feedback functions for random interval schedules using the following criteria: meaning, precision, parsimony, and generality. Our results indicate that the best feedback function for the random interval schedule was published by Baum (1981). We also propose that the model used by Killeen (1975) is a viable feedback function for the random differential-reinforcement-of-low-rates schedule. We argue that Beak paves the way for a greater understanding of schedules of reinforcement, addressing still-open questions about quantitative features of simple schedules. Beak could also guide future experiments that use schedules as theoretical and methodological tools.
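The abstract's core idea can be illustrated with a minimal simulation in the same spirit as Beak (this sketch is not the authors' R script; it assumes Poisson responding at rate B, Baum's (1981) random-interval feedback function r = 1/(T + 1/B) for a schedule with mean interval T, and the standard result r = B·e^(-B·t) for Poisson responding on a DRL t schedule, which is the form associated here with Killeen, 1975):

```python
import math
import random


def simulate_random_interval(mean_interval, response_rate,
                             duration, dt=0.05, seed=0):
    """Poisson responding on a random-interval (RI) schedule.

    mean_interval: mean time T between reinforcer setups (s)
    response_rate: responses per second, B, emitted at random
    Returns the obtained reinforcement rate (reinforcers per second).
    """
    rng = random.Random(seed)
    p_setup = dt / mean_interval   # chance a reinforcer is armed this step
    p_resp = response_rate * dt    # chance a response occurs this step
    armed = False
    reinforcers = 0
    for _ in range(int(duration / dt)):
        if not armed and rng.random() < p_setup:
            armed = True           # reinforcer set up; held until collected
        if rng.random() < p_resp and armed:
            reinforcers += 1       # first response after setup is reinforced
            armed = False
    return reinforcers / duration


def simulate_drl(delay, response_rate, n_responses, seed=1):
    """Poisson responding on a DRL schedule: a response is reinforced
    only if the preceding inter-response time is at least `delay`."""
    rng = random.Random(seed)
    total_time = 0.0
    reinforcers = 0
    for _ in range(n_responses):
        irt = rng.expovariate(response_rate)  # exponential IRTs
        total_time += irt
        if irt >= delay:
            reinforcers += 1
    return reinforcers / total_time


# Candidate feedback functions (reinforcers/s as a function of B):
def baum_ri(mean_interval, response_rate):
    return 1.0 / (mean_interval + 1.0 / response_rate)


def drl_feedback(delay, response_rate):
    return response_rate * math.exp(-response_rate * delay)
```

Because the simulated response rate is set by the experimenter rather than shaped by reinforcement, the obtained reinforcement rate can be compared directly against each candidate feedback function across a systematic sweep of B values.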

Keywords: reinforcement feedback function; simple schedules of reinforcement; simulation; variable differential reinforcement of low rates; variable interval.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Animals
  • Conditioning, Operant*
  • Feedback
  • Mathematics
  • Reinforcement Schedule
  • Reinforcement, Psychology*