Deep belief networks learn context dependent behavior

PLoS One. 2014 Mar 26;9(3):e93250. doi: 10.1371/journal.pone.0093250. eCollection 2014.

Abstract

With the goal of understanding behavioral mechanisms of generalization, we analyzed the ability of neural networks to generalize across context. We modeled a behavioral task in which the correct responses to a set of specific sensory stimuli varied systematically across different contexts. The correct response depended on the stimulus (A,B,C,D) and the context quadrant (1,2,3,4). Each of the 16 possible stimulus-context combinations was associated with one of two responses (X,Y), and each response was correct for half of the combinations. The correct responses varied symmetrically across contexts. This allowed responses to previously unseen stimuli (probe stimuli) to be generalized from stimuli that had been presented previously. By testing the simulation on two or more stimuli that the network had never seen in a particular context, we could assess whether the correct response to the novel stimuli could be generated from knowledge of the correct responses in other contexts. We tested this generalization capability with a Deep Belief Network (DBN), a Multi-Layer Perceptron (MLP), and the combination of a DBN with a linear perceptron (LP). Overall, the combination of the DBN and LP had the highest success rate for generalization.
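
The task structure and the DBN-plus-linear-readout comparison lend themselves to a compact illustration. The sketch below is not the paper's implementation: the one-hot encoding, the parity-based symmetric response rule, the particular probe hold-outs, and the single-RBM stand-in for the multi-layer DBN (scikit-learn's BernoulliRBM feeding a linear Perceptron readout) are all illustrative assumptions.

    # Illustrative sketch of the stimulus-context generalization task and a
    # simplified DBN+LP analogue (one RBM layer + linear perceptron readout).
    # The response rule and probe hold-outs are assumptions, not the paper's design.
    import numpy as np
    from sklearn.neural_network import BernoulliRBM
    from sklearn.linear_model import Perceptron
    from sklearn.pipeline import Pipeline

    stimuli = ["A", "B", "C", "D"]
    contexts = [1, 2, 3, 4]

    def encode(stim, ctx):
        # One-hot stimulus (4 units) concatenated with one-hot context (4 units).
        x = np.zeros(8)
        x[stimuli.index(stim)] = 1.0
        x[3 + ctx] = 1.0
        return x

    def response(stim, ctx):
        # Assumed symmetric rule: response depends on the parity of stimulus
        # index plus context, so each response is correct for half of the
        # 16 combinations and the mapping varies systematically across contexts.
        return (stimuli.index(stim) + ctx) % 2

    combos = [(s, c) for s in stimuli for c in contexts]   # all 16 combinations
    probes = [("C", 3), ("D", 3)]                          # never shown in context 3
    train = [p for p in combos if p not in probes]

    X_train = np.array([encode(s, c) for s, c in train])
    y_train = np.array([response(s, c) for s, c in train])
    X_probe = np.array([encode(s, c) for s, c in probes])
    y_probe = np.array([response(s, c) for s, c in probes])

    # Unsupervised RBM feature layer followed by a linear perceptron readout.
    model = Pipeline([
        ("rbm", BernoulliRBM(n_components=16, learning_rate=0.05,
                             n_iter=200, random_state=0)),
        ("lp", Perceptron(max_iter=1000, random_state=0)),
    ])
    model.fit(X_train, y_train)
    print("probe accuracy:", model.score(X_probe, y_probe))

Training the RBM without labels and then fitting only the linear readout mirrors, in miniature, the division of labor described in the abstract between the DBN's feature learning and the LP's supervised mapping to responses.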

Publication types

  • Research Support, N.I.H., Extramural
  • Research Support, U.S. Gov't, Non-P.H.S.

MeSH terms

  • Behavior
  • Culture
  • Humans
  • Models, Psychological*
  • Nerve Net
  • Reinforcement, Psychology