Learning Structures: Predictive Representations, Replay, and Generalization

Curr Opin Behav Sci. 2020 Apr;32:155-166. doi: 10.1016/j.cobeha.2020.02.017. Epub 2020 May 5.

Abstract

Memory and planning rely on learning the structure of relationships among experiences. Compact representations of these structures guide flexible behavior in humans and animals. A century after the 'latent learning' experiments summarized by Tolman, the larger puzzle of cognitive maps remains elusive: how does the brain learn and generalize relational structures? This review focuses on a reinforcement learning (RL) approach to learning compact representations of the structure of states. We review evidence showing that capturing structures as predictive representations updated via replay offers a neurally plausible account of human behavior and of the neural representations of predictive cognitive maps. We highlight multi-scale successor representations, prioritized replay, and policy dependence. These advances call for new directions in studying the entanglement of learning and memory with prediction and planning.
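To make the abstract's key construct concrete: a successor representation (SR) stores, for each state, the expected discounted future occupancy of every other state, and it can be learned by temporal-difference updates applied to experienced or replayed transitions. The sketch below is a minimal illustration of that idea, not the authors' implementation; the environment (a three-state chain with an absorbing final state) and all parameter values are assumptions chosen for the example.

```python
import numpy as np

def sr_td_update(M, s, s_next, alpha=0.3, gamma=0.9):
    """One temporal-difference update of the successor representation M.

    M[s, s'] estimates the expected discounted future occupancy of s'
    starting from s under the current policy (hence the SR is
    policy-dependent).
    """
    onehot = np.zeros(M.shape[1])
    onehot[s] = 1.0
    td_error = onehot + gamma * M[s_next] - M[s]
    M[s] = M[s] + alpha * td_error
    return M

# Replaying stored transitions offline (a simple Dyna-style use of
# remembered experience) drives M toward its fixed point without
# further interaction with the environment.
M = np.zeros((3, 3))
transitions = [(0, 1), (1, 2), (2, 2)]  # chain 0 -> 1 -> 2, state 2 absorbing
for _ in range(1000):
    for s, s_next in transitions:
        M = sr_td_update(M, s, s_next)

# At convergence M matches (I - gamma*T)^(-1) for this deterministic chain,
# e.g. M[2, 2] approaches 1 / (1 - gamma).
```

Because the SR caches multi-step predictions under a given policy, value estimates for new reward functions follow from a single matrix-vector product, which is one reason predictive representations support the flexible generalization the abstract describes.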

Keywords: Dyna; reinforcement learning; hierarchical reinforcement learning; hippocampus; memory; model-based; model-free; planning; prediction; prefrontal cortex; prioritized replay; successor representation.