Comparing continual task learning in minds and machines
- PMID: 30322916
- PMCID: PMC6217400
- DOI: 10.1073/pnas.1800755115
Abstract
Humans can learn to perform multiple tasks in succession over the lifespan ("continual" learning), whereas current machine learning systems fail. Here, we investigated the cognitive mechanisms that permit successful continual learning in humans and harnessed our behavioral findings for neural network design. Humans categorized naturalistic images of trees according to one of two orthogonal task rules that were learned by trial and error. Training regimes that focused on individual rules for prolonged periods (blocked training) improved human performance on a later test involving randomly interleaved rules, compared with control regimes that trained in an interleaved fashion. Analysis of human error patterns suggested that blocked training encouraged humans to form "factorized" representations that optimally segregated the tasks, especially for those individuals with a strong prior bias to represent the stimulus space in a well-structured way. By contrast, standard supervised deep neural networks trained on the same tasks suffered catastrophic forgetting under blocked training, due to representational interference in the deeper layers. However, augmenting deep networks with an unsupervised generative model that allowed them to first learn a good embedding of the stimulus space (similar to that observed in humans) reduced catastrophic forgetting under blocked training. Building artificial agents that first learn a model of the world may be one promising route to solving continual task performance in artificial intelligence research.
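The contrast between blocked and interleaved training regimes can be illustrated with a minimal sketch (not the authors' model): two "orthogonal" linear tasks share a single weight vector, and plain gradient descent is run either one task at a time (blocked) or alternating between tasks (interleaved). The function name `run_regimes` and all parameter values here are illustrative assumptions, not from the paper.

```python
import random

def run_regimes(n=200, epochs=300, lr=0.05, seed=0):
    """Toy sketch of catastrophic forgetting under blocked training.

    Two orthogonal linear tasks share one weight vector w:
      task A: predict x0 from (x0, x1);  task B: predict x1.
    Blocked training (all of A, then all of B) overwrites the task-A
    solution; interleaved training converges to a compromise that
    retains moderate performance on both tasks.
    """
    rng = random.Random(seed)
    X = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(n)]
    yA = [x0 for x0, _ in X]   # task A target
    yB = [x1 for _, x1 in X]   # task B target (orthogonal rule)

    def mse(w, y):
        # mean squared error of the linear model w on targets y
        return sum((w[0]*x0 + w[1]*x1 - t) ** 2
                   for (x0, x1), t in zip(X, y)) / n

    def step(w, y):
        # one full-batch gradient-descent step on mse(w, y)
        g0 = g1 = 0.0
        for (x0, x1), t in zip(X, y):
            err = w[0]*x0 + w[1]*x1 - t
            g0 += 2 * err * x0 / n
            g1 += 2 * err * x1 / n
        return (w[0] - lr * g0, w[1] - lr * g1)

    # Blocked: train task A to convergence, then task B.
    w = (0.0, 0.0)
    for _ in range(epochs):
        w = step(w, yA)
    for _ in range(epochs):
        w = step(w, yB)
    blocked = (mse(w, yA), mse(w, yB))

    # Interleaved: alternate one step of each task.
    w = (0.0, 0.0)
    for _ in range(epochs):
        w = step(w, yA)
        w = step(w, yB)
    interleaved = (mse(w, yA), mse(w, yB))
    return blocked, interleaved
```

After blocked training the task-A error is far above the interleaved task-A error: the second training phase drives the shared weights to the task-B solution, which is the simplest form of the representational interference the abstract describes. The paper's remedy (a generative model that first learns a good stimulus embedding) has no analogue in this linear sketch.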
Keywords: catastrophic forgetting; categorization; continual learning; representational similarity analysis; task factorization.
Copyright © 2018 the Author(s). Published by PNAS.
Conflict of interest statement
The authors declare no conflict of interest.