The overfitted brain: Dreams evolved to assist generalization

Patterns (N Y). 2021 May 14;2(5):100244. doi: 10.1016/j.patter.2021.100244.


Understanding of the evolved biological function of sleep has advanced considerably in the past decade. However, no equivalent understanding of dreams has emerged. Contemporary neuroscientific theories often view dreams as epiphenomena, and many of the proposals for their biological function are contradicted by the phenomenology of dreams themselves. Now, the recent advent of deep neural networks (DNNs) has finally provided a novel conceptual framework within which to understand the evolved function of dreams. Notably, all DNNs face the issue of overfitting as they learn, in which performance on the training dataset improves while failing to generalize (often measured by the divergence of performance on training versus testing datasets). This ubiquitous problem in DNNs is often solved by modelers via "noise injections" in the form of noisy or corrupted inputs. The goal of this paper is to argue that the brain faces a similar challenge of overfitting and that nightly dreams evolved to combat the brain's overfitting during its daily learning. That is, dreams are a biological mechanism for increasing generalizability via the creation of corrupted sensory inputs from stochastic activity across the hierarchy of neural structures. Sleep loss, specifically dream loss, leads to an overfitted brain that can still memorize and learn but fails to generalize appropriately. Herein this "overfitted brain hypothesis" is explicitly developed and then compared and contrasted with existing contemporary neuroscientific theories of dreams. Existing evidence for the hypothesis is surveyed within both neuroscience and deep learning, and a set of testable predictions is put forward that can be pursued both in vivo and in silico.
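To make the abstract's two machine-learning concepts concrete, the following is a minimal sketch, not taken from the paper itself, of (1) overfitting measured as a train/test performance divergence and (2) noise injection as a remedy. A high-degree polynomial fit stands in for a DNN, and augmenting the training set with jittered, noise-corrupted copies stands in for the "corrupted sensory inputs" the hypothesis attributes to dreams; all variable names and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# True underlying function and a small, noisy training set.
def f(x):
    return np.sin(2 * np.pi * x)

x_train = np.linspace(0, 1, 10)
y_train = f(x_train) + rng.normal(0, 0.1, x_train.shape)

x_test = np.linspace(0, 1, 200)
y_test = f(x_test)

def mse(coeffs, x, y):
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

# Overfitting: a degree-9 polynomial can interpolate all 10 training points,
# so training error collapses while test error does not follow.
overfit = np.polyfit(x_train, y_train, deg=9)
train_err = mse(overfit, x_train, y_train)
test_err = mse(overfit, x_test, y_test)

# "Noise injection": augment the training data with corrupted copies
# (jittered inputs, noisy targets), then refit the same model class.
reps = 20
x_aug = np.repeat(x_train, reps) + rng.normal(0, 0.03, x_train.size * reps)
y_aug = np.repeat(y_train, reps) + rng.normal(0, 0.10, y_train.size * reps)
regularized = np.polyfit(x_aug, y_aug, deg=9)
test_err_aug = mse(regularized, x_test, y_test)

print(f"train MSE (overfit):        {train_err:.2e}")
print(f"test  MSE (overfit):        {test_err:.2e}")
print(f"test  MSE (noise-injected): {test_err_aug:.2e}")
```

The train/test divergence in the first fit is the operational signature of overfitting referenced in the abstract, and the corrupted-input augmentation is one standard form of the noise injection the paper invokes as an analogy for dreaming.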

Keywords: deep learning; dreams; learning; neuroscience.

Publication types

  • Review