Learning Slowness in a Sparse Model of Invariant Feature Detection

Neural Comput. 2015 Jul;27(7):1496-529. doi: 10.1162/NECO_a_00743. Epub 2015 May 14.

Abstract

Primary visual cortical complex cells are thought to serve as invariant feature detectors and to provide input to higher cortical areas. We propose a single model for learning the connectivity required by complex cells that integrates two factors that have been hypothesized to play a role in the development of invariant feature detectors: temporal slowness and sparsity. This model, the generative adaptive subspace self-organizing map (GASSOM), extends Kohonen's adaptive subspace self-organizing map (ASSOM) with a generative model of the input. Each observation is assumed to be generated by one of many nodes in the network, each associated with a different subspace in the space of all observations. The generating nodes evolve according to a first-order Markov chain and generate inputs that lie close to the associated subspace. This model differs from prior approaches in that temporal slowness is not an externally imposed criterion to be maximized during learning but, rather, an emergent property of the model structure as it learns the input statistics. Unlike the ASSOM, the GASSOM does not require an explicit segmentation of the input training vectors into separate episodes. This enables us to apply the model to an unlabeled naturalistic image sequence generated by a realistic eye movement model. We show that the emergence of temporal slowness within the model improves the invariance of feature detectors trained on this input.
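
To make the generative structure concrete, the following is a minimal NumPy sketch of the process the abstract describes: a set of nodes, each associated with a low-dimensional subspace; a sticky first-order Markov chain selecting the generating node; and observations drawn near the active node's subspace. The dimensions, self-transition probability, and noise level are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of the GASSOM-style generative process described above:
# K nodes, each tied to a d-dimensional subspace of R^n; the active node
# evolves as a first-order Markov chain, and each observation lies close
# to the active node's subspace. Parameter values are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

n, d, K, T = 16, 2, 4, 1000   # ambient dim, subspace dim, nodes, time steps
stay = 0.95                   # self-transition probability (the "sticky" chain)
noise = 0.05                  # off-subspace noise amplitude

# Orthonormal basis B_k (n x d) for each node's subspace, via reduced QR.
bases = [np.linalg.qr(rng.standard_normal((n, d)))[0] for _ in range(K)]

# First-order Markov chain over generating nodes: mostly stay, rarely switch.
P = np.full((K, K), (1 - stay) / (K - 1))
np.fill_diagonal(P, stay)

z = rng.integers(K)           # initial generating node
X = np.empty((T, n))
for t in range(T):
    z = rng.choice(K, p=P[z])       # node evolves by the Markov chain
    coeff = rng.standard_normal(d)  # coordinates within the active subspace
    x = bases[z] @ coeff + noise * rng.standard_normal(n)
    X[t] = x / np.linalg.norm(x)    # observations normalized to unit length
```

Because the chain is sticky, the subspace responsible for successive observations changes only rarely, so any learner that infers the generating node per frame will respond slowly in time. This illustrates the abstract's point that slowness is a property of the generative structure, not an objective imposed on the learner.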

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Eye Movements / physiology
  • Humans
  • Learning / physiology*
  • Machine Learning*
  • Markov Chains
  • Models, Neurological*
  • Neurons / physiology
  • Time Factors
  • Visual Cortex / physiology*
  • Visual Perception / physiology*