Learning visual features under motion invariance

Neural Netw. 2020 Jun;126:275-299. doi: 10.1016/j.neunet.2020.03.013. Epub 2020 Mar 20.

Abstract

Humans are continuously exposed to a stream of visual data with a natural temporal structure. However, most successful computer vision algorithms work at the image level, completely discarding the precious information carried by motion. In this paper, we claim that processing visual streams naturally leads to the formulation of the motion invariance principle, which enables the construction of a new theory of learning that originates from variational principles, just as in physics. Such a principled approach is well suited to a discussion of a number of interesting questions that arise in vision, and it offers a well-posed computational scheme for the discovery of convolutional filters over the retina. Unlike traditional convolutional networks, which require massive supervision, the proposed theory offers a truly new scenario for the unsupervised processing of video signals, where features are extracted in a multi-layer architecture with motion invariance. While the theory enables the implementation of novel computer vision systems, it also sheds light on the role of information-based principles in driving possible biological solutions.
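As an illustrative sketch of what such a constraint typically looks like (the abstract does not state the exact formulation, so the symbols below are assumptions), motion invariance can be expressed by requiring that a learned feature field \varphi(x, t) over the retina remain constant along the optical flow v(x, t), i.e., that its material derivative vanish:

\[
\frac{D\varphi}{Dt} \;=\; \frac{\partial \varphi}{\partial t} \;+\; v \cdot \nabla \varphi \;=\; 0 .
\]

In a variational treatment of the kind suggested by the keywords, a term penalizing violations of this condition would enter a "cognitive action" functional that is minimized over time, by analogy with the principle of least action in physics.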

Keywords: Convolutional networks; Information-based learning; Invariance of visual features; Neural differential equations; Principle of least cognitive action.

MeSH terms

  • Algorithms
  • Animals
  • Databases, Factual
  • Humans
  • Machine Learning*
  • Motion Perception / physiology*
  • Motion*
  • Neural Networks, Computer*
  • Photic Stimulation / methods*
  • Vision, Ocular / physiology