Scaling up liquid state machines to predict over address events from dynamic vision sensors

Bioinspir Biomim. 2017 Sep 1;12(5):055001. doi: 10.1088/1748-3190/aa7663.

Abstract

Short-term visual prediction is important in both biology and robotics. It allows us to anticipate upcoming states of the environment and therefore to plan more efficiently. In theoretical neuroscience, liquid state machines have been proposed as a biologically inspired method to perform asynchronous prediction without a model. However, they have so far only been demonstrated in simulation or on small-scale, pre-processed camera images. In this paper, we use a liquid state machine to predict over the whole 128 × 128 event stream provided by a real dynamic vision sensor (DVS, or silicon retina). Thanks to the event-based nature of the DVS, the liquid is constantly fed with data whenever an object is in motion, fully embracing the asynchronicity of spiking neural networks. We propose a smooth, continuous representation of the event stream for the short-term visual prediction task. Moreover, compared to previous works (2002 Neural Comput. 25 282-93 and Burgsteiner H et al 2007 Appl. Intell. 26 99-109), we scale the input dimensionality that the liquid operates on by two orders of magnitude. We also expose the current limits of our method by running experiments in a challenging environment where multiple objects are in motion. This paper is a step towards integrating biologically inspired algorithms derived in theoretical neuroscience into real-world robotic setups. We believe that liquid state machines could complement current prediction algorithms used in robotics, especially when dealing with asynchronous sensors.
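
As a rough illustration only: the abstract describes a pipeline in which asynchronous address events are turned into a smooth, continuous signal that drives a liquid (a recurrent reservoir) whose state is read out to predict a short horizon ahead. The Python sketch below substitutes a simple rate-based reservoir with a ridge-regression readout for the paper's spiking liquid; every name, size, and constant in it (events_to_trace, TAU, N, the 5-step horizon) is an assumption made for illustration, not the authors' implementation.

    import numpy as np

    H = W = 128                 # DVS128 resolution: one input channel per pixel
    TAU = 0.05                  # assumed decay constant of the pixel traces (s)
    DT = 0.01                   # readout time step (s)

    def events_to_trace(events, t_end):
        """Turn address events (t, x, y, polarity) into a smooth signal:
        each event bumps its pixel's trace by +/-1 and traces decay
        exponentially, giving a continuous view of the sparse stream."""
        steps = int(t_end / DT)
        trace = np.zeros((steps, H * W))
        state = np.zeros(H * W)
        decay = np.exp(-DT / TAU)
        ev = iter(sorted(events))
        nxt = next(ev, None)
        for k in range(steps):
            state *= decay
            while nxt is not None and nxt[0] < (k + 1) * DT:
                _, x, y, pol = nxt
                state[y * W + x] += 1.0 if pol else -1.0
                nxt = next(ev, None)
            trace[k] = state
        return trace

    rng = np.random.default_rng(0)
    N = 500                     # liquid size (illustrative)
    W_in = rng.normal(size=(N, H * W)) * (rng.random((N, H * W)) < 0.001)
    W_rec = rng.normal(size=(N, N)) * (rng.random((N, N)) < 0.05)
    W_rec *= 0.9 / np.abs(np.linalg.eigvals(W_rec)).max()  # keep dynamics stable

    def run_liquid(trace):
        """Drive the reservoir with the trace; return its state history."""
        x = np.zeros(N)
        states = np.empty((len(trace), N))
        for k, u in enumerate(trace):
            x = np.tanh(W_rec @ x + W_in @ u)
            states[k] = x
        return states

    def train_readout(states, trace, horizon=5, lam=1e-3):
        """Ridge regression from liquid states to the trace `horizon` steps ahead."""
        X, Y = states[:-horizon], trace[horizon:]
        return np.linalg.solve(X.T @ X + lam * np.eye(N), X.T @ Y)

    # Synthetic demo: one pixel firing periodically for one second.
    events = [(0.02 * i, 64, 64, 1) for i in range(50)]
    trace = events_to_trace(events, t_end=1.0)
    states = run_liquid(trace)
    W_out = train_readout(states, trace)
    pred = states[:-5] @ W_out   # predicted trace, 5 steps (50 ms) ahead

On real hardware the trace update would be driven event by event rather than on a fixed time grid, which is what lets the liquid exploit the DVS's asynchronicity; a fixed step is used here only to keep the sketch short.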

Publication types

  • Review

MeSH terms

  • Algorithms*
  • Biomimetic Materials*
  • Biomimetics / instrumentation*
  • Equipment Design
  • Humans
  • Motion Perception*
  • Neural Networks, Computer*
  • Retina
  • Robotics
  • Simulation Training
  • Vision, Ocular*