Deep networks for motor control functions
- PMID: 25852530
- PMCID: PMC4365717
- DOI: 10.3389/fncom.2015.00032
Abstract
The motor system generates time-varying commands to move our limbs and body. Conventional descriptions of motor control and learning rely on dynamical representations of our body's state (forward and inverse models) and control policies that must be integrated forward in time to generate feedforward time-varying commands; thus these are representations across space, but not time. Here we examine a new approach that directly represents both the time-varying commands and the resulting state trajectories with a function: a representation across space and time. Since the output of this function includes time, it necessarily requires more parameters than a typical dynamical model. To avoid the problems of local minima that these extra parameters introduce, we exploit recent advances in machine learning to build our function using a stacked autoencoder, or deep network. With initial and target states as inputs, this deep network can be trained to output an accurate temporal profile of the optimal command and state trajectory for a point-to-point reach of a non-linear limb model, even when influenced by varying force fields. In a manner that mirrors motor babbling, the network can also teach itself through trial and error. Lastly, we demonstrate how this network can learn to optimize a cost objective. This functional approach to motor control is a sharp departure from the standard dynamical approach, and may offer new insights into the neural implementation of motor control.
Keywords: arm reaches; deep learning; motor control; motor learning; neural networks; optimal control.
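The abstract's central idea is a "trajectory function": rather than integrating a dynamical model forward in time, a network maps the boundary conditions (initial and target states) directly to the full time-discretized command and state trajectory. The sketch below is not the authors' code; the dimensions, layer sizes, and the use of PyTorch are illustrative assumptions, and the paper builds its function from a stacked autoencoder rather than the plain feedforward regressor shown here.

```python
# Minimal sketch (assumed architecture, not the paper's implementation) of a
# "trajectory function": boundary conditions in, discretized trajectory out.
import torch
import torch.nn as nn

STATE_DIM = 4   # e.g., two joint angles + two joint velocities (assumed)
CMD_DIM = 2     # e.g., two joint torques (assumed)
T = 50          # number of time samples in the discretized trajectory (assumed)

class TrajectoryFunction(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        # Input: initial and target states concatenated.
        # Output: the full command and state trajectory, flattened over time.
        self.net = nn.Sequential(
            nn.Linear(2 * STATE_DIM, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, T * (CMD_DIM + STATE_DIM)),
        )

    def forward(self, x0, x_target):
        out = self.net(torch.cat([x0, x_target], dim=-1))
        u, x = out.split([T * CMD_DIM, T * STATE_DIM], dim=-1)
        return u.view(-1, T, CMD_DIM), x.view(-1, T, STATE_DIM)

def train_step(model, optimizer, x0, x_target, u_opt, x_opt):
    # Supervised regression against example optimal reaches, e.g., trajectories
    # produced offline by an optimal controller for sampled boundary conditions.
    u_pred, x_pred = model(x0, x_target)
    loss = (nn.functional.mse_loss(u_pred, u_opt)
            + nn.functional.mse_loss(x_pred, x_opt))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the output spans the whole movement, a single forward pass yields the feedforward command profile; no forward integration of a dynamical model is needed at execution time.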
Figures
- (Partial caption) In theory the output of the model can include time as well, rendering the model an infinite-dimensional trajectory function. While the standard description can only represent the command and state at a specific instant in time, a trajectory function represents the continuum of commands and states as its output.
- (Partial caption) A stacked autoencoder trains on discretized trajectories, finding a series of increasingly lower-dimensional features in the data. A shallow network trains on the map from the function inputs (the boundary conditions) to the low-dimensional features, z; this network is then coupled to the top half of the stacked autoencoder to construct a function from the boundary conditions to the discretized trajectory.
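The second caption describes a two-stage construction. A rough sketch of that idea is shown below, under the same assumed dimensions as above; the paper uses a stacked autoencoder with greedy layer-wise pretraining, whereas this sketch uses a single encoder/decoder pair and illustrative layer sizes and training loops.

```python
# Rough sketch (assumptions as above, not the authors' code) of:
# (A) pretraining an autoencoder on discretized trajectories to find
#     low-dimensional features z, and
# (B) a shallow network mapping boundary conditions to z, coupled to the
#     decoder to form a function from boundary conditions to trajectories.
import torch
import torch.nn as nn

TRAJ_DIM = 50 * 6   # T * (CMD_DIM + STATE_DIM), flattened trajectory (assumed)
Z_DIM = 16          # size of the low-dimensional feature layer (assumed)

# (A) Autoencoder on trajectories: the encoder extracts features z,
# the decoder reconstructs the trajectory from z.
encoder = nn.Sequential(nn.Linear(TRAJ_DIM, 128), nn.Tanh(), nn.Linear(128, Z_DIM))
decoder = nn.Sequential(nn.Linear(Z_DIM, 128), nn.Tanh(), nn.Linear(128, TRAJ_DIM))

def pretrain_autoencoder(trajectories, epochs=100, lr=1e-3):
    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=lr)
    for _ in range(epochs):
        recon = decoder(encoder(trajectories))
        loss = nn.functional.mse_loss(recon, trajectories)
        opt.zero_grad(); loss.backward(); opt.step()

# (B) Shallow network from boundary conditions (initial and target states)
# to features z, trained to match the encoder's features for the
# corresponding trajectories.
bc_to_z = nn.Sequential(nn.Linear(8, 64), nn.Tanh(), nn.Linear(64, Z_DIM))

def train_bc_map(boundary_conditions, trajectories, epochs=100, lr=1e-3):
    opt = torch.optim.Adam(bc_to_z.parameters(), lr=lr)
    with torch.no_grad():
        z_target = encoder(trajectories)
    for _ in range(epochs):
        loss = nn.functional.mse_loss(bc_to_z(boundary_conditions), z_target)
        opt.zero_grad(); loss.backward(); opt.step()

# Coupled function: boundary conditions -> features z -> discretized trajectory.
trajectory_function = nn.Sequential(bc_to_z, decoder)
```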
Similar articles
- Dynamical Motor Control Learned with Deep Deterministic Policy Gradient. Comput Intell Neurosci. 2018 Jan 31;2018:8535429. doi: 10.1155/2018/8535429. PMID: 29666634. Free PMC article.
- Fuzzy neuronal model of motor control inspired by cerebellar pathways to online and gradually learn inverse biomechanical functions in the presence of delay. Biol Cybern. 2017 Dec;111(5-6):421-438. doi: 10.1007/s00422-017-0735-9. PMID: 28993878.
- Equilibrium point control of a monkey arm simulator by a fast learning tree structured artificial neural network. Biol Cybern. 1993;68(6):499-508. doi: 10.1007/BF00200809. PMID: 8324058.
- Parieto-frontal coding of reaching: an integrated framework. Exp Brain Res. 1999 Dec;129(3):325-46. doi: 10.1007/s002210050902. PMID: 10591906. Review.
- Basic concepts of artificial neural network (ANN) modeling and its application in pharmaceutical research. J Pharm Biomed Anal. 2000 Jun;22(5):717-27. doi: 10.1016/s0731-7085(99)00272-1. PMID: 10815714. Review.
Cited by
- Dynamical Motor Control Learned with Deep Deterministic Policy Gradient. Comput Intell Neurosci. 2018 Jan 31;2018:8535429. doi: 10.1155/2018/8535429. PMID: 29666634. Free PMC article.
- Sensing form - finger gaiting as key to tactile object exploration - a data glove analysis of a prototypical daily task. J Neuroeng Rehabil. 2020 Oct 8;17(1):133. doi: 10.1186/s12984-020-00755-6. PMID: 33032615. Free PMC article.
- Electome network factors: Capturing emotional brain networks related to health and disease. Cell Rep Methods. 2024 Jan 22;4(1):100691. doi: 10.1016/j.crmeth.2023.100691. PMID: 38215761. Free PMC article. Review.
- Visual feedback of hand and target location does not explain the tendency for straight adapted reaches. PLoS One. 2018 Oct 24;13(10):e0206116. doi: 10.1371/journal.pone.0206116. PMID: 30356285. Free PMC article.
- Neuroprosthetic Decoder Training as Imitation Learning. PLoS Comput Biol. 2016 May 18;12(5):e1004948. doi: 10.1371/journal.pcbi.1004948. PMID: 27191387. Free PMC article.
