Deep networks for motor control functions

Max Berniker et al. Front Comput Neurosci. 2015 Mar 19;9:32. doi: 10.3389/fncom.2015.00032. eCollection 2015.

Abstract

The motor system generates time-varying commands to move our limbs and body. Conventional descriptions of motor control and learning rely on dynamical representations of our body's state (forward and inverse models), and control policies that must be integrated forward to generate feedforward time-varying commands; thus these are representations across space, but not time. Here we examine a new approach that directly represents both time-varying commands and the resulting state trajectories with a function: a representation across space and time. Since the output of this function includes time, it necessarily requires more parameters than a typical dynamical model. To avoid the problems of local minima that these extra parameters introduce, we exploit recent advances in machine learning to build our function using a stacked autoencoder, or deep network. With initial and target states as inputs, this deep network can be trained to output an accurate temporal profile of the optimal command and state trajectory for a point-to-point reach of a non-linear limb model, even when influenced by varying force fields. In a manner that mirrors motor babble, the network can also teach itself to learn through trial and error. Lastly, we demonstrate how this network can learn to optimize a cost objective. This functional approach to motor control is a sharp departure from the standard dynamical approach, and may offer new insights into the neural implementation of motor control.
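
To make the contrast concrete, here is a minimal sketch (not the paper's code) of a "trajectory function": a learned map from boundary conditions directly to an entire discretized reach, evaluated in one shot with no forward integration of dynamics. Minimum-jerk reaches stand in for the paper's optimal trajectories of a non-linear limb, and the linear least-squares fit stands in for the deep network; all names and parameters here are illustrative assumptions.

```python
import numpy as np

def min_jerk(x0, xT, n=50):
    """Analytic minimum-jerk position profile from x0 to xT (1-D stand-in)."""
    tau = np.linspace(0.0, 1.0, n)
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
    return x0 + (xT - x0) * s

rng = np.random.default_rng(0)
n_samples, n_steps = 200, 50

# Boundary conditions (function inputs) and the full trajectories (outputs).
bounds = rng.uniform(-1.0, 1.0, size=(n_samples, 2))         # rows: [x0, xT]
trajs = np.array([min_jerk(x0, xT, n_steps) for x0, xT in bounds])

# Fit the trajectory function: boundary conditions -> whole time course.
A = np.hstack([bounds, np.ones((n_samples, 1))])             # bias column
W, *_ = np.linalg.lstsq(A, trajs, rcond=None)

# One function evaluation yields the entire command/state time course at once.
x0, xT = 0.2, -0.7
pred = np.array([x0, xT, 1.0]) @ W
err = np.max(np.abs(pred - min_jerk(x0, xT, n_steps)))
```

Because the minimum-jerk family is linear in its endpoints, a linear fit suffices here; the paper's point is that a deep network can play the same role for non-linear limb dynamics and varying force fields.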

Keywords: arm reaches; deep learning; motor control; motor learning; neural networks; optimal control.


Figures

Figure 1
(A) The standard description of motor control is dynamical and must be integrated forward in time to generate commands and state estimates. (B) An alternative is to represent the (integrated) solution for the system. In this case the system is represented with an algebraic model that is a function of the cost parameters, boundary conditions, etc. (formula image), and time. (C) In theory the output of this model can include time as well, rendering the model an infinite-dimensional trajectory function. While the standard description can only represent the command (D) and state (E) at a specific instant in time, a trajectory function represents the continuum of commands and states as its output.
Figure 2
(A) A stacked autoencoder is trained on a set of m high-dimensional data samples, finding a series of increasingly lower-dimensional features in the data. (B) A shallow network is trained on the map from the function inputs (the boundary conditions) to the low-dimensional features, z. This network is then coupled to the top half of the stacked autoencoder to construct a function from the boundary conditions to the discretized trajectory.
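
The two-stage construction in the caption can be sketched with a linear stand-in: PCA plays the role of the autoencoder (the paper itself compares against a PCA model in Figure 3), and a linear least-squares map plays the role of the shallow network. The minimum-jerk trajectories and all dimensions below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def min_jerk(x0, xT, n=40):
    """Stand-in trajectory data: 1-D minimum-jerk reaches."""
    tau = np.linspace(0.0, 1.0, n)
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
    return x0 + (xT - x0) * s

rng = np.random.default_rng(1)
m, n = 300, 40
bounds = rng.uniform(-1.0, 1.0, (m, 2))                     # rows: [x0, xT]
X = np.array([min_jerk(a, b, n) for a, b in bounds])        # trajectory data

# Stage 1: linear "autoencoder" (PCA) finds low-dimensional features z.
mu = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
k = 2
decoder = Vt[:k]                    # k x n: features z -> trajectory
Z = (X - mu) @ decoder.T            # m x k feature codes

# Stage 2: shallow map from boundary conditions to the features z.
A = np.hstack([bounds, np.ones((m, 1))])
Wz, *_ = np.linalg.lstsq(A, Z, rcond=None)

# Compose with the decoder half: boundary conditions -> discretized trajectory.
def trajectory_fn(x0, xT):
    z = np.array([x0, xT, 1.0]) @ Wz
    return mu + z @ decoder

pred = trajectory_fn(0.5, -0.3)
err = np.max(np.abs(pred - min_jerk(0.5, -0.3, n)))
```

Swapping the PCA encoder/decoder for a stacked autoencoder, and the linear map for a shallow network, gives the non-linear version the caption describes.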
Figure 3
Approximate optimal trajectory function test results. Displayed are the optimal reaches (black lines), estimated reaches (gray dashed lines), and the actual reaches resulting from the function's outputted command (blue reaches). All reaches start at the red circles, and the red dashed lines display the endpoint error. Test reaches made in a counter-clockwise curl field (A), a null field (B), and a clockwise curl field (C), using the PCA model. (D–F) The deep network's results under the same conditions.
Figure 4
(A) Examples of validation data from the initial round of training, wherein reaches were generated using random small-amplitude sinusoidal commands. These small torques resulted in small displacements. Displayed are the self-generated training reaches (black lines), estimated reaches (gray dashed lines), and the actual reaches resulting from the function's outputted command (blue reaches). (B) Validation data from the third round of training show that the reaches are very close to the desired target state. (C) Reaches on the test data demonstrate that the network has taught itself to reach to the desired target, but does so with commands that are very different from what is optimal (see right panel for examples).
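
A minimal self-teaching loop in the spirit of the babble-then-refit rounds the caption describes: random small commands generate the first training data, an inverse model (outcome -> command) is fit to all self-generated data, and each round's attempts at the targets become new training data. The tanh "plant", the polynomial inverse model, and the round count are all illustrative assumptions, not the paper's network.

```python
import numpy as np

def plant(u):
    """Toy 'limb': a saturating, non-linear map from command to displacement."""
    return np.tanh(u)

rng = np.random.default_rng(2)

# Round 0: motor babble -- small random commands and their observed outcomes,
# so the initial data covers only small displacements.
u_data = rng.uniform(-0.3, 0.3, 200)
y_data = plant(u_data)

targets = np.array([-0.9, -0.6, -0.3, 0.3, 0.6, 0.9])
errors = []
for rnd in range(4):
    # Fit an inverse model (outcome -> command) on all self-generated data.
    inv = np.polynomial.Polynomial.fit(y_data, u_data, deg=5)
    # Attempt the targets with the current model; record the endpoint error.
    u_try = inv(targets)
    y_try = plant(u_try)
    errors.append(np.max(np.abs(y_try - targets)))
    # Every attempt is a valid (command, outcome) pair: add it and refit.
    u_data = np.concatenate([u_data, u_try])
    y_data = np.concatenate([y_data, y_try])
```

Early rounds extrapolate beyond the babble data and miss the far targets; as the attempts themselves extend the data toward the targets, the refit inverse model tightens there.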
Figure 5
Reaches to the test targets after the commands are optimized. Displayed are the optimal reaches (black lines), estimated reaches (gray dashed lines), and the actual reaches resulting from the function's outputted command (blue reaches). After optimization, the commands are much closer to what is optimal (see right panel for examples).
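
The final step, optimizing commands against a cost objective, can be sketched as gradient descent on a discretized command sequence. The single-integrator plant, the effort-plus-endpoint cost, and all constants below are illustrative assumptions standing in for the paper's non-linear limb and objective.

```python
import numpy as np

n, dt = 50, 0.02                      # 50 command steps over a 1 s reach
x0, goal, lam = 0.0, 1.0, 100.0       # start, target, endpoint-error weight

def endpoint(u):
    """Single-integrator 'limb': final position from the command sequence."""
    return x0 + dt * u.sum()

def cost(u):
    """Effort cost plus a penalty on missing the target."""
    return dt * (u ** 2).sum() + lam * (endpoint(u) - goal) ** 2

rng = np.random.default_rng(3)
u = rng.normal(0.0, 0.5, n)           # arbitrary initial command profile
c0 = cost(u)
for _ in range(2000):
    # Exact gradient of the quadratic cost with respect to each command.
    grad = 2 * dt * u + 2 * lam * (endpoint(u) - goal) * dt
    u -= 0.1 * grad
```

The optimum spreads effort evenly, a near-constant command, and trades a small residual endpoint error against total effort because the penalty weight is finite.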
