PLoS One. 2011;6(9):e22885.
doi: 10.1371/journal.pone.0022885. Epub 2011 Sep 27.

Fine-tuning and the stability of recurrent neural networks

David MacNeil et al. PLoS One. 2011.

Abstract

A central criticism of standard theoretical approaches to constructing stable, recurrent model networks is that the synaptic connection weights need to be finely tuned. This criticism is severe because the proposed rules for learning these weights have been shown to have limited biological plausibility, so it is unlikely that such rules are used to continuously fine-tune the network in vivo. We describe a learning rule that is able to tune synaptic weights in a biologically plausible manner. We demonstrate and test this rule in the context of the oculomotor integrator, showing that only known neural signals are needed to tune the weights. We demonstrate that the rule appropriately accounts for a wide variety of experimental results, and is robust under several kinds of perturbation. Furthermore, we show that the rule achieves stability as good as or better than that provided by the linearly optimal weights often used in recurrent models of the integrator. Finally, we discuss how this rule can be generalized to tune a broad class of recurrent attractor networks, such as those found in head direction and path integration systems, suggesting that it may be used to tune many stable neural systems.
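The abstract's claim can be made concrete with a small sketch: during fixations the decoded eye position should not drift, so any residual decoded drift can serve as a global error signal that scales an activity-dependent update of the recurrent weights. The Python/NumPy sketch below is illustrative only; the rectified-linear rates, the parameter values, and the gradient form of the update are assumptions standing in for the paper's spiking model and exact learning rule.

```python
import numpy as np

rng = np.random.default_rng(0)
N, dt, tau = 40, 0.001, 0.1      # neurons, time step (s), PSC time constant (s)

encoders = rng.choice([-1.0, 1.0], size=N)   # +/- preferred directions
gains = rng.uniform(1.0, 3.0, size=N)        # illustrative gain spread
biases = rng.uniform(0.0, 1.0, size=N)

def G(J):
    """Rectified-linear stand-in for the spiking nonlinearity."""
    return np.maximum(J, 0.0)

# Least-squares decoders: d @ activities approximates the eye position
xs = np.linspace(-1, 1, 201)
A = G(gains * (xs[:, None] * encoders) + biases)
d = np.linalg.lstsq(A, xs, rcond=None)[0]

# Recurrent weights that feed the decoded position back in (an integrator),
# then a 30% perturbation so the network needs re-tuning
W = np.outer(gains * encoders, d)
W *= 1.0 + 0.3 * rng.standard_normal(W.shape)

lr = 1e-3
x0 = 0.6                                   # eye position to hold, no input
a = G(gains * encoders * x0 + biases)      # activities at that position
for _ in range(30000):                     # 30 s of simulated fixation
    J = W @ a + biases
    a_next = a + (dt / tau) * (G(J) - a)   # leaky recurrent dynamics
    drift = d @ (a_next - a) / dt          # decoded drift rate: the error
    # Each weight changes with its presynaptic activity times the global
    # drift error -- the local character the paper argues for, though this
    # particular gradient step is not the authors' equation.
    W -= lr * drift * np.outer(d * (J > 0), a)
    a = a_next

print("residual decoded drift:", d @ (G(W @ a + biases) - a) / tau)
```

Under these assumptions the update drives the decoded drift toward zero using only quantities available at the synapse plus one globally broadcast error, which is the general shape of the mechanism the abstract describes.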


Conflict of interest statement

Competing Interests: The authors have declared that no competing interests exist.

Figures

Figure 1. Model neurons used in the network.
a) The dynamics of a model neuron coupled to a PSC model provide the complete model of a single cell. Spikes arrive, are filtered by a weighted post-synaptic current, and then drive a spiking nonlinearity. b) Tuning curves for 40 simulated goldfish neurons with a cellular membrane time constant, τ_RC, and a refractory period, τ_ref. Maximum firing rates were drawn from a uniform distribution ranging from 20 to 100 Hz. Direction intercepts were drawn from a uniform distribution between −50 and 50 degrees. The neurons were evenly split between positive and negative gains, determined by a randomly assigned encoding weight.
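The tuning curves in panel b) can be regenerated directly from the distributions the caption specifies. The sketch below uses the standard leaky integrate-and-fire rate equation; the time constants tau_rc and tau_ref are placeholder values, not the paper's.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
N = 40
tau_rc, tau_ref = 0.02, 0.002   # placeholder membrane/refractory constants

def lif_rate(J):
    """Steady-state LIF firing rate for drive J, with threshold at J = 1."""
    out = np.zeros_like(J)
    m = J > 1
    out[m] = 1.0 / (tau_ref + tau_rc * np.log1p(1.0 / (J[m] - 1.0)))
    return out

max_rates = rng.uniform(20, 100, N)     # Hz, as in the caption
intercepts = rng.uniform(-50, 50, N)    # degrees, as in the caption
encoders = rng.choice([-1.0, 1.0], N)   # even split of +/- gains

# Solve each neuron's gain and bias so its rate is zero at the intercept
# and max_rate at the preferred extreme (+/-50 degrees)
J_max = 1 + 1 / np.expm1((1 / max_rates - tau_ref) / tau_rc)
gain = (J_max - 1) / (50 - encoders * intercepts)
bias = 1 - gain * encoders * intercepts

eye = np.linspace(-50, 50, 401)         # eye position (degrees)
J = gain[:, None] * (encoders[:, None] * eye) + bias[:, None]
plt.plot(eye, lif_rate(J).T)
plt.xlabel("eye position (deg)")
plt.ylabel("firing rate (Hz)")
plt.show()
```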
Figure 2. Two methods for filtering saccade commands.
a) Eye position for a series of saccades. b) The saccade velocity, derived from a). c) Filtering based on magnitude. This method uses Equation 15 to filter the velocity profile and is the method adopted for all subsequent experiments. d) Filtering based on a change in position, where a change in position greater than 5 degrees allows the subsequent velocity commands to pass through at a magnitude inversely proportional to the time elapsed since the movement.
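Method d) can be sketched in a few lines. The gating here (a 5-degree threshold, with gain falling off as the reciprocal of the time since the detected movement) is one reading of the caption; it is not Equation 15, the magnitude-based method the experiments actually use.

```python
import numpy as np

def filter_by_position_change(eye_pos, dt=0.001, threshold=5.0):
    """One reading of Fig. 2d: velocity commands pass only once the eye has
    moved more than `threshold` degrees from its last resting position, with
    a gain inversely proportional to the time elapsed since that movement."""
    vel = np.gradient(eye_pos, dt)      # saccade velocity, as in panel b)
    out = np.zeros_like(vel)
    anchor = eye_pos[0]                 # position when the gate last closed
    t_since = np.inf                    # time since the detected movement
    for i, p in enumerate(eye_pos):
        if abs(p - anchor) > threshold:      # movement detected: open gate
            anchor, t_since = p, dt
        else:
            t_since += dt
        out[i] = vel[i] * min(1.0, dt / t_since)   # gain ~ 1 / elapsed time
    return out
```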
Figure 3. Transfer functions of actual versus represented eye position for tuned, damped and unstable networks.
Eye position is normalized to lie in the range [-1, 1]. An exact integrator has a slope of 1, a damped integrator has a slope less than 1, and an unstable integrator has a slope greater than 1. Compare to Figure 9b.
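Classifying a network from this transfer function reduces to a slope fit. A minimal sketch, assuming both traces are normalized and using an arbitrary 2% tolerance for calling the slope exact:

```python
import numpy as np

def classify_integrator(represented, actual, tol=0.02):
    """Fit actual vs. represented eye position and label the network."""
    slope = np.polyfit(represented, actual, 1)[0]
    if slope > 1 + tol:
        return slope, "unstable"   # representation grows: drift from midline
    if slope < 1 - tol:
        return slope, "damped"     # representation decays: drift to midline
    return slope, "tuned"          # slope ~ 1: an exact integrator
```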
Figure 4. Bar graphs for the experiments described in the main text.
a) RMSE and b) the magnitude of the integrator time constant for each experiment. The error bars indicate the 95% confidence intervals reported in Table 1.
Figure 5. Generated eye movements of example networks.
The linear Optimal, Noisy (30% perturbation to connection weights), and Learned+Perturb (after 1200 s of learning from the Noisy state) networks are shown for 30 s under the same saccade regime.
Figure 6. A comparison of the exact integrator, linear Optimal and Noisy transfer functions over a normalized range.
The linear Optimal network is closer to the exact integrator over the range of eye positions. Although the deviations of the Noisy network from the exact integrator are small, their effects on stability are highly significant (see Table 1 and Figure 5). Magnified regions are shown to aid visual comparison.
Figure 7. Comparison of goldfish integrator neurons from electrophysiological recordings and the simulation after tuning with the learning rule.
A single raw recording is shown on the left, along with the corresponding eye trace. Arrows indicate saccade times (black for rightward, grey for leftward; adapted from [30]). The right shows 14 neurons randomly selected from the model population after tuning with the learning rule. Neurons in the model show the same kinds of responses as the example neuron; one is highlighted in grey.
Figure 8. A comparison of the simulated detuning experiments with experimental data.
The top trace shows the control situation, which for the model is tuning after a 30% perturbation and 5% continuous noise. The middle trace shows the unstable integrator, and the bottom trace shows the damped integrator. The goldfish traces are from animals that had longer training times (6 h and 16.5 h, respectively) than the model (20 min). Both the model and the experiment demonstrate increased detuning with longer training times (not shown), and both show the expected detuning (drift away from the midline in the unstable case, and drift towards the midline in the damped case).
Figure 9. A comparison of the Learned+Perturb+Noise, Unstable and Damped transfer functions.
The slope of the Unstable network is greater than 1 and that of the Damped network is less than 1. The re-tuned networks demonstrate the expected drifting behavior (see Figure 8 and Table 1).
Figure 10. Performance of the integrator before and after lesioning the network.
Severe drift is evident after randomly removing one of the 40 neurons. After 1200 s of recovery with the learning rule under 5% noise, the time constant returns to pre-lesion levels.
Figure 11. Tuning curves in two function spaces.
a) Gaussian-like tuning curves of 20 example neurons in a one-dimensional function space (7-dimensional vector space). These are tunings representative of neurons in a head-direction ring attractor network. b) Multi-dimensional Gaussian-like tuning curves of four example neurons in a two-dimensional function space (14-dimensional vector space). These are tunings representative of neurons in a subicular path integration network.
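The function-space construction behind these curves can be sketched as follows: represent a Gaussian-like bump by its coefficients on a low-dimensional basis (7-dimensional, matching panel a), and give each neuron a preferred coefficient vector. The Fourier basis and the rectified dot-product response below are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

theta = np.linspace(-np.pi, np.pi, 256, endpoint=False)

# A 7-dimensional Fourier basis over the ring (constant plus three
# harmonics), standing in for the paper's function-space basis
basis = np.stack([np.ones_like(theta)]
                 + [f(k * theta) for k in (1, 2, 3) for f in (np.cos, np.sin)])
basis /= np.linalg.norm(basis, axis=1, keepdims=True)

def to_coeffs(func_vals):
    """Project a function sampled on the ring into the 7-dim vector space."""
    return basis @ func_vals

def bump(mu, width=0.5):
    """Gaussian-like bump centered at mu (the represented function)."""
    d = np.angle(np.exp(1j * (theta - mu)))   # wrapped angular distance
    return np.exp(-d**2 / (2 * width**2))

# Each neuron prefers a bump at its own center; its response to a
# represented bump is a rectified dot product in coefficient space, giving
# the Gaussian-like tuning over bump position seen in panel a)
centers = np.linspace(-np.pi, np.pi, 20, endpoint=False)
preferred = np.stack([to_coeffs(bump(c)) for c in centers])        # (20, 7)

positions = np.linspace(-np.pi, np.pi, 128)
represented = np.stack([to_coeffs(bump(p)) for p in positions]).T  # (7, 128)
tuning = np.maximum(0.0, preferred @ represented)                  # (20, 128)
```

The two-dimensional case of panel b) follows the same pattern with a product of two such bases, giving the 14-dimensional vector space the caption mentions.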
Figure 12. Simulations of tuning attractor networks in higher dimensional spaces.
a) The input (dashed line) along with the final position of the representation after 500 ms of drift for pre-training (thick line) and post-training (thin line). b) The pre-training drift in the vector space over 500 ms at the beginning of the simulation for the bump (thick line in a). c) The drift in the vector space over 500 ms after 1200 s of training in the simulation (thin line in a). Comparing similar vector dimensions between b) and c) demonstrates a slowing of the drift. d) A 2D bump in the function space for the simulated time shown in e), after training. e) The vector drift in the 14-dimensional space over 500 ms after training.

References

    1. Robinson D. Integrating with neurons. Annual Review of Neuroscience. 1989;12:33–45.
    2. Seung HS. How the brain keeps the eyes still. Proceedings of the National Academy of Sciences of the United States of America. 1996;93:13339–13344.
    3. Pouget A, Zhang K, Deneve S, Latham PE. Statistically efficient estimation using population coding. Neural Computation. 1998;10:373–401.
    4. Goodridge JP, Touretzky DS. Modeling attractor deformation in the rodent head-direction system. Journal of Neurophysiology. 2000;83:3402–3410.
    5. Redish AD, Elga AN, Touretzky DS. A coupled attractor model of the rodent head direction system. Network: Computation in Neural Systems. 1996;7:671–685.
