Supervised Learning in Spiking Neural Networks for Precise Temporal Encoding

Brian Gardner et al. PLoS One. 2016 Aug 17;11(8):e0161335. doi: 10.1371/journal.pone.0161335. eCollection 2016.

Abstract

Precise spike timing as a means to encode information in neural networks is biologically supported, and is advantageous over frequency-based codes because input features can be processed on a much shorter time-scale. For these reasons, much recent attention has been focused on the development of supervised learning rules for spiking neural networks that utilise a temporal coding scheme. However, despite significant progress in this area, there is still a lack of rules that have a theoretical basis and yet can be considered biologically relevant. Here we examine the general conditions under which synaptic plasticity most effectively takes place to support the supervised learning of a precise temporal code. As part of our analysis we examine two spike-based learning methods: one which relies on an instantaneous error signal to modify synaptic weights in a network (INST rule), and one which relies on a filtered error signal for smoother synaptic weight modifications (FILT rule). We test the accuracy of the solutions provided by each rule with respect to their temporal encoding precision, and then measure the maximum number of input patterns they can learn to memorise using the precise timings of individual spikes as an indication of their storage capacity. Our results demonstrate the high performance of the FILT rule in most cases, underpinned by the rule's error-filtering mechanism, which is predicted to provide smooth convergence towards a desired solution during learning. We also find the FILT rule to be most efficient at performing input pattern memorisations, most noticeably when patterns are identified using spikes with sub-millisecond temporal precision. In comparison with existing work, we determine the performance of the FILT rule to be consistent with that of the highly efficient E-learning Chronotron rule, but with the distinct advantage that our FILT rule is also implementable as an online method for increased biological realism.
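The contrast the abstract draws between the two rules can be illustrated with a short sketch. Below, weight changes are driven by the difference between a target and an actual output spike train, correlated with an exponentially filtered presynaptic trace; the INST-style update uses the error at each time step directly, while the FILT-style update first low-passes the error with a time constant tau_q. The kernel shapes, parameter values and exact update form are assumptions made for illustration, not the paper's INST and FILT equations.

```python
import numpy as np

# Minimal sketch (assumed forms, not the paper's exact equations): weight updates
# driven by the mismatch between a target and an actual postsynaptic spike train.
dt = 0.1            # simulation time step (ms)
T = 200.0           # pattern duration (ms)
tau_q = 10.0        # assumed error-filter time constant for the FILT-style update (ms)
eta = 0.01          # learning rate

times = np.arange(0.0, T, dt)
n_inputs = 200
rng = np.random.default_rng(0)

# Binary input raster: rows = time steps, columns = presynaptic neurons (~6 Hz input).
inputs = rng.random((times.size, n_inputs)) < 0.0006

target = np.zeros(times.size)          # desired output spike train
target[int(100.0 / dt)] = 1.0          # one target spike at 100 ms
actual = np.zeros(times.size)          # actual output spikes (from a neuron model, omitted here)

def psp_trace(spikes, tau_m=10.0):
    """Exponentially filtered presynaptic activity, standing in for the PSP kernel."""
    trace = np.zeros(spikes.shape, dtype=float)
    decay = np.exp(-dt / tau_m)
    for t in range(1, spikes.shape[0]):
        trace[t] = trace[t - 1] * decay + spikes[t]
    return trace

x = psp_trace(inputs)                  # shape: (time steps, n_inputs)
err = target - actual                  # instantaneous spike-train error

# INST-style update: correlate the raw, instantaneous error with the input trace.
dw_inst = eta * (err[:, None] * x).sum(axis=0)

# FILT-style update: low-pass filter the error first, for smoother weight changes.
err_filt = np.zeros_like(err)
decay_q = np.exp(-dt / tau_q)
for t in range(1, err.size):
    err_filt[t] = err_filt[t - 1] * decay_q + err[t]
dw_filt = eta * (err_filt[:, None] * x).sum(axis=0)
```
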


Conflict of interest statement

Competing Interests: The authors have declared that no competing interests exist.

Figures

Fig 1
Fig 1. Illustration of the postsynaptic kernels used in this analysis, and an example of a resulting postsynaptic membrane potential.
(A) The time course of the postsynaptic current kernel α. (B) The PSP kernel ϵ. (C) The reset kernel κ. (D) The resulting membrane potential ui as defined by Eq (1). In this example, a single presynaptic spike is received at tj = 0 ms, and a postsynaptic spike is generated at ti = 4 ms by selectively tuning both the synaptic weight wij and the firing threshold ϑ. We take C = 2.5 nF for the neuron's membrane capacitance, such that the postsynaptic current attains a maximum value of 1 nA.
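Eq (1) itself is not reproduced on this page, so the sketch below shows a generic spike response model potential of the kind this caption describes: each presynaptic spike contributes a weighted PSP kernel ϵ and each postsynaptic spike a reset kernel κ. The exponential kernel forms, time constants and reset amplitude are standard SRM choices assumed for illustration and need not match those of the paper.

```python
import numpy as np

# Generic spike response model potential (assumed kernel forms, not Eq (1) verbatim):
#   u_i(t) = sum_j w_ij * sum_{t_j} eps(t - t_j)  +  sum_{t_i} kappa(t - t_i)
tau_m, tau_s = 10.0, 5.0   # assumed membrane and synaptic time constants (ms)

def eps(s):
    """PSP kernel: difference of exponentials, zero before the presynaptic spike."""
    s = np.maximum(np.asarray(s, dtype=float), 0.0)
    return np.where(s > 0, np.exp(-s / tau_m) - np.exp(-s / tau_s), 0.0)

def kappa(s, reset=-5.0):
    """Reset kernel: a negative exponential following each postsynaptic spike."""
    s = np.maximum(np.asarray(s, dtype=float), 0.0)
    return np.where(s > 0, reset * np.exp(-s / tau_m), 0.0)

def membrane_potential(t, w, presyn_spikes, postsyn_spikes):
    """u_i(t) for one postsynaptic neuron; presyn_spikes holds one spike-time list per synapse."""
    u = np.zeros_like(np.asarray(t, dtype=float))
    for w_j, spike_times in zip(w, presyn_spikes):
        for t_j in spike_times:
            u = u + w_j * eps(t - t_j)
    for t_i in postsyn_spikes:
        u = u + kappa(t - t_i)
    return u

# One presynaptic spike at 0 ms and one postsynaptic spike at 4 ms, as in the figure.
t = np.linspace(0.0, 30.0, 301)
u = membrane_potential(t, w=[2.0], presyn_spikes=[[0.0]], postsyn_spikes=[4.0])
```
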
Fig 2
Fig 2. Dependence of synaptic weight change Δw on the relative timing difference between a target postsynaptic spike and input presynaptic spike: tref and tpre, respectively.
(A) Learning window of the INST rule. (B) Learning window of the FILT rule. The peak Δw values for INST and FILT correspond to relative timings of just under 7 and 3 ms, respectively. Both panels show the weight change in the absence of an actual postsynaptic spike.
Fig 3
Fig 3. Phase portraits of the INST and FILT synaptic plasticity rules for a single synapse, each plotting the change in the synaptic weight Δw as a function of its current strength relative to threshold w/ϑ.
In this example, a postsynaptic neuron receives an input spike at time tpre = 0 ms from a single synapse with weight w. The postsynaptic neuron must learn to match a target output spike time tref = 4 ms, which corresponds to a desired synaptic weight solution w* as indicated in both panels. The actual output spike fired by the neuron is shifted backwards in time for positive Δw, and vice versa for negative Δw. The horizontal arrows in each panel show the direction in which w evolves, and are separated by the vertical dashed lines. The peak PSP value ϵpeak = 1 mV (see Methods) results in an actual output spike being fired for w/ϑ ≥ 1.
Fig 4
Fig 4. The minimum target output firing time t˜imin, relative to an input spike time, that can accurately be learned using the FILT rule, plotted as a function of the filter time constant τq.
The predictions in this figure are based on a single synapse receiving an input spike at 0 ms. At τq = 0 ms the minimum time t˜imin equals the lag time at which the PSP kernel attains its maximum value, and FILT becomes equivalent to INST. As a reference, the value τq = 10 ms was selected for our computer simulations, since preliminary runs indicated that it gave optimal performance.
Fig 5
Fig 5. Two postsynaptic neurons trained under the proposed synaptic plasticity rules, which learned to map a single, fixed input spike pattern to a four-spike target output train.
(A) A spike raster of an arbitrarily generated input pattern, lasting 200 ms, where each dot represents a spike. (B) Actual output spike rasters corresponding to the INST rule (left) and the FILT rule (right) in response to the repeated presentation of the input pattern. Target output spike times are indicated by crosses. (C) The evolution of the vRD for each learning rule, taken as a moving average over 40 independent simulation runs. The shaded regions show the standard deviation.
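The vRD tracked in panel C (and again in Fig 7) is the van Rossum distance between the actual and target output spike trains. A minimal sketch of that metric is given below, assuming the standard exponential-filter definition; the coincidence time constant tau_c and the other parameter values are illustrative and may differ from the paper's settings.

```python
import numpy as np

def van_rossum_distance(spikes_a, spikes_b, tau_c=10.0, dt=0.1, t_max=200.0):
    """Standard van Rossum distance: convolve both spike trains with an exponential
    kernel of time constant tau_c, then take an L2 norm of the difference.
    tau_c, dt and t_max are illustrative values, not taken from the paper."""
    t = np.arange(0.0, t_max, dt)

    def filtered(spike_times):
        trace = np.zeros_like(t)
        for s in spike_times:
            trace += np.where(t >= s, np.exp(-np.maximum(t - s, 0.0) / tau_c), 0.0)
        return trace

    diff = filtered(spikes_a) - filtered(spikes_b)
    return np.sqrt(np.sum(diff ** 2) * dt / tau_c)

# Example: an actual four-spike output train slightly offset from its target (times in ms).
print(van_rossum_distance([41.0, 82.0, 121.0, 160.0], [40.0, 80.0, 120.0, 160.0]))
```
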
Fig 6
Fig 6. Averaged synaptic weight values before and after network training, corresponding to the same experiment as in Fig 5.
The input synaptic weight values are plotted in chronological order with respect to their associated firing times. (A) The distribution of weights before learning. (B) After training under the INST rule. (C) After training under the FILT rule. The gold coloured vertical lines indicate the target postsynaptic firing times. Note the different scales of A, B and C. Results were averaged over 40 independent runs. The design of this figure is inspired by [9].
Fig 7
Fig 7. The vRD as a function of the learning rate η for each learning rule.
The E-learning CHRON rule of [11] is included as a benchmark for the INST and FILT rules. In every instance, a network containing 200 presynaptic neurons and a single postsynaptic neuron was tasked with mapping 10 arbitrary input patterns to the same target output spike with a timing of 100 ms. Learning took place over 500 epochs, and results were averaged over 40 independent runs. In this case, error bars show the standard error of the mean rather than the standard deviation: the vRD was subject to very high variance for large η values, therefore we considered just its average value and not its distribution.
Fig 8
Fig 8. The classification performance of each learning rule as a function of the number of input patterns when learning to classify p patterns into five separate classes.
Each input class was identified by a single, unique target output spike timing, which a single postsynaptic neuron had to learn to match to within 1 ms. Left: the averaged classification performance 〈Pc〉 for networks containing ni = 200, 400 and 600 presynaptic neurons. Right: the corresponding number of epochs taken by the network to reach a performance level of 90%. If more than 500 epochs were needed, the network was considered to have failed to learn all the patterns at the required performance level. Results were averaged over 20 independent runs, and error bars show the standard deviation.
Fig 9
Fig 9. The memory capacity αm of each learning rule as a function of the required output spike timing precision.
The network contained a single postsynaptic neuron, and was trained to classify input patterns into five separate classes within 500 epochs. Memory capacity values were determined based on networks containing ni = 200, 400 and 600 presynaptic neurons. Results were averaged over 20 independent runs.
Fig 10
Fig 10. The classification performance of each learning rule as a function of the number of target output spikes used to identify input patterns.
The network was tasked with classifying 10 input patterns into 5 separate classes. A classification was considered correct when the number of actual output spikes fired by a single postsynaptic neuron matched that of its target, and each actual spike fell within 1 ms of its corresponding target timing. In this case, a network containing 200 presynaptic neurons was trained over an extended 1000 epochs to allow for the decreased learning speed, and results were averaged over 20 independent runs.
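The classification criterion described here and for Fig 8 (matching spike count, with each actual spike falling within the required tolerance of its target timing) can be written as a short check. The function below is an illustrative reconstruction of that criterion, not code from the paper.

```python
def correctly_classified(actual_spikes, target_spikes, tolerance=1.0):
    """True if the output matches its target: same number of spikes, and each actual
    spike within `tolerance` ms of its corresponding target timing (times in ms)."""
    if len(actual_spikes) != len(target_spikes):
        return False
    return all(abs(a - b) <= tolerance
               for a, b in zip(sorted(actual_spikes), sorted(target_spikes)))

# Example: a single-spike class label at 100 ms, matched to within 1 ms.
print(correctly_classified([100.4], [100.0]))   # True
print(correctly_classified([101.6], [100.0]))   # False
```
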


References

    1. van Rullen R, Guyonneau R, Thorpe SJ. Spike times make sense. Trends in Neurosciences. 2005;28(1):1–4. doi:10.1016/j.tins.2004.10.010
    2. Gollisch T, Meister M. Rapid neural coding in the retina with relative spike latencies. Science. 2008;319(5866):1108–1111. doi:10.1126/science.1149639
    3. Johansson RS, Birznieks I. First spikes in ensembles of human tactile afferents code complex spatial fingertip events. Nature Neuroscience. 2004;7(2):170–177. doi:10.1038/nn1177
    4. Mainen ZF, Sejnowski TJ. Reliability of spike timing in neocortical neurons. Science. 1995;268(5216):1503–1506. doi:10.1126/science.7770778
    5. Reich DS, Victor JD, Knight BW, Ozaki T, Kaplan E. Response variability and timing precision of neuronal spike trains in vivo. Journal of Neurophysiology. 1997;77(5):2836–2841.

Grants and funding

BG was supported by the Engineering and Physical Sciences Research Council (grant no. EP/J500562/1, http://www.epsrc.ac.uk/). AG and BG were supported by the European Community’s Seventh Framework Programme (grant no. 604102, The Human Brain Project, http://cordis.europa.eu/fp7/). AG and BG were supported by Horizon 2020 (grant no. 284941, The Human Brain Project, https://ec.europa.eu/programmes/horizon2020/).