Sensors (Basel). 2021 May 7;21(9):3240. doi: 10.3390/s21093240

Exploring Optimized Spiking Neural Network Architectures for Classification Tasks on Embedded Platforms

Tehreem Syed et al. Sensors (Basel). 2021.

Abstract

In recent years, the use of modern neuromorphic hardware for brain-inspired spiking neural networks (SNNs) has grown rapidly. For sparse input data, SNNs offer low power consumption on event-based neuromorphic hardware, particularly in the deeper layers. However, training deep spiking models remains a tedious task. Various ANN-to-SNN conversion methods have been proposed in the literature to train deep SNN models, but these methods require hundreds to thousands of time-steps and still cannot attain good SNN performance. This work proposes customized model architectures (VGG, ResNet) for training deep convolutional spiking neural networks. Training is carried out with surrogate gradient descent backpropagation in a customized layer architecture similar to that of deep artificial neural networks, and it requires far fewer time-steps than conversion-based approaches. Since training with surrogate gradient descent backpropagation is prone to overfitting, this work also refines an SNN-based dropout technique for use with surrogate gradients. The proposed customized SNN models achieve good classification results on both private and public datasets. Several experiments were carried out on an embedded platform (NVIDIA Jetson TX2 board), where the customized SNN models were extensively deployed. Performance was validated in terms of processing time and inference accuracy on both PC and embedded platforms, showing that the proposed customized models and training techniques are feasible and achieve better performance on various datasets such as CIFAR-10, MNIST, and SVHN, as well as the private KITTI and Korean license plate datasets.

Keywords: deep convolutional spiking neural networks; embedded platform; spiking neuron model; surrogate gradient descent; time-steps.
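
The surrogate gradient descent mentioned in the abstract works by keeping the hard spike threshold in the forward pass while substituting a smooth derivative in the backward pass, so backpropagation can flow through the non-differentiable spike. The sketch below is a minimal PyTorch illustration of that idea, not the paper's implementation; the fast-sigmoid surrogate and the slope value are assumptions.

    import torch

    class SurrogateSpike(torch.autograd.Function):
        """Hard-threshold spike with a surrogate gradient (illustrative sketch)."""

        slope = 10.0  # assumed steepness of the surrogate derivative

        @staticmethod
        def forward(ctx, membrane, threshold):
            ctx.save_for_backward(membrane)
            ctx.threshold = threshold
            return (membrane >= threshold).float()  # binary spikes

        @staticmethod
        def backward(ctx, grad_output):
            (membrane,) = ctx.saved_tensors
            # A fast-sigmoid derivative peaked at the threshold stands in
            # for the gradient of the non-differentiable step function.
            surrogate = 1.0 / (1.0 + SurrogateSpike.slope
                               * (membrane - ctx.threshold).abs()) ** 2
            return grad_output * surrogate, None  # no gradient w.r.t. threshold

    # Usage: spikes = SurrogateSpike.apply(membrane_potential, 1.0)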

Conflict of interest statement

The authors declare no conflict of interest.

Figures

Figure 1
The spike generation mechanism of the leaky integrate-and-fire (LIF) spiking neuron model. Pre-synaptic spikes xi(tk) are weighted by the interconnecting synaptic weights (wi) to generate post-synaptic spikes. The weighted inputs are summed into the post-synaptic membrane potential (Um), which leaks exponentially with time constant (τm). If the membrane potential crosses a specified threshold (Uth), a post-synaptic spike is generated and the membrane potential is reset to its starting value.
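As a concrete reading of this caption, the sketch below implements a single LIF time-step in PyTorch; the exponential decay factor, the reset-to-zero rule, and all names (lif_step, tau_m, u_th) are illustrative assumptions rather than the paper's code.

    import torch

    def lif_step(x_t, u_prev, w, tau_m=2.0, u_th=1.0):
        """One LIF update (illustrative sketch of Figure 1).

        x_t:    pre-synaptic spikes at time t, shape (batch, n_in)
        u_prev: membrane potential from the previous step, shape (batch, n_out)
        w:      synaptic weights, shape (n_in, n_out)
        """
        decay = torch.exp(torch.tensor(-1.0 / tau_m))  # exponential leak per step
        u = decay * u_prev + x_t @ w                   # integrate weighted input current
        spike = (u >= u_th).float()                    # fire where u crosses the threshold
        u = u * (1.0 - spike)                          # reset fired neurons (assumed reset to 0)
        return spike, u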
Figure 2
Demonstration of the convolutional (a) and average-pooling (b) operations over three different time-steps. At each time-step, the pre-synaptic spikes are combined with the convolutional or pooling kernel to compute the input current, which is integrated into the neuron's membrane potential Um(t). If Um(t) exceeds the specified threshold Uth, a spike is produced and Um(t) is reset to its initial value, i.e., 0. Otherwise, the residual membrane potential, subject to leakage during the current time-step, is carried over to the next time-step.
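A time-stepped version of the convolution-plus-pooling pipeline in Figure 2 can be sketched as follows, again in PyTorch; the kernel shape, padding, leak factor, and reset rule are assumptions for illustration.

    import torch
    import torch.nn.functional as F

    def spiking_conv_avgpool(spikes, kernel, tau_m=2.0, u_th=1.0):
        """Convolution + average-pooling over T time-steps (sketch of Figure 2).

        spikes: binary inputs of shape (T, batch, C_in, H, W)
        kernel: convolution weights of shape (C_out, C_in, 3, 3)
        """
        decay = torch.exp(torch.tensor(-1.0 / tau_m))
        u, outputs = None, []
        for x_t in spikes:                             # one pass per time-step
            i_t = F.conv2d(x_t, kernel, padding=1)     # input current at time t
            u = i_t if u is None else decay * u + i_t  # leak residual potential, then integrate
            s_t = (u >= u_th).float()                  # spike where Um(t) >= Uth
            u = u * (1.0 - s_t)                        # reset fired neurons to 0
            outputs.append(F.avg_pool2d(s_t, 2))       # average-pool the spike maps
        return torch.stack(outputs)                    # (T, batch, C_out, H/2, W/2)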
Figure 3
Network architectures: (a) VGG-13; (b) ResNet-6.
Figure 4
Flow diagram of the training process.
Figure 5
Images from the license plate dataset converted into spikes using a Poisson distribution. The upper row shows the original images; the lower row shows their spike versions.
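The Poisson conversion described in this caption (and again for Figure 7) is commonly realized as independent Bernoulli draws per pixel per time-step; a minimal sketch, assuming pixel intensities normalized to [0, 1] and a hypothetical t_steps parameter:

    import torch

    def poisson_encode(image, t_steps=100):
        """Rate-code an image into a binary spike train (sketch of Figures 5 and 7).

        Each pixel intensity in [0, 1] is used as the per-time-step firing
        probability, a standard approximation of Poisson rate coding.
        Returns a tensor of shape (t_steps, *image.shape).
        """
        probs = image.clamp(0.0, 1.0)                  # intensities as probabilities
        rand = torch.rand((t_steps,) + probs.shape)    # uniform noise per time-step
        return (rand < probs).float()                  # spike where noise < intensity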
Figure 6
Raster plots of random spike trains taken from the license plate dataset.
Figure 7
Original images from the MNIST dataset converted into spikes using a Poisson distribution. The upper row shows the original images; the lower row shows their spike versions.
Figure 8
Raster plots of random spike trains taken from the MNIST dataset.
Figure 9
Training and validation curves: (a) MNIST; (b) CIFAR-10; (c) KITTI; (d) Korean license plate; (e) SVHN.
Figure 10
Inference accuracy of the different DCSNN models across datasets.
Figure 11
Classification performance with respect to time-steps on the MNIST, KITTI, and license plate datasets.
Figure 12
Classification performance with respect to time-steps on the CIFAR-10 and SVHN datasets.


