Synapse cell optimization and back-propagation algorithm implementation in a domain wall synapse based crossbar neural network for scalable on-chip learning

Nanotechnology. 2020 Sep 4;31(36):364004. doi: 10.1088/1361-6528/ab967d. Epub 2020 May 26.

Abstract

On-chip learning in a spin-orbit-torque-driven domain wall synapse based crossbar fully connected neural network (FCNN) has been shown to be extremely efficient in terms of speed and energy, compared to training on a conventional computing unit or even on a crossbar FCNN based on other non-volatile memory devices. However, there are issues with the scalability of the on-chip learning scheme in the domain wall synapse based FCNN. Unless the scheme is scalable, it will not be competitive with training a neural network on a conventional computing unit for real applications. In this paper, we propose a modification of the standard gradient descent algorithm used for training such FCNNs, by including appropriate thresholding units. This optimizes the synapse cell at each intersection of the crossbars and makes the system scalable. For the system to approximate a wide range of functions for data classification, hidden layers must be present, and the backpropagation algorithm (the extension of the gradient descent algorithm to multi-layered FCNNs) must be implemented in hardware for training. We carry this out here by employing an extra crossbar. Through a combination of micromagnetic simulations and SPICE circuit simulations, we show substantially improved accuracy for a domain wall synapse based FCNN with a hidden layer compared to one without a hidden layer on different machine learning datasets.
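
To illustrate the kind of training scheme the abstract describes, the sketch below shows one backpropagation step for a one-hidden-layer FCNN (two crossbars) in which each weight update is passed through a thresholding unit before being applied to a synapse. This is a minimal software analogue only; the sigmoid activation, the threshold value theta, and all variable names are assumptions for illustration, not the circuit parameters or update rule specified in the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def threshold(update, theta=0.01):
    # Thresholding unit (assumed form): suppress updates whose magnitude
    # falls below theta, so only significant updates program a synapse cell.
    return np.where(np.abs(update) > theta, update, 0.0)

def train_step(x, t, W1, W2, lr=0.1, theta=0.01):
    """One thresholded gradient-descent / backpropagation step for a
    one-hidden-layer FCNN. W1: hidden-layer weights, W2: output-layer
    weights (each corresponding to one crossbar in the hardware picture)."""
    # Forward pass through hidden and output layers.
    h = sigmoid(W1 @ x)
    y = sigmoid(W2 @ h)

    # Backward pass: output error, then error propagated to the hidden layer.
    delta_out = (y - t) * y * (1.0 - y)
    delta_hid = (W2.T @ delta_out) * h * (1.0 - h)

    # Gradient-descent updates, thresholded before being applied.
    W2 -= threshold(lr * np.outer(delta_out, h), theta)
    W1 -= threshold(lr * np.outer(delta_hid, x), theta)
    return W1, W2, y
```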

MeSH terms

  • Algorithms
  • Deep Learning
  • Lab-On-A-Chip Devices
  • Neural Networks, Computer*
  • Pattern Recognition, Automated