GeNN: a code generation framework for accelerated brain simulations

Esin Yavuz et al. Sci Rep. 2016 Jan 7;6:18854. doi: 10.1038/srep18854.

Abstract

Large-scale numerical simulations of detailed brain circuit models are important for identifying hypotheses on brain functions and testing their consistency and plausibility. An ongoing challenge for simulating realistic models is, however, computational speed. In this paper, we present the GeNN (GPU-enhanced Neuronal Networks) framework, which aims to facilitate the use of graphics accelerators for computational models of large-scale neuronal networks to address this challenge. GeNN is an open source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs, through a flexible and extensible interface, which does not require in-depth technical knowledge from the users. We present performance benchmarks showing that a 200-fold speedup compared to a single CPU core can be achieved for a network of one million conductance-based Hodgkin-Huxley neurons, but that the speedup differs for other models. GeNN is available for Linux, Mac OS X and Windows platforms. The source code, user manual, tutorials, Wiki, in-depth example projects and all other related information can be found on the project website http://genn-team.github.io/genn/.


Figures

Figure 1
Figure 1. Workflow in GeNN. The “user-side” code is shown in boxes with green titles, and the files that are controlled by GeNN are shown in boxes with red titles.
Code in red in the user program indicates the functions that are part of the code generated by GeNN. Simulating a neuronal network in GeNN starts with a modelDefinition() (top), which feeds into both the meta-compiler generateAll.cc (1, middle left) and the “user-side” simulation code (4, right). The meta-compiler generates a source code library (2, bottom left), which can then be used in the “user-side” simulation code (3, bottom right).
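The modelDefinition() entry point in the workflow above is ordinary C++ written by the user and compiled by GeNN's meta-compiler. A minimal sketch of such a file, assuming the GeNN 2 C++ interface described in the project's user manual (the header name modelSpec.h, the NNmodel class, the built-in IZHIKEVICH neuron type, and the addNeuronPopulation() call are taken from that documentation; the population sizes and parameter values here are illustrative only and not from the paper):

```
// model.cc -- hypothetical user-side model definition (not self-contained;
// requires the GeNN library and its modelSpec.h header to compile).
#include "modelSpec.h"

void modelDefinition(NNmodel &model)
{
    initGeNN();
    model.setName("IzhNetwork");
    model.setDT(0.1);  // integration time step in ms (illustrative value)

    // Izhikevich parameters a, b, c, d and initial values for V and U.
    double p[4]   = {0.02, 0.2, -65.0, 6.0};
    double ini[2] = {-65.0, -13.0};

    // One excitatory and one inhibitory population, as in the Fig. 3 network.
    model.addNeuronPopulation("Exc", 800, IZHIKEVICH, p, ini);
    model.addNeuronPopulation("Inh", 200, IZHIKEVICH, p, ini);

    // Synapse populations would be added here via addSynapsePopulation().
    model.finalize();
}
```

From this definition, GeNN emits the CUDA/C++ source library (step 2 in the figure) that the user-side simulation loop then links against (step 3).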
Figure 2
Figure 2. Connectivity schemes in GeNN.
(a) Connectivity in an example network. (b) YALE format sparse representation of the network shown in (a). For the i-th pre-synaptic neuron, indInG[i] gives the index of the starting point in the arrays that store the postsynaptic neuron index ind, and other variables, e.g. the synaptic variable g. The i-th pre-synaptic neuron makes indInG[i+1] − indInG[i] connections with the postsynaptic population. The index of the j-th postsynaptic neuron that is connected to the i-th pre-synaptic neuron is stored in ind[indInG[i] + j], and a synapse variable for the pre-synaptic and post-synaptic neuron pair is stored in g[indInG[i] + j]. (c) Dense representation for the same network. n stands for number of elements, n_pre is the number of pre-synaptic neurons, n_post is the number of post-synaptic neurons, and n_conn is the total number of connections in the synapse population.
Figure 3
Figure 3. Execution speed of pulse-coupled Izhikevich neuron network simulations in different spiking regimes.
(a–c) Spiking activity of 800 excitatory (Exc) and 200 inhibitory (Inh) neurons in the quiet (a), balanced (b), and irregular spiking regimes (c). (d,f,h) Simulation speed compared to real-time for the networks in (a–c), respectively, on different hardware, using single (sp) and double (dp) floating point precision, for 128,000 neurons (red bars) or maximum number of neurons possible (grey bars, corresponding network size indicated below the dp bar). Data is based on 6 trials for GPUs and 2 trials for CPUs. (e,g,i) Maximum speedup achieved compared to the CPU, as in (d,f,h). (j) Detailed graph of wall-clock time for the simulation of 5 simulated seconds in the balanced regime as a function of different network sizes, using single floating point precision. The red vertical line indicates the simulation time for 128,000 neurons that was used in making the red bars in (f). Real-time is shown by the dashed horizontal line. (k) Throughput (delivered spikes per sec) per neuron for the conditions shown in (j). (l,m,n) Proportion of time spent on different kernels for simulations of 128,000 neurons in the regimes in (a–c), respectively.
Figure 4
Figure 4. Execution speed of the insect olfaction model simulations using sparse and dense connectivity.
(a) Spiking activity of a network of 100 PN, 20 LHI, 1000 KC and 100 DN. (b,d) Simulation speed compared to real-time using dense (b) and sparse (d) connectivity patterns on different hardware, using single (sp) and double (dp) floating point precision, for 128,000 neurons (red bars) and 1,024,000 neurons (grey bars, N/A for K2000M dp due to memory constraints). Data is based on 7 trials for the GPUs and 1 trial for CPUs. (c,e) Maximum speedup achieved compared to the CPU, as in (b,d). (f) Detailed graph of wall-clock time for simulation of 5 simulated seconds as a function of different network sizes, using single floating point precision. The red vertical line indicates the simulation time for 128,000 neurons used in making the red bars, and the grey vertical line indicates the simulation time for 1,024,000 neurons used in making the grey bars in (b,d). Real-time is shown by the dashed horizontal line. (g) Throughput (delivered spikes per sec) per neuron for the conditions shown in (f). (h,i) Proportion of time spent on different kernels for simulation of 128,000 neurons using dense (h) and sparse (i) connectivity.
