PLoS Comput Biol. 2020 Mar 10;16(3):e1007725.
doi: 10.1371/journal.pcbi.1007725. eCollection 2020 Mar.

Estimation of neural network model parameters from local field potentials (LFPs)

Jan-Eirik W Skaar et al. PLoS Comput Biol. 2020.

Abstract

Most modeling in systems neuroscience has been descriptive, where neural representations such as 'receptive fields' are found by statistically correlating neural activity with sensory input. In the traditional physics approach to modeling, hypotheses are represented by mechanistic models based on the underlying building blocks of the system, and candidate models are validated by comparison with experiments. Until now, validation of mechanistic cortical network models has been based on comparison with neuronal spikes, extracted from the high-frequency part of extracellular electrical potentials. In this computational study we investigated to what extent the low-frequency part of the signal, the local field potential (LFP), can be used to validate and infer properties of mechanistic cortical network models. In particular, we asked whether the LFP can be used to accurately estimate synaptic connection weights in the underlying network. We considered the thoroughly analysed Brunel network, comprising an excitatory and an inhibitory population of recurrently connected leaky integrate-and-fire (LIF) neurons. This model exhibits a high diversity of spiking network dynamics depending on the values of only three network parameters. The LFP generated by the network was computed using a hybrid scheme, where spikes computed from the point-neuron network were replayed on biophysically detailed multicompartmental neurons. We assessed how accurately the three model parameters could be estimated from power spectra of stationary 'background' LFP signals by application of convolutional neural networks (CNNs). All network parameters could be very accurately estimated, suggesting that LFPs can indeed be used for network model validation.

Conflict of interest statement

The authors have declared that no competing interests exist.

Figures

Fig 1. Overview of hybrid scheme for computing local field potentials (LFPs).
Top row: First, the dynamics of a network is simulated using a point-neuron simulation (A), and the resulting spike times are saved to file. Orange and blue color indicate excitatory and inhibitory neurons, respectively. In a separate simulation, the obtained spike times are replayed as synaptic input currents onto reconstructed neuron morphologies representing postsynaptic target neurons (B, only one excitatory in orange and one inhibitory neuron in blue are shown). Based on the resulting transmembrane currents of the postsynaptic target neurons in this second simulation, the LFP is calculated (C). Bottom row: Prediction of LFPs from population firing histograms. Instead of running the full hybrid scheme, the LFP can be predicted by the convolution of the population firing histograms (lower figure in A) with kernels representing the average contribution to the LFP by a single spike in each population (lower figure in B). These kernels are computed using the hybrid scheme [29], see Methods.
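The kernel-based prediction in the bottom row can be sketched in a few lines of Python. The function below is an illustrative reconstruction, not the authors' code; the dictionary layout and array shapes are assumptions made for the sketch:

```python
import numpy as np

def predict_lfp(firing_hists, kernels):
    """Predict the LFP in each channel by convolving each population's
    firing histogram with that population's average per-spike LFP kernel,
    then summing the contributions over populations.

    firing_hists: dict pop -> (n_bins,) spike-count histogram
    kernels:      dict pop -> (n_channels, kernel_len) average LFP
                  contribution of a single spike in that population
    """
    pops = list(firing_hists)
    n_channels = kernels[pops[0]].shape[0]
    n_bins = len(firing_hists[pops[0]])
    lfp = np.zeros((n_channels, n_bins))
    for pop in pops:
        for ch in range(n_channels):
            # mode="same" keeps the output aligned with the histogram bins
            lfp[ch] += np.convolve(firing_hists[pop], kernels[pop][ch],
                                   mode="same")
    return lfp
```

With a delta-spike histogram, the predicted LFP simply reproduces the kernel, which makes the convolution easy to sanity-check.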
Fig 2. Brunel model network and phase diagram.
A, Illustration of the network. Solid lines represent excitatory connections, dashed lines inhibitory connections. B, Phase diagram, adapted from [28], Fig 2A. Different network states arise depending on the parameters η = νext/νthr and g (here with a fixed synaptic delay td of 1.5 ms). SR stands for synchronous regular, SI for synchronous irregular, and AI for asynchronous irregular. The orange box shows the extent of the parameter range we simulated, and the blue box the range when we restricted the simulations to the AI state. Note that this plot shows a slice of the parameter space for a given value of J = 0.1. We considered different values of J in the study, so the actual parameter space is a cube, with the third axis in the J-direction. The red dots labeled A–E indicate the η and g values of the example activities shown in the first figure in Results.
Fig 3. Illustration of convolutional neural network (CNN).
The PSDs of all six LFP channels are taken as input. The three convolutional layers consist of 20 filters each, and are followed by max pooling. Two fully connected layers precede the output layer which consists of 3 nodes, one for each parameter.
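An architecture like the one described above can be sketched in PyTorch. The kernel sizes, pooling widths, hidden-layer width, and number of input frequency bins are not given in the caption and are assumed here purely for illustration:

```python
import torch
import torch.nn as nn

class ParamEstimatorCNN(nn.Module):
    """CNN mapping PSDs of six LFP channels to (eta, g, J).

    As in Fig 3: three convolutional layers of 20 filters each, each
    followed by max pooling, then two fully connected layers before an
    output layer with one node per parameter. Kernel size 5, pool size 2,
    hidden width 128 and n_freqs = 128 input bins are assumptions.
    """
    def __init__(self, n_channels=6, n_freqs=128, n_params=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 20, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(20, 20, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(20, 20, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        flat = 20 * (n_freqs // 8)  # three pools halve the length thrice
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(flat, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_params),  # one output node per parameter
        )

    def forward(self, x):  # x: (batch, n_channels, n_freqs)
        return self.head(self.features(x))
```

Such a network would be trained with a regression loss (e.g. mean squared error) between the predicted and true (η, g, J) values.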
Fig 4
A, Test loss as a function of the number of training epochs of the CNN for different simulation lengths. B, Minimal loss (that is, the smallest loss in panel A) as a function of simulation length. A function with 1/t shape was fitted to the data to illustrate that the scaling is explained by the limited amount of data and that the error decreases as more data is used. The R2 score was 0.994.
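The 1/t fit described in panel B can be reproduced with a standard least-squares fit. This sketch assumes a model of the form a/t + b (the constant offset is an assumption, as the caption only specifies a "1/t shape") and reports the R2 score:

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_inverse_time(t, loss):
    """Fit loss(t) = a/t + b and return (a, b) plus the R^2 score."""
    model = lambda t, a, b: a / t + b
    (a, b), _ = curve_fit(model, t, loss)
    pred = model(t, a, b)
    ss_res = np.sum((loss - pred) ** 2)       # residual sum of squares
    ss_tot = np.sum((loss - np.mean(loss)) ** 2)  # total sum of squares
    return a, b, 1.0 - ss_res / ss_tot
```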
Fig 5. Examples of simulated spiking network activity and LFPs for different sets of network parameters (η, g and J).
For each simulation, A–E, the first row shows spike trains from 100 randomly selected neurons across both populations. The second and third rows show the population firing rate (including both the excitatory and inhibitory neurons) and its power spectral density (PSD). The final two rows show the LFP signal from all six channels and the PSD of channel 1, respectively. The dashed red lines in the lowest panel show the LFP PSD computed from spikes in individual neurons (Eq 9) rather than with the presently used population firing-rate approach (Eq 10, black lines), which is computationally much less demanding. In general, the agreement is very high; the only discrepancy is for the SR-state example, where the height of the peak around 300 Hz differs. The network states for the five examples (SR/SI(fast)/SI(slow)/AI, see text) are indicated at the top.
Fig 6. Statistical measures of network activity for different combinations of network parameters (η, g and J).
A, Average population firing rates, that is, average firing rate over all neurons and times. The red dots show the parameter values of the examples in Fig 5. B, Mean coefficient of variation (Eq (12)) of the inter-spike intervals over all neurons as a measure of the spiking irregularity. C, Square root of the variance of the LFP signal integrated over time for the topmost channel (channel 1). This measure corresponds to the square root of integral of the power spectrum of the LFP over all frequencies [33], and is referred to as the standard deviation of the LFP (LFP STD). D, LFP Entropy, cf. Eq 13.
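The irregularity and LFP-amplitude measures in panels B and C can be computed as follows. This is an illustrative sketch, not the authors' code; the list-of-spike-time-arrays input format is an assumption:

```python
import numpy as np

def mean_cv_isi(spike_trains):
    """Mean coefficient of variation of the inter-spike intervals (ISIs)
    across neurons, a standard measure of spiking irregularity (cf. panel B).
    spike_trains: iterable of 1-D arrays of spike times, one per neuron."""
    cvs = []
    for times in spike_trains:
        isi = np.diff(np.sort(np.asarray(times)))
        if len(isi) > 0 and isi.mean() > 0:
            cvs.append(isi.std() / isi.mean())
    return float(np.mean(cvs))

def lfp_std(lfp_channel):
    """Standard deviation of an LFP channel. By Parseval's theorem this
    equals the square root of the integrated power spectrum (cf. panel C)."""
    return float(np.std(lfp_channel))
```

A perfectly regular spike train has identical ISIs and hence CV = 0, while a Poisson process has CV near 1.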
Fig 7. Accuracy of network parameter estimation.
A, Estimation error distributions for η, g and J averaged over the entire parameter space. In the plots, all parameter ranges were rescaled to the interval [0, 1] for easier comparison (lower x-axis); the upper x-axis shows the original values. The vertical line indicates the mean of both distributions. The orange curve shows the result when using the full parameter set (η ∈ [0.8, 4], g ∈ [3.5, 8] and J ∈ [0.05, 0.4]) and the blue curve when the parameter set contains only the AI state (η ∈ [1.5, 3], g ∈ [4.5, 6] and J ∈ [0.1, 0.25]). The purple line gives the estimation error of the CNN trained on the full parameter set but evaluated on the restricted parameter set containing the AI state only. To compare the full-parameter data set and the AI-only data set, both were scaled to the range of the full parameter set. Table 7 lists the bias and standard deviation for each data set and estimated parameter. B, Cumulative error distributions, that is, the proportion of absolute errors that fall below a given value, also with all parameters rescaled to [0, 1]. This can be understood as the fraction of data points reconstructed better than a given error. The dashed black lines indicate the 90% coverage interval.
Fig 8. Mean absolute prediction errors.
Each voxel in the panels shows the error on the test dataset averaged across the parameter ranges, defined by the pixel size of the grid and the value of J indicated above. A–C, the network trained on the full parameter space. D–F, the network trained on the restricted parameter space. The region of this parameter space is indicated by the red boxes in panels A–C.
Fig 9. Parameter estimation errors for a single versus multiple CNNs.
Comparison of the parameter estimation error when (i) a single CNN is trained to estimate all three parameters η, g and J simultaneously (combined predictions, orange) with (ii) three CNNs each trained to estimate a single parameter (single predictions). All parameters were rescaled to the interval [0, 1]. The bias and standard deviation of the estimators are listed in Table 8.
Fig 10. Grid-sampled vs. randomly sampled training data.
The plots show error distributions for CNNs trained on data randomly sampled from the parameter set (blue) and from the same amount of training data taken from a regular grid (yellow). All parameters were rescaled to the interval [0, 1]. The bias and standard deviation of the estimators are listed in Table 8.
Fig 11. Robustness of estimates with Gaussian spread of model parameters in test data.
A–C, estimation errors for η, g, and J, respectively, when the synaptic time delay td is randomly distributed when generating test data LFPs. td has a truncated Gaussian distribution around the fixed value td = 1.5 ms used when generating training data, and results for different values of the standard deviation σ are shown (note logarithmic scale). The Gaussian distributions are truncated at 0.2 and 2.8 ms. Results for the average estimation errors across both the full parameter space and the restricted AI parameter spaces are shown. D–F, same as A–C when the neuron membrane time constant τm instead is randomly distributed around the training-data value τm = 20 ms when generating test data LFPs. The Gaussian distributions are truncated at 2 and 38 ms. G–I, same as A–C when the neuronal firing threshold θ instead is randomly distributed around the training-data value θ = 20 mV when generating test data LFPs. The Gaussian distributions are truncated at 12 and 28 mV. J–L, same as A–C when the refractory period tref instead is randomly distributed around the training-data value tref = 2.0 ms when generating test data LFPs. The Gaussian distributions are truncated at 0.2 and 3.8 ms. M, Illustration of probability density function (pdf) of truncated Gaussian parameter distributions used in panels A–L.
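Sampling model parameters from such truncated Gaussians is straightforward with scipy.stats.truncnorm. The standard deviation of 0.3 ms in the example below is an arbitrary illustrative choice, not a value from the paper:

```python
from scipy.stats import truncnorm

def sample_truncated(mean, sigma, low, high, size, seed=None):
    """Draw parameter values from a Gaussian with the given mean and
    standard deviation, truncated to [low, high], as used here to jitter
    t_d, tau_m, theta and t_ref when generating test data.
    truncnorm takes its bounds in standardized (z-score) units."""
    a, b = (low - mean) / sigma, (high - mean) / sigma
    return truncnorm.rvs(a, b, loc=mean, scale=sigma, size=size,
                         random_state=seed)

# e.g. synaptic delays around t_d = 1.5 ms, truncated at 0.2 and 2.8 ms
delays = sample_truncated(1.5, 0.3, 0.2, 2.8, size=1000, seed=0)
```

Because the truncation here is symmetric about the mean, the sample mean stays close to the nominal training-data value.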
Fig 12. Robustness of estimates with shifts of model parameters in test data.
A–C, estimation errors for η, g, and J, respectively, when the synaptic time delay td is shifted when generating test data LFPs; td = 1.5 ms was used when generating training data. Results for the average estimation errors across both the full parameter space and the restricted AI parameter space are shown. D–F, same as A–C when the neuron membrane time constant τm instead differs from the value τm = 20 ms used for generating training data. The vertical bars indicate the x-value used for generating the training data. G–I, same as A–C when the neuronal firing threshold θ instead differs from the value θ = 20 mV used for generating training data. J–L, same as A–C when the refractory period tref instead differs from the value tref = 2.0 ms used for generating training data.

References

    1. Dayan P, Abbott LF. Theoretical Neuroscience. Cambridge, MA: MIT Press; 2001.
    2. Einevoll GT, Destexhe A, Diesmann M, Grün S, Jirsa V, de Kamps M, et al. The Scientific Case for Brain Simulations. Neuron. 2019;102:735–744. doi:10.1016/j.neuron.2019.03.027
    3. Hodgkin AL, Huxley AF. A quantitative description of membrane current and its application to conduction and excitation in nerve. The Journal of Physiology. 1952;117:500–544. doi:10.1113/jphysiol.1952.sp004764
    4. Koch C. Biophysics of Computation. Oxford: Oxford University Press; 1999.
    5. Sterratt D, Graham B, Gillies A, Willshaw D. Principles of Computational Modelling in Neuroscience. Cambridge University Press; 2011.

Grants and funding

AJS, EH and GTE received funding from the Research Council of Norway (DigiBrain 248828, CoBra 250128), https://www.forskningsradet.no/. TVN and GTE received funding from European Union’s Horizon 2020 Framework Programme for Research and Innovation under Grant Agreements No. 720270 (Human Brain Project SGA1), No. 785907 (Human Brain Project SGA2), https://ec.europa.eu/programmes/horizon2020/. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
