Identifying the key ionic channels that shape the overall dynamics of a neuron is an important problem in experimental neuroscience. The problem is challenging because, even in the best cases, identification relies only on noisy recordings of the membrane potential, and strict inversion to the constituent channel dynamics is mathematically ill-posed. In this work, we develop a biophysically interpretable, learning-based strategy for data-driven inference of neuronal dynamics. In particular, we propose two optimization frameworks to learn and approximate neural dynamics from an observed voltage trajectory. In both strategies, the membrane-potential dynamics are approximated as a weighted sum of ionic currents. In the first strategy, the ionic currents are represented in a parametric form using voltage-dependent channel conductances and the membrane potential, while in the second, the currents are represented as a linear combination of generic basis functions. A library of channel activation/inactivation and time-constant curves describing prototypical channel kinetics is used to estimate the channel variables that approximate the ionic currents. Finally, a linear optimization problem is solved to infer the weights/scaling variables in the membrane-potential dynamics. In the first strategy, the weights can be used to recover the channel conductances and reversal potentials, while in the second, the estimated weights identify the active channels and recover the trajectories of the gating variables, allowing for biophysically salient inference. Our results suggest that the complex nonlinear behavior of neural dynamics over a range of temporal scales can be efficiently inferred in a data-driven manner from noisy membrane-potential recordings.
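The core linear-inference step described above can be illustrated with a minimal sketch: given a voltage trace, candidate ionic currents are evaluated along the trajectory using prototypical steady-state activation curves, and the weights in the membrane-potential dynamics are recovered by linear least squares. All parameter values, curve shapes, and the surrogate voltage trace below are illustrative assumptions, not the paper's actual library or data.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 50.0, 5001)        # time grid (ms)
V = -65.0 + 20.0 * np.sin(0.3 * t)      # surrogate membrane-potential trace (mV)

def sigmoid(v, v_half, k):
    """Prototypical steady-state activation/inactivation curve (assumed shape)."""
    return 1.0 / (1.0 + np.exp(-(v - v_half) / k))

# Candidate ionic currents evaluated along the trajectory (quasi-steady-state);
# reversal potentials and half-activation values are illustrative.
E_Na, E_K, E_L = 50.0, -77.0, -54.4
I_Na = sigmoid(V, -40.0, 5.0) ** 3 * (V - E_Na)
I_K  = sigmoid(V, -55.0, 10.0) ** 4 * (V - E_K)
I_L  = (V - E_L)

# Synthesize a noisy "observed" derivative dV/dt from known true weights,
# mimicking the ill-posed, noisy setting described in the abstract.
w_true = np.array([1.2, 0.8, 0.3])
dVdt = -(w_true[0] * I_Na + w_true[1] * I_K + w_true[2] * I_L)
dVdt += 0.01 * rng.standard_normal(dVdt.shape)

# Linear inverse problem: stack the basis currents as regressors and
# solve for the weights/scaling variables by least squares.
A = np.column_stack([-I_Na, -I_K, -I_L])
w_hat, *_ = np.linalg.lstsq(A, dVdt, rcond=None)
print(np.round(w_hat, 3))
```

With low noise the estimated weights closely match the true ones; in the first strategy such weights map back to channel conductances, while in the second they indicate which candidate channels are active.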
Keywords: Conductance-based model; Identification; Learning.