eNeuro. 2020 Aug 17;7(4):ENEURO.0038-20.2020.
doi: 10.1523/ENEURO.0038-20.2020. Print 2020 Jul/Aug.

DeepCINAC: A Deep-Learning-Based Python Toolbox for Inferring Calcium Imaging Neuronal Activity Based on Movie Visualization

Julien Denis et al. eNeuro. 2020.

Abstract

Two-photon calcium imaging is now widely used to infer neuronal dynamics from changes in fluorescence of an indicator. However, state-of-the-art computational tools are not optimized for the reliable detection of fluorescence transients from highly synchronous neurons located in densely packed regions such as the CA1 pyramidal layer of the hippocampus during early postnatal stages of development. Indeed, the latest analytical tools often lack proper benchmark measurements. To meet this challenge, we first developed a graphical user interface (GUI) allowing for precise manual detection of all calcium transients from imaged neurons based on visualization of the calcium imaging movie. We then analyzed movies from mouse pups using a convolutional neural network (CNN) with an attention process and a bidirectional long short-term memory (LSTM) network. This method reaches human performance and offers a better F1 score (harmonic mean of sensitivity and precision) than CaImAn for inferring neural activity in the developing CA1 without any user intervention. It also enables the automatic identification of activity originating from GABAergic neurons. Overall, DeepCINAC offers a simple, fast, and flexible open-source toolbox for processing a wide variety of calcium imaging datasets while providing the tools to evaluate its performance.
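For reference, the F1 score used throughout the paper is the standard harmonic mean of precision and sensitivity:

```latex
F_1 = 2 \cdot \frac{\text{precision} \times \text{sensitivity}}{\text{precision} + \text{sensitivity}}
```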

Keywords: CNN; LSTM; calcium imaging; deep learning; hippocampus; neuronal activity.


Figures

Figure 1.
Experimental paradigm. A, Experimental timeline. B, Intraventricular injection of GCaMP6s in pups (drawing), performed at P0. C, Schematic representing the cranial window surgery. D, top left, Imaged field of view. Scale bar: 100 µm. Top right, Activity of five random neurons in the field of view (variation of fluorescence is expressed as Δf/f). Scale bar: 50 s. Bottom, Drawing of a head-fixed pup under the microscope.
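As context for panel D, a minimal sketch of a common Δf/f computation; the percentile baseline is an illustrative assumption, not necessarily the convention used in the paper:

```python
import numpy as np

def delta_f_over_f(trace, baseline_percentile=8):
    """Normalize a raw fluorescence trace (1-D array) to df/f.

    F0 is estimated as a low percentile of the trace; this is a common
    convention and an assumption here, not the paper's stated method.
    """
    f0 = np.percentile(trace, baseline_percentile)
    return (trace - f0) / f0
```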
Figure 2.
Examples of different uses of the GUI. The GUI can be used for data exploration (A1, A2), to establish the ground truth (B), and to evaluate DeepCINAC predictions (C). A, The GUI can be used to explore the activity inference from any method. The spikes inferred by CaImAn are represented by the green marks at the bottom. The GUI allows the user to play the movie at the time of the selected transient and to visualize the transient and source profiles of the cell of interest. A1, Movie visualization and correlation between transient and source profiles allow the classification of the first selected transient as a true positive (TP) and the second selected transient as a false positive (FP). A2, Movie visualization and correlation between transient and source profiles allow the classification of the selected transient as a false negative (FN). B, The GUI can be used to establish a ground truth. In this mode, it offers the user the possibility to manually annotate the onset and peak of each calcium transient. Onsets are represented by vertical dashed blue lines, peaks by green dots. C, When the activity inference is done using DeepCINAC, the GUI can display the classifier predictions. The prediction is represented by the red line. The dashed horizontal red line marks a probability of one. The blue area represents time periods during which the probability is above a given threshold, in this example 0.5. T: transient profile, S: source profile, Corr: correlation, FOV: field of view.
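As an illustration of panel C, a minimal sketch of turning per-frame probabilities into active periods with a threshold; the function name and data layout are hypothetical, not DeepCINAC's API:

```python
import numpy as np

def active_periods(probabilities, threshold=0.5):
    """Return (start, stop) frame index pairs where the classifier
    probability is at or above the threshold (stop is exclusive).
    Illustrative helper, not part of the toolbox."""
    above = probabilities >= threshold
    # Rising (0 -> 1) and falling (1 -> 0) edges of the boolean mask
    edges = np.diff(above.astype(int), prepend=0, append=0)
    starts = np.where(edges == 1)[0]
    stops = np.where(edges == -1)[0]
    return list(zip(starts, stops))

# Example: frames 2-4 form one active period at threshold 0.5
print(active_periods(np.array([0.1, 0.3, 0.7, 0.9, 0.6, 0.2])))  # [(2, 5)]
```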
Figure 3.
Workflow to establish the ground truth. First, a cell was randomly chosen in the imaged field of view. 1, All putative transients of the segment to label were identified, from the onset to the peak of each calcium event. 2, Three human experts [“expert” (A), “expert” (B), “expert” (C)] independently annotated the segment. Among all putative transients, each human expert had to decide whether it was, in their opinion, a true transient. 3, The combination of the labeling led to “consensual transients” (i.e., true transients for each human expert; black square) and to “non-consensual transients” (i.e., true transients for at least one human expert but not all of them; open square). 4, All non-consensual transients were discussed and the ground truth was established.
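A minimal sketch of the consensus step (3), assuming each expert's annotations are reduced to one boolean vote per putative transient; the array layout is illustrative:

```python
import numpy as np

# One row per expert, one column per putative transient (illustrative data)
votes = np.array([
    [True, True,  False, True ],  # expert A
    [True, False, False, True ],  # expert B
    [True, True,  False, False],  # expert C
])

consensual = votes.all(axis=0)                     # true for every expert
non_consensual = votes.any(axis=0) & ~consensual   # discussed in step 4
print(consensual)      # [ True False False False]
print(non_consensual)  # [False  True False  True]
```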
Figure 4.
Architecture of the DeepCINAC neural network. As a first step, for each set of inputs of the same cell, we extract CNN features from the video frames, pass them to an attention mechanism, and feed the outputs into a forward pass network (FU, green units) and a backward pass network (BU, orange units), together representing a bidirectional LSTM. Another bidirectional LSTM is fed from the attention mechanism and the previous bidirectional LSTM outputs. An LSTM (MU, blue units) then integrates the outputs from the processing of the three types of inputs to generate a final video representation. A sigmoid activation function is finally used to produce a probability for the cell to be active at each frame given as input.
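A minimal sketch of this kind of architecture in TensorFlow/Keras. This is not the authors' implementation: a single movie-patch input stands in for the three types of inputs, Keras' dot-product Attention layer stands in for the paper's attention mechanism, and all sizes are illustrative assumptions:

```python
from tensorflow.keras import layers, Model

n_frames, h, w = 100, 25, 25  # illustrative window length and patch size

frames = layers.Input(shape=(n_frames, h, w, 1))
# CNN feature extraction applied to each frame independently
x = layers.TimeDistributed(layers.Conv2D(32, 3, activation="relu"))(frames)
x = layers.TimeDistributed(layers.MaxPooling2D())(x)
x = layers.TimeDistributed(layers.Flatten())(x)
# Attention over frames (dot-product self-attention as a stand-in)
att = layers.Attention()([x, x])
# First bidirectional LSTM (forward + backward pass networks)
y = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(att)
# Second bidirectional LSTM fed by the attention output and the first LSTM
y = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(
    layers.Concatenate()([att, y]))
# Final LSTM integrating the representations, then a per-frame sigmoid
z = layers.LSTM(64, return_sequences=True)(y)
probs = layers.TimeDistributed(layers.Dense(1, activation="sigmoid"))(z)

model = Model(frames, probs)
model.compile(optimizer="adam", loss="binary_crossentropy")
```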
Figure 5.
DeepCINAC step-by-step workflow. A, Schematic of a two-photon imaging experiment. B, Screenshot of the DeepCINAC GUI used to explore and annotate data. C, The GUI produces .cinac files that contain the data necessary to train or benchmark a classifier. D, Schematic representation of the architecture of the model used to train the classifier and predict neuronal activity. E, Training of the classifier using the previously defined model. F, Schematic of a raster plot resulting from the inference of neuronal activity using the trained classifier. G, Evaluation of the classifier performance using precision, sensitivity, and F1 score. H, Active learning pipeline: screenshots of the GUI used to identify edge cases where the classifier wrongly infers the neuronal activity and to annotate new data from similar situations for a new round of classifier training.
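A minimal sketch of the evaluation in G, computed here per frame for simplicity; the function and its frame-wise matching are illustrative assumptions, not the toolbox's API (see the F1 formula after the abstract):

```python
import numpy as np

def evaluate(pred, truth):
    """Frame-wise precision, sensitivity (recall) and F1 score.

    pred, truth: boolean arrays, True where the cell is active.
    Illustrative helper, not part of DeepCINAC.
    """
    tp = np.sum(pred & truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    precision = tp / (tp + fp) if tp + fp else 0.0
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * sensitivity / (precision + sensitivity)
          if precision + sensitivity else 0.0)
    return precision, sensitivity, f1
```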
Figure 6.
Validation of visual ground truth and deep learning approach. A-C, Boxplots showing sensitivity (A), precision (B), and F1 score (C) for the three human experts (R.F.D., J.D., M.A.P.) and CINAC_v6, evaluated against the known ground truth from four cells from the GENIE project. Each colored dot represents a cell. Cell labels in the legend correspond to session identifiers from the dataset. CINAC_v6 is a classifier trained on data from the GENIE project and the Hippo-dvt dataset (Table 1; Extended Data Table 1-1).
Figure 7.
Evaluation of CINAC_v1 performance on the Hippo-dvt dataset. A-C, Boxplots showing sensitivity (A), precision (B), and F1 score (C) for the three human experts (R.F.D., J.D., M.A.P.), CaImAn, and CINAC_v1, evaluated against the visual ground truth of 25 cells. A total of 15 cells were annotated by J.D. and R.F.D., six by M.A.P. Each colored dot represents a cell, the number inside indicates the cell’s id, and each color represents a session as identified in the legend. CINAC_v1 is a classifier trained on data from the Hippo-dvt dataset (Table 1; Extended Data Table 1-1). Figure 7 is supported by Extended Data Figures 7-1, 7-2. *p < 0.05.
Figure 8.
Use of DeepCINAC classifiers to optimize performance on various datasets. Each panel displays boxplots of sensitivity (top), precision (middle), and F1 score (bottom), comparing CaImAn with two versions of CINAC. A, Hippo-GECO dataset, comparing CINAC_v1 (trained on the Hippo-dvt dataset) and CINAC_v3 (trained on the Hippo-GECO dataset). B, Hippo-6m dataset, comparing CINAC_v1 and CINAC_v4 (trained on the Hippo-dvt, Hippo-6m, and Barrel-ctx-6s datasets). C, Barrel-ctx-6s dataset, comparing CINAC_v1 and CINAC_v4. D, Hippo-dvt-INs dataset, comparing CINAC_v1 and CINAC_v7 (trained on interneurons from the Hippo-dvt dataset). All classifier versions are listed in Table 1; Extended Data Table 1-1. Each colored dot represents a cell, the number inside indicates the cell’s id, and each color represents a session as identified in the legend. Figure 8 is supported by Extended Data Figures 8-1, 8-2, 8-3.
