Netw Neurosci. 2023 Jun 30;7(2):431-460. doi: 10.1162/netn_a_00301. eCollection 2023.

Temporal Mapper: Transition networks in simulated and real neural dynamics


Mengsen Zhang et al. Netw Neurosci. 2023.

Abstract

Characterizing large-scale dynamic organization of the brain relies on both data-driven and mechanistic modeling, which demands a low versus high level of prior knowledge and assumptions about how constituents of the brain interact. However, the conceptual translation between the two is not straightforward. The present work aims to provide a bridge between data-driven and mechanistic modeling. We conceptualize brain dynamics as a complex landscape that is continuously modulated by internal and external changes. The modulation can induce transitions from one stable brain state (attractor) to another. Here, we provide a novel method, Temporal Mapper, built upon established tools from the field of topological data analysis to retrieve the network of attractor transitions from time series data alone. For theoretical validation, we use a biophysical network model to induce transitions in a controlled manner, which provides simulated time series equipped with a ground-truth attractor transition network. Our approach reconstructs the ground-truth transition network from simulated time series data better than existing time-varying approaches. For empirical relevance, we apply our approach to fMRI data gathered during a continuous multitask experiment. We find that occupancy of the high-degree nodes and cycles of the transition network was significantly associated with subjects' behavioral performance. Taken together, we provide an important first step toward integrating data-driven and mechanistic modeling of brain dynamics.

Keywords: Attractors; Dynamical systems; Mapper; Multistability; Networks; Nonlinear dynamics; Optimal transport; TDA.


Figures

Figure 1.
Deformation of the brain dynamic landscape induces transitions between stable brain states. A toy example of a dynamic landscape is shown as a colored curve in (A). The horizontal axis represents all possible brain states, that is, the state space, whereas the position of the red ball represents the current brain state. States at the local minima of the landscape (A, 1–3) are attractors—slight perturbation of the current state (e.g., red ball) leads to relaxation back to the same state. States at the local maxima of the landscape are repellers (to the left and right of state 2, unlabeled)—slight perturbation of the state pushes the system into the basin of one of the attractors. The landscape may be deformed by continuous changes in the brain structure, physiology, or the external environment, here represented abstractly as a control parameter (B). As the landscape deforms (sliding the gray plane in B), the attractors and repellers shift continuously with it, for the most part, marked by dashed lines in red and black, respectively. At critical points where an attractor and a repeller collide, there is a sudden change in the repertoire of attractors, potentially leading to a transition between attractors. The change of the landscape is commonly visualized as a bifurcation diagram (C), which keeps track of the change of attractors (red lines, 1–3) and repellers (black lines). Here “attractor” is used in a general sense, referring to both the points in the state space (the intersections between red lines and the gray plane in the bottom plane in B) and the connected components resulting from the continuous deformation of these points in the product between the state space and the parameter space (red lines in C). Due to multistability and hysteresis, the system may take different paths in the bifurcation diagram as the control parameter moves back and forth along the same line (dashed lines in C; green indicates forward paths, yellow indicates backward paths). 
In an even simpler form, this path dependency can be represented as a directed graph (D), denoting the sequence in which attractors are visited (color indicates forward and backward paths in C).
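The hysteresis loop in panels C and D can be reproduced with a minimal toy system. This is an illustrative sketch, not the paper's model: a particle following gradient dynamics on the double-well landscape V(x, g) = x^4/4 - x^2/2 - g*x while the control parameter g is swept up and then back down. All function names here are illustrative.

```python
# Toy illustration of Figure 1 (not the paper's model): gradient dynamics
# dx/dt = -dV/dx on the double-well landscape V(x, g) = x^4/4 - x^2/2 - g*x.
# Sweeping the control parameter g up and then back down makes the system
# jump between the two attractors at different g values (hysteresis),
# summarized as a directed transition graph as in panel D.

def simulate_sweep(g_values, x0=-1.0, dt=0.01, steps=400):
    """Relax toward the current attractor at each g; label it by sign(x)."""
    x = x0
    labels = []
    for g in g_values:
        for _ in range(steps):
            x += dt * (-(x ** 3) + x + g)  # Euler step of dx/dt = -dV/dx
        labels.append(1 if x > 0 else -1)  # which basin is currently occupied
    return labels

def transition_edges(labels):
    """Directed edges between successively occupied attractors (panel D)."""
    return {(a, b) for a, b in zip(labels, labels[1:]) if a != b}

n = 60
forward = [-1 + 2 * i / n for i in range(n + 1)]  # g swept from -1 to +1 ...
g_path = forward + forward[::-1]                  # ... and back down to -1
labels = simulate_sweep(g_path)
edges = transition_edges(labels)                  # {(-1, 1), (1, -1)}
```

For this potential, the saddle-node collisions occur at |g| = 2/(3*sqrt(3)) ≈ 0.38, so the forward jump happens at positive g and the backward jump at negative g: the two directions of the sweep take different paths, exactly the path dependency the directed graph in panel D encodes.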
Figure 2.
Attractor transition network for simulated neural dynamics. A biophysical network model (Zhang et al., 2022) is used to describe the dynamics of the brain (A). Each brain region is modeled as a pair of excitatory (E) and inhibitory (I) populations, connected by local excitatory (wEE, wEI) and inhibitory (wIE, wII) synapses. Each region is also connected to others through long-range connections (red dashed lines). The overall strength of long-range interaction is scaled by a parameter G, the global coupling. To simulate neural dynamics in a changing landscape (C), G is varied in time (B), mimicking the rise and fall of arousal during rest and tasks. The duration of the simulation is 20 min. To construct a ground-truth transition network between attractors (E), fixed points of the differential equations (Equations 4 and 5) are computed for different levels of G and classified by local linear stability analysis. Fixed points classified as attractors are shown in a bifurcation diagram (D). Each attractor traces out a continuous line in a high-dimensional space—the direct product of the state space S and the parameter space G. These lines or attractors can be identified as clusters in S × G. Each time point in (B, C) is classified as the regime of one attractor in the high-dimensional space S × G. All visited attractors constitute the nodes of the ground-truth transition network (E), colored accordingly. A directed edge links one attractor to another if there is a transition from the former to the latter in time. To examine how dynamics unfold in time in this attractor transition network (E), we construct a recurrence plot (F) that indicates the shortest path length between any two time points (the attractors visited) in the network.
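The ground-truth construction above (compute fixed points at each level of G, classify them by local linear stability) can be sketched with a generic Wilson-Cowan-style E-I pair. This is not the paper's biophysical model (Equations 4 and 5); the sigmoid and all synaptic weights below are illustrative choices made only to show the workflow.

```python
import math

def S(x):
    """Sigmoid activation (an illustrative choice, not the paper's)."""
    return 1.0 / (1.0 + math.exp(-x))

W = dict(wEE=12.0, wEI=10.0, wIE=4.0, wII=2.0)  # toy local synaptic weights

def relax(G, E0, I0, dt=0.05, steps=4000):
    """Integrate one E-I pair until it settles near a fixed point."""
    E, I = E0, I0
    for _ in range(steps):
        E += dt * (-E + S(W["wEE"] * E - W["wIE"] * I + G))
        I += dt * (-I + S(W["wEI"] * E - W["wII"] * I))
    return E, I

def is_stable(E, I, G):
    """Local linear stability via the 2x2 Jacobian: trace < 0 and det > 0."""
    sE = S(W["wEE"] * E - W["wIE"] * I + G)
    sI = S(W["wEI"] * E - W["wII"] * I)
    a = -1 + W["wEE"] * sE * (1 - sE); b = -W["wIE"] * sE * (1 - sE)
    c = W["wEI"] * sI * (1 - sI);      d = -1 - W["wII"] * sI * (1 - sI)
    return (a + d) < 0 and (a * d - b * c) > 0

def attractors(G, grid=5):
    """Distinct stable fixed points reached from a grid of initial states."""
    found = set()
    for i in range(grid):
        for j in range(grid):
            E, I = relax(G, i / (grid - 1), j / (grid - 1))
            if is_stable(E, I, G):
                found.add((round(E, 1), round(I, 1)))
    return sorted(found)
```

With these toy weights the pair is monostable at strongly negative G, bistable at intermediate G (e.g., G = -4), and monostable again at positive G: sweeping G traces out a bifurcation diagram of the kind shown in panel D, and the attractor occupied at each time point becomes a node of the transition network in panel E.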
Figure 3.
Reconstructed transition network using the Temporal Mapper approach captures theoretical ground truth. (A) The basic procedures of the Temporal Mapper in reconstructing attractor transition networks from time series data. The neural time series is treated as a point cloud of N points (N time points) in an M-dimensional space (M ROIs). As the system moves between different attractors, the activation level changes discretely. The mean activation level can be used to label each discrete state or attractor, as in Figure 2D. Pairwise distance (Aii) between data points that are not temporally adjacent is used to construct the spatial k nearest neighbor (kNN) graph (Aiii). The temporal connectivity, that is, the “arrows of time,” is then added to the graph as directed edges (Aiv). To further compress the graph, nodes within a path length δ to each other are contracted to a single node in the final attractor transition network (Av). Each node of the attractor transition network can be colored to reflect the properties of the time points associated with it (e.g., ground-truth attractor labels or, when ground truth is unknown, the average brain activation level for time points associated with the node). (Bi) The attractor transition network reconstructed from simulated neural dynamics SE (the fraction of open synaptic channels; cf. Figure 2C) with k = 16 and δ = 10. (Bii) The attractor transition network reconstructed from the SE-derived BOLD signals with k = 14 and δ = 10; further parameter perturbation analysis is provided in Figure S2. The node color in panel B reflects the rank of the average brain activation level for sample points associated with each node. (Ci and Cii) The recurrence plots defined for (Bi) and (Bii), respectively. Comparing Bi and Bii to Figure 2E, and Ci and Cii to Figure 2F, we see that the reconstructions are reasonable approximations of the ground truth.
Quantitatively, we evaluate the error of approximation as the dissimilarity between the reconstructed attractor transition networks and the ground-truth transition network (Gromov–Wasserstein distance, GW; green lines in Di and Dii) and the dissimilarity between their respective recurrence plots (L2 distance; green lines in Ei and Eii). The reconstruction error from the original time series is significantly lower than that of randomly permuted time series (gray bars, null distribution; red area, its 95% confidence interval).
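The pipeline in (A) can be sketched in a few dozen lines. The following is a simplified stand-in, not the authors' released implementation: in particular, this sketch applies the δ-contraction using only the spatial kNN edges and draws the temporal edges between the resulting supernodes afterward.

```python
import math

def temporal_mapper(X, k=3, delta=1):
    """Simplified sketch of Figure 3A (not the authors' implementation).

    X: list of time points, each a list of ROI activations.
    1. Undirected spatial kNN graph, skipping temporally adjacent pairs.
    2. Contract nodes within spatial path length delta of each other.
    3. Add directed temporal edges t -> t+1 between the supernodes.
    """
    n = len(X)
    spatial = {i: set() for i in range(n)}
    for i in range(n):
        order = sorted((j for j in range(n) if abs(i - j) > 1),
                       key=lambda j: math.dist(X[i], X[j]))
        for j in order[:k]:            # k nearest non-adjacent neighbors
            spatial[i].add(j)
            spatial[j].add(i)

    parent = list(range(n))            # union-find for node contraction
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    for i in range(n):                 # BFS to depth delta, then merge
        frontier, seen = {i}, {i}
        for _ in range(delta):
            frontier = {v for u in frontier for v in spatial[u]} - seen
            seen |= frontier
        for j in seen:
            ri, rj = find(i), find(j)
            if ri != rj:
                parent[ri] = rj

    label = {i: find(i) for i in range(n)}
    supernodes = {}
    for i in range(n):
        supernodes.setdefault(label[i], []).append(i)
    supernodes = {r: tuple(m) for r, m in supernodes.items()}
    edges = {(label[t], label[t + 1])  # "arrows of time" between supernodes
             for t in range(n - 1) if label[t] != label[t + 1]}
    return supernodes, edges
```

As a sanity check, sixteen time points forming two well-separated spatial clusters (the first eight near one state, the last eight near another) contract to two supernodes joined by a single directed edge pointing from the earlier cluster to the later one.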
Figure 4.
Comparisons between reconstructed transition networks, BOLD, and dFC. (A and B) The recurrence plots of the ground-truth transition network (A-left, reproduced from Figure 2F) and the control parameter G (B), respectively. They provide a basis for comparing the reconstructed transition network using the Temporal Mapper (T-mapper) (C), the corresponding BOLD signal (D), and dFC (E). The difference between the ground-truth network (A) and the parameter G (B) reflects the organization of the underlying dynamic landscape. The greatest distinction is that the recurrence plot A is highly asymmetric compared to B. The lack of symmetry in A reflects the path dependency and hysteresis of the underlying dynamical system. From visual inspection, the reconstructed transition network (C) is the best approximation of the ground-truth network (A), especially for the asymmetric features. In contrast, the raw BOLD (D) clearly follows G (B), though some transitions are also visible. dFC (computed from BOLD in 30-TR windows) is neither an obvious representation of the ground-truth network nor of the parameter G. Quantitatively, we computed the L2 and GW distance from each recurrence plot (C, D, E, B) to the ground truth (box in A). For both measures, Temporal Mapper produces the recurrence plot most similar to ground truth, while dFC produces the most dissimilar. (F–H) compare the reconstructed network (H) more directly to the ground-truth network (G) and the parameter G (F) in terms of the attractors visited at each point in time (only attractors that persisted for more than 5 TRs are shown). Colors in F and G reflect the attractor indices of the ground truth (y-axis of G) ordered by the global average brain activity (i.e., mean SE) associated with each attractor, as shown in Figure 2D. Similarly, state dynamics in the T-mapper reconstructed network (H) are ordered and colored by the global average of the simulated BOLD (rank) associated with each node.
Gray areas highlight the sequence of state transitions that distinguishes nonlinear brain dynamics (G, H) from the continuous change of the control parameter (F). (I) Comparison of the T-mapper reconstructed transition network, BOLD, and dFC by the row/column averages of the corresponding recurrence plots (C–E). Since BOLD and dFC recurrence plots are symmetrical, their row and column averages are identical (red trace for BOLD, yellow trace for dFC in I). For T-mapper reconstructed transition network, the row average is the source distance (average distance from the current state to all other states; blue trace), and the column average is the sink distance (average distance from all other states to the current state; purple trace). See text for details.
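The recurrence plots in panels A–E compare time points by the shortest path length between the states visited at those times, and the source/sink distances in (I) are their row and column means. A minimal sketch (function names illustrative) using Floyd-Warshall:

```python
def shortest_paths(nodes, edges):
    """All-pairs shortest path lengths in a directed graph (Floyd-Warshall)."""
    INF = float("inf")
    d = {(a, b): 0 if a == b else INF for a in nodes for b in nodes}
    for a, b in edges:
        d[(a, b)] = 1
    for m in nodes:
        for a in nodes:
            for b in nodes:
                if d[(a, m)] + d[(m, b)] < d[(a, b)]:
                    d[(a, b)] = d[(a, m)] + d[(m, b)]
    return d

def recurrence_plot(state_seq, edges):
    """Matrix of network distances between the states at each pair of times."""
    nodes = sorted(set(state_seq))
    d = shortest_paths(nodes, edges)
    return [[d[(a, b)] for b in state_seq] for a in state_seq]

# Directed 3-cycle 0 -> 1 -> 2 -> 0, visited in order. Going "backward"
# means going the long way around the cycle, so the plot is asymmetric,
# which is the signature of path dependency discussed in the text.
R = recurrence_plot([0, 1, 2, 0], {(0, 1), (1, 2), (2, 0)})
source = [sum(row) / len(row) for row in R]      # row means: source distance
sink = [sum(col) / len(col) for col in zip(*R)]  # column means: sink distance
```

In this toy graph the distance from state 0 to state 1 is 1 but from 1 back to 0 is 2, so R is asymmetric, whereas recurrence plots of symmetric similarity measures (like raw BOLD correlation) cannot show this.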
Figure 5.
Transition networks of human fMRI data differentiate tasks and reveal transitions. (A and B) The transition networks constructed from two subjects’ fMRI data in a continuous multitask experiment as examples (Gonzalez-Castillo et al., 2015). Panel A is for subject-17, among the best task performers, and panel B for subject-12, among the worst task performers. The color of each node denotes the dominant task label of the associated time points. The corresponding recurrence plots are shown in panels D and E. Panel C shows how task TRs are distributed in the top x% highest degree nodes of the networks across all subjects (x-axis in log scale). Memory and math clearly dominate the highest degree nodes. In addition, panel F shows how task TRs are distributed over cycles of various lengths that pass through the top 2% of highest degree nodes, excluding the TRs in the high-degree nodes themselves. Rest and video dominate longer cycles. Panel G shows the average path length from each TR as a source to all other TRs (blue) or to each TR as a sink from all other TRs (red). The path length is normalized by the maximal distance for each subject. Solid lines show the averages across subjects; shaded areas show the corresponding standard errors. A smaller average distance indicates that the node being occupied is a better source (sink) for other nodes. The difference between the source distance and the sink distance is shown in H. A negative (positive) number indicates that the node occupied at the time is more of a source (sink) to other nodes.
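The degree-occupancy analysis in panel C reduces to a small computation once the network is built. Below is a hedged sketch with illustrative data structures (each node's degree, its TR count, and its dominant task label are assumed to be given):

```python
# Hedged sketch of the panel-C style analysis: the fraction of each
# task's TRs that fall inside the top-x% highest-degree nodes of a
# transition network. All names and inputs here are illustrative.
from collections import Counter

def task_share_in_hubs(node_degree, node_trs, node_task, x_pct):
    """Fraction of TRs per task inside the top x% highest-degree nodes."""
    ranked = sorted(node_degree, key=node_degree.get, reverse=True)
    n_top = max(1, round(len(ranked) * x_pct / 100))
    hubs = set(ranked[:n_top])         # the top-x% highest-degree nodes
    in_hub, total = Counter(), Counter()
    for node, trs in node_trs.items():
        total[node_task[node]] += trs
        if node in hubs:
            in_hub[node_task[node]] += trs
    return {task: in_hub[task] / total[task] for task in total}

# Toy network: two hub nodes carrying math/memory TRs, two peripheral
# nodes carrying rest/video TRs, queried at x = 50%.
shares = task_share_in_hubs(
    node_degree={"a": 5, "b": 4, "c": 1, "d": 1},
    node_trs={"a": 10, "b": 10, "c": 10, "d": 10},
    node_task={"a": "math", "b": "memory", "c": "rest", "d": "video"},
    x_pct=50,
)
```

In this toy case all math and memory TRs sit in the hubs while rest and video TRs sit entirely outside them, the same qualitative pattern the figure reports for the real networks.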
Figure 6.
Features of transition networks predict behavioral performance. (A–C) The overall task performance is associated with separations between the high-cognitive demand tasks (math and memory) and low-cognitive demand tasks (video and rest) over the transition network. The node-task separation is measured by the fraction of memory and math TRs in the top 2% highest degree nodes of the transition networks, which also measures the preference of video and rest for low-degree nodes (cf. Figure 5C). Subjects with a greater node-task separation have a greater percentage of correct responses (A), a faster reaction time (B), and fewer missed trials (C) across all tasks. (D–F) The length distributions of the cycles passing through the high-degree nodes. Solid lines indicate the number of cycles at each length averaged across subjects, who are split into two groups (red vs. blue) by the median of the percentage of correct responses (D), reaction time (E), or the percentage of missed trials (F). Shaded areas indicate the corresponding standard errors. An abundance of intermediate-length cycles is associated with slower reaction time (E). There are no length-specific effects on the percentage of correct responses (D) or missed trials (F). See text for related main effects (*p < 0.05, **p < 0.01, ***p < 0.001, with Tukey-HSD for multiple comparisons).
