Nat Neurosci. 2022 Mar;25(3):358-368. doi: 10.1038/s41593-022-01020-w. Epub 2022 Mar 7.

Neurons detect cognitive boundaries to structure episodic memories in humans


Jie Zheng et al. Nat Neurosci. 2022 Mar.

Abstract

While experience is continuous, memories are organized as discrete events. Cognitive boundaries are thought to segment experience and structure memory, but how this process is implemented remains unclear. We recorded the activity of single neurons in the human medial temporal lobe (MTL) during the formation and retrieval of memories with complex narratives. Here, we show that neurons responded to abstract cognitive boundaries between different episodes. Boundary-induced neural state changes during encoding predicted subsequent recognition accuracy but impaired event order memory, mirroring a fundamental behavioral tradeoff between content and time memory. Furthermore, the neural state following boundaries was reinstated during both successful retrieval and false memories. These findings reveal a neuronal substrate for detecting cognitive boundaries that transform experience into mnemonic episodes and structure mental time travel during retrieval.


Conflict of interest statement

The authors declare no competing interests.

Figures

Extended Data Fig. 1. Electrode locations in MNI coordinates, Related to Fig. 1
a-c, Each dot marks the location of a microwire bundle in the amygdala (cyan), hippocampus (yellow), or parahippocampus (red) on which at least one event or boundary cell was recorded; the same locations are shown on a template brain in Fig. 1e. Coordinates are in Montreal Neurological Institute (MNI) 152 space, plotted on the CIT168 brain template in axial (a), coronal (b), and sagittal (c) views (see Methods).
Extended Data Fig. 2. Subjects’ performance in the scene recognition task did not differ significantly across boundary types, Related to Fig. 2
a-c, Behavior quantified by accuracy (a), reaction time (b), and confidence level (c) across all trials. Results are shown for boundary types NB (green), SB (blue), and HB (red) during the scene recognition task. The horizontal dashed line in (a) shows chance level (0.5) and in (c) the maximum possible confidence value (3 = high confidence). Each dot represents one recording session. Black lines in (a-c) denote the mean across all recording sessions. One-way ANOVA between NB/SB/HB, degrees of freedom = (2, 57).
Extended Data Fig. 3. Boundary cells and event cells do not respond to clip onsets and clip offsets during encoding, Related to Fig. 3
a, Responses during the encoding stage from the same example boundary cells shown in Fig. 3a and Fig. 3b, aligned to clip onsets. b, Firing rates of all 42 boundary cells (solid and dashed arrows denote the examples in (a)) during the encoding stage aligned to clip onsets, averaged over trials within each boundary type and normalized to each neuron’s maximum firing rate throughout the entire task (see color scale on bottom). c, Responses during the encoding stage from the same example boundary cells shown in (a), aligned to clip offsets. d, Firing rates of all 42 boundary cells during the encoding stage aligned to clip offsets, in the same format as (b). e, Responses during the encoding stage from the same example event cells shown in Fig. 3e and Fig. 3f, aligned to clip onsets. f, Firing rates of all 36 event cells (solid and dashed arrows denote the examples in (e)) during the encoding stage aligned to clip onsets, in the same format as (b). g, Responses during the encoding stage from the same example event cells shown in (e), aligned to clip offsets. h, Firing rates of all 36 event cells during the encoding stage aligned to clip offsets, in the same format as (b). For (a), (c), (e), (g), top: raster plot color coded by boundary type (green: NB; blue: SB; red: HB); bottom: post-stimulus time histogram (bin size = 200 ms, step size = 2 ms; shaded areas represent ± s.e.m. across trials). (b) and (f) are reproduced from Fig. 3d and Fig. 3h for comparison.
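For readers who want to reproduce the smoothed histograms, here is a minimal sketch of the sliding-bin PSTH used throughout these captions (a 200 ms bin advanced in 2 ms steps), assuming spike times are stored per trial in seconds relative to the alignment event; the function name and data layout are illustrative, not taken from the authors’ code:

```python
import numpy as np

def sliding_psth(spike_trains, t_start=-1.0, t_end=3.0,
                 bin_size=0.2, step=0.002):
    """Firing rate (Hz) in a 200 ms window slid in 2 ms steps.

    spike_trains: list of 1-D arrays of spike times (s), one per trial,
    aligned to the event of interest (boundary, clip onset, ...).
    Returns (bin_centers, mean_rate, sem) across trials.
    """
    centers = np.arange(t_start + bin_size / 2, t_end - bin_size / 2, step)
    rates = np.empty((len(spike_trains), len(centers)))
    for i, spikes in enumerate(spike_trains):
        spikes = np.asarray(spikes, dtype=float)
        for j, c in enumerate(centers):
            n = np.sum((spikes >= c - bin_size / 2) & (spikes < c + bin_size / 2))
            rates[i, j] = n / bin_size  # spike count in window -> Hz
    mean = rates.mean(axis=0)
    sem = rates.std(axis=0, ddof=1) / np.sqrt(len(spike_trains))
    return centers, mean, sem
```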
Extended Data Fig. 4. Boundary cells and event cells do not respond to image onsets and offsets during scene recognition and time discrimination, Related to Fig. 3
a-b, Responses during scene recognition from the same example boundary cells shown in Fig. 3a and Fig. 3b, aligned to stimulus onset. c, Firing rates of all 42 boundary cells (solid and dashed arrows denote the examples in (a) and (b)) during scene recognition aligned to stimulus onset, averaged over trials within each boundary type and normalized to each neuron’s maximum firing rate throughout the entire task (see color scale on bottom). d-e, Responses during time discrimination from the same example boundary cells shown in (a and b), aligned to stimulus onset. f, Firing rates of all 42 boundary cells during time discrimination aligned to stimulus onset, in the same format as (c). g-h, Responses during scene recognition from the same example event cells shown in Fig. 3e and Fig. 3f, aligned to stimulus onset. i, Firing rates of all 36 event cells (solid and dashed arrows denote the examples in (g) and (h)) during scene recognition aligned to stimulus onset, in the same format as (c). j, Responses during time discrimination from the same example event cells shown in (g and h), aligned to stimulus onset. k, Firing rates of all 36 event cells during time discrimination aligned to stimulus onset, in the same format as (f). For (a), (b), (d), (e), (g), (h), (j), (k), top: raster plot color coded by boundary type (green: NB; blue: SB; red: HB); bottom: post-stimulus time histogram (bin size = 200 ms, step size = 2 ms; shaded areas represent ± s.e.m. across trials).
Extended Data Fig. 5. Neurons that respond to clip onsets and clip offsets do not overlap with boundary and event cells, Related to Fig. 3
a-b, Responses during the encoding stage from an example clip onset-responsive cell located in the amygdala, aligned to clip onsets (a) and boundaries (b). Top: raster plots. Bottom: post-stimulus time histogram (bin size = 200 ms, step size = 2 ms; shaded areas represent ± s.e.m. across trials). A cell was considered a clip onset cell if its firing rate differed significantly between the 1 s windows immediately before and after clip onset (p < 0.05, one-tailed permutation t-test). c-d, Responses during the encoding stage from an example clip offset-responsive cell located in the hippocampus, aligned to clip offsets (c) and boundaries (d). A cell was considered a clip offset cell if its firing rate differed significantly between the 1 s windows immediately before and after clip offset (p < 0.05, one-tailed permutation t-test). Same format as (a and b). e, Seventy-six out of 580 cells in the MTL qualified as clip onset-responsive cells and four out of 580 qualified as clip offset-responsive cells. None of these were also selected as either boundary or event cells.
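A hedged sketch of the selection criterion above: a one-tailed permutation test comparing paired 1 s spike counts before and after clip onset. The test statistic here is the mean paired difference rather than the t-statistic the paper names, and the tested direction (increase after onset) plus all names are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def is_onset_responsive(pre_counts, post_counts, n_perm=10_000, alpha=0.05):
    """One-tailed permutation test on paired 1 s spike counts.

    pre_counts/post_counts: spikes in the 1 s window before/after
    clip onset, one entry per trial. Under the null the two labels
    are exchangeable within a trial, so the sign of each paired
    difference is flipped at random to build the null distribution.
    """
    diffs = np.asarray(post_counts, float) - np.asarray(pre_counts, float)
    observed = diffs.mean()
    signs = rng.choice([-1.0, 1.0], size=(n_perm, diffs.size))
    null = (signs * diffs).mean(axis=1)
    # one-tailed p: fraction of null means at least as large as observed
    p = (np.sum(null >= observed) + 1) / (n_perm + 1)
    return p < alpha, p
```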
Extended Data Fig. 6. Responses of boundary cells during encoding grouped by memory outcomes from the time discrimination task, Related to Fig. 4
a1-a2, Response of the same example boundary cell in Fig. 4a and Fig. 4b. During encoding, this cell responded to SB and HB transitions regardless of whether the temporal order of the clip was later correctly (a1) or incorrectly (a2) retrieved in the time discrimination test. Shaded areas represent ± s.e.m. across trials. b1-b2, Left: timing of spikes from the same boundary cell shown in (a1 and a2) relative to theta phase calculated from the local field potentials, for clips whose temporal order was later correctly (b1) or incorrectly (b2) retrieved. Right: phase distribution of spike times within the [0, 1] s window following the middle of the clip (NB) or the boundary (SB, HB), for clips whose temporal order was later correctly (b1) or incorrectly (b2) retrieved. c-d, Population summary for all 42 boundary cells. c, Z-scored firing rate (0–1 s after boundaries during encoding) for each boundary type did not differ between clips whose temporal order was later correctly (color filled) vs. incorrectly (empty) retrieved. d, Mean resultant length (MRL) of spike times (relative to theta phase, 0–1 s after boundaries during encoding) across all boundary cells for each boundary type did not differ between clips whose temporal order was later correctly (color filled) vs. incorrectly (empty) retrieved. Each dot represents one boundary cell. Black lines in (c and d) denote the mean across all boundary cells. One-tailed permutation t-test, degrees of freedom = (1, 82).
Extended Data Fig. 7. Responses of event cells during encoding grouped by memory outcomes from the scene recognition stage, Related to Fig. 4
a1-a2, Response of the same example event cell in Fig. 4e and Fig. 4f. During encoding, this cell responded to HB transitions regardless of whether frames were later correctly (a1) or incorrectly (a2) recognized in the scene recognition task. Shaded areas represent ± s.e.m. across trials. b1-b2, Left: timing of spikes from the same event cell shown in (a1-a2) relative to theta phase calculated from the local field potentials, for frames that were later correctly (b1) or incorrectly (b2) recognized. Right: phase distribution of spike times within the [0, 1] s window following the middle of the clip (NB) or the boundary (SB, HB), for frames that were later correctly (b1) or incorrectly (b2) recognized. c-d, Population summary for all 36 event cells. c, Z-scored firing rate (0–1 s after boundaries during encoding) for each boundary type did not differ between frames that were later correctly (color filled) vs. incorrectly (empty) recognized. d, Mean resultant length (MRL) of spike times (relative to theta phase, 0–1 s after boundaries during encoding) across all event cells for each boundary type did not differ between frames that were later correctly (color filled) vs. incorrectly (empty) recognized. Each dot represents one event cell. Black lines in (c and d) denote the mean across all event cells. One-tailed permutation t-test, degrees of freedom = (1, 70).
Extended Data Fig. 8. Neural state changes following soft and hard boundaries shown for individual subjects, Related to Fig. 5
Multidimensional distance (MDD, see Fig. 5d–g for definition) as a function of time, aligned to the middle of the clip (green: NB) or the boundaries (blue: SB; red: HB). MDD is shown for all MTL cells within each subject (e.g., “Sub1 in B1 E2 O32” denotes MDD computed from 1 boundary cell, 2 event cells, and 32 other MTL cells in subject 1). Shaded areas represent ± s.e.m. across trials.
Extended Data Fig. 9. Clip onset-responsive neurons respond to both correct and incorrect targets during scene recognition, Related to Fig. 6
a-b, Responses during scene recognition from an example clip onset-responsive cell (see definition in Extended Data Fig. 5) located in the amygdala, aligned to image onsets in correctly recognized target (a) and forgotten target (b) trials. Top: raster plots. Bottom: post-stimulus time histogram (bin size = 200 ms, step size = 2 ms; shaded areas represent ± s.e.m. across trials). c, Comparison (across all 76 identified clip onset-responsive neurons) of mean firing rates averaged within [0, 1.5] s after image onset for remembered vs. forgotten targets. On each box, the central mark indicates the mean across all clip onset-responsive neurons, and the bottom and top edges of the box indicate the 25th and 75th percentiles, respectively. The whiskers extend to the most extreme data points not considered outliers, and outliers are plotted individually with the ‘+’ marker symbol. One-way ANOVA, degrees of freedom = (1, 150).
Fig. 1. Experiment and recording locations.
a, Encoding task. Subjects watched 90 video clips (~8 s each, no audio) containing either no boundary (NB, continuous movie shot), a soft boundary (SB, cut to a new scene within the same movie; 1 to 3 SBs per clip), or a hard boundary (HB, cut to a different movie; 1 HB per clip). Every 4–8 clips, subjects were prompted to answer a Yes/No question about the clip content, together with a confidence rating (Methods). RT = reaction time. b, Example boundaries (visual features of boundaries in Supplementary Table 1). Owing to copyright restrictions, the images shown differ from those used in the experiment. c, Scene recognition memory task. Subjects indicated whether a static image was New or Old (seen during the encoding task), together with a confidence rating. d, Time discrimination task. Subjects indicated which of two frames they saw first during the encoding task, together with a confidence rating. e, Recording locations of the 39 microwire bundles that contained at least one boundary/event neuron (MNI coordinates in Supplementary Table 3 and Extended Data Fig. 1) across all subjects (subject information in Supplementary Table 2), in the amygdala (red), hippocampus (blue), or parahippocampal gyrus (cyan), rendered on a template brain. Each dot represents the location of a microwire bundle.
Fig. 2. Behavior.
Hard boundaries impaired time discrimination memory, while soft and hard boundaries improved scene recognition memory for frames close to them. a-c, Performance in the time discrimination task (see also Supplementary Figs. 1–2) quantified by accuracy in (a) (F(2, 57) = 51.33, p = 2×10−13; one-tailed ANOVA), reaction time in (b) (F(2, 57) = 14.25, p = 10×10−6; one-tailed ANOVA), and mean confidence level in (c) (F(2, 57) = 20.41, p = 2×10−7; one-tailed ANOVA) across all trials for NB (green), SB (blue), and HB (red). Behavioral data for the scene recognition task are shown in Extended Data Fig. 2. d-f, Scene recognition accuracy as a function of time elapsed between the target frame and its nearest past boundary (the distance effect for time discrimination accuracy and future boundaries is shown in Supplementary Fig. 3), plotted separately for NB (d), SB (e), and HB (f). For NB clips, time from the past boundary is measured relative to the middle of the clip. Each dot represents one recording session in (a-c) and one clip in (d-f). Black lines in (a-c) denote the mean of the results, and colored lines in (d-f) are fitted linear regression lines. ***P < 0.001.
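The fitted lines in (d-f) are ordinary least-squares regressions of per-clip accuracy on time from the nearest past boundary. A minimal sketch of such a fit; the arrays below are made up purely to show the mechanics and are not the study’s data:

```python
import numpy as np
from scipy import stats

# Per-clip recognition accuracy vs. time (s) from the nearest past
# boundary to the target frame (illustrative values only).
time_from_boundary = np.array([0.4, 0.8, 1.2, 1.6, 2.0, 2.4, 2.8])
accuracy = np.array([0.92, 0.88, 0.86, 0.81, 0.78, 0.76, 0.71])

fit = stats.linregress(time_from_boundary, accuracy)
print(f"slope = {fit.slope:.3f}/s, r = {fit.rvalue:.2f}, p = {fit.pvalue:.4f}")
```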
Fig. 3. Boundary cells and event cells demarcate different types of episodic transitions.
a-b, Responses during the encoding stage from two example boundary cells located in the parahippocampal gyrus and hippocampus, respectively (spike sorting quality of all detected cells shown in Supplementary Fig. 5). Boundary cells responded to both SB (blue) and HB (red) transitions. Responses are aligned to the middle point of the clip (NB, green) or to the boundary (SB, HB). Top: raster plots. Bottom: post-stimulus time histogram (bin size = 200 ms, step size = 2 ms; shaded areas represent ± s.e.m. across trials). Insets: all extracellular spike waveforms (gray) and their mean (black). c-d, Firing rates of all 42 boundary cells (solid and dashed arrows denote the examples in (a) and (b), respectively) during the encoding stage aligned to the boundaries in (c) or to clip onsets in (d), averaged over trials within each boundary type and normalized to each neuron’s maximum firing rate over the entire task recording (see color scale on bottom). e-f, Responses during the encoding stage from two example event cells located in the hippocampus and amygdala, respectively. Event cells responded to HB (red) but not to SB (blue) or NB transitions. Post-stimulus time histogram (bin size = 200 ms, step size = 2 ms; shaded areas represent ± s.e.m. across trials). g-h, Firing rates of all 36 event cells (solid and dashed arrows denote the examples in (e) and (f), respectively) during the encoding stage, in the same format as (c) and (d). Neither boundary cells nor event cells in the medial temporal lobe responded to clip onsets (d, h) or clip offsets (Extended Data Fig. 3) during encoding, or to image onsets and offsets during scene recognition and time discrimination (Extended Data Fig. 4). No significant difference in saccades was found after clip onsets vs. after boundary transitions in the one subject for whom eye movement data could be recorded simultaneously with the neurophysiological data (Supplementary Fig. 7). i, Latency analysis. Firing rate during HB transitions (to which both boundary cells and event cells responded) reached its peak earlier for boundary cells (pink) than for event cells (purple). Shown is the average z-scored firing rate, normalized using the mean and standard deviation of the firing rates and aligned to HB (bin size = 200 ms, step size = 2 ms; shaded areas represent ± s.e.m. across all boundary cells or event cells). j, Peak times of the average firing-rate traces of all boundary cells (pink) and all event cells (purple) (F(1, 76) = 274.78, p = 6×10−27, one-tailed ANOVA). Each dot represents one boundary cell (pink) or one event cell (purple). Black lines denote the mean across all boundary cells or event cells. ***P < 0.001, one-way ANOVA, degrees of freedom = (1, 76). The spatial distribution of boundary cells and event cells is shown in Supplementary Table 4.
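The latency analysis in (i-j) reduces each cell to the peak time of its trial-averaged, z-scored firing-rate trace. A minimal sketch under the z-scoring convention stated in the caption (normalize the trace by its own mean and s.d.); the function name and data layout are assumptions:

```python
import numpy as np

def peak_latency(trial_rates, time_axis):
    """Peak time of a cell's trial-averaged, z-scored firing-rate trace.

    trial_rates: (n_trials, n_timepoints) firing rates aligned to HB.
    The average trace is z-scored with its own mean and s.d.; the
    time of the maximum is the cell's peak latency (one value per
    cell, as plotted in panel j).
    """
    avg = np.asarray(trial_rates, dtype=float).mean(axis=0)
    z = (avg - avg.mean()) / avg.std(ddof=1)
    return time_axis[int(np.argmax(z))]
```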
Fig. 4. Responses of boundary cells and event cells during encoding correlate with later retrieval success.
a-d, Responses of boundary cells during encoding grouped by subjects’ subsequent memory performance in the scene recognition task. a1-a2, Boundary cell recorded in the hippocampus. During encoding, this cell responded more strongly to SB and HB transitions than to NB if the frame following the boundary in that trial was correctly identified during the scene recognition task (a1) compared to incorrect trials (a2). Format as in Fig. 3. Shaded areas represent ± s.e.m. across trials. b1-b2, Left: timing of spikes from the same boundary cell shown in (a) relative to theta phase calculated from the local field potentials, for clips whose frames were later remembered (b1) or forgotten (b2). Right: phase distribution of spike times in the 1 s period following the middle of the clip (NB) or the boundary (SB, HB), for clips whose frames were remembered (b1) or forgotten (b2). c-d, Population summary for all 42 boundary cells. Black lines denote the mean across all 42 boundary cells. c, Z-scored firing rate (0–1 s after boundaries during encoding) differed significantly between boundaries after which frames were remembered (color filled) vs. forgotten (empty) for both SB and HB (SB: F(1, 82) = 82.93, p = 4×10−14; HB: F(1, 82) = 156.9, p = 1×10−20; NB: F(1, 82) = 1.18, p = 0.28; one-tailed ANOVA). d, Mean resultant length (MRL) of spike times (i.e., the sum of unit vectors whose angles are the spike times expressed as theta phases, 0–1 s after boundaries during encoding, divided by the total number of vectors; range [0, 1]: 0 = uniform distribution, i.e., neurons fire at random theta phases; 1 = unimodal distribution, i.e., neurons fire at the same theta phase) across all boundary cells for each boundary type did not differ significantly between correct (color filled) and incorrect (empty) clips. e-h, Responses of event cells during encoding grouped by subjects’ subsequent memory performance in the time discrimination task. e1-e2, Example event cell recorded in the hippocampus that responded to HB transitions regardless of whether the temporal order of the clip was later correctly (e1) or incorrectly (e2) recalled in the time discrimination task. Shaded areas represent ± s.e.m. across trials. Format as in (a), but with clips grouped by memory outcome in the time discrimination task. f1-f2, Spike timing of the same event cell shown in (e1-e2) relative to theta phase, plotted for correct (f1) and incorrect (f2) trials. Format as in (b), but with clips grouped by memory outcome in the time discrimination task. g-h, Population summary for all 36 event cells. Black lines denote the mean across all 36 event cells. g, Z-scored firing rate (0–1 s after boundaries during encoding) did not differ significantly between later correctly (color filled) and incorrectly (empty) remembered temporal orders for any of the three boundary types. h, MRL of spike times (relative to theta phase, 0–1 s after boundaries during encoding) was significantly larger after SB and HB transitions if the temporal order of the clip was correctly recalled (color filled) compared to incorrectly recalled (empty) (SB: F(1, 70) = 81.55, p = 2×10−13; HB: F(1, 70) = 60.79, p = 4×10−11; NB: F(1, 70) = 1.53, p = 0.22; one-tailed ANOVA). Each dot represents one boundary cell (in c and d) or one event cell (in g and h). Black lines (in c, d, g, h) denote the mean of the results.
Note that in (a-d) the neural responses of boundary cells reflect whether subjects remembered or forgot target frames that followed a boundary. Results computed from trials grouped by subjects’ memory performance for the target frame before a boundary are shown in Supplementary Fig. 8. ***P < 0.001, one-way ANOVA, degrees of freedom = (1, 82) for (c and d) and (1, 70) for (g and h).
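The MRL defined in (d) follows directly from the caption’s formula: each spike contributes a unit vector at its theta phase, and the MRL is the length of the vector mean. A minimal sketch (function name illustrative):

```python
import numpy as np

def mean_resultant_length(spike_phases):
    """Mean resultant length (MRL) of spike times relative to theta phase.

    spike_phases: theta phases (radians) at which spikes occurred,
    taken 0-1 s after the boundary. Each spike is a unit vector
    exp(i * phase); the MRL is the length of their mean.
    MRL = 0: spikes uniformly spread over theta phases;
    MRL = 1: every spike at the same theta phase.
    """
    phases = np.asarray(spike_phases, dtype=float)
    return float(np.abs(np.exp(1j * phases).mean()))

# Illustrative check: tightly phase-locked spikes give MRL near 1,
# uniformly scattered spikes give a small MRL.
rng = np.random.default_rng(0)
print(mean_resultant_length(np.pi / 4 + 0.1 * rng.standard_normal(200)))  # ~0.99
print(mean_resultant_length(rng.uniform(0, 2 * np.pi, 200)))              # ~0.06
```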
Fig. 5. Population neural state shift magnitude following episodic transitions reflects subjects’ subsequent memory performance.
a-c, Trajectories in the neural state space formed by the top three principal components (PCs with most explained variance: PC1 = 26.05%, PC2 = 10.89%, PC3 = 6.69%) summarizing the activity of all MTL cells during the encoding stage for clips containing NB (a), SB (b), and HB (c). Each data point indicates the neural state at a specific time relative to boundary onset (line thickness indicates time; see scale on bottom). Black dots mark the time of the boundary (SB, HB) or the middle of the clip (NB). d-g, Multidimensional distance (MDD, i.e., Euclidean distance relative to the boundary in the PC space formed by all PCs covering ≥ 99% of explained variance) as a function of time, aligned to the middle of the clip (green: NB) or the boundaries (blue: SB; red: HB). MDD is shown for all MTL cells (d; n = 580, top 55 PCs), all boundary cells (e; n = 42, top 27 PCs), all event cells (f; n = 36, top 26 PCs), and all other MTL cells (i.e., non-boundary/event cells in the MTL; g; n = 502, top 58 PCs). Shaded areas represent ± s.e.m. across trials. Neural state shifts within each subject are shown in Extended Data Fig. 8. h, Latency analysis. The time at which the MDD shown in (d-g) reached its peak following HB (red lines) differed significantly across the different groups of cells (F(3, 76) = 103.96, p = 8×10−27, one-tailed ANOVA). Black lines denote the mean across the different cell populations. i-j, Correlation between distance traveled in state space following boundaries and behavior. i, Positive correlation between AUC MDD (sum of Euclidean distances within the [0, 1] s window after boundaries in PC space) and scene recognition accuracy. Dots mark the accuracy in the scene recognition task (x-axis) and the AUC MDD during encoding (y-axis) of the target frames, plotted separately for frames following NB (green: r = 0.214, p = 0.256, Pearson correlation), SB (blue: r = 0.653, p = 0.002, Pearson correlation), and HB (red: r = 0.565, p = 0.009, Pearson correlation). j, Negative correlation between AUC MDD and time discrimination accuracy, plotted in the same format as (i) for NB (green: r = 0.212, p = 0.261, Pearson correlation), SB (blue: r = −0.273, p = 0.244, Pearson correlation), and HB (red: r = −0.677, p = 0.001, Pearson correlation). ***P < 0.001, one-way ANOVA, degrees of freedom = (3, 72) in (h).
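A sketch of the MDD computation as defined in (d-g): project trial-averaged population activity onto the PCs covering ≥ 99% of variance, then take the Euclidean distance of each time point’s state from the state at the boundary; the AUC MDD used in (i-j) sums that distance over [0, 1] s. The data layout and names are assumptions, not the authors’ code:

```python
import numpy as np

def mdd(pop_rates, time_axis, t_boundary=0.0, var_keep=0.99):
    """Multidimensional distance (MDD) in PC space.

    pop_rates: (n_timepoints, n_cells) trial-averaged population
    activity aligned to the boundary (or clip middle for NB).
    PCA keeps the smallest number of PCs covering >= 99% of the
    variance; MDD(t) is the Euclidean distance between the state at
    time t and the state at the boundary. Also returns the AUC MDD:
    summed distance within [0, 1] s after the boundary.
    """
    X = pop_rates - pop_rates.mean(axis=0)            # center each cell
    U, s, Vt = np.linalg.svd(X, full_matrices=False)  # PCA via SVD
    explained = s**2 / np.sum(s**2)
    k = int(np.searchsorted(np.cumsum(explained), var_keep)) + 1
    scores = X @ Vt[:k].T                             # project onto top k PCs
    ref = scores[int(np.argmin(np.abs(time_axis - t_boundary)))]
    dist = np.linalg.norm(scores - ref, axis=1)
    mask = (time_axis >= t_boundary) & (time_axis <= t_boundary + 1.0)
    return dist, float(dist[mask].sum())
```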
Fig. 6. Reinstatement of neural context after boundaries during recognition.
a-d, Single-subject example. Color code indicates the correlation between the population response during scene recognition (0–1.5 s relative to stimulus onset) and the encoding period (sliding window of 1.5 s, 100 ms step size). Correlations are aligned to the middle of the clip (NB) or the boundaries (SB, HB) and are shown separately for correctly recognized familiar targets (a), forgotten targets (b), correctly recognized novel (not seen) foils (c), and incorrectly recognized foils (false positives, d) in the scene recognition task. Correlation plots for the remaining subjects are in Supplementary Figs. 10–11. e-h, Population summary. Correlation coefficients as in (a-d), averaged across all subjects for NB (green), SB (blue), and HB (red) trials. Shaded areas represent ± s.e.m. across subjects. The grey dashed horizontal lines denote the significance threshold (p < 0.01, one-tailed permutation test, Methods). The same analyses after excluding boundary cells and event cells, and for boundary cells and event cells only, are in Supplementary Fig. 12. i, The reinstated neural context lies between the boundary and the tested frame. For trials with target frames extracted after boundaries, the time distance from the correlation-coefficient peak to the time of SB and HB (filled circles; SB: −1.26 ± 0.38 s, t19 = 14.68, p = 8×10−12; HB: −1.28 ± 0.48 s, t19 = 11.80, p = 3×10−10; one-tailed t-test) or to the target frames (empty circles; SB: 1.53 ± 0.61 s, t19 = 11.18, p = 8×10−10; HB: 1.72 ± 1.03 s, t19 = 7.44, p = 5×10−7; one-tailed t-test). Negative/positive values denote the time of boundaries (negative) or target frames (positive) relative to when the correlation coefficient reaches its peak. Asterisks indicate the significance of the peak correlation leading the time of the target frames. The same analyses with correlations computed using different window sizes are in Supplementary Fig. 13. j-m, Population summary (confidence). Reinstatement differed between frames remembered with high (filled circles) and low (empty circles) confidence for “old” decisions (correct targets and incorrect foils) in the SB and HB conditions, but not for “new” decisions (correct foils and incorrect targets) or the NB condition, regardless of whether they were correct or incorrect (SB, correct targets: p = 5×10−10; HB, correct targets: p = 4×10−6; NB, correct targets: p = 0.79; SB, incorrect foils: p = 5×10−7; HB, incorrect foils: p = 5×10−5; NB, incorrect foils: p = 0.18; one-tailed t-test). Correlation coefficients as in (e-h), averaged over [0, 1] s after boundaries. n-o, Population summary (target–foil similarity). Correlation coefficients versus similarity ratings between targets and foils, plotted for correctly (n; F(2, 54) = 2.182, p = 0.144; one-tailed ANOVA) and incorrectly recognized foils (o; F(2, 54) = 10.67, p = 1×10−4; one-tailed ANOVA). Each dot represents one recording session. Black lines in (i-o) denote the mean across all recording sessions. ***P < 0.001.
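A sketch of the encoding-retrieval similarity in (a-h): the population vector during recognition (mean rate 0–1.5 s after image onset) is Pearson-correlated with encoding activity averaged in a 1.5 s window slid in 100 ms steps around the boundary. All names and the data layout are assumptions for illustration:

```python
import numpy as np

def reinstatement_map(enc_rates, rec_vector, time_axis, win=1.5, step=0.1):
    """Encoding-retrieval similarity around a boundary.

    enc_rates: (n_timepoints, n_cells) encoding activity aligned to
    the boundary; rec_vector: (n_cells,) mean population rate 0-1.5 s
    after image onset during scene recognition. Each 1.5 s encoding
    window (advanced in 100 ms steps) is averaged into a population
    vector and correlated with the recognition vector.
    Returns the window centers and the correlation trace.
    """
    enc_rates = np.asarray(enc_rates, dtype=float)
    starts = np.arange(time_axis[0], time_axis[-1] - win, step)
    r = np.empty(len(starts))
    for i, t0 in enumerate(starts):
        mask = (time_axis >= t0) & (time_axis < t0 + win)
        enc_vec = enc_rates[mask].mean(axis=0)
        r[i] = np.corrcoef(enc_vec, rec_vector)[0, 1]
    return starts + win / 2, r
```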
