Nat Neurosci. 2017 Jan;20(1):115-125. doi: 10.1038/nn.4450. Epub 2016 Dec 5.

Shared memories reveal shared structure in neural activity across individuals


Janice Chen et al. Nat Neurosci. 2017 Jan.

Abstract

Our lives revolve around sharing experiences and memories with others. When different people recount the same events, how similar are their underlying neural representations? Participants viewed a 50-min movie, then verbally described the events during functional MRI, producing unguided detailed descriptions lasting up to 40 min. As each person spoke, event-specific spatial patterns were reinstated in default-network, medial-temporal, and high-level visual areas. Individual event patterns were both highly discriminable from one another and similar among people, suggesting consistent spatial organization. In many high-order areas, patterns were more similar between people recalling the same event than between recall and perception, indicating systematic reshaping of percept into memory. These results reveal the existence of a common spatial organization for memories in high-level cortical areas, where encoded information is largely abstracted beyond sensory constraints, and that neural patterns during perception are altered systematically across people into shared memory representations for real-life events.


Figures

Figure 1. Experiment design and behavior
A) In Run 1, participants viewed a 50-minute movie, BBC’s Sherlock (episode 1). Images in the figure are blurred for copyright reasons; in the experiment, movies were shown at full clarity. B) In the immediately following Run 2, participants verbally recounted aloud what they recalled from the movie. Instructions to “retell what you remember in as much detail as you can” were provided before the start of the run. No memory cues, time cues, or other auditory/visual input was provided during the recall session. Speech was recorded via microphone. C) Diagram of scene durations and order for movie viewing and spoken recall in a representative participant. Each rectangle shows, for a given scene, the temporal position (location on y-axis) and duration (height) during movie viewing, and the temporal position (location on x-axis) and duration (width) during recall. D) Summary of durations and order for scene viewing and recall in all participants. Each line segment shows, for a given scene, the temporal position and duration during movie viewing and during recall; i.e., a line segment in [D] corresponds to the diagonal of a rectangle in [C]. Each color indicates a different participant (N=17). See also Tables S1, S2.
Figure 2. Pattern similarity between movie and recall
A) Schematic for within-participant movie-recall (reinstatement) analysis. BOLD data from the movie and from the recall sessions were divided into scenes, then averaged across time within-scene, resulting in one vector of voxel values for each movie scene and one for each recalled scene. Correlations were computed between matching pairs of movie/recalled scenes within participant. Statistical significance was determined by shuffling scene labels to generate a null distribution of the participant average. B) Searchlight map showing where significant reinstatement was observed; FDR correction q = 0.05, p = 0.012. Searchlight was a 5×5×5 voxel cube. C) Reinstatement values for all 17 participants in independently-defined PMC. Red circles show average correlation of matching scenes and error bars show standard error across scenes; black squares show average of the null distribution for that participant. At far right, the red circle shows the true participant average and error bars show standard error across participants; black histogram shows the null distribution of the participant average; white square shows mean of the null distribution. D) Schematic for between-participants movie-recall analysis. Same as [A], except that correlations were computed between every matching pair of movie/recall scenes between participants. E) Searchlight map showing regions where significant between-participants movie-recall similarity was observed; FDR correction q = 0.05, p = 0.007. F) Reinstatement values in PMC for each participant in the between-participants analysis, same notation as [C].
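The reinstatement analysis in [A] reduces to a few steps: average each scene's timepoints into one pattern vector, correlate matching movie/recall scene pairs, and build a null distribution by shuffling scene labels. The sketch below is a minimal illustration on synthetic data, not the authors' code; scene counts, voxel counts, and noise levels are invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def scene_patterns(bold, bounds):
    # Average BOLD timepoints within each scene -> one voxel vector per scene.
    return np.stack([bold[a:b].mean(axis=0) for a, b in bounds])

def mean_reinstatement(movie, recall):
    # Average correlation of matching movie/recall scene pairs.
    return np.mean([np.corrcoef(m, r)[0, 1] for m, r in zip(movie, recall)])

def permutation_p(movie, recall, n_perm=1000):
    # Shuffle scene labels to build a null distribution of the participant average.
    observed = mean_reinstatement(movie, recall)
    null = np.array([mean_reinstatement(movie, recall[rng.permutation(len(recall))])
                     for _ in range(n_perm)])
    return observed, (null >= observed).mean()

# Toy data: 10 scenes x 5 TRs, 200 voxels; recall patterns are noisy
# reinstatements of the scene-level movie patterns.
scenes, trs, voxels = 10, 5, 200
scene_means = rng.standard_normal((scenes, voxels))
movie_bold = np.repeat(scene_means, trs, axis=0) \
    + 0.3 * rng.standard_normal((scenes * trs, voxels))
bounds = [(i * trs, (i + 1) * trs) for i in range(scenes)]
movie = scene_patterns(movie_bold, bounds)
recall = scene_means + 0.8 * rng.standard_normal((scenes, voxels))

obs, p = permutation_p(movie, recall)
```

With noisy but genuine reinstatement, the observed average correlation sits far above the shuffled null, mirroring panel [C].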
Figure 3. Between-participants pattern similarity during spoken recall
A) Schematic for between-participants recall-recall analysis. BOLD data from the recall sessions were divided into matching scenes, then averaged across time within each scene, resulting in one vector of voxel values for each recalled scene. Correlations were computed between every matching pair of recalled scenes. Statistical significance was determined by shuffling scene labels to generate a null distribution of the participant average. B) Searchlight map showing regions where significant recall-recall similarity was observed; FDR correction at q = 0.05, p = 0.012. Searchlight was a 5×5×5 voxel cube. C) Recall-recall correlation values for all 17 participants in independently defined PMC. Red circles show average correlation of matching scenes and error bars show standard error across scenes; black squares show average of the null distribution for that participant. At far right, the red circle shows the true participant average and error bars show standard error across participants; black histogram shows the null distribution of the participant average; white square shows mean of the null distribution.
Figure 4. Classification accuracy
A) Classification of movie scenes between brains. Participants were randomly assigned to one of two groups (N=8 and N=9), an average was calculated within each group, and data were extracted for the PMC ROI. Pairwise correlations were calculated between the two group means for all 50 movie scenes. Accuracy was calculated as the proportion of scenes correctly identified out of 50. The entire procedure was repeated using 200 random combinations of two groups sized N=8 and N=9 (green markers), and an overall average was calculated (36.7%, black bar; chance level [2.0%] plotted in red). B) Classification rank for individual movie scenes (i.e., the rank of the matching scene correlation in the other group among all 50 scene correlations). Green markers show the results from each combination of two groups sized N=8 and N=9; black bars show the average over all group combinations (4.8 on average). (* indicates values passing the FDR-corrected threshold of q = 0.001.) Striped bars indicate introductory video clips at the beginning of each functional scan (see Methods). C) Classification of recalled scenes between brains. Same analysis as in (A) except that sufficient data were available for 41 scenes. Overall classification accuracy was 15.8% (black bar; chance level 2.4%). D) Classification rank for individual recalled scenes (9.5 on average). (* indicates values passing the FDR-corrected threshold of q = 0.001.)
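The between-group classification in [A] and the rank measure in [B] amount to an argmax over a scene-by-scene correlation matrix between the two group means. A minimal sketch on toy data (group structure, voxel counts, and noise levels are invented for illustration, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(1)

def classify_scenes(g1, g2):
    # Correlate each group-1 scene pattern with every group-2 pattern; a scene
    # is classified correctly when its matching scene has the highest correlation.
    z1 = (g1 - g1.mean(1, keepdims=True)) / g1.std(1, keepdims=True)
    z2 = (g2 - g2.mean(1, keepdims=True)) / g2.std(1, keepdims=True)
    corr = z1 @ z2.T / g1.shape[1]
    n = len(g1)
    accuracy = (corr.argmax(axis=1) == np.arange(n)).mean()
    # Rank of the matching scene among all candidates (1 = best).
    ranks = 1 + (corr > np.diag(corr)[:, None]).sum(axis=1)
    return accuracy, ranks

# Toy data: 50 scenes, 300 voxels; both group means share scene structure.
# Chance accuracy with 50 scenes is 1/50 = 2%.
shared = rng.standard_normal((50, 300))
group1 = shared + 0.7 * rng.standard_normal((50, 300))
group2 = shared + 0.7 * rng.standard_normal((50, 300))
acc, ranks = classify_scenes(group1, group2)
```

With a strong shared component the classifier far exceeds the 2% chance level, and the average rank of the matching scene approaches 1.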
Figure 5. Dimensionality of shared patterns
In order to quantify the number of distinct dimensions of the spatial patterns that are shared across brains and can contribute to the classification of neural responses, we used the Shared Response Model (SRM). This algorithm operates over a series of data vectors (in this case, multiple participants’ brain data) and finds a common representational space of lower dimensionality. Using SRM in the PMC region, we asked: when the data are reduced to k dimensions, how does this affect scene-level classification across brains? How many dimensions generalize from movie to recall? A) Results when using the movie data in the PMC region (movie-movie). Classification accuracy improves as the number of dimensions k increases, starting to plateau around 15, but still rising at 50 dimensions. (Chance level 0.04.) B) Results when training SRM on the movie data and then classifying recall scenes across participants in the PMC region (recall-recall). Classification accuracy improves as the number of dimensions increases, with maximum accuracy being reached at 12 dimensions. Note that there could be additional shared dimensions, unique to the recall data, that would not be accessible via these analyses.
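The study uses the probabilistic Shared Response Model (implemented in the BrainIAK toolbox). As a self-contained stand-in, the sketch below uses plain PCA (via SVD) on the averaged scene patterns to define a k-dimensional shared space, and shows the qualitative effect described above: cross-group scene classification improves as the number of dimensions k grows. This is a simplified analog on toy data, not SRM itself.

```python
import numpy as np

rng = np.random.default_rng(2)

def accuracy_vs_k(g1, g2, ks):
    # Shared components from the averaged scene-pattern matrix (PCA via SVD).
    _, _, vt = np.linalg.svd((g1 + g2) / 2, full_matrices=False)
    n = len(g1)
    accs = []
    for k in ks:
        # Project each group's scene patterns onto the top-k shared components.
        p1, p2 = g1 @ vt[:k].T, g2 @ vt[:k].T
        corr = np.corrcoef(np.vstack([p1, p2]))[:n, n:]
        accs.append(float((corr.argmax(axis=1) == np.arange(n)).mean()))
    return accs

# Toy data: 50 scenes, 300 voxels; both groups share scene structure.
shared = rng.standard_normal((50, 300))
g1 = shared + 0.7 * rng.standard_normal((50, 300))
g2 = shared + 0.7 * rng.standard_normal((50, 300))
accs = accuracy_vs_k(g1, g2, ks=[2, 5, 10, 25, 50])
```

As in the figure, accuracy is poor with very few dimensions and rises toward a plateau as k increases; the real SRM additionally learns per-participant mappings, which this PCA sketch omits.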
Figure 6. Scene-level pattern similarity between individuals
Visualization of the signal underlying pattern similarity between individuals, for fourteen scenes that were recalled by all sixteen of the participants in these groups, is presented in [B–E]. See Fig. S5 for correlation values for all scenes. A) To visualize the underlying signal, we randomly split the movie-viewing data into two independent groups of equal size (N=8 each) and averaged BOLD values across participants within each group. An average was made in the same manner for the recall data using the same two groups of eight. These group mean images were then averaged across timepoints within each scene, exactly as in the prior analyses, creating one brain image per group per scene. A sagittal view of these average brains during one representative scene (36) of the movie is shown for each group. Average activity in a posterior medial area (white box in [A]) on the same slice is shown for the fourteen different scenes for Movie Group 1 (B), Movie Group 2 (C), Recall Group 1 (D), and Recall Group 2 (E). Searchlight size is shown as a red outline. Our data indicate that cross-participant pattern alignment was strong enough to survive spatial transformation of brain data to a standard anatomical space. While the current results reveal a relatively coarse spatial structure that is shared across people, they do not preclude the existence of finer spatial structure in the neural signal that may be captured when comparisons are made within-participant or using more sensitive methods such as hyperalignment. Further work is needed to understand the factors that influence the balance of idiosyncratic and shared signals between brains. See Methods: Spatial resolution of neural signals.
Figure 7. Alteration of neural patterns from perception to recollection
A) Schematic of neural activity patterns during a movie scene being modified into activity patterns at recall. For each brain, the neural patterns while viewing a given movie scene are expressed as a common underlying pattern. Each of these Movie patterns is then altered in some manner to produce the Recall pattern. Left panel: If patterns are changed in a unique way within each person’s brain, then each person’s movie pattern is altered by adding an “alteration” pattern that is uncorrelated with the “alteration” patterns of other people. In this scenario, Recall patterns necessarily become more dissimilar to the Recall patterns of other people than to the Movie pattern. Right panel: Alternatively, if a systematic change is occurring across people, each Movie pattern is altered by adding an “alteration” pattern that is correlated with the “alteration” patterns of other people. Thus, Recall patterns for a given scene may become more similar to the Recall patterns of other people than to the Movie pattern. B) Searchlight map showing regions where recall-recall similarity was significantly greater than between-participants movie-recall similarity, i.e., where the map from Fig. 3B was stronger than the map from Fig. 2E. The analysis revealed regions in which neural representations changed in a systematic way across individuals between perception and recollection. C) We tested whether each participant’s individual scene recollection patterns could be classified better using 1) the movie data from other participants, or 2) the recall data from other participants. A t-test of classification rank was performed between these two sets of values at each searchlight shown in (B). Classification rank was higher when using the recall data as opposed to the movie data in 99% of such searchlights. Histogram of t-values is plotted.
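The logic of the two panels in [A] can be checked with a small simulation: when each person's "alteration" pattern is idiosyncratic, recall-recall similarity falls below movie-recall similarity; when the alteration is shared across people, the ordering reverses. All patterns, subject counts, and noise levels below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
voxels, n_sub = 500, 17
movie = rng.standard_normal(voxels)        # common perception pattern
alteration = rng.standard_normal(voxels)   # candidate shared alteration pattern

def avg_recall_recall(recalls):
    # Mean pairwise correlation among all subjects' recall patterns.
    n = len(recalls)
    return np.mean([np.corrcoef(recalls[i], recalls[j])[0, 1]
                    for i in range(n) for j in range(i + 1, n)])

def avg_movie_recall(movie, recalls):
    # Mean correlation between the movie pattern and each recall pattern.
    return np.mean([np.corrcoef(movie, r)[0, 1] for r in recalls])

# Left panel: each person adds an independent, idiosyncratic alteration.
idio = [movie + rng.standard_normal(voxels) for _ in range(n_sub)]
# Right panel: everyone adds the same alteration (plus a little private noise).
shared = [movie + alteration + 0.4 * rng.standard_normal(voxels)
          for _ in range(n_sub)]

rr_idio, mr_idio = avg_recall_recall(idio), avg_movie_recall(movie, idio)
rr_shared, mr_shared = avg_recall_recall(shared), avg_movie_recall(movie, shared)
```

The idiosyncratic scenario yields recall-recall < movie-recall, while the shared-alteration scenario yields recall-recall > movie-recall, which is the contrast tested in panel [B].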
Figure 8. Reinstatement in individual participants vs. between participants
A) Searchlight analysis showing similarity of representational dissimilarity matrices (RDMs) within-participant across the brain. Each RDM was composed of the pairwise correlations of patterns for individual scenes in the movie (“movie-RDM”) and separately during recall (“recall-RDM”). Each participant’s movie-RDM was then compared to his or her own recall-RDM (i.e., within-participant) using Pearson correlation. The average searchlight map across 17 participants is displayed. B) Searchlight analysis showing movie-RDM vs. recall-RDM correlations between participants. The average searchlight map across 272 pairwise combinations of participants is displayed. C) The difference was computed between the within-participant and between-participant maps. Statistical significance of the difference was evaluated using a permutation analysis and FDR corrected at a threshold of q=0.05. A cluster of two voxels located in the temporo-parietal junction survived correction (map shown at q=0.10 for visualization purposes, 5-voxel cluster).
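The RDM comparison can be sketched directly: build each RDM from pairwise scene-pattern correlations, then correlate the two RDMs' upper triangles. The toy data below are invented; the second "participant" sharing only part of the movie structure is a hypothetical stand-in for the between-participant comparison, not real data.

```python
import numpy as np

rng = np.random.default_rng(4)

def rdm(patterns):
    # Representational dissimilarity matrix: 1 - pairwise scene-pattern correlation.
    return 1 - np.corrcoef(patterns)

def rdm_correlation(a, b):
    # Pearson correlation of the two RDMs' upper triangles (diagonal excluded).
    iu = np.triu_indices_from(a, k=1)
    return np.corrcoef(a[iu], b[iu])[0, 1]

# Toy data for one participant: recall patterns are noisy versions of the
# movie patterns, so the movie-RDM and recall-RDM share structure.
scenes, voxels = 30, 200
movie = rng.standard_normal((scenes, voxels))
own_recall = movie + 0.9 * rng.standard_normal((scenes, voxels))
# A second, hypothetical participant sharing only part of the structure.
other_recall = 0.5 * movie + rng.standard_normal((scenes, voxels))

within = rdm_correlation(rdm(movie), rdm(own_recall))
between = rdm_correlation(rdm(movie), rdm(other_recall))
```

In this toy setup the within-participant RDM correlation exceeds the between-participant one, the direction of the difference mapped in panel [C].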

Comment in

  • Patai EZ, Spiers HJ. Cracking the mnemonic code. Nat Neurosci. 2016 Dec 27;20(1):8-9. doi: 10.1038/nn.4466. PMID: 28025980. No abstract available.


References

    1. Isola P, Xiao J, Torralba A, Oliva A. What makes an image memorable? 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2011. pp. 145–152.
    2. Halbwachs M. The Collective Memory. Harper & Row Colophon Books; 1980.
    3. Sperber D. Explaining Culture: A Naturalistic Approach. Blackwell Publishers; 1996.
    4. Coman A, Hirst W. Cognition through a social network: the propagation of induced forgetting and practice effects. J. Exp. Psychol. Gen. 2012;141:321–336.
    5. Roediger HL, Abel M. Collective memory: a new arena of cognitive study. Trends Cogn. Sci. 2015;19:359–361.

Methods-Only References

    1. McGuigan P. A Study in Pink. Sherlock; 2010.
    2. Fleischer D. Let’s All Go to the Lobby. Filmack; 1957.
    3. Stephens GJ, Silbert LJ, Hasson U. Speaker–listener neural coupling underlies successful communication. Proc. Natl. Acad. Sci. 2010;107:14425–14430.
    4. Silbert LJ, Honey CJ, Simony E, Poeppel D, Hasson U. Coupled neural systems underlie the production and comprehension of naturalistic narrative speech. Proc. Natl. Acad. Sci. 2014;111:E4687–E4696.
    5. Desikan RS, et al. An automated labeling system for subdividing the human cerebral cortex on MRI scans into gyral based regions of interest. NeuroImage. 2006;31:968–980.
