Reading positional codes with fMRI: Problems and solutions

Kristjan Kalm et al. PLoS One. 2017 May 17;12(5):e0176585. doi: 10.1371/journal.pone.0176585. eCollection 2017.

Abstract

Neural mechanisms which bind items into sequences have been investigated in a large body of research in animal neurophysiology and human neuroimaging. However, a major problem in interpreting these data arises from the fact that several unrelated processes, such as memory load, sensory adaptation, and reward expectation, also change in a consistent manner as the sequence unfolds. In this paper we use computational simulations and data from two fMRI experiments to show that a host of unrelated neural processes can masquerade as sequence representations. We show that dissociating such unrelated processes from a dedicated sequence representation is an especially difficult problem for fMRI, which is almost exclusively the modality used in human experiments. We suggest that such fMRI results must be treated with caution, and that in many cases the assumed neural representation may actually reflect unrelated processes.


Conflict of interest statement

Competing Interests: The authors have declared that no competing interests exist.

Figures

Fig 1
Fig 1. Sequence representation and temporal position.
(A) Representation of two sequences as mappings between item codes and temporal position codes. (B) Left: representation of temporal position in a 7-item sequence. The variance around the positional signal is coded by the darkness of the circle. Right: order is retrieved by reinstating each positional code, which then cues the associated item. (C) Examples of temporal position selective neurons from [7]. From left to right: a pre-supplementary motor area neuron selective for the 1st position, a supplementary eye field neuron selective for the 2nd position, and a supplementary motor area neuron selective for the 3rd position in the serial object task.
Fig 2
Fig 2. Sensory adaptation in the sensory cortex and decoding order position.
(A) Uniform signal change over 3-item sequences in sensory brain areas averaged across participants. Data from visual regions V1, V2, pericalcarine, and lateral occipital are from [33]. Data from auditory areas Heschl’s gyrus (HG) and superior temporal sulcus (STS) are from [32]. (B) Distribution of average linear classification accuracy values of item position in the V1 region across participants from [33]. Bar charts display the average classification accuracy across participants by comparing the known positions (labels) to the predictions made by the classification algorithm. Bars show the proportion of predicted values for each position. Correct classifications are represented with a darker bar. Error bars show the standard error of the mean. The red line depicts the chance level classification accuracy of 1/3.
Fig 3
Fig 3. Interference between task phases: Retinotopic suppression.
(A) Activation and suppression in V1 averaged across all stimuli for a single participant. The activated voxels (yellow, p < 0.001) mark the foveal part of the visual cortex driven by the stimuli (presented at 6° visual angle). (B) Peristimulus time histogram of sequence presentation of two groups of voxels from a single participant’s V1. The black line denotes the average of the voxels activated by the stimuli and the red line denotes the average of the voxels suppressed by the stimuli. Dashed vertical lines indicate the time bins where sequence items were presented.
Fig 4
Fig 4. LDA of item position in V1 using different subsets of voxels.
Top row: all voxels from V1; middle row: only retinotopically activated voxels from V1; bottom row: only retinotopically suppressed voxels from V1. Left column: Bar charts display the average classification accuracy across participants by comparing the known positions (labels) to the predictions made by the classification algorithm. Bars show the proportion of predicted values for each position. Correct classifications are represented with a darker bar. Error bars show the standard error of the mean. The red line depicts the chance level classification accuracy of 1/3. Right column: LDA between-class boundaries based on two voxels from the set. Data from [33].
Fig 5
Fig 5. Simulated responses to items.
(A) Item patterns over 20 voxels. (B) Six sequences as permutations of three items. Item codes are displayed on the top of x-axis, position codes at the bottom.
Fig 6
Fig 6. The scatter of item patterns and LDA between-class boundaries based on the two most informative voxels.
(A) Item information. (B) Position information.
Fig 7
Fig 7. Simulation of sensory adaptation.
(A) Voxels’ responses with sensory adaptation. (B) Average responses of voxels as column-wise means of the response matrix. (C) LDA between-class boundaries based on the two most informative voxels. (D) Distribution of average LDA accuracy values (based on 250 simulations).
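The adaptation confound simulated here can be sketched in a few lines. This is an illustrative toy, not the authors' code: the voxel gains, adaptation factors, and noise level are assumed values, and a nearest-class-mean rule stands in for the LDA used in the paper. Even though no voxel carries a positional preference, a linear classifier decodes "position" far above chance, because uniform amplitude decay alone separates the classes.

```python
import numpy as np

rng = np.random.default_rng(2)
n_vox = 20

# No positional code: every voxel has a fixed gain, and each sequence
# position merely scales the whole pattern by a uniform adaptation factor.
gains = rng.uniform(0.5, 1.5, size=n_vox)
adaptation = [1.0, 0.7, 0.5]

def trial(pos):
    """One noisy fMRI pattern for an item at a given sequence position."""
    return gains * adaptation[pos] + rng.normal(0, 0.2, n_vox)

# Class means estimated from training trials.
train = {p: np.mean([trial(p) for _ in range(20)], axis=0) for p in range(3)}

# Classify held-out trials by nearest class mean (a simple linear rule).
test_trials = [(p, trial(p)) for p in range(3) for _ in range(20)]
correct = sum(
    p == min(range(3), key=lambda q: np.linalg.norm(x - train[q]))
    for p, x in test_trials
)
acc = correct / len(test_trials)  # well above the 1/3 chance level
```

The point is not the particular classifier: any decoder sensitive to overall signal amplitude will report "positional information" from adaptation alone.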
Fig 8
Fig 8. Simulation of sensory adaptation, z-scoring.
(A) Voxels’ responses with sensory adaptation, z-scored. (B) Average responses of voxels as column-wise means of the response matrix.
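Why z-scoring removes a uniform adaptation confound follows directly from the arithmetic: scaling a whole pattern by a constant changes neither its column-wise mean (after centring) nor its shape. A minimal sketch, with hypothetical item patterns and assumed adaptation factors:

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_pos = 20, 3

# Hypothetical item patterns: one column per sequence position.
items = rng.normal(size=(n_vox, n_pos))

# Uniform sensory adaptation: amplitude decays with position,
# scaling every voxel identically within a position.
adaptation = np.array([1.0, 0.7, 0.5])
responses = items * adaptation

# z-score each column (pattern) across voxels.
z = (responses - responses.mean(axis=0)) / responses.std(axis=0)

# After z-scoring, every column has zero mean and unit variance,
# so the uniform amplitude differences between positions vanish.
```

Because (a·x − mean(a·x)) / std(a·x) = (x − mean(x)) / std(x) for any positive scale a, a purely multiplicative adaptation signal cannot survive this normalisation.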
Fig 9
Fig 9. Simulation of sensory adaptation.
(A) Voxel response matrix based on positional preferences. (B) Voxel response matrix: positional preferences + sensory adaptation. (C) Voxel response matrix: positional preferences + sensory adaptation + Gaussian noise. (D) Voxel response matrix z-scored. Column-wise means are zero.
Fig 10
Fig 10. The transformation of response values for two voxels as a result of interference (β = 0.5).
Small circular markers depict response patterns, larger circular markers depict pattern means. Empty markers depict the original patterns and means, filled markers depict the data after simulating the interference process. Solid lines depict the movement of class means as a result of interference. (A) Item information. (B) Position information.
Fig 11
Fig 11. LDA between-class boundaries for two voxels, interference β = 0.5.
(A) Item information. (B) Position information.
Fig 12
Fig 12. Linear classification accuracy of item identity (black) and position (red) as a function of additive interference (as represented by the β parameter, Eq (4)).
The red dotted line shows chance level classification accuracy. Error bars depict SEM based on 1,000 simulations of the interference process with fixed parameter values.
Fig 13
Fig 13. Positional pattern similarity decreases as a function of lag.
Similarity matrix on the left shows average positional pattern similarity, as measured by Pearson’s ρ, based on additive interference with β = 0.8. Plot on the right visualises this similarity as a function of positional lag. The red line depicts a statistically significant negative slope over positional lag (p < 0.05).
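The lag-dependent similarity structure can be reproduced with one plausible form of additive interference, in which each position's pattern carries a geometrically decaying residue of all earlier positions. The β = 0.8 matches the caption; the pattern dimensionality and the exact decay form are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_vox, n_pos, beta = 100, 7, 0.8

# Independent underlying patterns, one per temporal position.
base = rng.normal(size=(n_pos, n_vox))

# Additive interference: position p inherits residue of every earlier
# position q, down-weighted by beta per step of lag.
patterns = np.array([
    sum(beta ** (p - q) * base[q] for q in range(p + 1))
    for p in range(n_pos)
])

# Pearson similarity between positional patterns.
sim = np.corrcoef(patterns)

# Average similarity at each positional lag: falls off with lag,
# mimicking the signature expected of a genuine positional code.
lags = range(1, n_pos)
lag_sim = [np.mean([sim[i, i + d] for i in range(n_pos - d)]) for d in lags]
```

Adjacent positions share the most residue, so their patterns correlate most strongly; the decreasing-similarity-with-lag signature arises without any positional representation.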
Fig 14
Fig 14. The size of the positional lag effect as a function of additive interference (β).
Error bars depict SEM based on 1,000 simulations of the interference process with fixed parameter values.
Fig 15
Fig 15. Classification accuracy and positional similarity as simulated by the proportional interference mechanism.
Error bars depict SEM based on 1,000 simulations of the interference process. Notice that β values on the x-axis have been approximately halved, since the parameter now indicates the proportion of residual activity. (A) Linear classification accuracy of item identity (black) and position (red) as a function of proportional interference (β). The red dotted line shows chance level classification accuracy. (B) The size of the positional lag effect as a function of proportional interference (β).
Fig 16
Fig 16. A serial recall task based on [33].
Fig 17
Fig 17. The average simulated activity of two sets of voxels, each sensitive either to the presentation or recall phase of the task.
In this hypothetical task a presentation of three items in a sequence is followed by recall of three items. (A) Without interference. (B) Additive interference.
Fig 18
Fig 18. Temporal interference in fMRI.
(A) The haemodynamic response function (HRF) with the vertical line representing the corresponding neural event. (B) Temporal interference between two adjacent events: black and red lines.
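The overlap in panel (B) can be illustrated with a canonical double-gamma HRF (the SPM-style shape; the 2 s event spacing and time grid here are illustrative choices, not the paper's):

```python
import numpy as np
from math import gamma as g

def hrf(t):
    """Canonical double-gamma haemodynamic response (SPM-style shape)."""
    peak = t ** 5 * np.exp(-t) / g(6)            # positive lobe, peak ~5 s
    undershoot = t ** 15 * np.exp(-t) / g(16) / 6  # undershoot, ~15 s
    return peak - undershoot

t = np.arange(0, 30, 0.1)
r1 = hrf(t)                               # event at t = 0 s
r2 = np.where(t >= 2, hrf(t - 2), 0.0)    # adjacent event 2 s later

# The two BOLD responses overlap heavily: the measured signal at any
# time point mixes both events, so adjacent sequence items interfere.
overlap = np.corrcoef(r1, r2)[0, 1]
```

Because the HRF spans roughly 20–30 s while sequence items are typically separated by a few seconds, the responses to neighbouring items are highly correlated, which is exactly the temporal interference the figure depicts.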


References

    1. Henson R, Burgess N. Representations of Serial Order. The Quarterly Journal of Experimental Psychology. 1997.
    2. Page M, Norris D. The Primacy Model: A New Model of Immediate Serial Recall. Psychological Review. 1998;105(4):761–781. 10.1037/0033-295X.105.4.761-781 - DOI - PubMed
    3. Nakajima T, Hosaka R, Mushiake H, Tanji J. Covert representation of second-next movement in the pre-supplementary motor area of monkeys. Journal of Neurophysiology. 2009;101(4):1883–9. 10.1152/jn.90636.2008 - DOI - PubMed
    4. Averbeck B, Crowe D, Chafee M, Georgopoulos A. Neural activity in prefrontal cortex during copying geometrical shapes. II. Decoding shape segments from neural ensembles. Experimental Brain Research. 2003;150(2):142–53. 10.1007/s00221-003-1417-5 - DOI - PubMed
    5. Inoue M, Mikami A. Prefrontal activity during serial probe reproduction task: encoding, mnemonic, and retrieval processes. Journal of Neurophysiology. 2006;95(2):1008–41. 10.1152/jn.00552.2005 - DOI - PubMed