Atten Percept Psychophys. 2016 Apr;78(3):923-37. doi: 10.3758/s13414-016-1059-x.

Heuristic use of perceptual evidence leads to dissociation between performance and metacognitive sensitivity

Brian Maniscalco et al. Atten Percept Psychophys. 2016 Apr.

Abstract

Zylberberg, Barttfeld, and Sigman (Frontiers in Integrative Neuroscience, 6:79, 2012) found that confidence decisions, but not perceptual decisions, are insensitive to evidence against a selected perceptual choice. We present a signal detection theoretic model to formalize this insight, which gave rise to a counter-intuitive empirical prediction: that depending on the observer's perceptual choice, increasing task performance can be associated with decreasing metacognitive sensitivity (i.e., the trial-by-trial correspondence between confidence and accuracy). The model also provides an explanation as to why metacognitive sensitivity tends to be less than optimal in actual subjects. These predictions were confirmed robustly in a psychophysics experiment. In a second experiment we found that, in at least some subjects, the effects were replicated even under performance feedback designed to encourage optimal behavior. However, some subjects did show improvement under feedback, suggesting the tendency to ignore evidence against a selected perceptual choice may be a heuristic adopted by the perceptual decision-making system, rather than reflecting inherent biological limitations. We present a Bayesian modeling framework that explains why this heuristic strategy may be advantageous in real-world contexts.

Keywords: Bayesian modeling; Signal detection theory; Visual awareness.

Figures

Figure 1
Experimental design for Experiments 1 and 2. Participants performed a simple spatial two-alternative forced choice (2AFC) task. Two circular patches of visual noise were presented to the left and right of fixation. One of the patches contained an embedded sinusoidal grating. After stimulus presentation, participants provided a stimulus judgment (which side contained the grating?) and a confidence judgment (how confident are you that your response was correct?). When the grating was presented on one side of the screen, its contrast was constant throughout the experiment (S1 stimulus). When it was presented on the other side, it could take on one of five possible contrasts (S2 stimulus). Thus, the design manipulated S2 stimulus strength in the same way depicted in Figure 3. Mapping of S1 and S2 stimuli to left and right sides of the screen was counterbalanced across participants. In Experiment 2, the confidence judgment was replaced by a point-wagering system in which participants won or lost the number of wagered points depending on task accuracy. In Experiment 2, participants also received performance feedback after every trial and after every block.
Figure 2
(A) The two-dimensional signal detection theory model of discrimination tasks. The observer decides whether evidence presented on a given trial belonged to some arbitrary stimulus class, S1 or S2. The observer's perception can be summarized by a pair of numbers (eS1, eS2), representing the evidence in favor of each stimulus category, respectively. These pairs of numbers (eS1, eS2) are assumed to follow bivariate Gaussian distributions, depicted by concentric circles. The optimal response strategy is to respond “S1” if eS1 – eS2 > 0, and “S2” otherwise. The dotted line represents this criterion; the gray shaded region denotes regions in which the pair (eS1, eS2) will elicit an “S1” response. (B) Balance of Evidence rule for confidence rating. The optimal strategy is to rely on the same quantity used to make the decision, i.e. eS1 – eS2, which means confidence criteria will be placed along the S1 – S2 axis. Light-, medium-, and dark-shaded regions denote low, medium, and high confidence, respectively. (C) Response-Congruent Evidence rule for confidence rating. Confidence is rated only along the axis of the chosen response category, such that confidence in an “S1” response will be determined solely by the value of eS1 and will ignore the value of eS2. As before, light-, medium-, and dark-shaded regions denote low, medium, and high confidence, respectively.
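The two confidence rules in this figure can be contrasted in a short Monte Carlo sketch (our illustration, not the authors' code; the unit-variance independent noise, the criterion value, and all function and variable names are assumptions):

```python
import numpy as np

def simulate(mu_s1=2.0, mu_s2=2.0, conf_crit=2.5, n=100_000, seed=0):
    """Minimal 2-D SDT sketch: evidence (e_s1, e_s2) carries independent
    unit-variance Gaussian noise; the observer responds "S1" iff
    e_s1 - e_s2 > 0 (the optimal decision rule from panel A)."""
    rng = np.random.default_rng(seed)
    is_s2 = rng.integers(0, 2, n).astype(bool)            # true stimulus class
    e_s1 = rng.normal(np.where(is_s2, 0.0, mu_s1), 1.0)   # evidence for S1
    e_s2 = rng.normal(np.where(is_s2, mu_s2, 0.0), 1.0)   # evidence for S2

    chose_s1 = (e_s1 - e_s2) > 0
    correct = chose_s1 != is_s2

    # Balance of Evidence (panel B): confidence from the decision variable.
    high_be = np.abs(e_s1 - e_s2) > conf_crit
    # Response-Congruent Evidence (panel C): confidence from the chosen
    # axis only, ignoring evidence for the unchosen category.
    high_rce = np.where(chose_s1, e_s1, e_s2) > conf_crit
    return correct, high_be, high_rce
```

Under both rules, high-confidence trials are more accurate than low-confidence trials; where the rules diverge is in how confidence for one response type behaves as the strength of the other stimulus changes (Figure 3).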
Figure 3
Decreasing metacognitive sensitivity with increasing task performance according to the Response-Congruent Evidence rule. The strength of the S2 stimulus varies, while the strength of the S1 stimulus is held constant. For clarity, we show only one confidence criterion, dividing the “S1” response region into “high” (dark gray) and “low” (light gray) confidence responses. We also depict more contour lines here than in Figure 2 for the sake of illustration. (A) Shows a situation in which the magnitude of the S2 stimulus is relatively weak (reflected by the small mean value for eS2 in the S2 distribution relative to the mean value for eS1 in the S1 distribution), while (B) shows a relatively strong S2 stimulus. Discrimination task performance (indexed here as the Euclidean distance between the means of the evidence distributions for S1 and S2, relative to their common standard deviation) is higher in panel B than in A. However, metacognitive sensitivity for “S1” responses is superior in panel A. To see why, note that the fraction of correct “S1” responses endorsed with high confidence (proportion of the S1 distribution above the diagonal colored in dark gray) is the same in A and B, but the fraction of incorrect “S1” responses endorsed with high confidence (proportion of the S2 distribution above the diagonal colored in dark gray) is higher in B than in A. This means that in panel B, confidence rating for “S1” responses is less diagnostic of accuracy. Thus, the Response-Congruent Evidence rule predicts a dissociation between task performance and metacognitive sensitivity under these conditions.
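The mechanism described in this caption can be checked numerically (a hedged sketch; the stimulus strengths, the confidence criterion, and the function name are illustrative assumptions):

```python
import numpy as np

def high_conf_error_rate(mu_s2, crit=1.5, n=200_000, seed=1):
    """On S2 trials, the fraction of incorrect "S1" responses rated high
    confidence under the Response-Congruent Evidence rule (conf = e_s1)."""
    rng = np.random.default_rng(seed)
    e_s1 = rng.normal(0.0, 1.0, n)        # no signal on the S1 axis
    e_s2 = rng.normal(mu_s2, 1.0, n)      # S2 signal of strength mu_s2
    errors = e_s1 > e_s2                  # incorrect "S1" responses
    return (e_s1[errors] > crit).mean()   # RCE ignores e_s2 entirely

weak, strong = high_conf_error_rate(1.0), high_conf_error_rate(3.0)
# A stronger S2 stimulus leaves fewer errors, but those that remain carry
# more response-congruent evidence, so more of them are endorsed with high
# confidence -- exactly the pattern contrasted in panels A and B.
assert strong > weak
```

Because the S1 distribution is held fixed, the high-confidence rate among correct "S1" responses does not change; only the errors become more confidently endorsed, which is why confidence becomes less diagnostic of accuracy as performance rises.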
Figure 4
Model predictions and results of Experiment 1. (A) Our SDT simulation (see Methods and Supplemental Material) shows that the Response-Congruent Evidence rule (RCE) predicts a dissociation between task performance (d’) and metacognitive sensitivity (meta-d’), because confidence for “S1” responses (red) becomes less diagnostic of accuracy as task performance increases. In contrast, the Balance of Evidence rule (BE) predicts that metacognitive sensitivity ought to only increase with increasing task performance. (B) Experiment 1 results demonstrate a good qualitative match to the simulated predictions of the Response-Congruent Evidence rule, but not the Balance of Evidence rule. Error bars represent within-subject standard errors (Morey, 2008).
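The predicted drop can also be quantified with a nonparametric type-2 ROC area (a simple proxy for metacognitive sensitivity, not the authors' meta-d’ fitting procedure; parameter values and names are our assumptions):

```python
import numpy as np

def type2_auc_s1(mu_s1, mu_s2, n=400_000, seed=2):
    """Type-2 ROC area for "S1" responses under the Response-Congruent
    Evidence rule: the probability that confidence on a randomly drawn
    correct "S1" trial exceeds confidence on a randomly drawn incorrect
    one. An area of 0.5 means confidence carries no accuracy signal."""
    rng = np.random.default_rng(seed)
    is_s2 = rng.integers(0, 2, n).astype(bool)
    e_s1 = rng.normal(np.where(is_s2, 0.0, mu_s1), 1.0)
    e_s2 = rng.normal(np.where(is_s2, mu_s2, 0.0), 1.0)
    chose_s1 = e_s1 > e_s2
    conf = e_s1[chose_s1]                 # RCE: confidence = e_s1 alone
    correct = ~is_s2[chose_s1]
    hits, errs = conf[correct], np.sort(conf[~correct])
    # P(error conf < correct conf): count errors below each hit, average.
    return np.searchsorted(errs, hits).mean() / errs.size

# Holding S1 fixed, strengthening S2 raises d' yet lowers the type-2
# area for "S1" responses -- the dissociation shown in panel A.
assert type2_auc_s1(2.0, 1.0) > type2_auc_s1(2.0, 3.0)
```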
Figure 5
Data for individual participants in Experiment 1. By treating individual experimental sessions as independent observations, meta-d’ curves were analyzed for each participant separately. One experimental session from Participant 4 was omitted due to noisy data. All participants displayed patterns consistent with the predictions of the Response-Congruent Evidence rule. Error bars represent within-subject standard errors (Morey, 2008).
Figure 6
Data for individual participants for Experiment 2. Although Participants 1 and 4 updated their response strategies to display patterns more similar to those predicted by the Balance of Evidence rule, Participants 2 and 3 displayed patterns qualitatively similar to the predictions of the Response-Congruent Evidence decision rule. Error bars represent within-subject standard errors (Morey, 2008).
Figure 7
Results of Experiment 2 pooled across all participants, regardless of decision strategy. In contrast to the results of Experiment 1, the response-conditional meta-d’ curves in the averaged Experiment 2 data more closely followed the patterns predicted by the Balance of Evidence decision rule than those predicted by the Response-Congruent Evidence rule. Meta-d’ increased for both “S1” and “S2” responses as d’ increased, and both meta-d’ curves closely tracked the SDT expectation. Error bars represent within-subject standard errors (Morey, 2008).
