Psychol Rev. 2019 Jan;126(1):89-132. doi: 10.1037/rev0000128. Epub 2018 Oct 18.

Variable precision in visual perception


Shan Shen et al. Psychol Rev. 2019 Jan.

Abstract

Given the same sensory stimuli in the same task, human observers do not always make the same response. Well-known sources of behavioral variability are sensory noise and guessing. Visual short-term memory (STM) studies have suggested that the precision of the sensory noise is itself variable. However, it is unknown whether precision is also variable in perceptual tasks without a memory component. We searched for evidence for variable precision in 11 visual perception tasks with a single relevant feature, orientation. We specifically examined the effect of distractor stimuli: distractors were absent, homogeneous and fixed across trials, homogeneous and variable, or heterogeneous and variable. We first considered 4 models: with and without guessing, and with and without variability in precision. We quantified the importance of both factors using 6 metrics: factor knock-in difference, factor knock-out difference, and log factor posterior ratio, each based on AIC or BIC. According to all 6 metrics, we found strong evidence for variable precision in 5 experiments. Next, we extended our model space to include potential confounding factors: the oblique effect and decision noise. This left strong evidence for variable precision in only 1 experiment, in which distractors were homogeneous but variable. Finally, when we considered suboptimal decision rules, the evidence also disappeared in this experiment. Our results provide little evidence for variable precision overall and only a hint when distractors are variable. Methodologically, the results underline the importance of including multiple factors in factorial model comparison: Testing for only 2 factors would have yielded an incorrect conclusion. (PsycINFO Database Record (c) 2019 APA, all rights reserved).
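All six factor-importance metrics in the abstract are built on AIC and BIC. As a rough illustration (not the authors' code), both criteria can be computed from a model's maximum log likelihood; the fit values below are made up:

```python
import numpy as np

def aic(max_log_lik, n_params):
    """Akaike information criterion: 2k - 2 log L_max (lower is better)."""
    return 2 * n_params - 2 * max_log_lik

def bic(max_log_lik, n_params, n_trials):
    """Bayesian information criterion: k log n - 2 log L_max (lower is better)."""
    return n_params * np.log(n_trials) - 2 * max_log_lik

# Hypothetical fits of two models to the same 1,000 trials: a Base model
# and a variable-precision (V) model with one extra parameter.
aic_base = aic(max_log_lik=-620.0, n_params=2)   # 4 + 1240 = 1244
aic_v    = aic(max_log_lik=-610.0, n_params=3)   # 6 + 1220 = 1226
delta = aic_base - aic_v                          # evidence favoring V
```

On the evidence scale used later in the paper, a difference above 4.6 would count as moderate and above 9.2 as very strong evidence.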

Conflict of interest statement

The authors declare no competing financial interests.

Figures

Figure A1.
Experiment 1. (A) Complete model comparison. Mean and s.e.m. of the difference in AIC (top) and BIC (bottom) between each model and the Full model. (B) Proportion of reporting “right” as a function of stimulus orientation: data and model fits.
Figure A2.
Experiment 2. (A) Complete model comparison. Mean and s.e.m. of the difference in AIC (top) and BIC (bottom) between each model and the Full model. (B) Proportion of reporting “clockwise” as a function of orientation difference between the target and the reference. (C) Proportion correct as a function of the reference orientation: data and model fits.
Figure A3.
Experiment 3. (A) Complete model comparison. Mean and s.e.m. of the difference in AIC (top) and BIC (bottom) between each model and the Full model. (B) Proportion of reporting “right” as a function of target orientation: data and model fits.
Figure A4.
Experiment 4. (A) Complete model comparison. Mean and s.e.m. of the difference in AIC (top) and BIC (bottom) between each model and the Full model. (B) Proportion of reporting “right” as a function of set size and target orientation: data and model fits.
Figure A5.
Experiment 5. (A) Complete model comparison. Mean and s.e.m. of the difference in AIC (top) and BIC (bottom) between each model and the Full model. (B) Proportion of reporting “right” as a function of target orientation.
Figure A6.
Experiment 6. (A) Complete model comparison. Mean and s.e.m. of the difference in AIC (top) and BIC (bottom) between each model and the Full model. (B) Proportion of reporting “right” as a function of set size and target orientation: data and model fits.
Figure A7.
Experiment 7. (A) Complete model comparison. Mean and s.e.m. of the difference in AIC (top) and BIC (bottom) between each model and the Full model. (B) Proportion of reporting “right” as a function of target orientation: data and model fits.
Figure A8.
Experiment 8. (A) Complete model comparison. Mean and s.e.m. of the difference in AIC (top) and BIC (bottom) between each model and the Full model. (B) Proportion of reporting “present” as a function of set size, target presence, and the common orientation of the distractors: data and model fits.
Figure A9.
Experiment 9. (A) Complete model comparison. Mean and s.e.m. of the difference in AIC (top) and BIC (bottom) between each model and the Full model. (B) Proportion of reporting “right” as a function of target orientation.
Figure A10.
Experiment 10. (A) Complete model comparison. Mean and s.e.m. of the difference in AIC (top) and BIC (bottom) between each model and the Full model. (B) Proportion of reporting “right” as a function of target orientation.
Figure A11.
Complete model comparison in Experiment 11. Mean and s.e.m. of the difference in AIC (top) and BIC (bottom) between each model and the Full model.
Figure A12.
Crossing the suboptimal decision rules with the GODV factor models in Experiment 7. As Figure 10A, but computed with AIC. Results are similar to those with BIC.
Figure A13.
Effects of set size in Experiment 4. Even though proportion correct increases as a function of set size (A), mean precision decreases with set size both when estimated with the Full model (Figure 13) and with the ODV model (B). Error bars denote ± 1 s.e.m.
Figure A14.
Generative model of Experiment 2. Each node represents a random variable, each arrow a conditional probability distribution. Notations of variables are as follows. C: nature of the world, “clockwise” or “counterclockwise”; Δs: difference between target orientation and the reference orientation, “clockwise” when positive; sref: reference orientation; sT: target orientation; xref: reference measurement; xT: target measurement. Distributions are shown in the equations on the side. N(x; μ, σ2) denotes a Gaussian distribution with a mean of μ and a variance of σ2. H(x) denotes the Heaviside step function. U(a, b) denotes the uniform distribution in a range between a and b. δ(x) is the Dirac delta function. This diagram specifies the distribution of the measurements, xref and xT. The optimal observer inverts the generative model and computes the conditional probability of C given xref and xT.
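The generative model in this caption can be sketched as a forward simulation. The noise parameters below are illustrative, not values from the paper; the half-Gaussian form of Δs is an assumption standing in for whatever distribution the experiment used:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(sigma_s=5.0, sigma=3.0):
    """One trial of an Experiment-2-style generative model (illustrative).

    C is the world state; Delta_s is the target-reference orientation
    difference, positive ("clockwise") when C = +1; s_ref is uniform;
    the measurements x_ref and x_T are the true orientations corrupted
    by Gaussian noise with standard deviation sigma.
    """
    C = rng.choice([-1, 1])                    # "counterclockwise" / "clockwise"
    delta_s = C * abs(rng.normal(0, sigma_s))  # half-Gaussian magnitude, signed by C
    s_ref = rng.uniform(-90, 90)
    s_T = s_ref + delta_s
    x_ref = rng.normal(s_ref, sigma)
    x_T = rng.normal(s_T, sigma)
    return C, x_ref, x_T

def optimal_report(x_ref, x_T):
    """With equal noise on both measurements and a sign-symmetric prior on
    Delta_s, inverting the generative model reduces to the sign of the
    measured difference."""
    return 1 if x_T - x_ref > 0 else -1
```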
Figure 1.
Experimental designs. The left column shows the trial procedure and the right column shows the orientation distribution of the stimuli.
Figure 2.
Generative model and factors that might affect behavioral variability. (A) The diagram shows the generic generative model of our tasks. Each node represents a variable and each arrow between two nodes represents a conditional dependence. Factors that might affect behavioral variability are listed to the right of the diagram. Here, we test the bold-faced ones: oblique effect, residual variable precision, decision noise and guessing. (B) We model the dependence of precision J on orientation s (the oblique effect) as J=J0(1+β|sin(2s)|)2 (red). The black line represents constant precision (β=0). (C) In variable-precision models, we model the probability distribution over precision as a gamma distribution; an example with a mean of 0.75 and a scale parameter τ of 0.5 is shown in blue. The green line represents a delta function over precision, corresponding to fixed precision (τ=0).
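The two expressions in this caption translate directly into code; the parameter values below are illustrative defaults, not fitted values from the paper:

```python
import numpy as np

def oblique_precision(s_deg, J0=1.0, beta=0.5):
    """Oblique effect (panel B): J = J0 * (1 + beta * |sin(2s)|)^2.
    beta = 0 recovers constant precision."""
    s = np.deg2rad(s_deg)
    return J0 * (1 + beta * np.abs(np.sin(2 * s))) ** 2

def sample_precision(J_mean=0.75, tau=0.5, size=1, rng=None):
    """Variable precision (panel C): precision drawn from a gamma
    distribution with mean J_mean and scale tau, so shape = J_mean / tau.
    tau -> 0 collapses to fixed precision (a delta function)."""
    rng = rng or np.random.default_rng(0)
    return rng.gamma(shape=J_mean / tau, scale=tau, size=size)
```

With this parameterization, precision where |sin(2s)| = 1 is (1 + β)² times the precision where sin(2s) = 0.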
Figure 3.
Factor importance metrics. In each diagram, each dimension represents a binary factor and each vertex a model; we show an example with 3 factors and thus a total of 8 models. The Base model, with none of the factors, is (0, 0, 0) and the Full model, with all factors, is (1, 1, 1). (A) Knock-in difference (KID, red arrows): the AIC or BIC difference between the Base model (0, 0, 0) and the knock-in model with each single factor. (B) Knock-out difference (KOD, red arrows): the AIC or BIC difference between the corresponding knock-out model and the Full model (1, 1, 1). (C) The log factor likelihood ratio (LFLR). We compute the log likelihood ratio of a factor being present versus absent by marginalizing over all models with or without that factor, respectively.
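Taking each model's likelihood proportional to exp(−AIC/2), all three metrics can be sketched over a 3-factor model lattice. The AIC values below are made up for illustration:

```python
import numpy as np
from itertools import product

# Hypothetical AIC values for the 8 models of a 3-factor lattice,
# keyed by (f0, f1, f2) with 1 = factor present. Here each factor
# lowers AIC additively, purely for illustration.
aics = {m: 1000.0 - 12.0 * m[0] - 2.0 * m[1] - 1.0 * m[2]
        for m in product([0, 1], repeat=3)}

def kid(aics, factor):
    """Knock-in difference: AIC(Base) - AIC(Base + factor)."""
    ki = tuple(1 if i == factor else 0 for i in range(3))
    return aics[(0, 0, 0)] - aics[ki]

def kod(aics, factor):
    """Knock-out difference: AIC(Full - factor) - AIC(Full)."""
    ko = tuple(0 if i == factor else 1 for i in range(3))
    return aics[ko] - aics[(1, 1, 1)]

def lflr(aics, factor):
    """Log factor likelihood ratio: marginalize exp(-AIC/2) over all
    models with the factor versus all models without it."""
    def lse(models):  # log-sum-exp of -AIC/2, computed stably
        a = np.array([-aics[m] / 2 for m in models])
        return a.max() + np.log(np.exp(a - a.max()).sum())
    return (lse([m for m in aics if m[factor] == 1])
            - lse([m for m in aics if m[factor] == 0]))
```

Because the toy AIC values are additive in the factors, all three metrics agree here; on real data the knock-in, knock-out, and marginal views can diverge, which is why the paper reports all of them.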
Figure 4.
Factor importance: guessing (G) and variable precision (V). Here and in other factor importance plots, dashed black lines mark the boundaries of our interpretation of the strength of the evidence (>9.2: very strong, >6.8: strong, >4.6: moderate). (A-C) Mean and s.e.m. of KID (A), KOD (B), and 2·LFLR (C) based on AIC (top) or BIC (bottom) for the factors G and V in all experiments. (D) Model fits to the proportion of reporting “right” as a function of target orientation in Experiment 1. In all model fit plots, we use error bars and shaded areas to represent ± 1 s.e.m. in the data and the model fits, respectively. The G, V, and GV models fit the data equally well, and better than the Base model. (E) Model fits in Experiment 9. The V and GV models fit the data almost equally well, and better than the Base and G models.
Figure 5.
Factor importance among guessing (G), oblique effect (O), and residual variable precision (V). The red dashed box marks the major changes (compared to Figure 4) in the evidence for the importance of factor V when taking factor O into consideration. (A-C) Mean and s.e.m. of KID (A), KOD (B), and 2·LFLR (C) based on AIC (top) or BIC (bottom) for the factors G, O, V, and the OV combination, in all experiments. (D) Model fits in Experiment 9. The O, V, and OV models fit the data almost equally well, and better than the Base model.
Figure 6.
Factor importance: guessing (G), oblique effect (O), decision noise (D) and the residual variable precision (V). The red dashed box marks the major changes (compared to Figure 5) in the evidence for the importance of factor V when taking factor D into consideration. (A-C) Mean and s.e.m. of KID (A), KOD (B), and 2·LFLR (C) based on AIC (top) or BIC (bottom) for the factors G, O, D, V, and the OV combination, in all experiments.
Figure 7.
Model fits in Experiment 11. Proportion of reporting “target present” as a function of set size (left) and the smallest circular distance (right) in orientation space between the target and any of the distractors. Target present trials and target absent trials are shown with blue and red, respectively. The D and V models fit the data almost as well as the Full model, and better than the O model.
Figure 8.
Model fits show that factor G and factor O are important in some experiments. (A) Model fits in Experiment 5 show the importance of factor G. The G model fits better than the Base model. (B) Model fits in Experiment 2 show the importance of factor O. Top: Proportion of reporting “clockwise” as a function of the orientation difference between the target and the reference, collapsed across reference orientations. Bottom: Proportion of reporting “clockwise” as a function of the reference orientation, collapsed across target orientations. The O model fits better than the Base, G, and V models.
Figure 9.
Trade-offs between parameters. (A) Trade-off between precision J and guessing rate λ. We generated a synthetic data set from the G model in Experiment 4 with J=0.08 deg−2 and λ=0.02, and fitted the data with the G model. The color plot shows the log likelihood of combinations of J and λ. Many combinations have a high log likelihood, including a combination of λ=0 and a value of J lower than the true value. (B) Trade-off between the factors O (parameterized by β) and V (parameterized by τ). We generated a synthetic data set from the V model in Experiment 9, with τ=0.05 (and β=0), and fitted the data with the OV model. The color plot shows the log marginal likelihood of combinations of β and τ. Many combinations have a high log likelihood, including a combination of non-zero β and a value of τ lower than the true value.
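The trade-off in panel A can be reproduced qualitatively with a minimal guessing-model sketch: a single-stimulus "right/left" task in which the observer guesses with rate λ and otherwise reports the sign of a noisy measurement with precision J. All stimulus ranges and grid values below are illustrative assumptions:

```python
import numpy as np
from math import erf, sqrt, log

def p_right(s, J, lam):
    """G-model response probability: guess with rate lam, otherwise report
    the sign of a measurement x ~ N(s, 1/J) (illustrative parameterization)."""
    phi = 0.5 * (1 + erf(s * sqrt(J) / sqrt(2)))  # P(x > 0 | s)
    return lam * 0.5 + (1 - lam) * phi

def log_lik(data, J, lam):
    """Log likelihood of (stimulus, response) pairs; response 1 = 'right'."""
    ll = 0.0
    for s, r in data:
        p = p_right(s, J, lam)
        ll += log(p if r == 1 else 1 - p)
    return ll

# Synthetic data from J = 0.08 deg^-2 and lam = 0.02, as in panel A.
rng = np.random.default_rng(1)
stims = rng.uniform(-15, 15, size=500)
data = [(s, int(rng.random() < p_right(s, 0.08, 0.02))) for s in stims]

# Coarse (J, lam) grid: a ridge of near-equivalent combinations emerges,
# including lam = 0 paired with a J below the true value.
grid = [(J, lam, log_lik(data, J, lam))
        for J in np.linspace(0.04, 0.12, 9)
        for lam in np.linspace(0.0, 0.1, 6)]
```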
Figure 10.
Crossing the suboptimal decision rules with the factor models in Experiment 7. (A) The x-axis lists GODV family members, and the y-axis lists different decision rules from Shen & Ma, 2016. Decision rules marked in boldface are similar to the Opt rule (Shen & Ma, 2016). The color of each dot represents the BIC of a hybrid model with a given decision rule and factor model. Some combinations are missing because those models are invalid (Appendix 2). The AIC version of the results is shown in Appendix Figure A12. (B) Mean and s.e.m. of 2·LFLR based on AIC (left) or BIC (right) for factors G, O, D, V, and the OV combination in Experiment 7. Blue bars: only models with the optimal decision rule are included. Yellow bars: all models except those crossed with the Sign and SumX rules are included; we marginalized over decision rule in the same way as we marginalized over the “missing” GODV factors.
Figure 11.
Comparing the optimal with the Min rule in Experiments 9 and 10. (A) Mean and s.e.m. of the difference in AIC (top) and BIC (bottom) between each model and the Full-Opt model in Experiment 9. Blue bar: models with the optimal decision rule. Yellow bar: models with the Min decision rule. (B) Mean and s.e.m. of 2·LFLR based on AIC (left) or BIC (right) for the factors G, O, D, V, and the OV combination in Experiment 9. Blue bars: only models with the optimal decision rule are included. Yellow bars: all models are included; we marginalized over decision rule (Opt/Min) in the same way as we marginalized over the “missing” GODV factors. (C-D) Same as (A-B), but for Experiment 10.
Figure 12.
Comparing models with a Gaussian prior and a boxcar prior over orientation in Experiment 5. (A) Mean and s.e.m. of the difference in AIC (top) and BIC (bottom) between each model and the Full-Gaussian prior model in Experiment 5. Blue bars: models with a Gaussian prior. Yellow bars: models with a boxcar prior. (B) Mean and s.e.m. of 2·LFLR based on AIC (left) or BIC (right) for the factors G, O, D, V, and the OV combination in Experiment 5. Blue bars: models with a Gaussian prior. Yellow bars: models with a boxcar prior.
Figure 13.
Relationship between mean precision and set size, estimated with the Full model, in all experiments with multiple set sizes (mean ± 1 s.e.m.). The effect of set size is significant in all experiments except Experiment 6.

References

    1. Acerbi L, & Ma WJ (2017). Practical Bayesian Optimization for Model Fitting with Bayesian Adaptive Direct Search. In Guyon I, Luxburg UV, Bengio S, Wallach H, Fergus R, Vishwanathan S, & Garnett R (Eds.), Advances in Neural Information Processing Systems 30 (pp. 1834–1844). Curran Associates, Inc. 10.1101/150052 - DOI
    2. Acerbi L, Ma WJ, & Vijayakumar S (2014). A Framework for Testing Identifiability of Bayesian Models of Perception. In Ghahramani Z, Welling M, Cortes C, Lawrence ND, & Weinberger KQ (Eds.), Advances in Neural Information Processing Systems 27 (pp. 1026–1034). Curran Associates, Inc.
    3. Acerbi L, Vijayakumar S, & Wolpert DM (2014). On the Origins of Suboptimality in Human Probabilistic Inference. PLoS Computational Biology, 10(6), e1003661. 10.1371/journal.pcbi.1003661 - DOI - PMC - PubMed
    4. Adams WJ, Graf EW, & Ernst MO (2004). Experience can change the ‘light-from-above’ prior. Nature Neuroscience, 7, 1057. - PubMed
    5. Akaike H (1974). A new look at the statistical model identification. IEEE Transactions on Automatic Control, 19(6), 716–723. 10.1109/TAC.1974.1100705 - DOI
