How humans combine simultaneous proprioceptive and visual position information

Exp Brain Res. 1996 Sep;111(2):253-61. doi: 10.1007/BF00227302.

Abstract

To study how humans combine simultaneously present visual and proprioceptive position information, we had subjects perform a matching task. Seated at a table, they placed their left hand under the table, concealing it from view. They then had to match the proprioceptively perceived position of the left hand using only proprioceptive, only visual, or both proprioceptive and visual information. We analysed the variance of the indicated positions in the various conditions and compared the results with the predictions of a model in which simultaneously present visual and proprioceptive position information about the same object is integrated in the most effective way. The results disagree with the model: the variance in the condition with both visual and proprioceptive information is smaller than expected from the variances in the other conditions. This means that the available information was integrated in a highly effective way. Furthermore, the results suggest that additional information was used. This information might have been visual information about body parts other than the fingertip, or visual information about the environment.
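The "most effective" integration the abstract refers to is commonly formalised as minimum-variance (inverse-variance-weighted) cue combination; assuming the model takes this standard form, its prediction for the bimodal condition can be sketched as below. The variance values are purely illustrative, not data from the study.

```python
# Sketch of minimum-variance cue integration (assumed form of the model;
# the variance values are illustrative, not taken from the study).

def predicted_combined_variance(var_v, var_p):
    """Variance predicted when visual and proprioceptive position cues
    are combined optimally via inverse-variance weighting."""
    return (var_v * var_p) / (var_v + var_p)

var_visual = 4.0    # hypothetical visual position variance (mm^2)
var_proprio = 9.0   # hypothetical proprioceptive position variance (mm^2)

var_combined = predicted_combined_variance(var_visual, var_proprio)

# The optimal combination is never noisier than either cue alone.
assert var_combined <= min(var_visual, var_proprio)
print(round(var_combined, 2))  # prints 2.77
```

Under this model, an observed bimodal variance *below* this prediction, as reported here, implies that sources beyond the two measured cues contributed to the estimate.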

MeSH terms

  • Adult
  • Female
  • Humans
  • Male
  • Models, Neurological
  • Probability
  • Proprioception / physiology*
  • Psychomotor Performance / physiology*
  • Visual Perception / physiology*