Spatial constraints on learning in visual search: modeling contextual cuing

J Exp Psychol Hum Percept Perform. 2007 Aug;33(4):798-815. doi: 10.1037/0096-1523.33.4.798.

Abstract

Predictive visual context facilitates visual search, a benefit termed contextual cuing (M. M. Chun & Y. Jiang, 1998). In the original task, search arrays were repeated across blocks such that the spatial configuration (context) of all of the distractors in a display predicted an embedded target location. The authors modeled existing results using a connectionist architecture and then designed new behavioral experiments to test the model's assumptions. The modeling and behavioral results indicate that learning may be restricted to the local context even when the entire configuration is predictive of target location. Local learning constrains how much guidance contextual cuing can produce. The modeling and new data also demonstrate that local learning requires the local context to maintain its location within the overall global configuration.
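The paradigm and the local-learning claim can be illustrated with a toy simulation. This is a minimal sketch, not the authors' connectionist model: the grid size, the number of distractors, the neighborhood radius, and the use of a simple associative lookup are all assumptions made for illustration. The idea shown is that an observer who stores only the local context (distractors near the target) can still retrieve the target location when a repeated display recurs.

```python
import random

random.seed(0)
GRID = 8  # hypothetical 8x8 search grid (assumption, not from the paper)

def make_display(n_distractors=11):
    """Sample one target plus a distractor configuration on the grid."""
    cells = random.sample([(r, c) for r in range(GRID) for c in range(GRID)],
                          n_distractors + 1)
    return cells[0], frozenset(cells[1:])  # (target, distractor configuration)

def local_context(target, distractors, radius=2):
    """Only the distractors within `radius` cells of the target
    (Chebyshev distance) -- the 'local context'."""
    return frozenset(d for d in distractors
                     if max(abs(d[0] - target[0]),
                            abs(d[1] - target[1])) <= radius)

# "Repeated" displays: the same configurations recur across blocks.
repeated = [make_display() for _ in range(12)]

# Minimal associative memory: local context -> learned target location.
# (One exposure suffices in this toy version; real cuing builds up over blocks.)
memory = {}
for target, distractors in repeated:
    memory[local_context(target, distractors)] = target

def predict(target, distractors):
    """Retrieve a target location cued by the local context, if learned."""
    return memory.get(local_context(target, distractors))
```

Because the memory keys on the local context alone, moving that local chunk to a different position in the global display would change the target's coordinates and break retrieval, which is the dependence on global position that the abstract describes.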

MeSH terms

  • Cues*
  • Humans
  • Learning*
  • Models, Psychological
  • Reaction Time
  • Space Perception*
  • Visual Perception*