A rational model of the effects of distributional information on feature learning

Cogn Psychol. 2011 Dec;63(4):173-209. doi: 10.1016/j.cogpsych.2011.08.002. Epub 2011 Sep 20.

Abstract

Most psychological theories treat the features of objects as being fixed and immediately available to observers. However, novel objects have an infinite array of properties that could potentially be encoded as features, raising the question of how people learn which features to use in representing those objects. We focus on the effects of distributional information on feature learning, considering how a rational agent should use statistical information about the properties of objects when identifying features. Inspired by previous behavioral results on human feature learning, we present an ideal observer model based on nonparametric Bayesian statistics. This model balances the idea that objects have potentially infinitely many features with the goal of using a relatively small number of features to represent any finite set of objects. We then explore the predictions of this ideal observer model. In particular, we investigate whether people are sensitive to how parts co-vary over the objects they observe. In a series of four behavioral experiments (three using visual stimuli, one using conceptual stimuli), we demonstrate that people infer different features to represent the same four objects depending on the distribution of parts over the objects they observe. Additionally, in all four experiments, the features people infer have consequences for how they generalize properties to novel objects. We also show that simple models that use the raw sensory data as inputs and standard dimensionality reduction techniques (principal component analysis and independent component analysis) are insufficient to explain our results.
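The nonparametric Bayesian construction the abstract describes, in which objects may have unboundedly many latent features yet any finite set of objects is represented with only a few, is commonly formalized with an Indian Buffet Process prior over binary object-by-feature matrices. The sketch below is illustrative only (function names and parameters are not from the paper): each object first inherits popular existing features and then introduces a Poisson-distributed number of new ones, so the expected number of features grows only logarithmically in the number of objects.

```python
import math
import random

def sample_poisson(rng, lam):
    """Knuth's algorithm for drawing a Poisson(lam) random variate."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def sample_ibp(n_objects, alpha, seed=0):
    """Draw a binary feature-ownership matrix Z from the Indian Buffet Process.

    Rows are objects, columns are latent features. Although the process
    allows infinitely many features in principle, a draw over n objects
    uses roughly alpha * (1 + 1/2 + ... + 1/n) of them in expectation.
    """
    rng = random.Random(seed)
    rows = []    # one list of 0/1 entries per object; columns grow as needed
    counts = []  # counts[k] = number of earlier objects owning feature k
    for i in range(1, n_objects + 1):
        row = []
        # Revisit existing features: object i owns feature k
        # with probability counts[k] / i (a rich-get-richer scheme).
        for k, m in enumerate(counts):
            owns = 1 if rng.random() < m / i else 0
            row.append(owns)
            counts[k] += owns
        # Introduce a Poisson(alpha / i) number of brand-new features.
        n_new = sample_poisson(rng, alpha / i)
        row.extend([1] * n_new)
        counts.extend([1] * n_new)
        rows.append(row)
    # Pad earlier rows with zeros for features introduced by later objects.
    k_total = len(counts)
    return [r + [0] * (k_total - len(r)) for r in rows]
```

Under this kind of prior, which features are inferred for a fixed set of objects depends on how parts co-occur across observations, which is the sensitivity the four experiments probe behaviorally.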

Publication types

  • Research Support, U.S. Gov't, Non-P.H.S.

MeSH terms

  • Adult
  • Bayes Theorem
  • Humans
  • Learning*
  • Models, Psychological*
  • Orientation
  • Psychological Theory