When regularization gets it wrong: children over-simplify language input only in production

J Child Lang. 2018 Sep;45(5):1054-1072. doi: 10.1017/S0305000918000041. Epub 2018 Feb 21.

Abstract

Children tend to regularize their productions when exposed to artificial languages, an advantageous response to unpredictable variation. But generalizations in natural languages are typically conditioned by factors that children ultimately learn. In two experiments, adult and six-year-old learners witnessed two novel classifiers, probabilistically conditioned by semantics. Whereas adults displayed high accuracy in their productions (applying the semantic criteria to familiar and novel items), children were oblivious to the semantic conditioning. Instead, children regularized their productions, over-relying on only one classifier. However, in a two-alternative forced-choice task, children's performance revealed greater respect for the system's complexity: they selected both classifiers equally, without bias toward one or the other, and displayed better accuracy on familiar items. Given that natural languages are conditioned by multiple factors that children successfully learn, we suggest that their tendency to simplify in production stems from retrieval difficulty when a complex system has not yet been fully learned.

Keywords: generalization; language acquisition; probability boosting.

Publication types

  • Research Support, N.I.H., Extramural
  • Research Support, Non-U.S. Gov't

MeSH terms

  • Adult
  • Child
  • Child Language*
  • Child, Preschool
  • Female
  • Humans
  • Language
  • Language Development*
  • Learning*
  • Male
  • Semantics*