Large language models (LLMs) show intriguing emergent behaviors, yet they receive around four or five orders of magnitude more language data than human children. What accounts for this vast difference in sample efficiency? Candidate explanations include children's pre-existing conceptual knowledge, their use of multimodal grounding, and the interactive, social nature of their input.
Keywords: artificial intelligence; human learning; language learning; large language models.