Language-processing accounts are beginning to accommodate different visual context effects, but they remain underspecified regarding differences between cues, both during sentence comprehension and during subsequent recall. We monitored participants' eye movements to mentioned characters while they listened to transitive sentences. We varied whether speaker gaze, a depicted action, neither, or both of these visual cues were available, as well as whether both cues were deictic (Experiment 1) or only speaker gaze was (Experiment 2). Speaker gaze affected eye movements during comprehension as early as a single deictic action depiction did, but significantly earlier than non-deictic action depictions; conversely, depicted actions, but not speaker gaze, improved later recall of sentence content. Thus, cue type and cue-language relations must be accommodated in characterising real-time situated language comprehension and subsequent recall of sentence content.
Keywords: Anticipatory eye movements; Gaze cueing; Situated language processing; Spoken sentence comprehension; Visual context; Visual-world paradigm.
Copyright © 2018 Elsevier B.V. All rights reserved.