Using enriched semantic event chains to model human action prediction based on (minimal) spatial information

PLoS One. 2020 Dec 28;15(12):e0243829. doi: 10.1371/journal.pone.0243829. eCollection 2020.

Abstract

Predicting other people's upcoming actions is key to successful social interactions. Previous studies have started to disentangle the various sources of information that action observers exploit, including objects, movements, contextual cues and features of the acting person's identity. Here we focus on the role of static and dynamic inter-object spatial relations that change during an action. We designed a virtual reality setup and tested recognition speed for ten different manipulation actions. Importantly, all objects were abstracted by emulating them with cubes, so that participants could not infer an action from object identity. Instead, participants had to rely only on the limited information carried by the changes in the spatial relations between the cubes. In spite of these constraints, participants were able to predict actions after observing, on average, less than 64% of an action's duration. Furthermore, we employed a computational model, the so-called enriched Semantic Event Chain (eSEC), which incorporates three types of spatial relations: (a) objects' touching/untouching, (b) static spatial relations between objects and (c) dynamic spatial relations between objects during an action. Assuming the eSEC as an underlying model, we show, using an information-theoretic analysis, that humans mostly rely on a mixed-cue strategy when predicting actions. Machine-based action prediction can reach faster decisions based on individual cues. We argue that the human strategy, though slower, may be particularly beneficial for predicting natural and more complex actions with more variable or partial sources of information. Our findings contribute to the understanding of how individuals are able to infer the goals of observed actions even before these goals are fully accomplished, and may open new avenues for building robots capable of conflict-free human-robot cooperation.
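As a rough illustration of the kind of representation the abstract describes, the sketch below is an assumption rather than the published eSEC implementation: it encodes, for each pair of abstracted objects, a touching/untouching flag, a static spatial relation, and a dynamic spatial relation, and keeps a new column only when some relation changes, yielding an event-chain-like sequence. The relation labels and object names are hypothetical placeholders.

```python
from dataclasses import dataclass

# Hypothetical relation labels; the published eSEC uses its own, richer vocabulary.
TOUCH = ("T", "N")                                   # touching / not touching
STATIC = ("above", "below", "around", "inside", "none")
DYNAMIC = ("together", "apart", "closer", "stable", "none")

@dataclass(frozen=True)
class PairRelations:
    touch: str      # element of TOUCH
    static: str     # element of STATIC
    dynamic: str    # element of DYNAMIC

def build_event_chain(frames):
    """Collapse a per-frame stream of pairwise relations into an event chain.

    `frames` is a list of dicts mapping an (object_i, object_j) pair to a
    PairRelations triple. A new column is appended only when the relations
    of at least one pair change, so the chain abstracts away exact timing
    and keeps only the sequence of qualitative events.
    """
    chain = []
    for frame in frames:
        if not chain or frame != chain[-1]:
            chain.append(frame)
    return chain

# Toy example: a "hand" cube approaches and then grasps an "object" cube.
frames = [
    {("hand", "object"): PairRelations("N", "none", "closer")},
    {("hand", "object"): PairRelations("N", "none", "closer")},      # no change -> no new column
    {("hand", "object"): PairRelations("T", "around", "stable")},    # touching event -> new column
    {("hand", "object"): PairRelations("T", "around", "together")},  # objects now move together
]

for i, column in enumerate(build_event_chain(frames)):
    print(f"column {i}: {column}")
```

In this reading, prediction speed corresponds to how early in the column sequence an observed action becomes distinguishable from the alternatives; the paper's information-theoretic analysis quantifies this over the actual eSEC descriptors rather than over this simplified sketch.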

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Adult
  • Computer Simulation*
  • Female
  • Human Activities*
  • Humans
  • Male
  • Models, Biological*
  • Semantics*
  • Space Perception*
  • Virtual Reality
  • Young Adult

Grants and funding

The research leading to these results has received funding from the German Research Foundation (DFG) grants WO388/13-1 and SCHU1439/8-1, as well as the European Community's H2020 Programme (Future and Emerging Technologies, FET) under grant agreement no. 732266, Plan4Act.