Lane-change intention recognition considering oncoming traffic: Novel insights revealed by advances in deep learning

Accid Anal Prev. 2024 Apr;198:107476. doi: 10.1016/j.aap.2024.107476. Epub 2024 Feb 6.

Abstract

Lane-changing (LC) intention recognition models have seen limited real-world application, in part because two-lane two-way road environments remain under-researched. This study constructs a high-fidelity simulated two-lane two-way road environment to develop a Transformer model that accurately recognizes LC intention. We propose a novel LC labelling algorithm combining vehicle dynamics and eye-tracking (VEL) and compare it against traditional time window labelling (TWL). We find that LC recognition accuracy improves further when oncoming-vehicle features are included in the LC dataset. The Transformer achieves state-of-the-art performance, recognizing LC intention 4.59 s in advance with 92.6 % accuracy under the VEL labelling method, outperforming GRU, LSTM, and CNN + LSTM models. To interpret the Transformer's 'black box', we apply the LIME (Local Interpretable Model-agnostic Explanations) method, which reveals that the model focuses on eye-tracking features and on the LC vehicle's interactions with preceding and oncoming traffic during LC events. This research demonstrates that modelling additional road users and driver gaze in LC intention recognition yields significant improvements in model performance and time-to-collision warning capability on two-lane two-way roads.
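To illustrate the kind of model the abstract describes, the following is a minimal, hypothetical sketch (not the authors' released code) of a Transformer-encoder classifier over short multivariate time-series windows. It assumes each window stacks ego-vehicle dynamics, eye-tracking features, and gap/relative-speed features for the preceding and oncoming vehicles; the feature count, window length, and class set are illustrative assumptions.

```python
# Hypothetical sketch of a Transformer-based LC intention classifier.
# Feature dimensions, window length, and class labels are assumptions,
# not values taken from the paper.
import torch
import torch.nn as nn

N_FEATURES = 12   # assumed: dynamics + gaze + preceding/oncoming interaction features
SEQ_LEN = 50      # assumed: e.g. a 5 s window sampled at 10 Hz
N_CLASSES = 3     # assumed: lane keep, left LC, right LC

class LCIntentionTransformer(nn.Module):
    def __init__(self, d_model=64, n_heads=4, n_layers=2, dropout=0.1):
        super().__init__()
        self.input_proj = nn.Linear(N_FEATURES, d_model)                  # per-timestep embedding
        self.pos_embed = nn.Parameter(torch.zeros(1, SEQ_LEN, d_model))   # learned positional encoding
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, dim_feedforward=128,
            dropout=dropout, batch_first=True)                            # multi-head self-attention
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, N_CLASSES)                         # intention logits

    def forward(self, x):                  # x: (batch, SEQ_LEN, N_FEATURES)
        h = self.input_proj(x) + self.pos_embed
        h = self.encoder(h)                # attend across the whole observation window
        return self.head(h.mean(dim=1))    # mean-pool over time, then classify

# Toy forward pass on random data, just to show the expected shapes.
model = LCIntentionTransformer()
logits = model(torch.randn(8, SEQ_LEN, N_FEATURES))
print(logits.shape)  # torch.Size([8, 3])
```

A model of this shape can then be wrapped in a prediction function and handed to a post-hoc explainer such as LIME to estimate how much the gaze and oncoming-traffic features contribute to each predicted intention, which is the interpretation step the abstract reports.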

Keywords: Eye-tracking features; LIME; Lane-change intention recognition; Multi-head attention mechanism; Transformer.

MeSH terms

  • Accidents, Traffic
  • Algorithms
  • Automobile Driving*
  • Deep Learning*
  • Humans
  • Intention