A Comparison of Bottom-Up Models for Spatial Saliency Predictions in Autonomous Driving

Sensors (Basel). 2021 Oct 14;21(20):6825. doi: 10.3390/s21206825.


Bottom-up saliency models identify the salient regions of an image based on features such as color, intensity, and orientation. These models are typically used as predictors of human visual behavior and for computer vision tasks. In this paper, we conduct a systematic evaluation of the saliency maps computed with four selected bottom-up models on images of urban and highway traffic scenes. Saliency is investigated both over whole images and at the object level, and is characterized in terms of the energy and the entropy of the saliency maps. We identify significant differences with respect to the amount, size, and shape complexity of the salient areas computed by the different models. Based on these findings, we analyze the likelihood that object instances fall within the salient areas of an image and investigate the agreement between the segments of traffic participants and the saliency maps of the different models. The overall and object-level analyses provide insights into the distinctive features of the salient areas identified by each model, which can serve as selection criteria for prospective applications in autonomous driving such as object detection and tracking.
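The abstract's map-level measures (energy and entropy of a saliency map) and its object-level measure (how much of an object's segment falls within the salient area) can be sketched as follows. This is a minimal illustration under assumed definitions: the map is treated as a probability distribution for entropy, energy is taken as the sum of squared normalized values, and the salient area is obtained by thresholding at a fraction of the map's maximum. The paper may define and normalize these quantities differently.

```python
import numpy as np

def saliency_energy(smap):
    """Energy of a saliency map: sum of squared values after
    normalizing the map to sum to 1 (assumed definition)."""
    s = smap / (smap.sum() + 1e-12)
    return float(np.sum(s ** 2))

def saliency_entropy(smap, eps=1e-12):
    """Shannon entropy (bits) of the saliency map treated as a
    probability distribution over pixels (assumed definition)."""
    p = smap / (smap.sum() + eps)
    p = p[p > eps]
    return float(-np.sum(p * np.log2(p)))

def object_saliency_overlap(smap, mask, thresh=0.5):
    """Fraction of an object's segmentation mask covered by the
    salient area, with the salient area obtained by thresholding
    at `thresh` times the map maximum (assumed thresholding rule)."""
    salient = smap >= thresh * smap.max()
    return float(np.logical_and(salient, mask).sum() / max(mask.sum(), 1))
```

A uniform map has maximal entropy (log2 of the pixel count) and minimal energy, while a map concentrated on few pixels behaves the opposite way, which is why the two measures together characterize the amount and spread of the salient areas.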

Keywords: autonomous driving; bottom-up saliency models; perception; saliency detection; saliency maps; visual salience.

MeSH terms

  • Algorithms*
  • Automobile Driving*
  • Humans