Real-Time 3D Object Detection on Crowded Pedestrians

Sensors (Basel). 2023 Oct 26;23(21):8725. doi: 10.3390/s23218725.

Abstract

In the field of autonomous driving, object detection on point clouds is indispensable for environmental perception. To reduce blind spots in perception, many autonomous driving schemes add low-cost blind-filling LiDARs on the sides of the vehicle. Unlike the high-performance LiDARs used in most point cloud detection work, these blind-filling LiDARs have low vertical angular resolution and are mounted on the side of the vehicle, so the point clouds of pedestrians standing close to one another easily blend together. These characteristics are detrimental to target detection. Most current research focuses on target detection with high-density LiDAR; such methods cannot cope effectively with highly sparse point clouds, and their recall and detection accuracy on crowded pedestrian targets tend to be low. To overcome these problems, we propose a real-time detection model for crowded pedestrian targets, named RTCP. To improve computational efficiency, we use an attention-based point sampling method to reduce the redundancy of the point cloud, and we then obtain new feature tensors by quantizing the point cloud space and fusing neighborhoods in polar coordinates. To make it easier for the model to focus on the center of each target, we propose an object alignment attention (OAA) module for position alignment, and we use an additional branch that predicts a heatmap of occupied target locations to guide the training of the OAA module. These methods improve the model's robustness to occlusion among crowded pedestrian targets. Finally, we evaluate the detector on KITTI, JRDB, and our own blind-filling LiDAR dataset; our algorithm achieves the best trade-off between detection accuracy and runtime efficiency.

Keywords: attention; center alignment; heatmap; point sampling.
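
The sketch below illustrates the kind of attention-based point sampling described in the abstract, written as a minimal PyTorch module; the scoring MLP, feature dimensions, and fixed top-k budget are illustrative assumptions rather than the exact RTCP design.

```python
# Minimal sketch of attention-based point sampling, assuming a PyTorch setting.
# The scoring MLP, input feature layout, and "keep top-k points" rule are
# illustrative assumptions, not the published RTCP architecture.
import torch
import torch.nn as nn

class AttentionPointSampler(nn.Module):
    """Scores each point with a small MLP and keeps the top-k points,
    reducing point cloud redundancy before quantization/feature extraction."""

    def __init__(self, in_dim: int = 4, hidden: int = 32, k: int = 4096):
        super().__init__()
        self.k = k
        self.score_net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, 1),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (N, in_dim) raw LiDAR points, e.g. (x, y, z, intensity)
        scores = self.score_net(points).squeeze(-1)      # (N,) per-point logits
        weights = torch.sigmoid(scores)                  # attention weights in [0, 1]
        k = min(self.k, points.shape[0])
        idx = torch.topk(weights, k).indices             # indices of the most informative points
        # re-weight the kept points so the sampling step stays differentiable
        return points[idx] * weights[idx].unsqueeze(-1)

# Usage: reduce a sparse side-LiDAR sweep to a fixed budget of points.
sampler = AttentionPointSampler(in_dim=4, k=4096)
cloud = torch.rand(20000, 4)                             # dummy point cloud
kept = sampler(cloud)                                    # (4096, 4)
```

In a design like this, the fixed point budget bounds the cost of the later quantization and polar-coordinate neighborhood-fusion stages, which is what makes real-time operation on sparse, side-mounted LiDAR plausible.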