Machine learning (ML) methods have become the state of the art in radar signal processing, particularly for classification tasks such as distinguishing different human activities. Radar classification can be tedious to implement, however, due to the limited size and diversity of the source dataset, i.e., the data measured once for the initial training of the ML algorithms. In this work, we introduce Radar Activity Classification with Perceptual Image Transformation (RACPIT), an algorithm that increases the accuracy of human activity classification while lowering the dependency on limited source data. To this end, we focus on augmenting the dataset with synthetic data. We use a human radar reflection model based on the captured motion of the test subjects performing activities in the source dataset, which we recorded with a video camera. Because the synthetic data generated by this model still deviate too much from the original radar data, we implement an image transformation network that brings the real data closer to their synthetic counterparts. We leverage these artificially generated data to train a convolutional neural network for activity classification. We found that our approach increases classification accuracy by up to 20% without the need to collect additional real data.
Keywords: deep learning; domain shift; human activity classification; image transformation; machine learning; radar.