Learning Robust Discriminant Subspace Based on Joint L₂,ₚ- and L₂,ₛ-Norm Distance Metrics

IEEE Trans Neural Netw Learn Syst. 2022 Jan;33(1):130-144. doi: 10.1109/TNNLS.2020.3027588. Epub 2022 Jan 5.

Abstract

Recently, many works on discriminant analysis have promoted the robustness of models against outliers by adopting the L1- or L2,1-norm as the distance metric. However, both their robustness and discriminative power remain limited. In this article, we present a new robust discriminant subspace (RDS) learning method for feature extraction, with an objective function formulated in a different form. To guarantee that the subspace is both robust and discriminative, we measure the within-class distances with the L2,p-norm and the between-class distances with the L2,s-norm. This also endows our method with rotational invariance. Since the proposed model involves both L2,s-norm maximization and L2,p-norm minimization, it is very challenging to solve. To address this problem, we present an efficient nongreedy iterative algorithm. Moreover, motivated by the trace ratio criterion, we derive a mechanism that automatically balances the contributions of the different terms in the objective. RDS is very flexible, as it can be extended to other existing feature extraction techniques. An in-depth theoretical analysis of the algorithm's convergence is also presented. Experiments on several typical image classification databases yield promising results that demonstrate the effectiveness of RDS.
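To make the joint-norm formulation concrete, the sketch below gives one plausible trace-ratio-style objective consistent with the description above; the notation (projection W, samples x_i with class means m_{c_i}, class means m_j of size n_j, global mean m) is assumed for illustration and is not taken from the paper, whose exact formulation may differ.

```latex
% Hedged sketch of a joint L_{2,p}/L_{2,s} discriminant criterion (assumed notation).
%
% Within-class distances measured with the L_{2,p}-norm (to be minimized):
%   \sum_{i=1}^{n} \left\| W^{\top}\!\left(x_i - m_{c_i}\right) \right\|_2^{\,p}
%
% Between-class distances measured with the L_{2,s}-norm (to be maximized):
%   \sum_{j=1}^{c} n_j \left\| W^{\top}\!\left(m_j - m\right) \right\|_2^{\,s}
%
% A trace-ratio-style criterion that balances the two terms automatically:
\max_{W^{\top}W = I} \;
\frac{\sum_{j=1}^{c} n_j \left\| W^{\top}\!\left(m_j - m\right) \right\|_2^{\,s}}
     {\sum_{i=1}^{n} \left\| W^{\top}\!\left(x_i - m_{c_i}\right) \right\|_2^{\,p}}
```

Because both sums apply the Euclidean norm to projected difference vectors before raising it to a power, rotating the data (or the subspace basis) leaves the criterion unchanged, which is the rotational invariance mentioned in the abstract.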