Incomplete-Data Oriented Multiview Dimension Reduction via Sparse Low-Rank Representation

IEEE Trans Neural Netw Learn Syst. 2018 Dec;29(12):6276-6291. doi: 10.1109/TNNLS.2018.2828699. Epub 2018 May 17.

Abstract

For dimension reduction on multiview data, most previous studies implicitly assume that all samples are complete in every view. Nevertheless, this assumption is often violated in real applications due to the presence of noise, limited access to data, equipment malfunction, and so on. Most previous methods cease to work when missing values occur in one or more views; thus, incomplete-data oriented dimension reduction becomes an important issue. To this end, we mathematically formulate the above issue as sparse low-rank representation through multiview subspace (SRRS) learning to impute missing values, by jointly measuring intra-view relations (via sparse low-rank representation) and inter-view relations (through common subspace representation). Moreover, by exploiting various subspace priors in the proposed SRRS formulation, we develop three novel dimension reduction methods for incomplete multiview data: 1) multiview subspace learning via graph embedding; 2) multiview subspace learning via structured sparsity; and 3) sparse multiview feature selection via rank minimization. For each of them, the objective function and the algorithm for solving the resulting optimization problem are elaborated, respectively. We perform extensive experiments to investigate their performance on three types of tasks: data recovery, clustering, and classification. Two toy examples (i.e., Swiss roll and S-curve) and four real-world data sets (i.e., face images, multisource news, multicamera activity, and multimodality neuroimaging data) are systematically tested. As demonstrated, our methods achieve performance superior to that of comparable state-of-the-art methods. The results also clearly show the advantage of integrating sparsity and low-rankness over using either of them separately.
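To make the joint intra-view/inter-view modeling concrete, a schematic objective of this kind of formulation is sketched below. This is a minimal illustrative sketch under assumed notation, not the paper's exact SRRS objective: X_v denotes the data matrix of view v (with unknown entries to be imputed), Z_v a view-specific self-representation, E_v a sample-specific error term, P_v a view-specific projection, Y the shared low-dimensional representation, Omega_v the index set of observed entries, and the lambdas trade-off parameters.

\[
\begin{aligned}
\min_{\{X_v, Z_v, E_v, P_v\},\, Y}\ & \sum_{v=1}^{V} \Big( \|Z_v\|_{*} + \lambda_1 \|Z_v\|_{1} + \lambda_2 \|E_v\|_{2,1} + \lambda_3 \|Y - P_v^{\top} X_v\|_F^2 \Big) \\
\text{s.t.}\ & X_v = X_v Z_v + E_v, \qquad \mathcal{P}_{\Omega_v}(X_v) = \mathcal{P}_{\Omega_v}(\hat{X}_v), \qquad v = 1,\dots,V,
\end{aligned}
\]

where \(\hat{X}_v\) holds the observed entries of view v and \(\mathcal{P}_{\Omega_v}\) is the sampling operator that keeps entries indexed by \(\Omega_v\). The nuclear norm and the l1 norm on Z_v jointly encode the low-rank and sparse intra-view structure, while the last term ties all views to a common subspace representation Y; additional subspace priors (graph embedding, structured sparsity, or rank minimization) would be imposed on Y or P_v to obtain the three methods named in the abstract.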

Publication types

  • Research Support, Non-U.S. Gov't