Discriminative and Geometry-Aware Unsupervised Domain Adaptation

IEEE Trans Cybern. 2020 Sep;50(9):3914-3927. doi: 10.1109/TCYB.2019.2962000. Epub 2020 Jan 17.

Abstract

Domain adaptation (DA) aims to generalize a learning model across training and testing data despite a mismatch in their data distributions. In light of a theoretical estimate of the upper error bound, we argue in this article that an effective DA method for classification should: 1) search for a shared feature subspace where the source and target data are not only aligned in terms of their distributions, as most state-of-the-art DA methods do, but also discriminative, in that instances of different classes are well separated; and 2) account for the geometric structure of the underlying data manifold when inferring data labels on the target domain. In comparison with a baseline DA method that only enforces data distribution alignment between source and target, we derive three DA models for classification, namely, close yet discriminative DA (CDDA), geometry-aware DA (GA-DA), and discriminative and geometry-aware DA (DGA-DA), to highlight the contribution of 1) in CDDA, of 2) in GA-DA, and, finally, of 1) and 2) jointly in DGA-DA. Using both synthetic and real data, we show the effectiveness of the proposed approach, which consistently outperforms state-of-the-art DA methods on 49 image classification DA tasks across eight popular benchmarks. We further carry out an in-depth analysis of the proposed DA method, quantifying the contribution of each term of our DA model, and provide insights into the proposed DA methods by visualizing both real and synthetic data.
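To make the two ingredients concrete, the following is a minimal sketch, not the authors' DGA-DA implementation: a crude first-moment alignment stands in for the shared-subspace idea in 1), and graph-based label propagation over a kNN affinity graph stands in for the geometry-aware label inference in 2). All names (`propagate_labels`, `k`, `alpha`) and parameter choices are illustrative assumptions.

```python
import numpy as np

def propagate_labels(X_src, y_src, X_tgt, k=3, alpha=0.9, n_iter=50):
    """Spread source labels to target samples along a kNN graph so that
    predictions respect the geometric structure of the data manifold.
    (Illustrative sketch; not the paper's DGA-DA model.)"""
    # Crude first-moment alignment as a stand-in for subspace alignment (idea 1).
    X_src = X_src - X_src.mean(axis=0)
    X_tgt = X_tgt - X_tgt.mean(axis=0)
    X = np.vstack([X_src, X_tgt])
    n, n_src = len(X), len(X_src)

    # Gaussian affinities from pairwise squared distances (median bandwidth).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    sigma2 = np.median(d2) + 1e-12
    W = np.exp(-d2 / sigma2)
    np.fill_diagonal(W, 0.0)

    # Keep only the k strongest edges per node, then symmetrize.
    weakest = np.argsort(-W, axis=1)[:, k:]
    for i in range(n):
        W[i, weakest[i]] = 0.0
    W = np.maximum(W, W.T)

    # Normalized graph operator: S = D^{-1/2} W D^{-1/2}.
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(d + 1e-12)
    S = d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]

    # One-hot seed labels on source rows; zeros on target rows.
    classes = np.unique(y_src)
    Y = np.zeros((n, len(classes)))
    for c_idx, c in enumerate(classes):
        Y[:n_src][y_src == c, c_idx] = 1.0

    # Iterative propagation: F <- alpha * S @ F + (1 - alpha) * Y.
    F = Y.copy()
    for _ in range(n_iter):
        F = alpha * (S @ F) + (1 - alpha) * Y
    return classes[np.argmax(F[n_src:], axis=1)]
```

On toy two-cluster data with a shifted target domain, labels diffuse along the manifold from labeled source points to unlabeled target points, which is the intuition behind accounting for geometric structure rather than classifying each target sample independently.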