Discriminative distribution alignment: A unified framework for heterogeneous domain adaptation

Bibliographic Details
Published in: Pattern Recognition, Vol. 101, p. 107165
Main Authors: Yao, Yuan; Zhang, Yu; Li, Xutao; Ye, Yunming
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.05.2020
ISSN: 0031-3203, 1873-5142
Description
Summary:
Highlights:
• We design a discriminative embedding constraint for the heterogeneous domain adaptation problem, which enhances the discriminative power of the common subspace.
• To the best of our knowledge, we are the first to integrate classifier adaptation, distribution alignment, and discriminative embedding constraints into a unified framework.
• Many loss functions (e.g., the cross-entropy loss or the squared loss) and projection functions (e.g., linear or non-linear projections) can be incorporated into the proposed Discriminative Distribution Alignment (DDA) framework. Two approaches are developed, using the cross-entropy loss and the squared loss, respectively.
• Extensive experimental results are reported on categorization tasks across domains and modalities, demonstrating the effectiveness of the proposed DDA framework.

Abstract:
Heterogeneous domain adaptation (HDA) aims to leverage knowledge from a source domain to help learn an accurate model in a heterogeneous target domain. HDA is exceedingly challenging because the feature spaces of the two domains are distinct. To tackle this issue, we propose a unified learning framework called Discriminative Distribution Alignment (DDA) for deriving a domain-invariant subspace. DDA can simultaneously match the discriminative directions of the domains, align the distributions across domains, and enhance the separability of the data during adaptation. To achieve this, DDA trains an adaptive classifier by both reducing the distribution divergence and enlarging the distances between class centroids. Based on the DDA framework, we further develop two methods by embedding the cross-entropy loss and the squared loss into it, respectively. We conduct experiments on categorization tasks across domains and modalities, and the results clearly demonstrate that DDA outperforms several state-of-the-art models.
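As a rough illustration of the kind of objective the abstract describes, the following is a minimal NumPy sketch of a DDA-style loss under the squared loss: domain-specific projections into a common subspace, a classifier fit there, a simple linear-kernel MMD-style term for distribution alignment, and a term that rewards large distances between class centroids. The function names, the specific alignment surrogate, the centroid-separation term, and the trade-off weights are all illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of a DDA-style objective under squared loss.
# All names, the linear-kernel MMD surrogate, the centroid-separation
# term, and the weights below are illustrative assumptions.
import numpy as np

def dda_objective(Ps, Pt, W, Xs, ys, Xt, yt_pseudo, n_classes,
                  lam_mmd=1.0, lam_disc=0.1, lam_reg=1e-3):
    """Evaluate a DDA-style loss for projections Ps, Pt and classifier W.

    Xs: (ns, ds) source features; Xt: (nt, dt) target features.
    Ps: (ds, k) and Pt: (dt, k) map both domains to a common k-dim subspace.
    yt_pseudo: pseudo-labels for target data (e.g., from a previous iteration).
    """
    Zs, Zt = Xs @ Ps, Xt @ Pt  # embed both domains in the common subspace

    # Classifier adaptation: squared loss against one-hot labels.
    Ys = np.eye(n_classes)[ys]
    Yt = np.eye(n_classes)[yt_pseudo]
    clf_loss = np.sum((Zs @ W - Ys) ** 2) + np.sum((Zt @ W - Yt) ** 2)

    # Distribution alignment: linear-kernel MMD between domain means.
    mmd = np.sum((Zs.mean(axis=0) - Zt.mean(axis=0)) ** 2)

    # Discriminative term: enlarge distances between class centroids
    # (negated so that minimizing the loss pushes centroids apart).
    Z = np.vstack([Zs, Zt])
    y = np.concatenate([ys, yt_pseudo])
    centroids = np.stack([Z[y == c].mean(axis=0) for c in range(n_classes)])
    d2 = np.sum((centroids[:, None, :] - centroids[None, :, :]) ** 2, axis=-1)
    disc = -d2.sum() / (n_classes * (n_classes - 1))

    reg = np.sum(Ps ** 2) + np.sum(Pt ** 2) + np.sum(W ** 2)
    return clf_loss + lam_mmd * mmd + lam_disc * disc + lam_reg * reg

if __name__ == "__main__":
    # Evaluate the objective on random data with random parameters.
    rng = np.random.default_rng(0)
    ns, nt, ds, dt, k, C = 50, 40, 20, 30, 10, 3
    Xs, Xt = rng.normal(size=(ns, ds)), rng.normal(size=(nt, dt))
    ys, yt = rng.integers(0, C, ns), rng.integers(0, C, nt)
    Ps, Pt = rng.normal(size=(ds, k)), rng.normal(size=(dt, k))
    W = rng.normal(size=(k, C))
    print(dda_objective(Ps, Pt, W, Xs, ys, Xt, yt, C))
```

In the paper's framework the projections and classifier would be optimized jointly, with pseudo-labels for the unlabeled target data refined over iterations; this sketch only evaluates a fixed objective for given parameters.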
DOI: 10.1016/j.patcog.2019.107165