Discriminative multi-source adaptation multi-feature co-regression for visual classification.

Journal: Neural networks : the official journal of the International Neural Network Society

Abstract

Learning an effective visual classifier from a few labeled samples is a challenging problem, which has motivated the multi-source adaptation scheme in machine learning. While the advantages of multi-source adaptation are widely recognized, extant methods still suffer from three major limitations. Firstly, how to effectively select discriminative sources remains an unresolved issue. Secondly, the multiple different visual features available cannot be effectively exploited to represent a target object and boost adaptation performance. Last but not least, existing methods mainly focus on either visual understanding or feature learning in isolation, which may lead to the so-called semantic gap between low-level features and high-level semantics. To overcome these defects, we propose a novel Multi-source Adaptation Multi-Feature (MAMF) co-regression framework that jointly explores multi-feature co-regression, the learning of multiple latent spaces, and discriminative source selection. Concretely, MAMF couples multi-feature representation co-regression with feature learning by simultaneously uncovering multiple optimal latent spaces and accounting for the correlations among the multiple feature representations. Moreover, to leverage multi-source knowledge discriminatively for each target feature representation, MAMF automatically selects the discriminative source models trained on the source datasets by formulating the selection as a row-sparsity pursuit problem. Unlike state-of-the-art methods, our method can adapt knowledge from multiple sources even when the feature sets of each source and the target differ but partially overlap. Experimental results on three challenging visual domain adaptation tasks consistently demonstrate the superiority of our method over related state-of-the-art approaches.
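
Since the abstract names row-sparsity pursuit without giving the objective, the following is a minimal, hypothetical Python sketch of that general idea only: predictions from pre-trained source models are combined through a weight matrix regularized by an l2,1-norm, whose proximal operator zeroes out entire rows so that uninformative sources are dropped. All names, shapes, and the least-squares loss here are illustrative assumptions, not the authors' actual MAMF formulation.

    import numpy as np

    def prox_l21(W, t):
        """Row-wise soft thresholding: proximal operator of t * ||W||_{2,1}."""
        norms = np.linalg.norm(W, axis=1, keepdims=True)
        scale = np.maximum(0.0, 1.0 - t / np.maximum(norms, 1e-12))
        return W * scale

    def select_sources(P, Y, lam=0.5, lr=1e-3, iters=500):
        """Row-sparse weighting of source models (illustrative, not MAMF itself).

        P : (m, n, c) predictions of m source models on n target samples
        Y : (n, c) target labels (e.g., one-hot)
        Returns W : (m, c); an all-zero row means that source is deselected.
        """
        m, n, c = P.shape
        W = np.zeros((m, c))
        for _ in range(iters):
            F = np.einsum('kij,kj->ij', P, W)         # combined prediction (n, c)
            grad = np.einsum('kij,ij->kj', P, F - Y)  # gradient of 0.5*||F - Y||^2
            W = prox_l21(W - lr * grad, lr * lam)     # proximal gradient step
        return W

    # Toy usage: 5 hypothetical source models, 100 target samples, 3 classes.
    rng = np.random.default_rng(0)
    P = rng.random((5, 100, 3))
    Y = np.eye(3)[rng.integers(0, 3, size=100)]
    W = select_sources(P, Y)
    kept = np.flatnonzero(np.linalg.norm(W, axis=1) > 1e-8)
    print("selected sources:", kept)

The row-wise (rather than entry-wise) penalty is what makes this a source-selection mechanism: a source is either retained for all classes or discarded entirely, which matches the abstract's description of discriminatively choosing among source models.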

Authors

  • Jianwen Tao
    School of Information Science and Engineering, NIT, Zhejiang University, Ningbo 315100, China. Electronic address: jianwen_tao@aliyun.com.
  • Wei Dai
    Department of Intensive Care Unit, The First Affiliated Hospital of Jiangxi Medical College, Shangrao, Jiangxi, China.