Delving into Generalizable Label Distribution Learning

Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Published Date:

Abstract

Owing to its excellent capability in handling label ambiguity, Label Distribution Learning (LDL) has emerged as a machine learning paradigm that has attracted extensive research in recent years. Although remarkable progress has been achieved on various tasks, existing LDL methods share one limitation: they all rely on the i.i.d. assumption that training and test data are independently and identically distributed. As a result, they suffer obvious performance degradation and are no longer applicable when tested in out-of-distribution scenarios, which severely limits the application of LDL to many tasks. In this paper, we identify and investigate the Generalizable Label Distribution Learning (GLDL) problem. To handle this challenging problem, we delve into the characteristics of GLDL and find that label annotations varying across domains is the underlying cause of the performance degradation of existing methods. Inspired by this observation, we exploit domain-invariant feature-label correlation information to reduce the impact of domain-dependent label annotations, and we propose two practical methods. Extensive experiments verify the superior performance of the proposed methods. Our work fills the gap in benchmarks and techniques for practical GLDL problems.
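For readers unfamiliar with the paradigm, the core idea of LDL referenced in the abstract is that each instance is annotated with a full distribution over labels rather than a single label, and a model is trained to match that distribution. The following is a minimal illustrative sketch (not the paper's method); the function names, toy numbers, and use of KL divergence as the fitting criterion are common LDL conventions assumed here for illustration only.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between true label distribution p and prediction q."""
    return np.sum(p * np.log((p + eps) / (q + eps)), axis=-1)

# Toy example: one instance, three labels with graded degrees of relevance
# (the "label ambiguity" that LDL models explicitly).
true_dist = np.array([0.6, 0.3, 0.1])   # ground-truth label distribution
logits = np.array([2.0, 1.0, 0.0])      # hypothetical raw model scores
pred_dist = softmax(logits)

# Training would minimize this divergence; here we just evaluate it once.
loss = kl_divergence(true_dist, pred_dist)
```

Under the i.i.d. assumption criticized in the abstract, minimizing such a loss on training data transfers to test data; the GLDL setting studied here is precisely the case where the annotated distributions themselves shift across domains, so this transfer breaks down.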

Authors

  • Xingyu Zhao
    University of Science and Technology of China, Hefei, China; Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China.
  • Lei Qi
  • Yuexuan An
  • Xin Geng
    BGI-Shenzhen, Shenzhen, 518083, China.
