Delving into Generalizable Label Distribution Learning.
Journal:
IEEE Transactions on Pattern Analysis and Machine Intelligence
Published Date:
Jun 19, 2025
Abstract
Owing to its excellent capability in handling label ambiguity, Label Distribution Learning (LDL), an emerging machine learning paradigm, has been extensively studied in recent years. Although remarkable progress has been achieved on various tasks, existing LDL methods share one limitation: they all rest on the i.i.d. assumption that training and test data are independently and identically distributed. As a result, they suffer obvious performance degradation and become inapplicable in out-of-distribution scenarios, which severely limits the application of LDL to many tasks. In this paper, we identify and investigate the Generalizable Label Distribution Learning (GLDL) problem. To handle this challenging problem, we delve into the characteristics of GLDL and find that label annotations varying across domains are the underlying cause of the performance degradation of existing methods. Inspired by this observation, we exploit domain-invariant feature-label correlation information to reduce the impact of domain-dependent label annotations and propose two practical methods. Extensive experiments verify the superior performance of the proposed methods. Our work fills the gap in benchmarks and techniques for practical GLDL problems.
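To make the LDL setting concrete: unlike single-label classification, each instance is annotated with a full distribution over labels, and a model is trained to match that distribution, typically by minimizing the KL divergence between the target and predicted distributions. The following is a minimal illustrative sketch of this standard setup (a linear model on synthetic data; none of it is the paper's proposed GLDL method, whose details are not given in the abstract):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kl_divergence(p, q, eps=1e-12):
    # KL(p || q): how far the predicted distribution q is from the target p.
    return np.sum(p * np.log((p + eps) / (q + eps)), axis=-1)

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 4))          # 8 synthetic samples, 4 features
D = rng.dirichlet(np.ones(3), 8)     # target label distributions over 3 labels
W = np.zeros((4, 3))                 # linear model parameters

for _ in range(200):                 # plain gradient descent on mean KL
    Q = softmax(X @ W)
    grad = X.T @ (Q - D) / len(X)    # gradient of mean KL w.r.t. W (softmax case)
    W -= 0.5 * grad

print(kl_divergence(D, softmax(X @ W)).mean())  # mean KL after training
```

The GLDL problem described in the abstract arises when the targets `D` are collected under one domain but the model is evaluated on another, where the annotation distributions themselves shift.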