Progressive Training for Learning From Label Proportions.
Journal:
IEEE Transactions on Neural Networks and Learning Systems
Published Date:
Jul 23, 2025
Abstract
Learning from label proportions (LLP), which aims to learn an instance-level classifier from proportion-based grouped training data, has garnered increasing attention in machine learning. Existing deep learning-based LLP methods employ end-to-end pipelines that derive proportional loss functions from the Kullback-Leibler (KL) divergence between bag-level prior and posterior class distributions. However, the optimal solutions of these methods often fail to conform to the given proportions, inevitably degrading the final classification performance. In this article, we address this issue by proposing a novel progressive training method for LLP, termed PT-LLP, which strives to satisfy the proportion constraints from the bag level down to the instance level. Specifically, we first train a model using existing KL-divergence-based LLP methods, which are consistent with bag-level proportion information. We then impose strict proportion-consistency constraints on the classifier to move it toward a more ideal solution, reformulating this step as a constrained optimization problem that can be efficiently solved with optimal transport (OT) algorithms. In particular, knowledge distillation is employed as a transition stage that transfers bag-level information to the instance level via a teacher-student framework. Finally, our framework is model-agnostic and, as demonstrated by extensive experiments on different datasets, yields significant performance improvements when other deep LLP methods are incorporated as its first training stage.
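To make the two ingredients of the abstract concrete, the sketch below illustrates (i) a KL-divergence proportion loss between a bag's prior class proportions and the posterior implied by averaged instance predictions, and (ii) a Sinkhorn-style projection that enforces exact proportion consistency, one standard way to solve the OT-constrained step. This is a minimal illustration written for this summary, not the authors' released code; all function names, shapes, and hyperparameters (e.g., `n_iter`) are assumptions.

```python
import torch
import torch.nn.functional as F


def bag_kl_proportion_loss(logits: torch.Tensor, proportions: torch.Tensor) -> torch.Tensor:
    """KL divergence between a bag's prior class proportions and the
    posterior proportions obtained by averaging instance predictions.
    logits: (n_instances, n_classes); proportions: (n_classes,)."""
    posterior = F.softmax(logits, dim=1).mean(dim=0)  # bag-level posterior
    eps = 1e-8
    return torch.sum(proportions * (torch.log(proportions + eps)
                                    - torch.log(posterior + eps)))


def sinkhorn_proportion_projection(probs: torch.Tensor, proportions: torch.Tensor,
                                   n_iter: int = 50, eps: float = 1e-8) -> torch.Tensor:
    """Project instance predictions onto assignments whose per-class mass
    matches the bag proportions exactly, via alternating row/column
    normalization (a Sinkhorn-style solver for the OT-constrained step)."""
    n = probs.shape[0]
    q = probs.clone()
    target_cols = proportions * n  # required total mass per class
    for _ in range(n_iter):
        q = q * (target_cols / (q.sum(dim=0) + eps))  # match class marginals
        q = q / (q.sum(dim=1, keepdim=True) + eps)    # keep rows as distributions
    return q


# Illustrative usage for a single bag of 16 instances over 3 classes.
logits = torch.randn(16, 3)
proportions = torch.tensor([0.5, 0.25, 0.25])

stage1_loss = bag_kl_proportion_loss(logits, proportions)

# Proportion-consistent targets, usable as soft labels in a
# teacher-student distillation stage.
targets = sinkhorn_proportion_projection(F.softmax(logits, dim=1), proportions)
kd_loss = F.kl_div(F.log_softmax(logits, dim=1), targets, reduction="batchmean")
```

Under this reading, the projected assignments serve as instance-level soft targets, so the distillation stage transfers the bag-level proportion information to individual instances, matching the progressive bag-to-instance training described in the abstract.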