A Survey on Confidence Calibration of Deep Learning-Based Classification Models Under Class Imbalance Data
Journal:
IEEE Transactions on Neural Networks and Learning Systems
Published Date:
Jun 18, 2025
Abstract
Confidence calibration in classification models is a vital technique for accurately estimating the posterior probabilities of predicted results, which is crucial for assessing the likelihood of correct decisions in real-world applications. Class imbalance data, which biases the model's learning and subsequently skews the predicted posterior probabilities, makes confidence calibration more challenging. For underrepresented classes in particular, which are often more important and tend to carry higher uncertainty, confidence calibration is both more complex and more essential. Unlike previous surveys that typically investigate confidence calibration or class imbalance separately, this article comprehensively investigates confidence calibration methods for deep learning-based classification models under class imbalance. First, the problem of confidence calibration under class imbalance data is outlined. Second, this article examines the impact of class imbalance data on confidence calibration from a theoretical perspective, offering explanations for empirical findings in existing studies. Third, this article reviews 60 state-of-the-art confidence calibration methods under class imbalance data, divides these methods into six groups according to their methodological differences, and systematically compares them across seven properties to evaluate their relative strengths. Then, commonly used and emerging evaluation methodologies are summarized, including public datasets and evaluation metrics. Subsequently, this article presents comparative experiments to provide practical guidelines and insights for readers. Finally, we discuss several application fields and promising research directions that can serve as a guideline for future studies.
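To make the calibration evaluation concrete, the sketch below illustrates the Expected Calibration Error (ECE), a widely used metric in the calibration literature, together with a per-class variant that highlights miscalibration on underrepresented classes. This is an illustrative example of a standard metric, not code from the surveyed article; the function names, binning scheme, and per-class breakdown are assumptions for demonstration.

```python
# Illustrative sketch (not from the article): binned Expected Calibration Error (ECE)
# and a per-class variant. Under class imbalance, per-class ECE exposes gaps between
# confidence and accuracy on minority classes that the overall ECE can hide.
import numpy as np


def expected_calibration_error(confidences, predictions, labels, n_bins=15):
    """Standard binned ECE: weighted gap between mean confidence and accuracy per bin."""
    confidences = np.asarray(confidences, dtype=float)
    predictions = np.asarray(predictions)
    labels = np.asarray(labels)
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if not np.any(in_bin):
            continue
        bin_acc = np.mean(predictions[in_bin] == labels[in_bin])   # accuracy in the bin
        bin_conf = np.mean(confidences[in_bin])                    # mean confidence in the bin
        ece += np.mean(in_bin) * abs(bin_acc - bin_conf)           # weight by bin frequency
    return ece


def per_class_ece(confidences, predictions, labels, n_bins=15):
    """ECE restricted to samples of each true class; useful under class imbalance."""
    confidences = np.asarray(confidences, dtype=float)
    predictions = np.asarray(predictions)
    labels = np.asarray(labels)
    return {
        int(c): expected_calibration_error(
            confidences[labels == c], predictions[labels == c], labels[labels == c], n_bins
        )
        for c in np.unique(labels)
    }


if __name__ == "__main__":
    # Synthetic, imbalanced toy data: 90% class 0, 10% class 1, overconfident scores.
    rng = np.random.default_rng(0)
    labels = rng.choice([0, 1], size=1000, p=[0.9, 0.1])
    predictions = np.where(rng.random(1000) < 0.85, labels, 1 - labels)
    confidences = np.clip(rng.normal(0.9, 0.05, size=1000), 0.5, 1.0)
    print("overall ECE:", expected_calibration_error(confidences, predictions, labels))
    print("per-class ECE:", per_class_ece(confidences, predictions, labels))
```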