AmbiBias Contrast: Enhancing debiasing networks via disentangled space from ambiguity-bias clusters.

Journal: Neural Networks: the official journal of the International Neural Network Society

Abstract

The goal of debiasing in classification tasks is to train models that are less sensitive to spurious correlations between a sample's target attribute and frequently co-occurring contextual attributes, so that classification remains accurate. A prevalent approach applies re-weighting techniques that lower the weight of bias-aligned samples, which contribute to bias, thereby focusing training on bias-conflicting samples that deviate from the bias patterns. Our empirical analysis indicates that this approach is effective on datasets where bias-conflicting samples constitute a small minority relative to bias-aligned samples, yet its effectiveness diminishes on datasets where the two groups occur in similar proportions. This sensitivity to dataset composition suggests that the conventional method is impractical in diverse environments, as it overlooks the dynamic nature of dataset-induced biases. To address this issue, we introduce a contrastive approach named "AmbiBias Contrast", which is robust across various dataset compositions. The method accounts for "ambiguity bias", the variable nature of bias elements across datasets, which cannot be clearly defined. Given the difficulty of defining bias under fluctuating dataset compositions, we design a representation learning method that accommodates this ambiguity. Experiments across a range of datasets and configurations verify the robustness of our method, which delivers state-of-the-art performance.
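For readers unfamiliar with the re-weighting baseline the abstract refers to, a minimal sketch is given below. It is not the authors' implementation; it follows the common relative-difficulty recipe (as in Learning from Failure, Nam et al., 2020), in which an auxiliary biased classifier is trained with a bias-amplifying generalized cross-entropy loss, and its per-sample loss is used to down-weight bias-aligned samples and up-weight bias-conflicting ones for the main classifier. All names and the hyperparameter q are illustrative assumptions, not details taken from the paper.

    # Illustrative sketch, not the paper's code: per-sample loss re-weighting
    # that down-weights bias-aligned samples and emphasizes bias-conflicting ones.
    import torch.nn.functional as F

    def generalized_ce(logits, targets, q=0.7):
        # Generalized cross-entropy: (1 - p_y^q) / q. With q in (0, 1] this loss
        # encourages the auxiliary model to latch onto easy, bias-aligned cues.
        probs = F.softmax(logits, dim=1)
        p_target = probs.gather(1, targets.unsqueeze(1)).squeeze(1)
        return (1.0 - p_target.clamp(min=1e-6) ** q) / q

    def reweighted_losses(biased_logits, debiased_logits, targets):
        # Weight is high when the biased model fails on a sample (likely
        # bias-conflicting) and low when it succeeds easily (likely bias-aligned).
        ce_biased = F.cross_entropy(biased_logits, targets, reduction="none")
        ce_debiased = F.cross_entropy(debiased_logits, targets, reduction="none")
        weights = ce_biased / (ce_biased + ce_debiased + 1e-8)  # values in (0, 1)
        loss_debiased = (weights.detach() * ce_debiased).mean()
        loss_biased = generalized_ce(biased_logits, targets).mean()
        return loss_biased, loss_debiased

As the abstract notes, this kind of weighting helps when bias-conflicting samples are rare, but loses its advantage when the two groups are comparably sized, which is the regime that motivates the proposed contrastive alternative.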

Authors

  • Suneung Kim
    Department of Artificial Intelligence, Korea University, Anam-dong, Seongbuk-gu, Seoul, 02841, Republic of Korea. Electronic address: se_kim@korea.ac.kr.
  • Seong-Whan Lee