Weakly supervised label learning flows.

Journal: Neural networks : the official journal of the International Neural Network Society

Abstract

Supervised learning usually requires a large amount of labeled data, but attaining ground-truth labels is costly for many tasks. Weakly supervised methods instead learn from cheap weak signals that only approximately label some of the data. Many existing weakly supervised learning methods learn a deterministic function that estimates labels given the input data and weak signals. In this paper, we develop label learning flows (LLF), a general framework for weakly supervised learning problems. Our method is a generative model based on normalizing flows. The main idea of LLF is to optimize the conditional likelihoods of all possible labelings of the data within a constrained space defined by the weak signals. We develop a training method for LLF that trains the conditional flow inversely and avoids estimating the labels. Once the model is trained, we make predictions with a sampling algorithm. We apply LLF to three weakly supervised learning problems. Experimental results show that our method outperforms many of the baselines we compare against.
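To make the generative-model idea in the abstract concrete, the following is a minimal sketch of a conditional normalizing flow, not the paper's actual LLF architecture or training procedure: a single affine transform whose shift `mu(x)` and log-scale `s(x)` are (here, hypothetical linear) functions of the conditioning input, giving an exact conditional log-likelihood via the change-of-variables formula and label prediction by sampling from the base distribution and pushing samples through the flow.

```python
import numpy as np

rng = np.random.default_rng(0)
D_X, D_Y = 3, 2  # assumed input and label dimensions

# Hypothetical linear "conditioner" mapping input x to (mu, log_s).
W_mu = rng.normal(size=(D_X, D_Y)) * 0.1
W_s = rng.normal(size=(D_X, D_Y)) * 0.1

def conditioner(x):
    """Shift mu(x) and log-scale s(x) as functions of the input."""
    return x @ W_mu, x @ W_s

def forward(z, x):
    """Map latent z to a label y given x: y = z * exp(s(x)) + mu(x)."""
    mu, s = conditioner(x)
    return z * np.exp(s) + mu

def log_likelihood(y, x):
    """Exact log p(y | x) via the change-of-variables formula."""
    mu, s = conditioner(x)
    z = (y - mu) * np.exp(-s)                      # inverse transform
    log_base = -0.5 * np.sum(z**2 + np.log(2 * np.pi), axis=-1)
    log_det = -np.sum(s, axis=-1)                  # log |det dz/dy|
    return log_base + log_det

def sample(x):
    """Predict labels by sampling z ~ N(0, I) and pushing it through the flow."""
    z = rng.normal(size=(x.shape[0], D_Y))
    return forward(z, x)

x = rng.normal(size=(4, D_X))
y = sample(x)
print(log_likelihood(y, x))
```

In the paper's setting, such conditional likelihoods would be optimized only over labelings inside the constraint set implied by the weak signals; the constraint handling and the inverse training scheme are specific to LLF and are not reproduced in this sketch.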

Authors

  • You Lu
    Beijing Information Science and Technology University, Beijing, China. Electronic address: luyou04@bistu.edu.cn.
  • Wenzhuo Song
    Northeast Normal University, 2555 Jingyue Street, Changchun, 130117, China. Electronic address: wzsong@nenu.edu.cn.
  • Chidubem Arachie
    Google, 1195 Borregas Drive, Sunnyvale, 94089, US. Electronic address: achid17@vt.edu.
  • Bert Huang