Judging facts, judging norms: Training machine learning models to judge humans requires a modified approach to labeling data.

Journal: Science Advances

Abstract

As governments and industry turn to increased use of automated decision systems, it becomes essential to consider how closely such systems can reproduce human judgment. We identify a core potential failure: annotators label objects differently depending on whether they are asked a factual question or a normative question. This challenges a natural assumption maintained in many standard machine-learning (ML) data acquisition procedures: that there is no difference between predicting the factual classification of an object and exercising judgment about whether an object violates a rule premised on those facts. We find that using factual labels to train models intended for normative judgments introduces a notable measurement error. We show that models trained using factual labels yield significantly different judgments from those trained using normative labels, and that the impact of this effect on model performance can exceed that of other factors (e.g., dataset size) that routinely attract attention from ML researchers and practitioners.
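
To make the central claim concrete, the sketch below is a hypothetical illustration, not the authors' code, data, or experimental design. It simulates a setting in which normative annotations flag fewer violations than the underlying facts alone would imply, trains one classifier on factual labels and another on normative labels for the same objects, and measures how often their judgments diverge. All specifics (the synthetic features, the 30% leniency rate, the logistic regression models) are assumptions chosen purely for illustration.

```python
# Hypothetical illustration (not the authors' code): train one classifier on
# "factual" labels and another on "normative" labels for the same objects,
# then compare how often their judgments disagree on held-out data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Simulated objects: feature vectors plus a latent factual property
# (e.g., "the features listed in the rule are present").
n, d = 5000, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
factual = (X @ w_true + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Assumed annotation gap: when asked the normative question ("does this object
# violate the rule?"), annotators flag fewer objects than the facts alone would
# imply -- here, 30% of factually positive cases are judged non-violations.
lenient = rng.random(n) < 0.30
normative = np.where(factual == 1, np.where(lenient, 0, 1), 0)

X_tr, X_te, f_tr, f_te, n_tr, n_te = train_test_split(
    X, factual, normative, test_size=0.3, random_state=0
)

model_factual = LogisticRegression(max_iter=1000).fit(X_tr, f_tr)
model_normative = LogisticRegression(max_iter=1000).fit(X_tr, n_tr)

pred_f = model_factual.predict(X_te)
pred_n = model_normative.predict(X_te)

# Compare the two models' judgments on the same held-out objects.
print("disagreement rate:", np.mean(pred_f != pred_n))
print("violation rate (factual-trained):  ", pred_f.mean())
print("violation rate (normative-trained):", pred_n.mean())
```

Under these assumed settings, the factual-label model tends to flag more violations than the normative-label model, which is the kind of measurement error the abstract describes when factual labels are used to train models intended for normative judgments.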

Authors

  • Aparna Balagopalan
    Massachusetts Institute of Technology, Cambridge, MA, USA.
  • David Madras
    University of Toronto, Toronto, Ontario, Canada.
  • David H. Yang
    University of Toronto, Toronto, Ontario, Canada.
  • Dylan Hadfield-Menell
    Massachusetts Institute of Technology, Cambridge, MA, USA.
  • Gillian K. Hadfield
    University of Toronto, Toronto, Ontario, Canada.
  • Marzyeh Ghassemi
    Electrical Engineering and Computer Science, Institute for Medical Engineering and Science, Massachusetts Institute of Technology, Cambridge, MA, USA.