People's judgments of humans and robots in a classic moral dilemma.

Journal: Cognition

Abstract

How do ordinary people evaluate robots that make morally significant decisions? Previous work has found both equal and divergent evaluations of humans and robots, with divergence in either direction. In 13 studies (N = 7670), we asked people to evaluate humans and robots that make decisions in norm conflicts (variants of the classic trolley dilemma). We examined several conditions that may influence whether moral evaluations of human and robot agents are the same or different: the type of moral judgment (norms vs. blame); the structure of the dilemma (side effect vs. means-end); the salience of particular information (victim, outcome); culture (Japan vs. US); and encouraged empathy. Norms for humans and robots are broadly similar, but blame judgments show a robust asymmetry under one condition: Humans are blamed less than robots specifically for inaction decisions, here understood as refraining from sacrificing one person for the good of many. This asymmetry may emerge because people appreciate that the human faces an impossible decision and deserves mitigated blame for inaction; when evaluating a robot, such appreciation appears to be lacking. However, our evidence for this explanation is mixed. We discuss alternative explanations and offer methodological guidance for future work on people's moral judgments of robots and humans.

Authors

  • Bertram F Malle
    Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, United States of America.
  • Matthias Scheutz
    Human-Robot Interaction Laboratory, Department of Computer Science, Tufts University, Medford, Massachusetts.
  • Corey Cusimano
    Yale University, New Haven, CT 06520, USA.
  • John Voiklis
    Knology, Inc., New York, NY 10005, USA.
  • Takanori Komatsu
    Meiji University, Chiyoda City, Tokyo 101-8301, Japan.
  • Stuti Thapa
    University of Tulsa, Tulsa, OK 74104, USA.
  • Salomi Aladia
    New York University, New York, NY 10012, USA.