People expect artificial moral advisors to be more utilitarian and distrust utilitarian moral advisors.

Journal: Cognition
PMID:

Abstract

As machines powered by artificial intelligence grow in their technological capacities, there is increasing interest in the theoretical and practical idea of artificial moral advisors (AMAs): systems powered by artificial intelligence that are explicitly designed to assist humans in making ethical decisions. Across four pre-registered studies (total N = 2604), we investigated how people perceive and trust artificial moral advisors compared to human advisors. Extending previous work on algorithmic aversion, we show that people have a significant aversion to AMAs (vs. humans) giving moral advice, and that this aversion is particularly pronounced when advisors, human and AI alike, give advice based on utilitarian principles. We find that participants expect AI to make utilitarian decisions, and that even when participants agreed with a decision made by an AMA, they still expected to disagree with an AMA more than with a human in the future. Our findings suggest challenges for the adoption of artificial moral advisors, particularly those that draw on and endorse utilitarian principles, however normatively justifiable those principles may be.

Authors

  • Simon Myers
    Behavioural Science Group, Warwick Business School, University of Warwick, Scarman Rd, Coventry CV4 7AL, UK; School of Psychology, University of Kent, Canterbury, Kent, CT2 7NP, UK.
  • Jim A C Everett
    School of Psychology, University of Kent, Canterbury, Kent, CT2 7NP, UK. Electronic address: j.a.c.everett@kent.ac.uk.