Differential biases in human-human versus human-robot interactions.

Journal: Applied Ergonomics

Abstract

Research on human-robot interactions indicates possible biases in trust toward robots that do not exist in human-human interactions. Research on these differences has traditionally focused on performance degradations. The current study explored differences in human-robot and human-human trust interactions using performance, consideration, and morality trustworthiness manipulations, which are based on the ability/performance, benevolence/purpose, and integrity/process manipulations, respectively, from previous research. We used a mixed factorial hierarchical linear model design to explore the effects of trustworthiness manipulations on trustworthiness perceptions, trust intentions, and trust behaviors in a trust game. We found partner (human versus robot) differences across all three trustworthiness perceptions, indicating that biases toward robots may be more expansive than previously thought. Additionally, there were marginal effects of partner differences on trust intentions. Interestingly, there were no differences between partners on trust behaviors. Results indicate that human biases toward robots may be more complex than the literature has considered.

Authors

  • Gene M Alarcon
    Air Force Research Laboratory, 2210 Eighth Street Bldg. 146, Wright Patterson Air Force Base, OH, 45433, USA. Electronic address: Gene.alarcon.1@us.af.mil.
  • August Capiola
    Air Force Research Laboratory, 2210 Eighth Street Bldg. 146, Wright Patterson Air Force Base, OH, 45433, USA. Electronic address: august.capiola.1@us.af.mil.
  • Izz Aldin Hamdan
    GDIT, United States. Electronic address: Izzy.hamdan@gdit.com.
  • Michael A Lee
    GDIT, United States. Electronic address: Michael.Lee@gdit.com.
  • Sarah A Jessup
    Air Force Research Laboratory, 2210 Eighth Street Bldg. 146, Wright Patterson Air Force Base, OH, 45433, USA. Electronic address: sarah.jessup.ctr@us.af.mil.