Irrational herding persists in human-bot interactions.

Journal: Scientific Reports

Abstract

We explore human herding in a strategic setting where humans interact with automated entities (bots) and study how humans' behaviour and beliefs shift when they are aware of interacting with bots. The strategic setting is an online minority game, in which 1997 participants are rewarded for siding with the minority. This setting makes it possible to distinguish irrational herding from rational self-interest, a fundamental challenge in understanding herding in strategic contexts. Participants were divided into two groups: one informed that they were playing against bots (informed condition) and the other unaware (not-informed condition). Our findings reveal that while informed participants adjusted their beliefs about the bots' behaviour, their actual decisions remained largely unaffected. In both conditions, 30% of participants followed the majority, contrary to the theoretical expectation of no herding. This study underscores the persistence of herding in human decision-making, even when participants know they are interacting with automated entities. These insights have important implications for understanding human behaviour on digital platforms where interactions with bots are common.
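For intuition, the payoff rule of a binary-choice minority game can be sketched as below. This is a minimal illustration only: the group sizes, reward values, and tie-handling shown here are assumptions for the sake of the example, not details reported in the study.

```python
# Minimal sketch of one round of a binary-choice minority game
# (illustrative assumptions; not the study's exact design).

from collections import Counter

def minority_payoffs(choices, reward=1, penalty=0):
    """Reward the players whose choice lands in the minority.

    choices: list of 0/1 actions, one per player (human or bot).
    Returns a list of payoffs aligned with `choices`.
    """
    counts = Counter(choices)
    # With a tie there is no strict minority; here, as an assumed
    # convention, every player receives the penalty.
    if counts[0] == counts[1]:
        return [penalty] * len(choices)
    minority = 0 if counts[0] < counts[1] else 1
    return [reward if c == minority else penalty for c in choices]

# Example: 2 of 5 players pick 0, so choosing 0 is the winning move.
print(minority_payoffs([0, 0, 1, 1, 1]))  # [1, 1, 0, 0, 0]
```

Under this rule, copying the majority can never pay off, which is why any systematic majority-following, such as the 30% observed in both conditions, counts as herding rather than rational self-interest.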

Authors

  • Luca Verginer
    Chair of Systems Design, ETH Zurich, Zurich, Switzerland. lverginer@ethz.ch.
  • Giacomo Vaccario
    Chair of Systems Design, ETH Zurich, Zurich, Switzerland.
  • Piero Ronzani
    International Security and Development Center, Berlin, Germany.