The influence of mental state attributions on trust in large language models.

Journal: Communications Psychology

Abstract

Rapid advances in artificial intelligence (AI) have led users to believe that systems such as large language models (LLMs) have mental states, including the capacity for 'experience' (e.g., emotions and consciousness). These folk-psychological attributions often diverge from expert opinion, are distinct from attributions of 'intelligence' (e.g., reasoning, planning), and yet may affect trust in AI systems. While past work provides some support for a link between anthropomorphism and trust, the impact of attributions of consciousness and other aspects of mentality on user trust remains unclear. We explored this in a preregistered experiment (N = 410) in which participants rated the capacity of an LLM to exhibit consciousness and a variety of other mental states. They then completed a decision-making task in which they could revise their choices based on the advice of an LLM. Bayesian analyses revealed strong evidence against a positive correlation between attributions of consciousness and advice-taking; indeed, a dimension of mental states related to experience showed a negative relationship with advice-taking, while attributions of intelligence were strongly correlated with advice acceptance. These findings highlight how users' attitudes and behaviours are shaped by sophisticated intuitions about the capacities of LLMs, with different aspects of mental state attribution predicting people's trust in these systems.

Authors

  • Clara Colombatto
    Department of Psychology, University of Waterloo, Waterloo, ON, Canada. clara.colombatto@uwaterloo.ca.
  • Jonathan Birch
    Centre for Philosophy of Natural and Social Science, London School of Economics and Political Science, London, UK. j.birch2@lse.ac.uk.
  • Stephen M Fleming
Max Planck UCL Centre for Computational Psychiatry and Ageing Research, University College London, London, UK.
