Fostering effective hybrid human-LLM reasoning and decision making.

Journal: Frontiers in Artificial Intelligence

Abstract

The impressive performance of modern Large Language Models (LLMs) across a wide range of tasks, along with their often non-trivial errors, has garnered unprecedented attention regarding the potential of AI and its impact on everyday life. While considerable effort has been and continues to be dedicated to overcoming the limitations of current models, the potential and risks of human-LLM collaboration remain largely underexplored. In this perspective, we argue that an increased focus on human-LLM interaction should be a primary target for future LLM research. Specifically, we briefly examine some of the biases that may hinder effective collaboration between humans and machines, explore potential solutions, and discuss two broader goals, mutual understanding and complementary team performance, that, in our view, future research should address to foster effective human-LLM reasoning and decision making.

Authors

  • Andrea Passerini
    Department of Information Engineering and Computer Science, University of Trento, Trento, Italy.
  • Aryo Gema
    School of Informatics, University of Edinburgh, Edinburgh, United Kingdom.
  • Pasquale Minervini
    School of Informatics, University of Edinburgh, Edinburgh, United Kingdom.
  • Burcu Sayin
    Department of Information Engineering and Computer Science, University of Trento, Trento, Italy.
  • Katya Tentori
    Center for Mind/Brain Sciences, University of Trento, Trento, Italy.
