Biased echoes: Large language models reinforce investment biases and increase portfolio risks of private investors.

Journal: PLOS ONE
Abstract

Large language models are increasingly used by private investors seeking financial advice. This paper examines the potential of these models to perpetuate investment biases and affect the economic security of individuals at scale. We provide a systematic assessment of how large language models used for investment advice shape the portfolio risks of private investors, offering a comprehensive model of large language model investment advice risk across five key dimensions of portfolio risk: geographical cluster risk, sector cluster risk, trend-chasing risk, active investment allocation risk, and total expense risk. Across four studies, we demonstrate that large language models used for investment advice increase portfolio risk on all five dimensions, and that a range of debiasing interventions only partially mitigates these risks. Our findings show that large language models exhibit "cognitive" biases similar to those of human investors, reinforcing the investment biases inherent in their training data. These findings have important implications for private investors, policymakers, artificial intelligence developers, financial institutions, and the responsible development of large language models in the financial sector.

Authors

  • Philipp Winder
    Institute of Behavioral Science & Technology, University of St. Gallen, St. Gallen, Switzerland.
  • Christian Hildebrand
    Institute of Behavioral Science & Technology, University of St. Gallen, St. Gallen, Switzerland.
  • Jochen Hartmann
    TUM School of Management, Technical University of Munich, Munich, Bavaria, Germany.