Stigmatizing Language in Large Language Models for Alcohol and Substance Use Disorders: A Multimodel Evaluation and Prompt Engineering Approach.

Journal: Journal of Addiction Medicine
Published Date:

Abstract

OBJECTIVES: Large language models (LLMs) are increasingly used in health care communication but can inadvertently perpetuate stigmatizing language (SL) toward individuals with alcohol and substance use disorders. Despite growing interest in LLM performance, a focused evaluation of their propensity for SL, and of strategies to mitigate it, remains lacking.

Authors

  • Yichen Wang
    Information Materials and Intelligent Sensing Laboratory of Anhui Province, Anhui University, 230601 Hefei, China; Key Laboratory of Opto-Electronic Information Acquisition and Manipulation of Ministry of Education, Anhui University, 230601 Hefei, China.
  • Kelly Hsu
  • Christopher Brokus
  • Yuting Huang
    Tianjin Medical University Cancer Hospital and Institute, Tianjin, China.
  • Nneka Ufere
    Harvard Medical School, Boston, MA, USA.
  • Sarah Wakeman
  • James Zou
    Department of Biomedical Data Science, Stanford University, Stanford, California.
  • Wei Zhang
    The First Affiliated Hospital of Nanchang University, Nanchang, China.
