Unveiling Public Stigma for Borderline Personality Disorder: A Comparative Study of Artificial Intelligence and Mental Health Care Providers.
Journal:
Personality and Mental Health
PMID:
40272185
Abstract
Generative artificial intelligence (GAI) programs can identify symptoms and make treatment recommendations for mental disorders, including borderline personality disorder (BPD). Despite GAI's potential as a clinical tool, stereotypes inherent in these programs' algorithms are not obvious until directly assessed. Given this concern, we assessed and compared symptom recognition and public stigma across three GAIs (ChatGPT-3.5, ChatGPT-4, and Google Gemini) for woman and man vignette characters with BPD. The GAIs' responses were also compared to those of a sample of mental health care practitioners (MHCPs; n = 218). Compared to MHCPs, the GAIs showed more empathy for the characters. The GAIs were also less likely to view the characters' mental health symptoms as developmental stage problems and rated these symptoms as more chronic and unchangeable. The GAIs also rated the vignette characters as less trustworthy and more likely to have difficulty forming close relationships than the MHCPs did. Across GAIs, gender biases were found, with Google Gemini showing less empathy, more negative reactions, and greater public stigma, particularly for a woman with BPD, than either ChatGPT-3.5 or ChatGPT-4. A woman with BPD was also rated by all GAIs as having more chronic mental health problems than a man. Overall, these results suggest that GAIs may express empathy but reflect gender bias and stereotyped beliefs about people with BPD. Greater transparency and the incorporation of knowledgeable MHCPs and people with lived experience are needed in GAI training to reduce bias and enhance accuracy before use in mental health applications.