Detection of suicidality from medical text using privacy-preserving large language models.

Journal: The British Journal of Psychiatry: The Journal of Mental Science
Published Date:

Abstract

BACKGROUND: Attempts to use artificial intelligence (AI) in psychiatric disorders have shown moderate success, highlighting the potential of incorporating information from clinical assessments to improve these models. This study focuses on using large language models (LLMs) to detect suicide risk from medical text in psychiatric care.
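
To illustrate what a privacy-preserving LLM setup for this kind of task can look like, the following is a minimal, hypothetical sketch: it assumes an open-weight model served on local infrastructure behind an OpenAI-compatible chat-completions endpoint (e.g. via a local inference server), so that no clinical text leaves the institution. The endpoint URL, model name and prompt are illustrative assumptions, not the authors' actual pipeline.

```python
"""
Minimal sketch: screening a de-identified clinical note for documented
suicidality with a locally hosted LLM. Hypothetical setup, not the
authors' method: assumes an open-weight model behind an
OpenAI-compatible chat-completions endpoint on local infrastructure.
"""
import json
import requests

# Hypothetical local endpoint and model identifier; replace with your deployment.
ENDPOINT = "http://localhost:8000/v1/chat/completions"
MODEL = "local-llm"

SYSTEM_PROMPT = (
    "You are a clinical text screening assistant. Read the note and answer "
    'only with a JSON object {"suicidality": true} or {"suicidality": false}, '
    "indicating whether the note documents suicidal ideation, plans or attempts."
)


def screen_note(note_text: str) -> bool:
    """Send one de-identified note to the local model and parse its verdict."""
    payload = {
        "model": MODEL,
        "temperature": 0,  # deterministic output for screening
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": note_text},
        ],
    }
    response = requests.post(ENDPOINT, json=payload, timeout=60)
    response.raise_for_status()
    answer = response.json()["choices"][0]["message"]["content"]
    # Simplified parsing; a production pipeline would validate the model output.
    return json.loads(answer)["suicidality"]


if __name__ == "__main__":
    # Synthetic example note, not patient data.
    note = "Patient reports persistent low mood; denies suicidal ideation."
    print("Suicidality documented:", screen_note(note))
```

Running the model locally, rather than calling an external API, is what makes such a workflow compatible with the confidentiality constraints of psychiatric records.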

Authors

  • Isabella Catharina Wiest
    Else Kröner Fresenius Center for Digital Health, TUD Dresden University of Technology, Dresden, Germany; Department of Medicine, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany.
  • Falk Gerrik Verhees
    Department of Psychiatry and Psychotherapy, Carl Gustav Carus University Hospital, Technical University Dresden, Dresden, Germany.
  • Dyke Ferber
    Department of Medical Oncology and Internal Medicine VI, National Center for Tumor Diseases, University Hospital Heidelberg, Heidelberg, Germany.
  • Jiefu Zhu
Else Kröner Fresenius Center for Digital Health, Technical University Dresden, Dresden, Germany.
  • Michael Bauer
    Department of Psychiatry and Psychotherapy, University Hospital Carl Gustav Carus Medical Faculty, Technische Universität Dresden, Dresden, Germany.
  • Ute Lewitzka
    Department of Psychiatry and Psychotherapy, Carl Gustav Carus University Hospital, Technical University Dresden, Dresden, Germany.
  • Andrea Pfennig
    Department of Psychiatry and Psychotherapy, Carl Gustav Carus University Hospital, Technical University Dresden, Dresden, Germany.
  • Pavol Mikolas
Department of Psychiatry and Psychotherapy, Otto von Guericke University, Leipziger Str. 44, 39120 Magdeburg, Germany.
  • Jakob Nikolas Kather
    Department of Medicine III, University Hospital RWTH Aachen, Aachen, Germany.