Using Natural Language Processing to Identify Stigmatizing Language in Labor and Birth Clinical Notes.

Journal: Maternal and Child Health Journal

Abstract

INTRODUCTION: Stigma and bias related to race and other minoritized statuses may underlie disparities in pregnancy and birth outcomes. One emerging method for identifying bias is the study of stigmatizing language in the electronic health record. The objective of our study was to develop automated natural language processing (NLP) methods to accurately identify two types of stigmatizing language in labor and birth notes: marginalizing language and its complement, power/privilege language.
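The abstract does not describe the authors' specific NLP approach. As a purely illustrative sketch of the general task, the snippet below shows how a simple lexicon-based pass might flag candidate stigmatizing terms in a note; the term lists and function names here are assumptions for demonstration, not the study's actual lexicons or method.

```python
import re

# Hypothetical mini-lexicons for illustration only; the study's actual
# term lists for marginalizing and power/privilege language are not given here.
MARGINALIZING_TERMS = ["noncompliant", "refused", "claims", "agitated"]
POWER_PRIVILEGE_TERMS = ["well groomed", "pleasant", "delightful"]

def find_terms(note: str, lexicon: list) -> list:
    """Return lexicon terms found in a note (case-insensitive, word-bounded)."""
    lowered = note.lower()
    hits = []
    for term in lexicon:
        if re.search(r"\b" + re.escape(term) + r"\b", lowered):
            hits.append(term)
    return hits

note = "Patient refused fetal monitoring; noted to be agitated on exam."
print(find_terms(note, MARGINALIZING_TERMS))  # -> ['refused', 'agitated']
```

A real system would go well beyond exact matching, e.g. handling negation, context, and learned classifiers, but lexicon matching is a common baseline for this kind of screening task.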

Authors

  • Veronica Barcelona
    School of Nursing, Columbia University, 560 West 168th St, Mail Code 6, New York, NY, 10032, USA. vb2534@cumc.columbia.edu.
  • Danielle Scharp
    Columbia University School of Nursing, New York, NY.
  • Hans Moen
    Turku NLP Group, Department of Future Technologies, University of Turku, Finland.
  • Anahita Davoudi
    Biostatistics, Epidemiology & Informatics, University of Pennsylvania, Philadelphia, PA.
  • Betina R Idnay
    Department of Biomedical Informatics, Columbia University, New York, NY, USA.
  • Kenrick Cato
School of Nursing, Columbia University, New York, NY, USA.
  • Maxim Topaz
    Division of General Internal Medicine and Primary Care, Brigham & Women's Hospital, Harvard Medical School, Boston, MA, USA.