Identifying Falls Risk Screenings Not Documented with Administrative Codes Using Natural Language Processing.

Journal: AMIA Annual Symposium Proceedings

Abstract

Quality reporting that relies on coded administrative data alone may not completely and accurately depict providers' performance. To assess this concern with a test case, we developed and evaluated a natural language processing (NLP) approach to identify falls risk screenings documented in clinical notes of patients without coded falls risk screening data. Extracting information from 1,558 clinical notes (mainly progress notes) from 144 eligible patients, we generated a lexicon of 38 keywords relevant to falls risk screening, 26 pre-negation terms, and 35 post-negation terms. The NLP algorithm identified 62 of the 144 patients whose falls risk screening was documented only in clinical notes and not captured by administrative codes. Manual review confirmed 59 patients as true positives and 77 patients as true negatives. Our NLP approach achieved a precision of 0.92, a recall of 0.95, and an F-measure of 0.93. These results support the concept of using NLP to enhance healthcare quality reporting.
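
As a rough illustration of the lexicon-plus-negation matching the abstract describes, the sketch below flags a note as documenting falls risk screening when a keyword appears without a pre- or post-negation term in a nearby token window, and computes precision, recall, and F-measure against a manually reviewed reference standard. The specific terms, window size, and function names are hypothetical assumptions for illustration, not the authors' actual 38-keyword, 26 pre-negation, and 35 post-negation lexicon or their algorithm.

import re

# Illustrative terms only; the study's actual lexicon is not listed in the abstract.
KEYWORDS = ["falls risk screening", "fall risk assessment", "risk of falling"]
PRE_NEGATION_TERMS = ["no", "denies", "unable to complete"]
POST_NEGATION_TERMS = ["not performed", "deferred", "declined"]
WINDOW = 6  # tokens inspected on each side of a keyword match (assumed value)


def _window_contains(window_text, terms):
    """True if any lexicon term appears as a whole word or phrase in the window."""
    padded = f" {window_text} "
    return any(f" {term} " in padded for term in terms)


def note_documents_screening(note_text):
    """True if a note contains a falls-risk-screening keyword that is not negated."""
    tokens = re.findall(r"[a-z0-9']+", note_text.lower())
    text = " ".join(tokens)
    for keyword in KEYWORDS:
        for match in re.finditer(r"\b" + re.escape(keyword) + r"\b", text):
            start_tok = len(text[:match.start()].split())
            end_tok = start_tok + len(keyword.split())
            before = " ".join(tokens[max(0, start_tok - WINDOW):start_tok])
            after = " ".join(tokens[end_tok:end_tok + WINDOW])
            if _window_contains(before, PRE_NEGATION_TERMS):
                continue  # e.g., "patient denies ... falls risk screening"
            if _window_contains(after, POST_NEGATION_TERMS):
                continue  # e.g., "falls risk screening ... deferred"
            return True
    return False


def patient_screened(notes):
    """Patient-level decision: screened if any of the patient's notes has a non-negated mention."""
    return any(note_documents_screening(n) for n in notes)


def precision_recall_f1(tp, fp, fn):
    """Standard definitions of the metrics reported in the abstract (precision, recall, F-measure)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

Patient-level counts of true positives, false positives, and false negatives from manual chart review would then be passed to precision_recall_f1 to reproduce summary metrics of the kind reported above.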

Authors

  • Vivienne J Zhu
    Biomedical Informatics Center at Medical University of South Carolina, Charleston, South Carolina.
  • Tina D Walker
    Information Solutions at Medical University of South Carolina, Charleston, South Carolina.
  • Robert W Warren
    Information Solutions at Medical University of South Carolina, Charleston, South Carolina.
  • Peggy B Jenny
    Information Solutions at Medical University of South Carolina, Charleston, South Carolina.
  • Stephane Meystre
Assistant Professor, University of Utah, and Research Investigator, IDEAS Center, VA Salt Lake City Health Care System, Salt Lake City, UT.
  • Leslie A Lenert
    Biomedical Informatics Center, Medical University of South Carolina, Charleston, SC 29425, United States.