Automatic detection of actionable radiology reports using bidirectional encoder representations from transformers.

Journal: BMC Medical Informatics and Decision Making

Abstract

BACKGROUND: It is essential for radiologists to communicate actionable findings to the referring clinicians reliably. Natural language processing (NLP) has been shown to help identify free-text radiology reports that contain actionable findings. However, the application of recent deep learning techniques to radiology reports, which could improve detection performance, has not been thoroughly examined. Moreover, the free text that clinicians enter in the ordering form (order information) has seldom been used to identify actionable reports. This study aims to evaluate the benefits of two new approaches: (1) bidirectional encoder representations from transformers (BERT), a recent deep learning architecture in NLP, and (2) using order information in addition to radiology reports.
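The abstract does not describe an implementation, so the following is only a minimal illustrative sketch of the general idea it names: fine-tuning a BERT sequence classifier on a radiology report paired with its order information. The library (Hugging Face transformers), the checkpoint name, and the example texts are assumptions for illustration, not the authors' actual setup.

```python
# Illustrative sketch only: not the authors' pipeline.
# Assumptions (not from the paper): Hugging Face "transformers", the generic
# "bert-base-uncased" checkpoint, English placeholder texts, and an untrained
# two-label classification head (actionable vs. non-actionable).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "bert-base-uncased"  # placeholder; a domain- or Japanese-language BERT could be substituted
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Hypothetical example inputs: the clinician's order information and the report text.
order_information = "Follow-up CT for a lung nodule seen on a prior chest X-ray."
report_text = "A 12 mm spiculated nodule in the right upper lobe has increased in size."

# Encode the two texts as a single [CLS] order [SEP] report [SEP] pair,
# so the model can attend across both sources of context.
inputs = tokenizer(order_information, report_text,
                   truncation=True, max_length=512, return_tensors="pt")

model.eval()
with torch.no_grad():
    logits = model(**inputs).logits
probabilities = torch.softmax(logits, dim=-1)
print("P(actionable) =", probabilities[0, 1].item())  # classification head is untrained here, so the value is not meaningful
```

In practice the classification head would be fine-tuned on labeled reports before the predicted probability carries any meaning; the sketch only shows how report text and order information can be combined in one BERT input.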

Authors

  • Yuta Nakamura
    Division of Radiology and Biomedical Engineering, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan. yutanakamura-tky@umin.ac.jp.
  • Shouhei Hanaoka
    Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan.
  • Yukihiro Nomura
    The University of Tokyo Hospital.
  • Takahiro Nakao
    Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan. tanakao-tky@umin.ac.jp.
  • Soichiro Miki
    The University of Tokyo Hospital.
  • Takeyuki Watadani
    Department of Radiology, Faculty of Medicine, The University of Tokyo.
  • Takeharu Yoshikawa
    The University of Tokyo Hospital.
  • Naoto Hayashi
    The University of Tokyo Hospital.
  • Osamu Abe
Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan.