BioBERT: a pre-trained biomedical language representation model for biomedical text mining.

Journal: Bioinformatics (Oxford, England)

Abstract

MOTIVATION: Biomedical text mining is becoming increasingly important as the number of biomedical documents rapidly grows. With the progress in natural language processing (NLP), extracting valuable information from biomedical literature has gained popularity among researchers, and deep learning has boosted the development of effective biomedical text mining models. However, directly applying the advancements in NLP to biomedical text mining often yields unsatisfactory results due to a word distribution shift from general domain corpora to biomedical corpora. In this article, we investigate how the recently introduced pre-trained language model BERT can be adapted for biomedical corpora.
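
For readers who want to experiment with the adapted model the abstract describes, the sketch below shows one common way to load a released BioBERT checkpoint for feature extraction. It assumes the Hugging Face transformers package (with PyTorch) and the community-hosted checkpoint name "dmis-lab/biobert-base-cased-v1.1"; neither the package nor the checkpoint id is specified in this abstract, and the example sentence is illustrative only.

    # Minimal sketch: loading a BioBERT checkpoint for feature extraction.
    # Assumes the Hugging Face `transformers` package (with PyTorch) and the
    # community-hosted "dmis-lab/biobert-base-cased-v1.1" weights; neither is
    # specified in the abstract above.
    from transformers import AutoModel, AutoTokenizer

    model_name = "dmis-lab/biobert-base-cased-v1.1"  # assumed checkpoint id
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name)

    # Encode a biomedical sentence into contextual token embeddings.
    inputs = tokenizer(
        "The BRCA1 gene is associated with hereditary breast cancer.",
        return_tensors="pt",
    )
    outputs = model(**inputs)
    print(outputs.last_hidden_state.shape)  # (batch, tokens, hidden_size=768)

Because BioBERT keeps BERT's architecture and changes only the pre-training corpora, the same fine-tuning recipes used for general-domain BERT apply unchanged to downstream biomedical tasks.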

Authors

  • Jinhyuk Lee
    Department of Computer Science and Engineering, Korea University, Seoul 02841, Republic of Korea.
  • Wonjin Yoon
    Department of Computer Science and Engineering, Korea University, Seoul 02841, Republic of Korea.
  • Sungdong Kim
    Clova AI Research, Naver Corp., Seong-Nam 13561, Republic of Korea.
  • Donghyeon Kim
    Department of Computer Science and Engineering, Korea University, Seoul 02841, Republic of Korea.
  • Sunkyu Kim
    Department of Computer Science and Engineering, Korea University, Seoul 02841, Republic of Korea.
  • Chan Ho So
    Interdisciplinary Graduate Program in Bioinformatics, Korea University, Seoul 02841, Republic of Korea.
  • Jaewoo Kang
    Department of Computer Science and Engineering, Korea University, Seoul 02841, Republic of Korea.