Using distant supervision to augment manually annotated data for relation extraction.

Journal: PLOS ONE
Published Date:

Abstract

Significant progress has recently been made in applying deep learning to natural language processing tasks. However, deep learning models typically require large amounts of annotated training data, while only small labeled datasets are available for many natural language processing tasks in the biomedical literature. Building large datasets for deep learning is expensive, since it involves considerable human effort and usually requires domain expertise in specialized fields. In this work, we consider augmenting manually annotated data with large amounts of data obtained through distant supervision. Because data obtained by distant supervision is often noisy, we first apply heuristics to remove some of the incorrect annotations. Then, using methods inspired by transfer learning, we show that the resulting models outperform models trained only on the original manually annotated sets.
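The pipeline summarized above can be illustrated with a minimal sketch, which is not the authors' code: sentences are labeled by distant supervision against a small knowledge base of entity pairs, a simple heuristic filter discards likely-noisy examples, and a classifier is first trained on the large distant set and then continued on the manually annotated set, mimicking the transfer-learning idea. A linear model stands in for the deep model used in the paper; the knowledge base, thresholds, and example sentences are purely illustrative assumptions.

```python
# Sketch only: hypothetical data and thresholds; a linear model replaces the
# deep model used in the paper.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

# Hypothetical knowledge base: entity pairs known to participate in a relation.
KB_PAIRS = {("BRCA1", "RAD51"), ("TP53", "MDM2")}

def distant_label(e1, e2):
    """Distant supervision: label a co-occurring pair positive if it is in the KB."""
    return 1 if (e1, e2) in KB_PAIRS or (e2, e1) in KB_PAIRS else 0

def keep_example(sentence, e1, e2, max_len=60, max_dist=20):
    """Heuristic filter: drop very long sentences and pairs whose mentions are
    far apart, which are more likely to be incorrectly annotated."""
    tokens = sentence.split()
    if len(tokens) > max_len:
        return False
    positions = [i for i, t in enumerate(tokens) if t in (e1, e2)]
    return len(positions) >= 2 and (max(positions) - min(positions)) <= max_dist

# Toy corpora standing in for large distant-supervision data and a small gold set.
distant_raw = [
    ("BRCA1 interacts with RAD51 in DNA repair .", "BRCA1", "RAD51"),
    ("TP53 was mentioned alongside EGFR in this review .", "TP53", "EGFR"),
]
manual = [("MDM2 binds and inhibits TP53 .", 1),
          ("GENE1 and GENE2 were both measured .", 0)]

distant = [(s, distant_label(e1, e2))
           for s, e1, e2 in distant_raw if keep_example(s, e1, e2)]

vec = HashingVectorizer(n_features=2**16)
clf = SGDClassifier(loss="log_loss")

# Stage 1: "pretrain" on the large, noisy, distantly supervised data.
X_d = vec.transform([s for s, _ in distant])
clf.partial_fit(X_d, [y for _, y in distant], classes=[0, 1])

# Stage 2: "fine-tune" by continuing training on the manually annotated data.
X_m = vec.transform([s for s, _ in manual])
clf.partial_fit(X_m, [y for _, y in manual])

print(clf.predict(vec.transform(["BRCA1 interacts with RAD51 ."])))
```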

Authors

  • Peng Su
    Department of Computer and Information Science, University of Delaware, Newark, Delaware, United States of America.
  • Gang Li
    The Centre for Cyber Resilience and Trust, Deakin University, Australia.
  • Cathy Wu
    Center for Bioinformatics and Computational Biology, University of Delaware, Newark, Delaware, United States of America; Protein Information Resource, Department of Biochemistry and Molecular & Cellular Biology, Georgetown University Medical Center, Washington, D. C., United States of America.
  • K. Vijay-Shanker