Enhancing medical text classification with GAN-based data augmentation and multi-task learning in BERT.

Journal: Scientific Reports
PMID:

Abstract

With the rapid advancement of medical informatics, the accumulation of electronic medical records and clinical diagnostic data provides unprecedented opportunities for intelligent medical text classification. However, challenges such as class imbalance, semantic heterogeneity, and data sparsity limit the effectiveness of traditional classification models. In this study, we propose an enhanced medical text classification framework that integrates a self-attentive adversarial augmentation network (SAAN) for data augmentation with a disease-aware multi-task BERT (DMT-BERT) strategy. The proposed SAAN incorporates adversarial self-attention, improving the generation of high-quality minority-class samples while mitigating noise. Furthermore, DMT-BERT simultaneously learns medical text representations and disease co-occurrence relationships, enhancing feature extraction for rare symptoms. Extensive experiments on a private clinical dataset and the public CCKS 2017 dataset demonstrate that our approach significantly outperforms baseline models, achieving the highest F1-score and ROC-AUC values. The proposed innovations address key limitations in medical text classification and contribute to the development of robust clinical decision-support systems.
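The two components described above can be pictured with short PyTorch sketches. Both are illustrative reconstructions from the abstract alone, not the authors' code: the class names (Generator, Discriminator, MultiTaskBert), dimensions, the "bert-base-chinese" checkpoint, and the auxiliary loss weight are all assumptions.

A minimal sketch of the SAAN idea, assuming the generator produces synthetic minority-class feature sequences refined by a self-attention layer, while a discriminator scores real versus generated sequences:

```python
import torch
import torch.nn as nn


class Generator(nn.Module):
    """Maps noise sequences to synthetic minority-class feature sequences."""

    def __init__(self, noise_dim=128, feat_dim=768, num_heads=4):
        super().__init__()
        self.proj = nn.Linear(noise_dim, feat_dim)
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        self.out = nn.Linear(feat_dim, feat_dim)

    def forward(self, z):                    # z: (batch, seq_len, noise_dim)
        h = torch.relu(self.proj(z))
        h, _ = self.attn(h, h, h)            # self-attention over positions
        return self.out(h)


class Discriminator(nn.Module):
    """Scores whether a feature sequence is real or generated."""

    def __init__(self, feat_dim=768):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

    def forward(self, x):                    # per-position scores, mean-pooled
        return self.net(x).mean(dim=1)
```

A minimal sketch of the DMT-BERT idea, assuming a shared BERT encoder feeding a main classification head and an auxiliary multi-label head for disease co-occurrence, combined in a weighted joint loss:

```python
import torch.nn as nn
from transformers import BertModel


class MultiTaskBert(nn.Module):
    """Shared BERT encoder with a classification head and a co-occurrence head."""

    def __init__(self, model_name="bert-base-chinese", num_classes=10,
                 num_diseases=50, aux_weight=0.3):
        super().__init__()
        self.encoder = BertModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.cls_head = nn.Linear(hidden, num_classes)    # main task
        self.cooc_head = nn.Linear(hidden, num_diseases)  # auxiliary task
        self.aux_weight = aux_weight

    def forward(self, input_ids, attention_mask, labels=None, cooc_labels=None):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        pooled = out.last_hidden_state[:, 0]              # [CLS] representation
        cls_logits = self.cls_head(pooled)
        cooc_logits = self.cooc_head(pooled)
        loss = None
        if labels is not None and cooc_labels is not None:
            ce = nn.CrossEntropyLoss()(cls_logits, labels)
            bce = nn.BCEWithLogitsLoss()(cooc_logits, cooc_labels.float())
            loss = ce + self.aux_weight * bce             # weighted joint loss
        return cls_logits, cooc_logits, loss
```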

Authors

  • Xinping Chen
    Interdisciplinary Research Center for Agriculture Green Development in Yangtze River Basin, College of Resources and Environment, Southwest University, Tiansheng Road 02, Chongqing, 400715, China. Electronic address: chenxp2017@swu.edu.cn.
  • Yan Du
    State Key Laboratory of Electroanalytical Chemistry, Changchun Institute of Applied Chemistry, Chinese Academy of Sciences, Changchun, Jilin 130022, China; School of Applied Chemistry and Engineering, University of Science and Technology of China, Hefei, Anhui 230026, China. Electronic address: duyan@ciac.ac.cn.